Compare commits

..

63 Commits

Author SHA1 Message Date
b3b0be832a Fix: input (#10386)
### What problem does this PR solve?

Fix input of some parser.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 15:39:09 +08:00
20b577a72c Fix: Merge main branch (#10377)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: jinhai <haijin.chn@gmail.com>
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Lynn <lynn_inf@hotmail.com>
Co-authored-by: chanx <1243304602@qq.com>
Co-authored-by: balibabu <cike8899@users.noreply.github.com>
Co-authored-by: 纷繁下的无奈 <zhileihuang@126.com>
Co-authored-by: huangzl <huangzl@shinemo.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Wilmer <33392318@qq.com>
Co-authored-by: Adrian Weidig <adrianweidig@gmx.net>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Yongteng Lei <yongtengrey@outlook.com>
Co-authored-by: Liu An <asiro@qq.com>
Co-authored-by: buua436 <66937541+buua436@users.noreply.github.com>
Co-authored-by: BadwomanCraZY <511528396@qq.com>
Co-authored-by: cucusenok <31804608+cucusenok@users.noreply.github.com>
Co-authored-by: Russell Valentine <russ@coldstonelabs.org>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Billy Bao <newyorkupperbay@gmail.com>
Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: TensorNull <129579691+TensorNull@users.noreply.github.com>
Co-authored-by: TensorNull <tensor.null@gmail.com>
Co-authored-by: Ajay <160579663+aybanda@users.noreply.github.com>
Co-authored-by: AB <aj@Ajays-MacBook-Air.local>
Co-authored-by: 天海蒼灆 <huangaoqin@tecpie.com>
Co-authored-by: He Wang <wanghechn@qq.com>
Co-authored-by: Atsushi Hatakeyama <atu729@icloud.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Mohamed Mathari <155896313+melmathari@users.noreply.github.com>
Co-authored-by: Mohamed Mathari <nocodeventure@Mac-mini-van-Mohamed.fritz.box>
Co-authored-by: Stephen Hu <stephenhu@seismic.com>
Co-authored-by: Shaun Zhang <zhangwfjh@users.noreply.github.com>
Co-authored-by: zhimeng123 <60221886+zhimeng123@users.noreply.github.com>
Co-authored-by: mxc <mxc@example.com>
Co-authored-by: Dominik Novotný <50611433+SgtMarmite@users.noreply.github.com>
Co-authored-by: EVGENY M <168018528+rjohny55@users.noreply.github.com>
Co-authored-by: mcoder6425 <mcoder64@gmail.com>
Co-authored-by: TeslaZY <TeslaZY@outlook.com>
Co-authored-by: lemsn <lemsn@msn.com>
Co-authored-by: lemsn <lemsn@126.com>
Co-authored-by: Adrian Gora <47756404+adagora@users.noreply.github.com>
Co-authored-by: Womsxd <45663319+Womsxd@users.noreply.github.com>
Co-authored-by: FatMii <39074672+FatMii@users.noreply.github.com>
2025-09-30 13:13:15 +08:00
4d6ff672eb Fix: Added read-only mode support and optimized navigation logic #9869 (#10370)
### What problem does this PR solve?

Fix: Added read-only mode support and optimized navigation logic #9869

- Added the `isReadonly` property to the parseResult component to
control the enabled state of editing and interactive features
- Added the `navigateToDataFile` navigation method to navigate to the
data file details page
- Refactored the `navigateToDataflowResult` method to use an object
parameter to support more flexible query parameter configuration
- Unified the `var(--accent-primary)` CSS variable format to
`rgb(var(--accent-primary))` to accommodate more styling scenarios
- Extracted the parser initialization logic into a separate hook
(`useParserInit`)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 12:00:29 +08:00
fb19e24f8a Feat: Delete flow related code. #9869 (#10371)
### What problem does this PR solve?

Feat: Delete flow related code. #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-30 12:00:17 +08:00
9989e06abb Fix: debug PDF positions.. (#10365)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 09:24:44 +08:00
c49e81882c Feat: Remove the copy icon from the toolbar for the Splitter and Parser nodes #9869 (#10367)
### What problem does this PR solve?
Feat: Remove the copy icon from the toolbar for the Splitter and Parser
nodes #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 18:55:53 +08:00
63cdce660e Feat: Limit the number of Splitter and Parser operators on the canvas to only one #9869 (#10362)
### What problem does this PR solve?

Feat: Limit the number of Splitter and Parser operators on the canvas to
only one #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 17:22:40 +08:00
8bc8126848 Feat: Move the github icon to the right #9869 (#10355)
### What problem does this PR solve?

Feat: Move the github icon to the right #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 11:50:58 +08:00
71f69cdb75 Fix: debug hierachical merging... (#10337)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-29 09:29:33 +08:00
664bc0b961 Feat: Displays the loading status of the data flow log #9869 (#10347)
### What problem does this PR solve?

Feat: Displays the loading status of the data flow log #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 19:38:46 +08:00
f4cc4dbd30 Fix: Interoperate with the pipeline rerun and unbindTask interfaces. #9869 (#10346)
### What problem does this PR solve?

Fix: Interoperate with the pipeline rerun and unbindTask interfaces.
#9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 19:32:19 +08:00
cce361d774 Feat: Filter the agent list by owner and category #9869 (#10344)
### What problem does this PR solve?

Feat: Filter the agent list by owner and category #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 18:43:20 +08:00
7a63b6386e Feat: limit pipeline operation logs to 1000 records (#10341)
### What problem does this PR solve?

 Limit pipeline operation logs to 1000 records.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 18:42:19 +08:00
4996dcb0eb Fix bug of image parser and prompt of parser supports customization (#10319)
### What problem does this PR solve?
BugFix: ERROR: KeyError: 'llm_id'
Feat: The prompt of the describe picture in cv_model supports
customization #10320


### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 12:47:36 +08:00
3521eb61fe Feat: add support for deleting KB tasks (#10335)
### What problem does this PR solve?

Add support for deleting KB tasks.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 12:46:00 +08:00
6b9b785b5c Feat: Fixed the issue where the cursor would go to the end when changing its own data #9869 (#10316)
### What problem does this PR solve?

Feat: Fixed the issue where the cursor would go to the end when changing
its own data #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-26 19:55:42 +08:00
4c0a89f262 Feat: add initial support for Mindmap (#10310)
### What problem does this PR solve?

Add initial support for Mindmap.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-09-26 19:45:01 +08:00
76b1ee2a00 Fix: debug pipeline... (#10311)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-26 19:11:30 +08:00
771a38434f Feat: Bring the parser operator when creating a new data flow #9869 (#10309)
### What problem does this PR solve?

Feat: Bring the parser operator when creating a new data flow #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-26 19:09:27 +08:00
886d38620e Fix: Improved knowledge base configuration and related logic #9869 (#10315)
### What problem does this PR solve?

Fix: Improved knowledge base configuration and related logic #9869
- Optimized the display logic of the Generate Log button to support
displaying completion time and task ID
- Implemented the ability to pause task generation and connect to the
data flow cancellation interface
- Fixed issues with type definitions and optional chaining calls in some
components
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-26 19:09:11 +08:00
c7efaab30e Feat: debug extractor... (#10294)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-26 10:51:05 +08:00
ff49454501 Feat: fetch KB config for GraphRAG and RAPTOR (#10288)
### What problem does this PR solve?

Fetch KB config for GraphRAG and RAPTOR.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-26 09:39:58 +08:00
14273b4595 Fix: Optimized knowledge base file parsing and display #9869 (#10292)
### What problem does this PR solve?

Fix: Optimized knowledge base file parsing and display #9869

- Optimized the ChunkMethodDialog component logic and adjusted
FormSchema validation rules
- Updated the document information interface definition, adding
pipeline_id, pipeline_name, and suffix fields
- Refactored the ChunkResultBar component, removing filter-related logic
and simplifying the input box and chunk creation functionality
- Improved FormatPreserveEditor to support text mode switching
(full/omitted) display control
- Updated timeline node titles to more accurate semantic descriptions
(e.g., character splitters)
- Optimized the data flow result page structure and style, dynamically
adjusting height and content display
- Fixed the table sorting function on the dataset overview page and
enhanced the display of task type icons and status mapping.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-25 19:53:49 +08:00
abe7132630 Feat: Change the corresponding prompt word according to the value of fieldName #9869 (#10291)
### What problem does this PR solve?

Feat: Change the corresponding prompt word according to the value of
fieldName #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 19:53:37 +08:00
c1151519a0 Feat: add foundational support for RAPTOR dataset pipeline logs (#10277)
### What problem does this PR solve?

Add foundational support for RAPTOR dataset pipeline logs.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 16:46:24 +08:00
a1147ce609 Feat: Allows the extractor operator's prompt to reference the output of an upstream operator #9869 (#10279)
### What problem does this PR solve?

Feat: Allows the extractor operator's prompt to reference the output of
an upstream operator #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 15:24:24 +08:00
d907e79893 Refa: fake doc ID. (#10276)
### What problem does this PR solve?
#10273
### Type of change

- [x] Refactoring
2025-09-25 13:52:50 +08:00
1b19d302c5 Feat: add extractor component. (#10271)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 11:34:47 +08:00
840b2b5809 Feat: add foundational support for GraphRAG dataset pipeline logs (#10264)
### What problem does this PR solve?

Add foundational support for GraphRAG dataset pipeline logs

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 09:35:50 +08:00
a6039cf563 Fix: Optimized the timeline component and parser editing features #9869 (#10268)
### What problem does this PR solve?

Fix: Optimized the timeline component and parser editing features #9869

- Introduced the TimelineNodeType type, restructured the timeline node
structure, and supported dynamic node generation
- Enhanced the FormatPreserveEditor component to support editing and
line wrapping of JSON-formatted content
- Added a rerun function and loading state to the parser and splitter
components
- Adjusted the timeline style and interaction logic to enhance the user
experience
- Improved the modal component and added a destroy method to support
more flexible control
- Optimized the chunk result display and operation logic, supporting
batch deletion and selection
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-24 19:58:30 +08:00
8be7380b79 Feat: Added the context operator form for data flow #9869 (#10270)
### What problem does this PR solve?
Feat: Added the context operator form for data flow #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 19:58:16 +08:00
afb8a84f7b Feat: Add context node #9869 (#10266)
### What problem does this PR solve?

Feat: Add context node #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 18:48:31 +08:00
6bf0cda16f Feat: Cancel a running data flow test #9869 (#10257)
### What problem does this PR solve?

Feat: Cancel a running data flow test #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 16:33:33 +08:00
5715ca6b74 Fix: pipeline debug... (#10206)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 11:12:08 +08:00
8f465525f7 Feat: Display the log after the data flow runs #9869 (#10232)
### What problem does this PR solve?

Feat: Display the log after the data flow runs #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-23 19:30:47 +08:00
f20dca2895 Fix: Interface integration for the file log page in the overview #9869 (#10222)
### What problem does this PR solve?

Fix: Interface integration for the file log page in the overview

- Support for selecting data pipeline parsing types
- Use the RunningStatus enumeration instead of numeric status
- Obtain and display data pipeline file log details
- Replace existing mock data with new interface data on the page
- Link the file log list to the real data source
- Optimize log information display
- Fixed a typo in the field name "pipeline_id" → "pipeline_id"

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-23 10:33:17 +08:00
0c557e37ad Feat: add support for pipeline logs operation (#10207)
### What problem does this PR solve?

Add support for pipeline logs operation

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-23 09:46:31 +08:00
d0bfe8b10c Feat: Display the data flow log on the far right. #9869 (#10214)
### What problem does this PR solve?

Feat: Display the data flow log on the far right. #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 19:13:18 +08:00
28afc7e67d Feat: Exporting the results of data flow tests #9869 (#10209)
### What problem does this PR solve?

Feat: Exporting the results of data flow tests #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 18:08:04 +08:00
73c33bc8d2 Fix: Fixed the issue where the drop-down box could not be displayed after selecting a large model #9869 (#10205)
### What problem does this PR solve?

Fix: Fixed the issue where the drop-down box could not be displayed
after selecting a large model #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-22 17:16:34 +08:00
476852e8f1 Feat: Remove useless files from the data flow #9869 (#10198)
### What problem does this PR solve?

Feat: Remove useless files from the data flow #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 15:48:39 +08:00
e6cf00cb33 Feat: Add suffix field to all operators #9869 (#10195)
### What problem does this PR solve?

Feat: Add suffix field to all operators #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 14:37:06 +08:00
d039d1e73d fix: Added dataset generation logging functionality #9869 (#10180)
### What problem does this PR solve?

fix: Added dataset generation logging functionality #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-22 10:01:34 +08:00
d050ef568d Feat: support dataflow run. (#10182)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 09:36:21 +08:00
028c2d83e9 Feat: parse email (#10181)
### What problem does this PR solve?

- Dataflow support email.
- Fix old email parser.
- Add new depends to parse msg file.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [x] Other (please describe): add new depends.
2025-09-22 09:29:38 +08:00
b5d6a6e8f2 Feat: Remove unnecessary data from the dsl #9869 (#10177)
### What problem does this PR solve?
Feat: Remove unnecessary data from the dsl #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 19:06:33 +08:00
5dfdbcce3a Feat: pipeline supports PPTX (#10167)
### What problem does this PR solve?

Pipeline supports parsing PPTX naively (text only).

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 12:14:35 +08:00
4fae40f66a Feat: Translate the splitter operator field #9869 (#10166)
### What problem does this PR solve?

Feat: Translate the splitter operator field #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 11:11:22 +08:00
a1b947ffd6 Feat: add splitter (#10161)
### What problem does this PR solve?


### Type of change
- [x] New Feature (non-breaking change which adds functionality)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Lynn <lynn_inf@hotmail.com>
Co-authored-by: chanx <1243304602@qq.com>
Co-authored-by: balibabu <cike8899@users.noreply.github.com>
Co-authored-by: 纷繁下的无奈 <zhileihuang@126.com>
Co-authored-by: huangzl <huangzl@shinemo.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Wilmer <33392318@qq.com>
Co-authored-by: Adrian Weidig <adrianweidig@gmx.net>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Yongteng Lei <yongtengrey@outlook.com>
Co-authored-by: Liu An <asiro@qq.com>
Co-authored-by: buua436 <66937541+buua436@users.noreply.github.com>
Co-authored-by: BadwomanCraZY <511528396@qq.com>
Co-authored-by: cucusenok <31804608+cucusenok@users.noreply.github.com>
Co-authored-by: Russell Valentine <russ@coldstonelabs.org>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Billy Bao <newyorkupperbay@gmail.com>
Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: TensorNull <129579691+TensorNull@users.noreply.github.com>
Co-authored-by: TensorNull <tensor.null@gmail.com>
2025-09-19 10:15:19 +08:00
f9c7404bee Fix: Updated color parsing functions and optimized component logic. (#10159)
### What problem does this PR solve?

refactor(timeline, modal, dataflow-result, dataset-overview): Updated
color parsing functions and optimized component logic.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-19 09:57:44 +08:00
5c1791d7f0 Feat: Upload files on the data flow page #9869 (#10153)
### What problem does this PR solve?

Feat: Upload files on the data flow page #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-18 16:19:53 +08:00
e82617f6de feat(dataset): Added data pipeline configuration functionality #9869 (#10132)
### What problem does this PR solve?

feat(dataset): Added data pipeline configuration functionality #9869

- Added a data pipeline selection component to link data pipelines with
knowledge bases
- Added file filtering functionality, supporting custom file filtering
rules
- Optimized the configuration interface layout, adjusting style and
spacing
- Introduced new icons and buttons to enhance the user experience

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-18 09:31:57 +08:00
a7abc57f68 Feat: Add SliderInputFormField story #9869 (#10138)
### What problem does this PR solve?

Feat: Add SliderInputFormField story #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-18 09:29:33 +08:00
cf1f523d03 Feat: Create a data flow #9869 (#10131)
### What problem does this PR solve?

Feat: Create a data flow #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-17 17:54:21 +08:00
ccb255919a Feat: Add HierarchicalMergerForm #9869 (#10122)
### What problem does this PR solve?
Feat:  Add HierarchicalMergerForm #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-17 13:47:50 +08:00
b68c84b52e Feat: Add splitter form #9869 (#10115)
### What problem does this PR solve?

Feat: Add splitter form #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-17 09:36:54 +08:00
93cf0258c3 Feat: Add splitter node component #9869 (#10114)
### What problem does this PR solve?

Feat: Add splitter node component #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-16 17:53:48 +08:00
b79fef1ca8 fix: Modify icon file, knowledge base display style (#10104)
### What problem does this PR solve?

fix: Modify icon file, knowledge base display style #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-16 10:37:08 +08:00
2b50de3186 Feat: Translate the fields of the parsing operator #9869 (#10079)
### What problem does this PR solve?

Feat: Translate the fields of the parsing operator #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-15 11:24:19 +08:00
d8ef22db68 Fix(dataset): Optimized the dataset configuration page UI #9869 (#10066)
### What problem does this PR solve?
fix(dataset): Optimized the dataset configuration page UI

- Added the DataPipelineSelect component for selecting data pipelines
- Restructured the layout and style of the dataset settings page
- Removed unnecessary components and code
- Optimized data pipeline configuration
- Adjusted the Create Dataset dialog box
- Updated the processing log modal style

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-12 16:01:37 +08:00
592f3b1555 Feat: Bind options to the parser operator form. #9869 (#10069)
### What problem does this PR solve?

Feat: Bind options to the parser operator form. #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-12 16:01:24 +08:00
3404469e2a Feat: Dynamically increase the configuration of the parser operator #9869 (#10060)
### What problem does this PR solve?

Feat: Dynamically increase the configuration of the parser operator
#9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-12 10:14:26 +08:00
63d7382dc9 fix: Displays the dataset creation and settings page #9869 (#10052)
### What problem does this PR solve?

[_Briefly describe what this PR aims to solve. Include background
context that will help reviewers understand the purpose of the
PR._](fix: Displays the dataset creation and settings page #9869)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-11 17:25:07 +08:00
2160 changed files with 661458 additions and 183322 deletions

View File

@ -1,22 +0,0 @@
# Project instructions for Copilot
## How to run (minimum)
- Install:
- python -m venv .venv && source .venv/bin/activate
- pip install -r requirements.txt
- Run:
- (fill) e.g. uvicorn app.main:app --reload
- Verify:
- (fill) curl http://127.0.0.1:8000/health
## Project layout (what matters)
- app/: API entrypoints + routers
- services/: business logic
- configs/: config loading (.env)
- docs/: documents
- tests/: pytest
## Conventions
- Prefer small, incremental changes.
- Add logging for new flows.
- Add/adjust tests for behavior changes.

View File

@ -3,18 +3,11 @@ name: release
on: on:
schedule: schedule:
- cron: '0 13 * * *' # This schedule runs every 13:00:00Z(21:00:00+08:00) - cron: '0 13 * * *' # This schedule runs every 13:00:00Z(21:00:00+08:00)
# https://github.com/orgs/community/discussions/26286?utm_source=chatgpt.com#discussioncomment-3251208
# "The create event does not support branch filter and tag filter."
# The "create tags" trigger is specifically focused on the creation of new tags, while the "push tags" trigger is activated when tags are pushed, including both new tag creations and updates to existing tags. # The "create tags" trigger is specifically focused on the creation of new tags, while the "push tags" trigger is activated when tags are pushed, including both new tag creations and updates to existing tags.
push: create:
tags: tags:
- "v*.*.*" # normal release - "v*.*.*" # normal release
- "nightly" # the only one mutable tag
permissions:
contents: write
actions: read
checks: read
statuses: read
# https://docs.github.com/en/actions/using-jobs/using-concurrency # https://docs.github.com/en/actions/using-jobs/using-concurrency
concurrency: concurrency:
@ -23,52 +16,52 @@ concurrency:
jobs: jobs:
release: release:
runs-on: [ "self-hosted", "ragflow-test" ] runs-on: [ "self-hosted", "overseas" ]
steps: steps:
- name: Ensure workspace ownership - name: Ensure workspace ownership
run: echo "chown -R ${USER} ${GITHUB_WORKSPACE}" && sudo chown -R ${USER} ${GITHUB_WORKSPACE} run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
# https://github.com/actions/checkout/blob/v6/README.md # https://github.com/actions/checkout/blob/v3/README.md
- name: Check out code - name: Check out code
uses: actions/checkout@v6 uses: actions/checkout@v4
with: with:
token: ${{ secrets.GITHUB_TOKEN }} # Use the secret as an environment variable token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
fetch-depth: 0 fetch-depth: 0
fetch-tags: true fetch-tags: true
- name: Prepare release body - name: Prepare release body
run: | run: |
if [[ ${GITHUB_EVENT_NAME} != "schedule" ]]; then if [[ $GITHUB_EVENT_NAME == 'create' ]]; then
RELEASE_TAG=${GITHUB_REF#refs/tags/} RELEASE_TAG=${GITHUB_REF#refs/tags/}
if [[ ${RELEASE_TAG} == v* ]]; then if [[ $RELEASE_TAG == 'nightly' ]]; then
PRERELEASE=false
else
PRERELEASE=true PRERELEASE=true
else
PRERELEASE=false
fi fi
echo "Workflow triggered by create tag: ${RELEASE_TAG}" echo "Workflow triggered by create tag: $RELEASE_TAG"
else else
RELEASE_TAG=nightly RELEASE_TAG=nightly
PRERELEASE=true PRERELEASE=true
echo "Workflow triggered by schedule" echo "Workflow triggered by schedule"
fi fi
echo "RELEASE_TAG=${RELEASE_TAG}" >> ${GITHUB_ENV} echo "RELEASE_TAG=$RELEASE_TAG" >> $GITHUB_ENV
echo "PRERELEASE=${PRERELEASE}" >> ${GITHUB_ENV} echo "PRERELEASE=$PRERELEASE" >> $GITHUB_ENV
RELEASE_DATETIME=$(date --rfc-3339=seconds) RELEASE_DATETIME=$(date --rfc-3339=seconds)
echo Release ${RELEASE_TAG} created from ${GITHUB_SHA} at ${RELEASE_DATETIME} > release_body.md echo Release $RELEASE_TAG created from $GITHUB_SHA at $RELEASE_DATETIME > release_body.md
- name: Move the existing mutable tag - name: Move the existing mutable tag
# https://github.com/softprops/action-gh-release/issues/171 # https://github.com/softprops/action-gh-release/issues/171
run: | run: |
git fetch --tags git fetch --tags
if [[ ${GITHUB_EVENT_NAME} == "schedule" ]]; then if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
# Determine if a given tag exists and matches a specific Git commit. # Determine if a given tag exists and matches a specific Git commit.
# actions/checkout@v6 fetch-tags doesn't work when triggered by schedule # actions/checkout@v4 fetch-tags doesn't work when triggered by schedule
if [ "$(git rev-parse -q --verify "refs/tags/${RELEASE_TAG}")" = "${GITHUB_SHA}" ]; then if [ "$(git rev-parse -q --verify "refs/tags/$RELEASE_TAG")" = "$GITHUB_SHA" ]; then
echo "mutable tag ${RELEASE_TAG} exists and matches ${GITHUB_SHA}" echo "mutable tag $RELEASE_TAG exists and matches $GITHUB_SHA"
else else
git tag -f ${RELEASE_TAG} ${GITHUB_SHA} git tag -f $RELEASE_TAG $GITHUB_SHA
git push -f origin ${RELEASE_TAG}:refs/tags/${RELEASE_TAG} git push -f origin $RELEASE_TAG:refs/tags/$RELEASE_TAG
echo "created/moved mutable tag ${RELEASE_TAG} to ${GITHUB_SHA}" echo "created/moved mutable tag $RELEASE_TAG to $GITHUB_SHA"
fi fi
fi fi
@ -76,26 +69,54 @@ jobs:
# https://github.com/actions/upload-release-asset has been replaced by https://github.com/softprops/action-gh-release # https://github.com/actions/upload-release-asset has been replaced by https://github.com/softprops/action-gh-release
uses: softprops/action-gh-release@v2 uses: softprops/action-gh-release@v2
with: with:
token: ${{ secrets.GITHUB_TOKEN }} # Use the secret as an environment variable token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
prerelease: ${{ env.PRERELEASE }} prerelease: ${{ env.PRERELEASE }}
tag_name: ${{ env.RELEASE_TAG }} tag_name: ${{ env.RELEASE_TAG }}
# The body field does not support environment variable substitution directly. # The body field does not support environment variable substitution directly.
body_path: release_body.md body_path: release_body.md
- name: Build and push image # https://github.com/marketplace/actions/docker-login
run: | - name: Login to Docker Hub
sudo docker login --username infiniflow --password-stdin <<< ${{ secrets.DOCKERHUB_TOKEN }} uses: docker/login-action@v3
sudo docker build --build-arg NEED_MIRROR=1 --build-arg HTTPS_PROXY=${HTTPS_PROXY} --build-arg HTTP_PROXY=${HTTP_PROXY} -t infiniflow/ragflow:${RELEASE_TAG} -f Dockerfile . with:
sudo docker tag infiniflow/ragflow:${RELEASE_TAG} infiniflow/ragflow:latest username: infiniflow
sudo docker push infiniflow/ragflow:${RELEASE_TAG} password: ${{ secrets.DOCKERHUB_TOKEN }}
sudo docker push infiniflow/ragflow:latest
- name: Build and push ragflow-sdk # https://github.com/marketplace/actions/build-and-push-docker-images
- name: Build and push full image
uses: docker/build-push-action@v6
with:
context: .
push: true
tags: |
infiniflow/ragflow:${{ env.RELEASE_TAG }}
infiniflow/ragflow:latest-full
file: Dockerfile
platforms: linux/amd64
# https://github.com/marketplace/actions/build-and-push-docker-images
- name: Build and push slim image
uses: docker/build-push-action@v6
with:
context: .
push: true
tags: |
infiniflow/ragflow:${{ env.RELEASE_TAG }}-slim
infiniflow/ragflow:latest-slim
file: Dockerfile
build-args: LIGHTEN=1
platforms: linux/amd64
- name: Build ragflow-sdk
if: startsWith(github.ref, 'refs/tags/v') if: startsWith(github.ref, 'refs/tags/v')
run: | run: |
cd sdk/python && uv build && uv publish --token ${{ secrets.PYPI_API_TOKEN }} cd sdk/python && \
uv build
- name: Build and push ragflow-cli - name: Publish package distributions to PyPI
if: startsWith(github.ref, 'refs/tags/v') if: startsWith(github.ref, 'refs/tags/v')
run: | uses: pypa/gh-action-pypi-publish@release/v1
cd admin/client && uv build && uv publish --token ${{ secrets.PYPI_API_TOKEN }} with:
packages-dir: sdk/python/dist/
password: ${{ secrets.PYPI_API_TOKEN }}
verbose: true

View File

@ -1,6 +1,4 @@
name: tests name: tests
permissions:
contents: read
on: on:
push: push:
@ -11,11 +9,8 @@ on:
- 'docs/**' - 'docs/**'
- '*.md' - '*.md'
- '*.mdx' - '*.mdx'
# The only difference between pull_request and pull_request_target is the context in which the workflow runs:
# — pull_request_target workflows use the workflow files from the default branch, and secrets are available.
# — pull_request workflows use the workflow files from the pull request branch, and secrets are unavailable.
pull_request: pull_request:
types: [ synchronize, ready_for_review ] types: [ opened, synchronize, reopened, labeled ]
paths-ignore: paths-ignore:
- 'docs/**' - 'docs/**'
- '*.md' - '*.md'
@ -33,63 +28,26 @@ jobs:
name: ragflow_tests name: ragflow_tests
# https://docs.github.com/en/actions/using-jobs/using-conditions-to-control-job-execution # https://docs.github.com/en/actions/using-jobs/using-conditions-to-control-job-execution
# https://github.com/orgs/community/discussions/26261 # https://github.com/orgs/community/discussions/26261
if: ${{ github.event_name != 'pull_request' || (github.event.pull_request.draft == false && contains(github.event.pull_request.labels.*.name, 'ci')) }} if: ${{ github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci') }}
runs-on: [ "self-hosted", "ragflow-test" ] runs-on: [ "self-hosted", "debug" ]
steps: steps:
- name: Ensure workspace ownership # https://github.com/hmarr/debug-action
#- uses: hmarr/debug-action@v2
- name: Show who triggered this workflow
run: | run: |
echo "Workflow triggered by ${{ github.event_name }}" echo "Workflow triggered by ${{ github.event_name }}"
echo "chown -R ${USER} ${GITHUB_WORKSPACE}" && sudo chown -R ${USER} ${GITHUB_WORKSPACE}
- name: Ensure workspace ownership
run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
# https://github.com/actions/checkout/issues/1781 # https://github.com/actions/checkout/issues/1781
- name: Check out code - name: Check out code
uses: actions/checkout@v6 uses: actions/checkout@v4
with: with:
ref: ${{ (github.event_name == 'pull_request' || github.event_name == 'pull_request_target') && format('refs/pull/{0}/merge', github.event.pull_request.number) || github.sha }}
fetch-depth: 0 fetch-depth: 0
fetch-tags: true fetch-tags: true
- name: Check workflow duplication
if: ${{ !cancelled() && !failure() }}
run: |
if [[ ${GITHUB_EVENT_NAME} != "pull_request" && ${GITHUB_EVENT_NAME} != "schedule" ]]; then
HEAD=$(git rev-parse HEAD)
# Find a PR that introduced a given commit
gh auth login --with-token <<< "${{ secrets.GITHUB_TOKEN }}"
PR_NUMBER=$(gh pr list --search ${HEAD} --state merged --json number --jq .[0].number)
echo "HEAD=${HEAD}"
echo "PR_NUMBER=${PR_NUMBER}"
if [[ -n "${PR_NUMBER}" ]]; then
PR_SHA_FP=${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}/PR_${PR_NUMBER}
if [[ -f "${PR_SHA_FP}" ]]; then
read -r PR_SHA PR_RUN_ID < "${PR_SHA_FP}"
# Calculate the hash of the current workspace content
HEAD_SHA=$(git rev-parse HEAD^{tree})
if [[ "${HEAD_SHA}" == "${PR_SHA}" ]]; then
echo "Cancel myself since the workspace content hash is the same with PR #${PR_NUMBER} merged. See ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${PR_RUN_ID} for details."
gh run cancel ${GITHUB_RUN_ID}
while true; do
status=$(gh run view ${GITHUB_RUN_ID} --json status -q .status)
[ "${status}" = "completed" ] && break
sleep 5
done
exit 1
fi
fi
fi
elif [[ ${GITHUB_EVENT_NAME} == "pull_request" ]]; then
PR_NUMBER=${{ github.event.pull_request.number }}
PR_SHA_FP=${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}/PR_${PR_NUMBER}
# Calculate the hash of the current workspace content
PR_SHA=$(git rev-parse HEAD^{tree})
echo "PR #${PR_NUMBER} workspace content hash: ${PR_SHA}"
mkdir -p ${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}
echo "${PR_SHA} ${GITHUB_RUN_ID}" > ${PR_SHA_FP}
fi
ARTIFACTS_DIR=${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}/${GITHUB_RUN_ID}
echo "ARTIFACTS_DIR=${ARTIFACTS_DIR}" >> ${GITHUB_ENV}
rm -rf ${ARTIFACTS_DIR} && mkdir -p ${ARTIFACTS_DIR}
# https://github.com/astral-sh/ruff-action # https://github.com/astral-sh/ruff-action
- name: Static check with Ruff - name: Static check with Ruff
uses: astral-sh/ruff-action@v3 uses: astral-sh/ruff-action@v3
@ -97,489 +55,122 @@ jobs:
version: ">=0.11.x" version: ">=0.11.x"
args: "check" args: "check"
- name: Check comments of changed Python files - name: Build ragflow:nightly-slim
if: ${{ false }}
run: | run: |
if [[ ${{ github.event_name }} == 'pull_request' || ${{ github.event_name }} == 'pull_request_target' ]]; then RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-$HOME}
CHANGED_FILES=$(git diff --name-only ${{ github.event.pull_request.base.sha }}...${{ github.event.pull_request.head.sha }} \ sudo docker pull ubuntu:22.04
| grep -E '\.(py)$' || true) sudo docker build --progress=plain --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
if [ -n "$CHANGED_FILES" ]; then
echo "Check comments of changed Python files with check_comment_ascii.py"
readarray -t files <<< "$CHANGED_FILES"
HAS_ERROR=0
for file in "${files[@]}"; do
if [ -f "$file" ]; then
if python3 check_comment_ascii.py "$file"; then
echo "✅ $file"
else
echo "❌ $file"
HAS_ERROR=1
fi
fi
done
if [ $HAS_ERROR -ne 0 ]; then
exit 1
fi
else
echo "No Python files changed"
fi
fi
- name: Run unit test
run: |
uv sync --python 3.12 --group test --frozen
source .venv/bin/activate
which pytest || echo "pytest not in PATH"
echo "Start to run unit test"
python3 run_tests.py
- name: Build ragflow:nightly - name: Build ragflow:nightly
run: | run: |
RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-${HOME}} sudo docker build --progress=plain --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
RAGFLOW_IMAGE=infiniflow/ragflow:${GITHUB_RUN_ID}
echo "RAGFLOW_IMAGE=${RAGFLOW_IMAGE}" >> ${GITHUB_ENV} - name: Start ragflow:nightly-slim
sudo docker pull ubuntu:22.04 run: |
sudo DOCKER_BUILDKIT=1 docker build --build-arg NEED_MIRROR=1 --build-arg HTTPS_PROXY=${HTTPS_PROXY} --build-arg HTTP_PROXY=${HTTP_PROXY} -f Dockerfile -t ${RAGFLOW_IMAGE} . sudo docker compose -f docker/docker-compose.yml down --volumes --remove-orphans
if [[ ${GITHUB_EVENT_NAME} == "schedule" ]]; then echo -e "\nRAGFLOW_IMAGE=infiniflow/ragflow:nightly-slim" >> docker/.env
export HTTP_API_TEST_LEVEL=p3 sudo docker compose -f docker/docker-compose.yml up -d
else
export HTTP_API_TEST_LEVEL=p2 - name: Stop ragflow:nightly-slim
fi if: always() # always run this step even if previous steps failed
echo "HTTP_API_TEST_LEVEL=${HTTP_API_TEST_LEVEL}" >> ${GITHUB_ENV} run: |
echo "RAGFLOW_CONTAINER=${GITHUB_RUN_ID}-ragflow-cpu-1" >> ${GITHUB_ENV} sudo docker compose -f docker/docker-compose.yml down -v
- name: Start ragflow:nightly - name: Start ragflow:nightly
run: | run: |
# Determine runner number (default to 1 if not found) echo -e "\nRAGFLOW_IMAGE=infiniflow/ragflow:nightly" >> docker/.env
RUNNER_NUM=$(sudo docker inspect $(hostname) --format '{{index .Config.Labels "com.docker.compose.container-number"}}' 2>/dev/null || true) sudo docker compose -f docker/docker-compose.yml up -d
RUNNER_NUM=${RUNNER_NUM:-1}
# Compute port numbers using bash arithmetic
ES_PORT=$((1200 + RUNNER_NUM * 10))
OS_PORT=$((1201 + RUNNER_NUM * 10))
INFINITY_THRIFT_PORT=$((23817 + RUNNER_NUM * 10))
INFINITY_HTTP_PORT=$((23820 + RUNNER_NUM * 10))
INFINITY_PSQL_PORT=$((5432 + RUNNER_NUM * 10))
EXPOSE_MYSQL_PORT=$((5455 + RUNNER_NUM * 10))
MINIO_PORT=$((9000 + RUNNER_NUM * 10))
MINIO_CONSOLE_PORT=$((9001 + RUNNER_NUM * 10))
REDIS_PORT=$((6379 + RUNNER_NUM * 10))
TEI_PORT=$((6380 + RUNNER_NUM * 10))
KIBANA_PORT=$((6601 + RUNNER_NUM * 10))
SVR_HTTP_PORT=$((9380 + RUNNER_NUM * 10))
ADMIN_SVR_HTTP_PORT=$((9381 + RUNNER_NUM * 10))
SVR_MCP_PORT=$((9382 + RUNNER_NUM * 10))
SANDBOX_EXECUTOR_MANAGER_PORT=$((9385 + RUNNER_NUM * 10))
SVR_WEB_HTTP_PORT=$((80 + RUNNER_NUM * 10))
SVR_WEB_HTTPS_PORT=$((443 + RUNNER_NUM * 10))
# Persist computed ports into docker/.env so docker-compose uses the correct host bindings
echo "" >> docker/.env
echo -e "ES_PORT=${ES_PORT}" >> docker/.env
echo -e "OS_PORT=${OS_PORT}" >> docker/.env
echo -e "INFINITY_THRIFT_PORT=${INFINITY_THRIFT_PORT}" >> docker/.env
echo -e "INFINITY_HTTP_PORT=${INFINITY_HTTP_PORT}" >> docker/.env
echo -e "INFINITY_PSQL_PORT=${INFINITY_PSQL_PORT}" >> docker/.env
echo -e "EXPOSE_MYSQL_PORT=${EXPOSE_MYSQL_PORT}" >> docker/.env
echo -e "MINIO_PORT=${MINIO_PORT}" >> docker/.env
echo -e "MINIO_CONSOLE_PORT=${MINIO_CONSOLE_PORT}" >> docker/.env
echo -e "REDIS_PORT=${REDIS_PORT}" >> docker/.env
echo -e "TEI_PORT=${TEI_PORT}" >> docker/.env
echo -e "KIBANA_PORT=${KIBANA_PORT}" >> docker/.env
echo -e "SVR_HTTP_PORT=${SVR_HTTP_PORT}" >> docker/.env
echo -e "ADMIN_SVR_HTTP_PORT=${ADMIN_SVR_HTTP_PORT}" >> docker/.env
echo -e "SVR_MCP_PORT=${SVR_MCP_PORT}" >> docker/.env
echo -e "SANDBOX_EXECUTOR_MANAGER_PORT=${SANDBOX_EXECUTOR_MANAGER_PORT}" >> docker/.env
echo -e "SVR_WEB_HTTP_PORT=${SVR_WEB_HTTP_PORT}" >> docker/.env
echo -e "SVR_WEB_HTTPS_PORT=${SVR_WEB_HTTPS_PORT}" >> docker/.env
echo -e "COMPOSE_PROFILES=\${COMPOSE_PROFILES},tei-cpu" >> docker/.env
echo -e "TEI_MODEL=BAAI/bge-small-en-v1.5" >> docker/.env
echo -e "RAGFLOW_IMAGE=${RAGFLOW_IMAGE}" >> docker/.env
echo "HOST_ADDRESS=http://host.docker.internal:${SVR_HTTP_PORT}" >> ${GITHUB_ENV}
# Patch entrypoint.sh for coverage
sed -i '/"\$PY" api\/ragflow_server.py \${INIT_SUPERUSER_ARGS} &/c\ echo "Ensuring coverage is installed..."\n "$PY" -m pip install coverage\n export COVERAGE_FILE=/ragflow/logs/.coverage\n echo "Starting ragflow_server with coverage..."\n "$PY" -m coverage run --source=./api/apps --omit="*/tests/*,*/migrations/*" -a api/ragflow_server.py ${INIT_SUPERUSER_ARGS} &' docker/entrypoint.sh
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} up -d
uv sync --python 3.12 --group test --frozen && uv pip install -e sdk/python
- name: Run sdk tests against Elasticsearch - name: Run sdk tests against Elasticsearch
run: | run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY="" export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS}/v1/system/ping > /dev/null; do export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..." echo "Waiting for service to be available..."
sleep 5 sleep 5
done done
source .venv/bin/activate && set -o pipefail; pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} --junitxml=pytest-infinity-sdk.xml --cov=sdk/python/ragflow_sdk --cov-branch --cov-report=xml:coverage-es-sdk.xml test/testcases/test_sdk_api 2>&1 | tee es_sdk_test.log if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
export HTTP_API_TEST_LEVEL=p3
else
export HTTP_API_TEST_LEVEL=p2
fi
UV_LINK_MODE=copy uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv pip install sdk/python && uv run --only-group test --no-default-groups pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_sdk_api
- name: Run web api tests against Elasticsearch - name: Run frontend api tests against Elasticsearch
run: | run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY="" export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS}/v1/system/ping > /dev/null; do export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..." echo "Waiting for service to be available..."
sleep 5 sleep 5
done done
source .venv/bin/activate && set -o pipefail; pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_web_api 2>&1 | tee es_web_api_test.log cd sdk/python && UV_LINK_MODE=copy uv sync --python 3.10 --group test --frozen && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
- name: Run http api tests against Elasticsearch - name: Run http api tests against Elasticsearch
run: | run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY="" export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS}/v1/system/ping > /dev/null; do export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..." echo "Waiting for service to be available..."
sleep 5 sleep 5
done done
source .venv/bin/activate && set -o pipefail; pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_http_api 2>&1 | tee es_http_api_test.log if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
export HTTP_API_TEST_LEVEL=p3
- name: RAGFlow CLI retrieval test Elasticsearch
env:
PYTHONPATH: ${{ github.workspace }}
run: |
set -euo pipefail
source .venv/bin/activate
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
EMAIL="ci-${GITHUB_RUN_ID}@example.com"
PASS="ci-pass-${GITHUB_RUN_ID}"
DATASET="ci_dataset_${GITHUB_RUN_ID}"
CLI="python admin/client/ragflow_cli.py"
LOG_FILE="es_cli_test.log"
: > "${LOG_FILE}"
ERROR_RE='Traceback|ModuleNotFoundError|ImportError|Parse error|Bad response|Fail to|code:\\s*[1-9]'
run_cli() {
local logfile="$1"
shift
local allow_re=""
if [[ "${1:-}" == "--allow" ]]; then
allow_re="$2"
shift 2
fi
local cmd_display="$*"
echo "===== $(date -u +\"%Y-%m-%dT%H:%M:%SZ\") CMD: ${cmd_display} =====" | tee -a "${logfile}"
local tmp_log
tmp_log="$(mktemp)"
set +e
timeout 180s "$@" 2>&1 | tee "${tmp_log}"
local status=${PIPESTATUS[0]}
set -e
cat "${tmp_log}" >> "${logfile}"
if grep -qiE "${ERROR_RE}" "${tmp_log}"; then
if [[ -n "${allow_re}" ]] && grep -qiE "${allow_re}" "${tmp_log}"; then
echo "Allowed CLI error markers in ${logfile}"
rm -f "${tmp_log}"
return 0
fi
echo "Detected CLI error markers in ${logfile}"
rm -f "${tmp_log}"
exit 1
fi
rm -f "${tmp_log}"
return ${status}
}
set -a
source docker/.env
set +a
HOST_ADDRESS="http://host.docker.internal:${SVR_HTTP_PORT}"
USER_HOST="$(echo "${HOST_ADDRESS}" | sed -E 's#^https?://([^:/]+).*#\1#')"
USER_PORT="${SVR_HTTP_PORT}"
ADMIN_HOST="${USER_HOST}"
ADMIN_PORT="${ADMIN_SVR_HTTP_PORT}"
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS}/v1/system/ping > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
admin_ready=0
for i in $(seq 1 30); do
if run_cli "${LOG_FILE}" $CLI --type admin --host "$ADMIN_HOST" --port "$ADMIN_PORT" --username "admin@ragflow.io" --password "admin" command "ping"; then
admin_ready=1
break
fi
sleep 1
done
if [[ "${admin_ready}" -ne 1 ]]; then
echo "Admin service did not become ready"
exit 1
fi
run_cli "${LOG_FILE}" $CLI --type admin --host "$ADMIN_HOST" --port "$ADMIN_PORT" --username "admin@ragflow.io" --password "admin" command "show version"
ALLOW_USER_EXISTS_RE='already exists|already exist|duplicate|already.*registered|exist(s)?'
run_cli "${LOG_FILE}" --allow "${ALLOW_USER_EXISTS_RE}" $CLI --type admin --host "$ADMIN_HOST" --port "$ADMIN_PORT" --username "admin@ragflow.io" --password "admin" command "create user '$EMAIL' '$PASS'"
user_ready=0
for i in $(seq 1 30); do
if run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "ping"; then
user_ready=1
break
fi
sleep 1
done
if [[ "${user_ready}" -ne 1 ]]; then
echo "User service did not become ready"
exit 1
fi
run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "show version"
run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "create dataset '$DATASET' with embedding 'BAAI/bge-small-en-v1.5@Builtin' parser 'auto'"
run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "import 'test/benchmark/test_docs/Doc1.pdf,test/benchmark/test_docs/Doc2.pdf' into dataset '$DATASET'"
run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "parse dataset '$DATASET' sync"
run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "Benchmark 16 100 search 'what are these documents about' on datasets '$DATASET'"
- name: Stop ragflow to save coverage Elasticsearch
if: ${{ !cancelled() }}
run: |
# Send SIGINT to ragflow_server.py to trigger coverage save
PID=$(sudo docker exec ${RAGFLOW_CONTAINER} ps aux | grep "ragflow_server.py" | grep -v grep | awk '{print $2}' | head -n 1)
if [ -n "$PID" ]; then
echo "Sending SIGINT to ragflow_server.py (PID: $PID)..."
sudo docker exec ${RAGFLOW_CONTAINER} kill -INT $PID
# Wait for process to exit and coverage file to be written
sleep 10
else else
echo "ragflow_server.py not found!" export HTTP_API_TEST_LEVEL=p2
fi fi
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} stop UV_LINK_MODE=copy uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv run --only-group test --no-default-groups pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_http_api
- name: Generate server coverage report Elasticsearch
if: ${{ !cancelled() }}
run: |
# .coverage file should be in docker/ragflow-logs/.coverage
if [ -f docker/ragflow-logs/.coverage ]; then
echo "Found .coverage file"
cp docker/ragflow-logs/.coverage .coverage
source .venv/bin/activate
# Create .coveragerc to map container paths to host paths
echo "[paths]" > .coveragerc
echo "source =" >> .coveragerc
echo " ." >> .coveragerc
echo " /ragflow" >> .coveragerc
coverage xml -o coverage-es-server.xml
rm .coveragerc
# Clean up for next run
sudo rm docker/ragflow-logs/.coverage
else
echo ".coverage file not found!"
fi
- name: Collect ragflow log Elasticsearch
if: ${{ !cancelled() }}
run: |
if [ -d docker/ragflow-logs ]; then
cp -r docker/ragflow-logs ${ARTIFACTS_DIR}/ragflow-logs-es
echo "ragflow log" && tail -n 200 docker/ragflow-logs/ragflow_server.log || true
else
echo "No docker/ragflow-logs directory found; skipping log collection"
fi
sudo rm -rf docker/ragflow-logs || true
- name: Stop ragflow:nightly - name: Stop ragflow:nightly
if: always() # always run this step even if previous steps failed if: always() # always run this step even if previous steps failed
run: | run: |
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} down -v || true sudo docker compose -f docker/docker-compose.yml down -v
sudo docker ps -a --filter "label=com.docker.compose.project=${GITHUB_RUN_ID}" -q | xargs -r sudo docker rm -f
- name: Start ragflow:nightly - name: Start ragflow:nightly
run: | run: |
sed -i '1i DOC_ENGINE=infinity' docker/.env sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml up -d
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} up -d
- name: Run sdk tests against Infinity - name: Run sdk tests against Infinity
run: | run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY="" export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS}/v1/system/ping > /dev/null; do export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..." echo "Waiting for service to be available..."
sleep 5 sleep 5
done done
source .venv/bin/activate && set -o pipefail; DOC_ENGINE=infinity pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} --junitxml=pytest-infinity-sdk.xml --cov=sdk/python/ragflow_sdk --cov-branch --cov-report=xml:coverage-infinity-sdk.xml test/testcases/test_sdk_api 2>&1 | tee infinity_sdk_test.log if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
export HTTP_API_TEST_LEVEL=p3
else
export HTTP_API_TEST_LEVEL=p2
fi
UV_LINK_MODE=copy uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv pip install sdk/python && DOC_ENGINE=infinity uv run --only-group test --no-default-groups pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_sdk_api
- name: Run web api tests against Infinity - name: Run frontend api tests against Infinity
run: | run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY="" export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS}/v1/system/ping > /dev/null; do export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..." echo "Waiting for service to be available..."
sleep 5 sleep 5
done done
source .venv/bin/activate && set -o pipefail; DOC_ENGINE=infinity pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_web_api/test_api_app 2>&1 | tee infinity_web_api_test.log cd sdk/python && UV_LINK_MODE=copy uv sync --python 3.10 --group test --frozen && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
- name: Run http api tests against Infinity - name: Run http api tests against Infinity
run: | run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY="" export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS}/v1/system/ping > /dev/null; do export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..." echo "Waiting for service to be available..."
sleep 5 sleep 5
done done
source .venv/bin/activate && set -o pipefail; DOC_ENGINE=infinity pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_http_api 2>&1 | tee infinity_http_api_test.log if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
export HTTP_API_TEST_LEVEL=p3
- name: RAGFlow CLI retrieval test Infinity
env:
PYTHONPATH: ${{ github.workspace }}
run: |
set -euo pipefail
source .venv/bin/activate
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
EMAIL="ci-${GITHUB_RUN_ID}@example.com"
PASS="ci-pass-${GITHUB_RUN_ID}"
DATASET="ci_dataset_${GITHUB_RUN_ID}"
CLI="python admin/client/ragflow_cli.py"
LOG_FILE="infinity_cli_test.log"
: > "${LOG_FILE}"
ERROR_RE='Traceback|ModuleNotFoundError|ImportError|Parse error|Bad response|Fail to|code:\\s*[1-9]'
run_cli() {
local logfile="$1"
shift
local allow_re=""
if [[ "${1:-}" == "--allow" ]]; then
allow_re="$2"
shift 2
fi
local cmd_display="$*"
echo "===== $(date -u +\"%Y-%m-%dT%H:%M:%SZ\") CMD: ${cmd_display} =====" | tee -a "${logfile}"
local tmp_log
tmp_log="$(mktemp)"
set +e
timeout 180s "$@" 2>&1 | tee "${tmp_log}"
local status=${PIPESTATUS[0]}
set -e
cat "${tmp_log}" >> "${logfile}"
if grep -qiE "${ERROR_RE}" "${tmp_log}"; then
if [[ -n "${allow_re}" ]] && grep -qiE "${allow_re}" "${tmp_log}"; then
echo "Allowed CLI error markers in ${logfile}"
rm -f "${tmp_log}"
return 0
fi
echo "Detected CLI error markers in ${logfile}"
rm -f "${tmp_log}"
exit 1
fi
rm -f "${tmp_log}"
return ${status}
}
set -a
source docker/.env
set +a
HOST_ADDRESS="http://host.docker.internal:${SVR_HTTP_PORT}"
USER_HOST="$(echo "${HOST_ADDRESS}" | sed -E 's#^https?://([^:/]+).*#\1#')"
USER_PORT="${SVR_HTTP_PORT}"
ADMIN_HOST="${USER_HOST}"
ADMIN_PORT="${ADMIN_SVR_HTTP_PORT}"
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS}/v1/system/ping > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
admin_ready=0
for i in $(seq 1 30); do
if run_cli "${LOG_FILE}" $CLI --type admin --host "$ADMIN_HOST" --port "$ADMIN_PORT" --username "admin@ragflow.io" --password "admin" command "ping"; then
admin_ready=1
break
fi
sleep 1
done
if [[ "${admin_ready}" -ne 1 ]]; then
echo "Admin service did not become ready"
exit 1
fi
run_cli "${LOG_FILE}" $CLI --type admin --host "$ADMIN_HOST" --port "$ADMIN_PORT" --username "admin@ragflow.io" --password "admin" command "show version"
ALLOW_USER_EXISTS_RE='already exists|already exist|duplicate|already.*registered|exist(s)?'
run_cli "${LOG_FILE}" --allow "${ALLOW_USER_EXISTS_RE}" $CLI --type admin --host "$ADMIN_HOST" --port "$ADMIN_PORT" --username "admin@ragflow.io" --password "admin" command "create user '$EMAIL' '$PASS'"
user_ready=0
for i in $(seq 1 30); do
if run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "ping"; then
user_ready=1
break
fi
sleep 1
done
if [[ "${user_ready}" -ne 1 ]]; then
echo "User service did not become ready"
exit 1
fi
run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "show version"
run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "create dataset '$DATASET' with embedding 'BAAI/bge-small-en-v1.5@Builtin' parser 'auto'"
run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "import 'test/benchmark/test_docs/Doc1.pdf,test/benchmark/test_docs/Doc2.pdf' into dataset '$DATASET'"
run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "parse dataset '$DATASET' sync"
run_cli "${LOG_FILE}" $CLI --type user --host "$USER_HOST" --port "$USER_PORT" --username "$EMAIL" --password "$PASS" command "Benchmark 16 100 search 'what are these documents about' on datasets '$DATASET'"
- name: Stop ragflow to save coverage Infinity
if: ${{ !cancelled() }}
run: |
# Send SIGINT to ragflow_server.py to trigger coverage save
PID=$(sudo docker exec ${RAGFLOW_CONTAINER} ps aux | grep "ragflow_server.py" | grep -v grep | awk '{print $2}' | head -n 1)
if [ -n "$PID" ]; then
echo "Sending SIGINT to ragflow_server.py (PID: $PID)..."
sudo docker exec ${RAGFLOW_CONTAINER} kill -INT $PID
# Wait for process to exit and coverage file to be written
sleep 10
else
echo "ragflow_server.py not found!"
fi
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} stop
- name: Generate server coverage report Infinity
if: ${{ !cancelled() }}
run: |
# .coverage file should be in docker/ragflow-logs/.coverage
if [ -f docker/ragflow-logs/.coverage ]; then
echo "Found .coverage file"
cp docker/ragflow-logs/.coverage .coverage
source .venv/bin/activate
# Create .coveragerc to map container paths to host paths
echo "[paths]" > .coveragerc
echo "source =" >> .coveragerc
echo " ." >> .coveragerc
echo " /ragflow" >> .coveragerc
coverage xml -o coverage-infinity-server.xml
rm .coveragerc
else
echo ".coverage file not found!"
fi
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v5
if: ${{ !cancelled() }}
with:
token: ${{ secrets.CODECOV_TOKEN }}
fail_ci_if_error: false
- name: Collect ragflow log
if: ${{ !cancelled() }}
run: |
if [ -d docker/ragflow-logs ]; then
cp -r docker/ragflow-logs ${ARTIFACTS_DIR}/ragflow-logs-infinity
echo "ragflow log" && tail -n 200 docker/ragflow-logs/ragflow_server.log || true
else
echo "No docker/ragflow-logs directory found; skipping log collection"
fi
sudo rm -rf docker/ragflow-logs || true
 - name: Stop ragflow:nightly
 if: always() # always run this step even if previous steps failed
 run: |
-# Sometimes `docker compose down` fails due to a hung container, heavy load, etc. Remove such containers to release resources (for example, listening ports).
-sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} down -v || true
-sudo docker ps -a --filter "label=com.docker.compose.project=${GITHUB_RUN_ID}" -q | xargs -r sudo docker rm -f
-if [[ -n ${RAGFLOW_IMAGE} ]]; then
-sudo docker rmi -f ${RAGFLOW_IMAGE}
-fi
+sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml down -v

.gitignore

@@ -44,7 +44,6 @@ cl100k_base.tiktoken
 chrome*
 huggingface.co/
 nltk_data/
-uv-x86_64*.tar.gz
 # Exclude hash-like temporary files like 9b5ad71b2ce5302211f9c61530b329a4922fc6a4
 *[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]*
@@ -52,13 +51,6 @@ uv-x86_64*.tar.gz
 .venv
 docker/data
-# OceanBase data and conf
-docker/oceanbase/conf
-docker/oceanbase/data
-# SeekDB data and conf
-docker/seekdb
 #--------------------------------------------------#
 # The following was generated with gitignore.nvim: #
@@ -157,7 +149,7 @@ out
 # Nuxt.js build / generate output
 .nuxt
 dist
-ragflow_cli.egg-info
 # Gatsby files
 .cache/
 # Comment in the public line in if your project uses Gatsby and not Next.js
@@ -203,11 +195,3 @@ ragflow_cli.egg-info
 # Default backup dir
 backup
-.hypothesis
-# Added by cargo
-/target

AGENTS.md

@@ -1,110 +0,0 @@
# RAGFlow Project Instructions for GitHub Copilot
This file provides context, build instructions, and coding standards for the RAGFlow project.
It is structured to follow GitHub Copilot's [customization guidelines](https://docs.github.com/en/copilot/concepts/prompting/response-customization).
## 1. Project Overview
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It is a full-stack application with a Python backend and a React/TypeScript frontend.
- **Backend**: Python 3.10+ (Flask/Quart)
- **Frontend**: TypeScript, React, UmiJS
- **Architecture**: Microservices based on Docker.
- `api/`: Backend API server.
- `rag/`: Core RAG logic (indexing, retrieval).
- `deepdoc/`: Document parsing and OCR.
- `web/`: Frontend application.
## 2. Directory Structure
- `api/`: Backend API server (Flask/Quart).
- `apps/`: API Blueprints (Knowledge Base, Chat, etc.).
- `db/`: Database models and services.
- `rag/`: Core RAG logic.
- `llm/`: LLM, Embedding, and Rerank model abstractions.
- `deepdoc/`: Document parsing and OCR modules.
- `agent/`: Agentic reasoning components.
- `web/`: Frontend application (React + UmiJS).
- `docker/`: Docker deployment configurations.
- `sdk/`: Python SDK.
- `test/`: Backend tests.
## 3. Build Instructions
### Backend (Python)
The project uses **uv** for dependency management.
1. **Setup Environment**:
```bash
uv sync --python 3.12 --all-extras
uv run download_deps.py
```
2. **Run Server**:
- **Pre-requisite**: Start dependent services (MySQL, ES/Infinity, Redis, MinIO).
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
- **Launch**:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
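To confirm the backend is up, you can poll the health endpoint that CI uses (a minimal sketch; `9380` is the backend HTTP port referenced elsewhere in this repo's workflows):
```bash
# Wait until the ping endpoint answers (port assumed from docker/.env defaults)
until curl -s --connect-timeout 5 http://127.0.0.1:9380/v1/system/ping > /dev/null; do
  echo "Waiting for service to be available..."
  sleep 5
done
```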
### Frontend (TypeScript/React)
Located in `web/`.
1. **Install Dependencies**:
```bash
cd web
npm install
```
2. **Run Dev Server**:
```bash
npm run dev
```
Runs on port 8000 by default.
### Docker Deployment
To run the full stack using Docker:
```bash
cd docker
docker compose -f docker-compose.yml up -d
```
## 4. Testing Instructions
### Backend Tests
- **Run All Tests**:
```bash
uv run pytest
```
- **Run Specific Test**:
```bash
uv run pytest test/test_api.py
```
### Frontend Tests
- **Run Tests**:
```bash
cd web
npm run test
```
## 5. Coding Standards & Guidelines
- **Python Formatting**: Use `ruff` for linting and formatting.
```bash
ruff check
ruff format
```
- **Frontend Linting**:
```bash
cd web
npm run lint
```
- **Pre-commit**: Ensure pre-commit hooks are installed.
```bash
pre-commit install
pre-commit run --all-files
```

CLAUDE.md

@@ -1,116 +0,0 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It's a full-stack application with:
- Python backend (Flask-based API server)
- React/TypeScript frontend (built with UmiJS)
- Microservices architecture with Docker deployment
- Multiple data stores (MySQL, Elasticsearch/Infinity, Redis, MinIO)
## Architecture
### Backend (`/api/`)
- **Main Server**: `api/ragflow_server.py` - Flask application entry point
- **Apps**: Modular Flask blueprints in `api/apps/` for different functionalities:
- `kb_app.py` - Knowledge base management
- `dialog_app.py` - Chat/conversation handling
- `document_app.py` - Document processing
- `canvas_app.py` - Agent workflow canvas
- `file_app.py` - File upload/management
- **Services**: Business logic in `api/db/services/`
- **Models**: Database models in `api/db/db_models.py`
### Core Processing (`/rag/`)
- **Document Processing**: `deepdoc/` - PDF parsing, OCR, layout analysis
- **LLM Integration**: `rag/llm/` - Model abstractions for chat, embedding, reranking
- **RAG Pipeline**: `rag/flow/` - Chunking, parsing, tokenization
- **Graph RAG**: `rag/graphrag/` - Knowledge graph construction and querying
### Agent System (`/agent/`)
- **Components**: Modular workflow components (LLM, retrieval, categorize, etc.)
- **Templates**: Pre-built agent workflows in `agent/templates/`
- **Tools**: External API integrations (Tavily, Wikipedia, SQL execution, etc.)
### Frontend (`/web/`)
- React/TypeScript with UmiJS framework
- Ant Design + shadcn/ui components
- State management with Zustand
- Tailwind CSS for styling
## Common Development Commands
### Backend Development
```bash
# Install Python dependencies
uv sync --python 3.12 --all-extras
uv run download_deps.py
pre-commit install
# Start dependent services
docker compose -f docker/docker-compose-base.yml up -d
# Run backend (requires services to be running)
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
# Run tests
uv run pytest
# Linting
ruff check
ruff format
```
### Frontend Development
```bash
cd web
npm install
npm run dev # Development server
npm run build # Production build
npm run lint # ESLint
npm run test # Jest tests
```
### Docker Development
```bash
# Full stack with Docker
cd docker
docker compose -f docker-compose.yml up -d
# Check server status
docker logs -f ragflow-server
# Rebuild images
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```
## Key Configuration Files
- `docker/.env` - Environment variables for Docker deployment (see the excerpt after this list)
- `docker/service_conf.yaml.template` - Backend service configuration
- `pyproject.toml` - Python dependencies and project configuration
- `web/package.json` - Frontend dependencies and scripts
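As an illustration, a few `docker/.env` variables referenced across CI and the docs (the variable names appear in this repo; the values shown here are illustrative, not authoritative):
```bash
# docker/.env (illustrative excerpt)
DOC_ENGINE=elasticsearch                 # or "infinity"
RAGFLOW_IMAGE=infiniflow/ragflow:nightly
SVR_HTTP_PORT=9380                       # backend HTTP port polled by CI
```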
## Testing
- **Python**: pytest with markers (p1/p2/p3 priority levels; see the example after this list)
- **Frontend**: Jest with React Testing Library
- **API Tests**: HTTP API and SDK tests in `test/` and `sdk/python/test/`
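For example, to run the backend HTTP API suite at a single priority level, mirroring the CI invocation (`--level` is the repo's custom pytest option):
```bash
uv run pytest -s --tb=short --level=p1 test/testcases/test_http_api
```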
## Database Engines
RAGFlow supports switching between Elasticsearch (default) and Infinity:
- Set `DOC_ENGINE=infinity` in `docker/.env` to use Infinity
- Requires container restart: `docker compose down -v && docker compose up -d`
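A minimal sketch of the switch, assuming a `DOC_ENGINE` line already exists in `docker/.env`:
```bash
cd docker
docker compose down -v                               # -v wipes existing volumes/data
sed -i 's/^DOC_ENGINE=.*/DOC_ENGINE=infinity/' .env  # assumed .env layout
docker compose up -d
```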
## Development Environment Requirements
- Python 3.10-3.12
- Node.js >=18.20.4
- Docker & Docker Compose
- uv package manager
- 16GB+ RAM, 50GB+ disk space

Dockerfile

@@ -1,75 +1,76 @@
 # base stage
-FROM ubuntu:24.04 AS base
+FROM ubuntu:22.04 AS base
 USER root
 SHELL ["/bin/bash", "-c"]
 ARG NEED_MIRROR=0
+ARG LIGHTEN=0
+ENV LIGHTEN=${LIGHTEN}
 WORKDIR /ragflow
 # Copy models downloaded via download_deps.py
 RUN mkdir -p /ragflow/rag/res/deepdoc /root/.ragflow
 RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co,target=/huggingface.co \
-    cp /huggingface.co/InfiniFlow/huqie/huqie.txt.trie /ragflow/rag/res/ && \
     tar --exclude='.*' -cf - \
     /huggingface.co/InfiniFlow/text_concat_xgb_v1.0 \
     /huggingface.co/InfiniFlow/deepdoc \
     | tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
+RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co,target=/huggingface.co \
+    if [ "$LIGHTEN" != "1" ]; then \
+        (tar -cf - \
+        /huggingface.co/BAAI/bge-large-zh-v1.5 \
+        /huggingface.co/maidalun1020/bce-embedding-base_v1 \
+        | tar -xf - --strip-components=2 -C /root/.ragflow) \
+    fi
 # https://github.com/chrismattmann/tika-python
 # This is the only way to run python-tika without internet access. Without this set, the default is to check the tika version and pull latest every time from Apache.
 RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/,target=/deps \
     cp -r /deps/nltk_data /root/ && \
-    cp /deps/tika-server-standard-3.2.3.jar /deps/tika-server-standard-3.2.3.jar.md5 /ragflow/ && \
+    cp /deps/tika-server-standard-3.0.0.jar /deps/tika-server-standard-3.0.0.jar.md5 /ragflow/ && \
     cp /deps/cl100k_base.tiktoken /ragflow/9b5ad71b2ce5302211f9c61530b329a4922fc6a4
-ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard-3.2.3.jar"
+ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard-3.0.0.jar"
 ENV DEBIAN_FRONTEND=noninteractive
 # Setup apt
 # Python package and implicit dependencies:
 # opencv-python: libglib2.0-0 libglx-mesa0 libgl1
-# python-pptx: default-jdk tika-server-standard-3.2.3.jar
+# aspose-slides: pkg-config libicu-dev libgdiplus libssl1.1_1.1.1f-1ubuntu2_amd64.deb
+# python-pptx: default-jdk tika-server-standard-3.0.0.jar
 # selenium: libatk-bridge2.0-0 chrome-linux64-121-0-6167-85
 # Building C extensions: libpython3-dev libgtk-4-1 libnss3 xdg-utils libgbm-dev
 RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \
-    apt update && \
-    apt --no-install-recommends install -y ca-certificates; \
     if [ "$NEED_MIRROR" == "1" ]; then \
-        sed -i 's|http://archive.ubuntu.com/ubuntu|https://mirrors.tuna.tsinghua.edu.cn/ubuntu|g' /etc/apt/sources.list.d/ubuntu.sources; \
-        sed -i 's|http://security.ubuntu.com/ubuntu|https://mirrors.tuna.tsinghua.edu.cn/ubuntu|g' /etc/apt/sources.list.d/ubuntu.sources; \
+        sed -i 's|http://ports.ubuntu.com|http://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list; \
+        sed -i 's|http://archive.ubuntu.com|http://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list; \
     fi; \
     rm -f /etc/apt/apt.conf.d/docker-clean && \
     echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache && \
     chmod 1777 /tmp && \
     apt update && \
+    apt --no-install-recommends install -y ca-certificates && \
+    apt update && \
     apt install -y libglib2.0-0 libglx-mesa0 libgl1 && \
     apt install -y pkg-config libicu-dev libgdiplus && \
     apt install -y default-jdk && \
     apt install -y libatk-bridge2.0-0 && \
     apt install -y libpython3-dev libgtk-4-1 libnss3 xdg-utils libgbm-dev && \
     apt install -y libjemalloc-dev && \
-    apt install -y nginx unzip curl wget git vim less && \
-    apt install -y ghostscript && \
-    apt install -y pandoc && \
-    apt install -y texlive && \
-    apt install -y fonts-freefont-ttf fonts-noto-cjk && \
-    apt install -y postgresql-client
+    apt install -y python3-pip pipx nginx unzip curl wget git vim less && \
+    apt install -y ghostscript
-# Install uv
-RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/,target=/deps \
-    if [ "$NEED_MIRROR" == "1" ]; then \
+RUN if [ "$NEED_MIRROR" == "1" ]; then \
+        pip3 config set global.index-url https://mirrors.aliyun.com/pypi/simple && \
+        pip3 config set global.trusted-host mirrors.aliyun.com; \
         mkdir -p /etc/uv && \
-        echo 'python-install-mirror = "https://registry.npmmirror.com/-/binary/python-build-standalone/"' > /etc/uv/uv.toml && \
-        echo '[[index]]' >> /etc/uv/uv.toml && \
-        echo 'url = "https://pypi.tuna.tsinghua.edu.cn/simple"' >> /etc/uv/uv.toml && \
-        echo 'default = true' >> /etc/uv/uv.toml; \
+        echo "[[index]]" > /etc/uv/uv.toml && \
+        echo 'url = "https://mirrors.aliyun.com/pypi/simple"' >> /etc/uv/uv.toml && \
+        echo "default = true" >> /etc/uv/uv.toml; \
     fi; \
-    arch="$(uname -m)"; \
-    if [ "$arch" = "x86_64" ]; then uv_arch="x86_64"; else uv_arch="aarch64"; fi; \
-    tar xzf "/deps/uv-${uv_arch}-unknown-linux-gnu.tar.gz" \
-    && cp "uv-${uv_arch}-unknown-linux-gnu/"* /usr/local/bin/ \
-    && rm -rf "uv-${uv_arch}-unknown-linux-gnu" \
-    && uv python install 3.12
+    pipx install uv
 ENV PYTHONDONTWRITEBYTECODE=1 DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
 ENV PATH=/root/.local/bin:$PATH
@@ -85,12 +86,12 @@ RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \
 # A modern version of cargo is needed for the latest version of the Rust compiler.
 RUN apt update && apt install -y curl build-essential \
     && if [ "$NEED_MIRROR" == "1" ]; then \
-        # Use TUNA mirrors for rustup/rust dist files \
+        # Use TUNA mirrors for rustup/rust dist files
         export RUSTUP_DIST_SERVER="https://mirrors.tuna.tsinghua.edu.cn/rustup"; \
         export RUSTUP_UPDATE_ROOT="https://mirrors.tuna.tsinghua.edu.cn/rustup/rustup"; \
         echo "Using TUNA mirrors for Rustup."; \
     fi; \
-    # Force curl to use HTTP/1.1 \
+    # Force curl to use HTTP/1.1
     curl --proto '=https' --tlsv1.2 --http1.1 -sSf https://sh.rustup.rs | bash -s -- -y --profile minimal \
     && echo 'export PATH="/root/.cargo/bin:${PATH}"' >> /root/.bashrc
@@ -107,10 +108,10 @@ RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \
     apt update && \
     arch="$(uname -m)"; \
     if [ "$arch" = "arm64" ] || [ "$arch" = "aarch64" ]; then \
-        # ARM64 (macOS/Apple Silicon or Linux aarch64) \
+        # ARM64 (macOS/Apple Silicon or Linux aarch64)
         ACCEPT_EULA=Y apt install -y unixodbc-dev msodbcsql18; \
     else \
-        # x86_64 or others \
+        # x86_64 or others
         ACCEPT_EULA=Y apt install -y unixodbc-dev msodbcsql17; \
     fi || \
     { echo "Failed to install ODBC driver"; exit 1; }
@@ -127,6 +128,8 @@ RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/chromedriver-l
     mv chromedriver /usr/local/bin/ && \
     rm -f /usr/bin/google-chrome
+# https://forum.aspose.com/t/aspose-slides-for-net-no-usable-version-of-libssl-found-with-linux-server/271344/13
+# aspose-slides on linux/arm64 is unavailable
 RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/,target=/deps \
     if [ "$(uname -m)" = "x86_64" ]; then \
         dpkg -i /deps/libssl1.1_1.1.1f-1ubuntu2_amd64.deb; \
@@ -148,24 +151,29 @@ COPY pyproject.toml uv.lock ./
 # uv records index url into uv.lock but doesn't failover among multiple indexes
 RUN --mount=type=cache,id=ragflow_uv,target=/root/.cache/uv,sharing=locked \
     if [ "$NEED_MIRROR" == "1" ]; then \
-        sed -i 's|pypi.org|pypi.tuna.tsinghua.edu.cn|g' uv.lock; \
+        sed -i 's|pypi.org|mirrors.aliyun.com/pypi|g' uv.lock; \
     else \
-        sed -i 's|pypi.tuna.tsinghua.edu.cn|pypi.org|g' uv.lock; \
+        sed -i 's|mirrors.aliyun.com/pypi|pypi.org|g' uv.lock; \
     fi; \
-    uv sync --python 3.12 --frozen && \
-    # Ensure pip is available in the venv for runtime package installation (fixes #12651)
-    .venv/bin/python3 -m ensurepip --upgrade
+    if [ "$LIGHTEN" == "1" ]; then \
+        uv sync --python 3.10 --frozen; \
+    else \
+        uv sync --python 3.10 --frozen --all-extras; \
+    fi
 COPY web web
 COPY docs docs
 RUN --mount=type=cache,id=ragflow_npm,target=/root/.npm,sharing=locked \
+    export NODE_OPTIONS="--max-old-space-size=4096" && \
     cd web && npm install && npm run build
 COPY .git /ragflow/.git
 RUN version_info=$(git describe --tags --match=v* --first-parent --always); \
-    version_info="$version_info"; \
+    if [ "$LIGHTEN" == "1" ]; then \
+        version_info="$version_info slim"; \
+    else \
+        version_info="$version_info full"; \
+    fi; \
     echo "RAGFlow version: $version_info"; \
     echo $version_info > /ragflow/VERSION
@@ -183,16 +191,16 @@ ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
 ENV PYTHONPATH=/ragflow/
 COPY web web
-COPY admin admin
 COPY api api
 COPY conf conf
 COPY deepdoc deepdoc
 COPY rag rag
 COPY agent agent
+COPY graphrag graphrag
+COPY agentic_reasoning agentic_reasoning
 COPY pyproject.toml uv.lock ./
 COPY mcp mcp
-COPY common common
+COPY plugin plugin
-COPY memory memory
 COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template
 COPY docker/entrypoint.sh ./

Dockerfile.deps

@@ -3,7 +3,7 @@
 FROM scratch
 # Copy resources downloaded via download_deps.py
-COPY chromedriver-linux64-121-0-6167-85 chrome-linux64-121-0-6167-85 cl100k_base.tiktoken libssl1.1_1.1.1f-1ubuntu2_amd64.deb libssl1.1_1.1.1f-1ubuntu2_arm64.deb tika-server-standard-3.2.3.jar tika-server-standard-3.2.3.jar.md5 libssl*.deb uv-x86_64-unknown-linux-gnu.tar.gz uv-aarch64-unknown-linux-gnu.tar.gz /
+COPY chromedriver-linux64-121-0-6167-85 chrome-linux64-121-0-6167-85 cl100k_base.tiktoken libssl1.1_1.1.1f-1ubuntu2_amd64.deb libssl1.1_1.1.1f-1ubuntu2_arm64.deb tika-server-standard-3.0.0.jar tika-server-standard-3.0.0.jar.md5 libssl*.deb /
 COPY nltk_data /nltk_data


@@ -1,14 +0,0 @@
FROM ghcr.io/huggingface/text-embeddings-inference:cpu-1.8
# uv tool install huggingface_hub
# hf download --local-dir tei_data/BAAI/bge-small-en-v1.5 BAAI/bge-small-en-v1.5
# hf download --local-dir tei_data/BAAI/bge-m3 BAAI/bge-m3
# hf download --local-dir tei_data/Qwen/Qwen3-Embedding-0.6B Qwen/Qwen3-Embedding-0.6B
COPY tei_data /data
# curl -X POST http://localhost:6380/embed -H "Content-Type: application/json" -d '{"inputs": "Hello, world! This is a test sentence."}'
# curl -X POST http://tei:80/embed -H "Content-Type: application/json" -d '{"inputs": "Hello, world! This is a test sentence."}'
# [[-0.058816575,0.019564206,0.026697718,...]]
# curl -X POST http://localhost:6380/v1/embeddings -H "Content-Type: application/json" -d '{"input": "Hello, world! This is a test sentence."}'
# {"object":"list","data":[{"object":"embedding","embedding":[-0.058816575,0.019564206,...],"index":0}],"model":"BAAI/bge-small-en-v1.5","usage":{"prompt_tokens":12,"total_tokens":12}}

README.md

@@ -1,6 +1,6 @@
 <div align="center">
 <a href="https://demo.ragflow.io/">
-<img src="web/src/assets/logo-with-text.svg" width="520" alt="ragflow logo">
+<img src="web/src/assets/logo-with-text.png" width="520" alt="ragflow logo">
 </a>
 </div>
@@ -22,7 +22,7 @@
 <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
 </a>
 <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
 </a>
 <a href="https://github.com/infiniflow/ragflow/releases/latest">
 <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@@ -37,15 +37,13 @@
 <h4 align="center">
 <a href="https://ragflow.io/docs/dev/">Document</a> |
-<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
+<a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
 <a href="https://twitter.com/infiniflowai">Twitter</a> |
 <a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
 <a href="https://demo.ragflow.io">Demo</a>
 </h4>
-<div align="center" style="margin-top:20px;margin-bottom:20px;">
-<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
-</div>
+#
 <div align="center">
 <a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@@ -61,7 +59,8 @@
 - 🔎 [System Architecture](#-system-architecture)
 - 🎬 [Get Started](#-get-started)
 - 🔧 [Configurations](#-configurations)
-- 🔧 [Build a Docker image](#-build-a-docker-image)
+- 🔧 [Build a docker image without embedding models](#-build-a-docker-image-without-embedding-models)
+- 🔧 [Build a docker image including embedding models](#-build-a-docker-image-including-embedding-models)
 - 🔨 [Launch service from source for development](#-launch-service-from-source-for-development)
 - 📚 [Documentation](#-documentation)
 - 📜 [Roadmap](#-roadmap)
@@ -72,7 +71,7 @@
 ## 💡 What is RAGFlow?
-[RAGFlow](https://ragflow.io/) is a leading open-source Retrieval-Augmented Generation ([RAG](https://ragflow.io/basics/what-is-rag)) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs. It offers a streamlined RAG workflow adaptable to enterprises of any scale. Powered by a converged [context engine](https://ragflow.io/basics/what-is-agent-context-engine) and pre-built agent templates, RAGFlow enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.
+[RAGFlow](https://ragflow.io/) is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs. It offers a streamlined RAG workflow adaptable to enterprises of any scale. Powered by a converged context engine and pre-built agent templates, RAGFlow enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.
 ## 🎮 Demo
@@ -85,16 +84,15 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
 ## 🔥 Latest Updates
-- 2025-12-26 Supports 'Memory' for AI agent.
-- 2025-11-19 Supports Gemini 3 Pro.
-- 2025-11-12 Supports data synchronization from Confluence, S3, Notion, Discord, Google Drive.
-- 2025-10-23 Supports MinerU & Docling as document parsing methods.
-- 2025-10-15 Supports orchestrable ingestion pipeline.
 - 2025-08-08 Supports OpenAI's latest GPT-5 series models.
+- 2025-08-04 Supports new models, including Kimi K2 and Grok 4.
 - 2025-08-01 Supports agentic workflow and MCP.
 - 2025-05-23 Adds a Python/JavaScript code executor component to Agent.
 - 2025-05-05 Supports cross-language query.
 - 2025-03-19 Supports using a multi-modal model to make sense of images within PDF or DOCX files.
+- 2025-02-28 Combined with Internet search (Tavily), supports reasoning like Deep Research for any LLMs.
+- 2024-12-18 Upgrades Document Layout Analysis model in DeepDoc.
+- 2024-08-22 Support text to SQL statements through RAG.
 ## 🎉 Stay Tuned
@@ -137,7 +135,7 @@ releases! 🌟
 ## 🔎 System Architecture
 <div align="center" style="margin-top:20px;margin-bottom:20px;">
-<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
+<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
 </div>
 ## 🎬 Get Started
@@ -176,48 +174,41 @@ releases! 🌟
 > ```bash
 > vm.max_map_count=262144
 > ```
->
 2. Clone the repo:
 ```bash
 $ git clone https://github.com/infiniflow/ragflow.git
 ```
 3. Start up the server using the pre-built Docker images:
 > [!CAUTION]
 > All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
 > If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
-> The command below downloads the `v0.23.1` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.23.1`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server.
+> The command below downloads the `v0.20.5-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.5-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5` for the full edition `v0.20.5`.
 ```bash
 $ cd ragflow/docker
-# git checkout v0.23.1
-# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
-# This step ensures the **entrypoint.sh** file in the code matches the Docker image version.
-# Use CPU for DeepDoc tasks:
+# Use CPU for embedding and DeepDoc tasks:
 $ docker compose -f docker-compose.yml up -d
-# To use GPU to accelerate DeepDoc tasks:
-# sed -i '1i DEVICE=gpu' .env
-# docker compose -f docker-compose.yml up -d
+# To use GPU to accelerate embedding and DeepDoc tasks:
+# docker compose -f docker-compose-gpu.yml up -d
 ```
-> Note: Prior to `v0.22.0`, we provided both images with embedding models and slim images without embedding models. Details as follows:
-| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
-|-------------------|-----------------|-----------------------|----------------|
-| v0.21.1 | &approx;9 | ✔️ | Stable release |
-| v0.21.1-slim | &approx;2 | ❌ | Stable release |
-> Starting with `v0.22.0`, we ship only the slim edition and no longer append the **-slim** suffix to the image tag.
+| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
+|-------------------|-----------------|-----------------------|--------------------------|
+| v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
+| v0.20.5-slim | &approx;2 | ❌ | Stable release |
+| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
+| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
 4. Check the server status after having the server up and running:
 ```bash
-$ docker logs -f docker-ragflow-cpu-1
+$ docker logs -f ragflow-server
 ```
 _The following output confirms a successful launch of the system:_
@@ -233,19 +224,16 @@ releases! 🌟
 * Running on all addresses (0.0.0.0)
 ```
-> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network abnormal`
+> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anormal`
 > error because, at that moment, your RAGFlow may not be fully initialized.
->
 5. In your web browser, enter the IP address of your server and log in to RAGFlow.
 > With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number) as the default
 > HTTP serving port `80` can be omitted when using the default configurations.
->
 6. In [service_conf.yaml.template](./docker/service_conf.yaml.template), select the desired LLM factory in `user_default_llm` and update
 the `API_KEY` field with the corresponding API key.
 > See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.
->
 _The show is on!_
@@ -284,6 +272,7 @@ RAGFlow uses Elasticsearch by default for storing full text and vectors. To swit
 > `-v` will delete the docker container volumes, and the existing data will be cleared.
 2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.
 3. Start the containers:
 ```bash
@@ -293,23 +282,24 @@ RAGFlow uses Elasticsearch by default for storing full text and vectors. To swit
 > [!WARNING]
 > Switching to Infinity on a Linux/arm64 machine is not yet officially supported.
-## 🔧 Build a Docker image
+## 🔧 Build a Docker image without embedding models
 This image is approximately 2 GB in size and relies on external LLM and embedding services.
 ```bash
 git clone https://github.com/infiniflow/ragflow.git
 cd ragflow/
-docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
+docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
 ```
-Or if you are behind a proxy, you can pass proxy arguments:
+## 🔧 Build a Docker image including embedding models
+This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.
 ```bash
-docker build --platform linux/amd64 \
-  --build-arg http_proxy=http://YOUR_PROXY:PORT \
-  --build-arg https_proxy=http://YOUR_PROXY:PORT \
-  -f Dockerfile -t infiniflow/ragflow:nightly .
+git clone https://github.com/infiniflow/ragflow.git
+cd ragflow/
+docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
 ```
 ## 🔨 Launch service from source for development
@@ -319,15 +309,17 @@ docker build --platform linux/amd64 \
 ```bash
 pipx install uv pre-commit
 ```
 2. Clone the source code and install Python dependencies:
 ```bash
 git clone https://github.com/infiniflow/ragflow.git
 cd ragflow/
-uv sync --python 3.12 # install RAGFlow dependent python modules
+uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
 uv run download_deps.py
 pre-commit install
 ```
 3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
 ```bash
@@ -339,23 +331,24 @@ docker build --platform linux/amd64 \
 ```
 127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
 ```
 4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
 ```bash
 export HF_ENDPOINT=https://hf-mirror.com
 ```
 5. If your operating system does not have jemalloc, please install it as follows:
 ```bash
-# Ubuntu
+# ubuntu
 sudo apt-get install libjemalloc-dev
-# CentOS
+# centos
 sudo yum install jemalloc
-# OpenSUSE
-sudo zypper install jemalloc
-# macOS
+# mac
 sudo brew install jemalloc
 ```
 6. Launch backend service:
 ```bash
@@ -363,12 +356,14 @@ docker build --platform linux/amd64 \
 export PYTHONPATH=$(pwd)
 bash docker/launch_backend_service.sh
 ```
 7. Install frontend dependencies:
 ```bash
 cd web
 npm install
 ```
 8. Launch frontend service:
 ```bash
@@ -378,12 +373,14 @@ docker build --platform linux/amd64 \
 _The following output confirms a successful launch of the system:_
 ![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
 9. Stop RAGFlow front-end and back-end service after development is complete:
 ```bash
 pkill -f "ragflow_server.py|task_executor.py"
 ```
 ## 📚 Documentation
 - [Quickstart](https://ragflow.io/docs/dev/)
@@ -396,7 +393,7 @@ docker build --platform linux/amd64 \
 ## 📜 Roadmap
-See the [RAGFlow Roadmap 2026](https://github.com/infiniflow/ragflow/issues/12241)
+See the [RAGFlow Roadmap 2025](https://github.com/infiniflow/ragflow/issues/4214)
 ## 🏄 Community

README_id.md

@@ -1,6 +1,6 @@
 <div align="center">
 <a href="https://demo.ragflow.io/">
-<img src="web/src/assets/logo-with-text.svg" width="520" alt="ragflow logo">
+<img src="web/src/assets/logo-with-text.png" width="520" alt="ragflow logo">
 </a>
 </div>
@@ -22,7 +22,7 @@
 <img alt="Online Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
 </a>
 <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
 </a>
 <a href="https://github.com/infiniflow/ragflow/releases/latest">
 <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@@ -37,19 +37,13 @@
 <h4 align="center">
 <a href="https://ragflow.io/docs/dev/">Documentation</a> |
-<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
+<a href="https://github.com/infiniflow/ragflow/issues/4214">Roadmap</a> |
 <a href="https://twitter.com/infiniflowai">Twitter</a> |
 <a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
 <a href="https://demo.ragflow.io">Demo</a>
 </h4>
-<div align="center" style="margin-top:20px;margin-bottom:20px;">
-<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
-</div>
-<div align="center">
-<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
-</div>
+#
 <details open>
 <summary><b>📕 Table of Contents </b> </summary>
@@ -61,7 +55,8 @@
 - 🔎 [System Architecture](#-arsitektur-sistem)
 - 🎬 [Get Started](#-mulai)
 - 🔧 [Configurations](#-konfigurasi)
-- 🔧 [Build a Docker image](#-membangun-docker-image)
+- 🔧 [Build a Docker image without embedding models](#-membangun-image-docker-tanpa-model-embedding)
+- 🔧 [Build a Docker image with embedding models](#-membangun-image-docker-dengan-model-embedding)
 - 🔨 [Launch the app from source for development](#-meluncurkan-aplikasi-dari-sumber-untuk-pengembangan)
 - 📚 [Documentation](#-dokumentasi)
 - 📜 [Roadmap](#-peta-jalan)
@@ -72,7 +67,7 @@
 ## 💡 What is RAGFlow?
-[RAGFlow](https://ragflow.io/) is a leading open-source [RAG](https://ragflow.io/basics/what-is-rag) (Retrieval-Augmented Generation) engine that integrates cutting-edge RAG technology with Agent capabilities to create a superior context layer for LLMs. It provides an efficient RAG workflow adaptable to enterprises of any scale. Powered by a converged context engine and pre-built Agent templates, RAGFlow enables developers to turn complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.
+[RAGFlow](https://ragflow.io/) is a leading open-source RAG (Retrieval-Augmented Generation) engine that integrates cutting-edge RAG technology with Agent capabilities to create a superior context layer for LLMs. It provides an efficient RAG workflow adaptable to enterprises of any scale. Powered by a converged context engine and pre-built Agent templates, RAGFlow enables developers to turn complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.
 ## 🎮 Demo
@@ -85,16 +80,13 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
 ## 🔥 Latest Updates
-- 2025-12-26 Supports 'Memory' for AI agents.
-- 2025-11-19 Supports Gemini 3 Pro.
-- 2025-11-12 Supports data synchronization from Confluence, S3, Notion, Discord, Google Drive.
-- 2025-10-23 Supports MinerU & Docling as document parsing methods.
-- 2025-10-15 Supports an orchestrated ingestion pipeline.
 - 2025-08-08 Supports OpenAI's latest GPT-5 series models.
+- 2025-08-04 Supports new models, including Kimi K2 and Grok 4.
 - 2025-08-01 Supports agentic workflows and MCP.
 - 2025-05-23 Adds a Python/JS code executor component to Agent.
 - 2025-05-05 Supports cross-language queries.
 - 2025-03-19 Supports using multi-modal models to understand images inside PDF or DOCX files.
+- 2025-02-28 Combined with Internet search (Tavily), supports Deep Research-style reasoning for any LLM.
 - 2024-12-18 Upgrades the Document Layout Analysis model in DeepDoc.
 - 2024-08-22 Supports text-to-SQL statements through RAG.
@@ -137,7 +129,7 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
 ## 🔎 System Architecture
 <div align="center" style="margin-top:20px;margin-bottom:20px;">
-<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
+<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
 </div>
 ## 🎬 Get Started
@@ -176,48 +168,41 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
 > ```bash
 > vm.max_map_count=262144
 > ```
->
 2. Clone the repo:
 ```bash
 $ git clone https://github.com/infiniflow/ragflow.git
 ```
 3. Start up the server using the pre-built Docker images:
 > [!CAUTION]
 > All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
 > If you are on an ARM64 platform, [please use this guide to build a Docker image compatible with your system](https://ragflow.io/docs/dev/build_docker_image).
-> The command below downloads the v0.23.1 edition of the RAGFlow Docker image. Refer to the following table for descriptions of the different RAGFlow editions. To download a RAGFlow edition other than v0.23.1, update the RAGFLOW_IMAGE variable in docker/.env before using docker compose to start the server.
+> The command below downloads the v0.20.5-slim edition of the RAGFlow Docker image. Refer to the following table for descriptions of the different RAGFlow editions. To download a RAGFlow edition other than v0.20.5-slim, update the RAGFLOW_IMAGE variable in docker/.env before using docker compose to start the server. For example, set RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5 for the full edition v0.20.5.
 ```bash
 $ cd ragflow/docker
-# git checkout v0.23.1
-# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
-# This step ensures the **entrypoint.sh** file in the code matches the Docker image version.
-# Use CPU for DeepDoc tasks:
-$ docker compose -f docker-compose.yml up -d
-# To use GPU to accelerate DeepDoc tasks:
-# sed -i '1i DEVICE=gpu' .env
-# docker compose -f docker-compose.yml up -d
+# Use CPU for embedding and DeepDoc tasks:
+$ docker compose -f docker-compose.yml up -d
+# To use GPU to accelerate embedding and DeepDoc tasks:
+# docker compose -f docker-compose-gpu.yml up -d
 ```
-> Note: Prior to `v0.22.0`, we provided both images with embedding models and slim images without embedding models. Details as follows:
-| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
-|-------------------|-----------------|-----------------------|----------------|
-| v0.21.1 | &approx;9 | ✔️ | Stable release |
-| v0.21.1-slim | &approx;2 | ❌ | Stable release |
-> Starting from `v0.22.0`, we ship only the slim edition and no longer append the **-slim** suffix to the image tag.
+| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
+| ----------------- | --------------- | --------------------- | ------------------------ |
+| v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
+| v0.20.5-slim | &approx;2 | ❌ | Stable release |
+| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
+| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
 1. Check the server status once the server is up and running:
 ```bash
-$ docker logs -f docker-ragflow-cpu-1
+$ docker logs -f ragflow-server
 ```
 _The following output confirms a successful launch of the system:_
@@ -233,19 +218,16 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
 * Running on all addresses (0.0.0.0)
 ```
-> If you skip this step and log in to RAGFlow directly, your browser may show a `network abnormal` error
+> If you skip this step and log in to RAGFlow directly, your browser may show a `network anormal` error
 > because RAGFlow may not be fully ready yet.
->
 2. Open your web browser, enter your server's IP address, and log in to RAGFlow.
 > With the default settings, you only need to enter `http://IP_DEVICE_ANDA` (**without** the port number) because
 > the default HTTP port `80` can be omitted under the default configuration.
->
 3. In [service_conf.yaml.template](./docker/service_conf.yaml.template), select the desired LLM factory in `user_default_llm` and update
 the `API_KEY` field with the corresponding API key.
 > See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.
->
 _The system is ready to use!_
@ -267,23 +249,24 @@ Pembaruan konfigurasi ini memerlukan reboot semua kontainer agar efektif:
> $ docker compose -f docker-compose.yml up -d > $ docker compose -f docker-compose.yml up -d
> ``` > ```
## 🔧 Membangun Docker Image ## 🔧 Membangun Docker Image tanpa Model Embedding
Image ini berukuran sekitar 2 GB dan bergantung pada aplikasi LLM eksternal dan embedding. Image ini berukuran sekitar 2 GB dan bergantung pada aplikasi LLM eksternal dan embedding.
```bash ```bash
git clone https://github.com/infiniflow/ragflow.git git clone https://github.com/infiniflow/ragflow.git
cd ragflow/ cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly . docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
``` ```
Jika berada di belakang proxy, Anda dapat melewatkan argumen proxy: ## 🔧 Membangun Docker Image Termasuk Model Embedding
Image ini berukuran sekitar 9 GB. Karena sudah termasuk model embedding, ia hanya bergantung pada aplikasi LLM eksternal.
```bash ```bash
docker build --platform linux/amd64 \ git clone https://github.com/infiniflow/ragflow.git
--build-arg http_proxy=http://YOUR_PROXY:PORT \ cd ragflow/
--build-arg https_proxy=http://YOUR_PROXY:PORT \ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
-f Dockerfile -t infiniflow/ragflow:nightly .
``` ```
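Once the build completes, the result can be sanity-checked with standard Docker commands; a quick sketch (the tag name follows the build above, and the smoke test assumes the image ships `bash`):

```bash
# Confirm the image exists and inspect its size:
docker images infiniflow/ragflow:nightly

# Optional smoke test — start a throwaway container and run a trivial command:
docker run --rm infiniflow/ragflow:nightly bash -c "echo image OK"
```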
## 🔨 Launch the Service from Source for Development

@@ -293,15 +276,17 @@ docker build --platform linux/amd64 \

   ```bash
   pipx install uv pre-commit
   ```

2. Clone the source code and install the Python dependencies:

   ```bash
   git clone https://github.com/infiniflow/ragflow.git
   cd ragflow/
   uv sync --python 3.12 # install RAGFlow dependent python modules
   uv run download_deps.py
   pre-commit install
   ```

3. Launch the required services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:

@@ -313,11 +298,13 @@ docker build --platform linux/amd64 \

   ```
   127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
   ```

4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:

   ```bash
   export HF_ENDPOINT=https://hf-mirror.com
   ```
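   The `export` only lasts for the current shell session; to make it stick across sessions, one option — assuming a bash shell — is appending it to your profile:

   ```bash
   # Persist the mirror endpoint for future shells (bash assumed):
   echo 'export HF_ENDPOINT=https://hf-mirror.com' >> ~/.bashrc
   source ~/.bashrc
   ```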
5. If your operating system does not have jemalloc, install it as follows:

@@ -328,6 +315,7 @@ docker build --platform linux/amd64 \

   ```bash
   # mac
   sudo brew install jemalloc
   ```

6. Launch the backend service:

@@ -335,12 +323,14 @@ docker build --platform linux/amd64 \

   ```bash
   export PYTHONPATH=$(pwd)
   bash docker/launch_backend_service.sh
   ```

7. Install the frontend dependencies:

   ```bash
   cd web
   npm install
   ```

8. Launch the frontend service:

@@ -350,12 +340,15 @@ docker build --platform linux/amd64 \

   _The following output confirms a successful launch of the system:_

   ![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)

9. Stop the RAGFlow front-end and back-end services once development is complete:

   ```bash
   pkill -f "ragflow_server.py|task_executor.py"
   ```
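   To verify that both processes actually exited, a quick check with standard tooling:

   ```bash
   # pgrep lists any surviving matching processes; the echo runs only when none are left:
   pgrep -af "ragflow_server.py|task_executor.py" || echo "all RAGFlow processes stopped"
   ```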
## 📚 Documentation

- [Quickstart](https://ragflow.io/docs/dev/)

@@ -368,7 +361,7 @@ docker build --platform linux/amd64 \

## 📜 Roadmap

See the [RAGFlow Roadmap 2026](https://github.com/infiniflow/ragflow/issues/12241)

## 🏄 Community
View File
@@ -1,6 +1,6 @@

<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="350" alt="ragflow logo">
</a>
</div>

@@ -22,7 +22,7 @@

<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">

@@ -37,23 +37,17 @@

<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
</h4>

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>

<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>

## 💡 What is RAGFlow?

[RAGFlow](https://ragflow.io/) is a leading open-source [RAG](https://ragflow.io/basics/what-is-rag) (Retrieval-Augmented Generation) engine that fuses cutting-edge RAG technology with Agent capabilities to build a superior context layer for large language models (LLMs). It offers a streamlined RAG workflow adaptable to enterprises of any scale and, powered by a converged [context engine](https://ragflow.io/basics/what-is-agent-context-engine) and prebuilt Agent templates, enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.

## 🎮 Demo

@@ -66,16 +60,13 @@

## 🔥 Latest Updates

- 2025-12-26: Supports the "Memory" feature for AI agents.
- 2025-11-19: Supports Gemini 3 Pro.
- 2025-11-12: Supports data synchronization from Confluence, S3, Notion, Discord, and Google Drive.
- 2025-10-23: Supports MinerU and Docling as document-parsing methods.
- 2025-10-15: Supports orchestrated data pipelines.
- 2025-08-08: Supports OpenAI's latest GPT-5 series models.
- 2025-08-01: Supports agentic workflows and MCP.
- 2025-05-23: Adds a Python/JS code-executor component to the Agent.
- 2025-05-05: Supports cross-language queries.
- 2025-03-19: Supports using a multimodal model to understand images inside PDF or DOCX files.
- 2024-12-18: Upgrades the document layout analysis model in DeepDoc.
- 2024-08-22: Supports text-to-SQL via RAG.

@@ -118,7 +109,7 @@

## 🔎 System Architecture

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>

## 🎬 Get Started

@@ -156,48 +147,41 @@

   > ```bash
   > vm.max_map_count=262144
   > ```

2. Clone the repository:

   ```bash
   $ git clone https://github.com/infiniflow/ragflow.git
   ```

3. Start the server using the pre-built Docker images:

   > [!CAUTION]
   > All officially provided Docker images are built for the x86 architecture; no ARM64 Docker images are provided.
   > If your operating system runs on ARM64, refer to [this document](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image yourself.

   > The command below downloads the `v0.23.1` edition of the RAGFlow Docker image. See the table below for descriptions of the different RAGFlow editions. To download an edition other than `v0.23.1`, update the `RAGFLOW_IMAGE` variable accordingly in the docker/.env file, then start the server with docker compose.

   ```bash
   $ cd ragflow/docker
   # git checkout v0.23.1
   # Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
   # This step ensures that the entrypoint.sh file in the code matches the Docker image version.
   # Use CPU for DeepDoc tasks:
   $ docker compose -f docker-compose.yml up -d
   # To use GPU to accelerate DeepDoc tasks:
   # sed -i '1i DEVICE=gpu' .env
   # docker compose -f docker-compose.yml up -d
   ```
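   The commented `sed` line above simply prepends `DEVICE=gpu` to **.env**; a slightly more defensive variant of the same idea (guarding against inserting the line twice, GNU sed assumed):

   ```bash
   # Enable GPU acceleration for DeepDoc once, then start the stack:
   grep -q '^DEVICE=gpu' .env || sed -i '1i DEVICE=gpu' .env
   docker compose -f docker-compose.yml up -d
   ```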
   > Note: Before `v0.22.0`, we provided both images that include embedding models and slim images without them. Details:

   | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable?        |
   |-------------------|-----------------|-----------------------|----------------|
   | v0.21.1           | &approx;9       | ✔️                    | Stable release |
   | v0.21.1-slim      | &approx;2       | ❌                    | Stable release |

   > Starting from `v0.22.0`, we provide only the slim edition and no longer append the **-slim** suffix to image tags.

1. Check the server status after the server has started:

   ```bash
   $ docker logs -f docker-ragflow-cpu-1
   ```

   _The following output confirms a successful launch of the system:_

@@ -213,15 +197,12 @@

   ```
   * Running on all addresses (0.0.0.0)
   ```

   > If you skip this confirmation step and log in to RAGFlow directly, your browser may show a network-abnormal error because RAGFlow may not be fully initialized at that point.

2. In your web browser, enter your server's IP address as prompted and log in to RAGFlow.

   > When using the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (omitting the port number), since the default HTTP serving port `80` can be omitted.

3. In [service_conf.yaml.template](./docker/service_conf.yaml.template), select the desired LLM factory under `user_default_llm` and update the `API_KEY` field with the corresponding API key.

   > See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for details.

_Initial setup complete — on with the show!_

@@ -250,40 +231,37 @@

RAGFlow uses Elasticsearch by default to store full text and vectors. To switch to [Infinity](https://github.com/infiniflow/infinity/), follow these steps:

1. Stop all running containers:

   ```bash
   $ docker compose -f docker/docker-compose.yml down -v
   ```

   Note: `-v` deletes the Docker container volumes, clearing existing data.

2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.

3. Start the containers:

   ```bash
   $ docker compose -f docker-compose.yml up -d
   ```

> [!WARNING]
> Switching to Infinity on a Linux/arm64 machine is not officially supported.
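Taken together, steps 1–3 boil down to the following sequence; a sketch assuming it is run from the repository root and that a `DOC_ENGINE` line already exists in **docker/.env**:

```bash
# 1. Stop the containers and wipe their volumes (this clears existing data):
docker compose -f docker/docker-compose.yml down -v
# 2. Switch the document engine to Infinity:
sed -i 's/^DOC_ENGINE=.*/DOC_ENGINE=infinity/' docker/.env
# 3. Relaunch the stack:
docker compose -f docker/docker-compose.yml up -d
```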
## 🔧 Build a Docker Image from Source

This Docker image is about 1 GB in size and depends on external LLM and embedding services.

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```

If you are behind a proxy, you can pass proxy arguments:

```bash
docker build --platform linux/amd64 \
  --build-arg http_proxy=http://YOUR_PROXY:PORT \
  --build-arg https_proxy=http://YOUR_PROXY:PORT \
  -f Dockerfile -t infiniflow/ragflow:nightly .
```

## 🔨 Launch the Service from Source

@@ -293,15 +271,17 @@ docker build --platform linux/amd64 \

   ```bash
   pipx install uv pre-commit
   ```

2. Clone the source code and install the Python dependencies:

   ```bash
   git clone https://github.com/infiniflow/ragflow.git
   cd ragflow/
   uv sync --python 3.12 # install RAGFlow dependent python modules
   uv run download_deps.py
   pre-commit install
   ```

3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:

@@ -313,11 +293,13 @@ docker build --platform linux/amd64 \

   ```
   127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
   ```

4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:

   ```bash
   export HF_ENDPOINT=https://hf-mirror.com
   ```

5. If your operating system does not have jemalloc, install it as follows:

@@ -328,6 +310,7 @@ docker build --platform linux/amd64 \

   ```bash
   # mac
   sudo brew install jemalloc
   ```

6. Launch the backend service:

@@ -335,12 +318,14 @@ docker build --platform linux/amd64 \

   ```bash
   export PYTHONPATH=$(pwd)
   bash docker/launch_backend_service.sh
   ```

7. Install the frontend dependencies:

   ```bash
   cd web
   npm install
   ```

8. Launch the frontend service:

@@ -350,12 +335,14 @@ docker build --platform linux/amd64 \

   _The following output confirms a successful launch of the system:_

   ![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)

9. Stop the RAGFlow front-end and back-end services once development is complete:

   ```bash
   pkill -f "ragflow_server.py|task_executor.py"
   ```

## 📚 Documentation

- [Quickstart](https://ragflow.io/docs/dev/)

@@ -368,7 +355,7 @@ docker build --platform linux/amd64 \

## 📜 Roadmap

See the [RAGFlow Roadmap 2026](https://github.com/infiniflow/ragflow/issues/12241).

## 🏄 Community
View File
@@ -1,6 +1,6 @@

<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="520" alt="ragflow logo">
</a>
</div>

@@ -22,7 +22,7 @@

<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">

@@ -37,24 +37,17 @@

<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
</h4>

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>

<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>

## 💡 What is RAGFlow?

[RAGFlow](https://ragflow.io/) is a leading open-source [RAG](https://ragflow.io/basics/what-is-rag) (Retrieval-Augmented Generation) engine that fuses cutting-edge RAG technology with Agent capabilities to create a superior context layer for large language models (LLMs). It offers an efficient RAG workflow applicable to enterprises of any scale and, through its converged [context engine](https://ragflow.io/basics/what-is-agent-context-engine) and prebuilt Agent templates, enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.

## 🎮 Demo

@@ -67,16 +60,13 @@

## 🔥 Latest Updates

- 2025-12-26: Supports the "Memory" feature for AI agents.
- 2025-11-19: Supports Gemini 3 Pro.
- 2025-11-12: Supports data synchronization from Confluence, S3, Notion, Discord, and Google Drive.
- 2025-10-23: Supports MinerU and Docling as document-parsing methods.
- 2025-10-15: Supports orchestrated data pipelines.
- 2025-08-08: Supports OpenAI's latest GPT-5 series models.
- 2025-08-01: Supports agentic workflows and MCP.
- 2025-05-23: Adds a Python/JS code-executor component to the Agent.
- 2025-05-05: Supports cross-language queries.
- 2025-03-19: Supports using a multimodal model to understand images inside PDF or DOCX files.
- 2024-12-18: Upgrades the document layout analysis model in DeepDoc.
- 2024-08-22: Supports text-to-SQL via RAG.

@@ -119,7 +109,7 @@

## 🔎 System Architecture

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>

## 🎬 Get Started

@@ -170,36 +160,28 @@

> [!CAUTION]
> All Docker images are built for x86 platforms. We do not currently offer Docker images for ARM64.
> If you are on an ARM64 platform, please use [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.

> The command below downloads the `v0.23.1` edition of the RAGFlow Docker image. See the table below for descriptions of the different RAGFlow editions. To download a RAGFlow edition other than `v0.23.1`, update the `RAGFLOW_IMAGE` variable accordingly in the docker/.env file, then start the server using docker compose.

```bash
$ cd ragflow/docker
# git checkout v0.23.1
# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
# This step ensures that the entrypoint.sh file in the code matches the Docker image version.
# Use CPU for DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```

> Note: Before `v0.22.0`, we provided both images with embedding models and slim images without embedding models. Details:

| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable?        |
|-------------------|-----------------|-----------------------|----------------|
| v0.21.1           | &approx;9       | ✔️                    | Stable release |
| v0.21.1-slim      | &approx;2       | ❌                    | Stable release |

> Starting from `v0.22.0`, we distribute only the slim edition and no longer append the **-slim** suffix to image tags.

1. Check the server status after the server has started:

   ```bash
   $ docker logs -f docker-ragflow-cpu-1
   ```

   _The following output confirms a successful launch of the system:_

@@ -214,7 +196,7 @@

   ```
   * Running on all addresses (0.0.0.0)
   ```

   > If you skip this confirmation step and log in to RAGFlow right away, your browser may show a `network abnormal` error because RAGFlow may not be fully initialized.
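   Besides tailing the logs, overall container health can be checked with Compose itself; a sketch, assuming the stack was started from `ragflow/docker`:

   ```bash
   # List the stack's containers and their current states:
   docker compose -f docker-compose.yml ps
   ```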
2. Enter your server's IP address in your web browser and log in to RAGFlow.

   > With the default configuration, you only need to enter `http://IP_OF_YOUR_MACHINE` (excluding the port number), since the default HTTP serving port `80` can be omitted under the default setup.

@@ -261,23 +243,24 @@ RAGFlow uses Elasticsearch by default to store full text and vectors.

> [!WARNING]
> Switching to Infinity on a Linux/arm64 system is not officially supported.

## 🔧 Build a Docker Image from Source

This Docker image is about 1 GB in size and depends on external LLM and embedding services.

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```

If you are in a proxy environment, you can pass proxy arguments:

```bash
docker build --platform linux/amd64 \
  --build-arg http_proxy=http://YOUR_PROXY:PORT \
  --build-arg https_proxy=http://YOUR_PROXY:PORT \
  -f Dockerfile -t infiniflow/ragflow:nightly .
```

## 🔨 Launch the Service from Source

@@ -293,7 +276,7 @@

   ```bash
   git clone https://github.com/infiniflow/ragflow.git
   cd ragflow/
   uv sync --python 3.12 # install RAGFlow dependent python modules
   uv run download_deps.py
   pre-commit install
   ```
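   `uv sync` materializes the dependencies into a project-local virtual environment; to run tools from it in the current shell, one option (assuming the default `.venv` location) is:

   ```bash
   # Activate the uv-managed virtual environment (default path assumed):
   source .venv/bin/activate
   python --version  # should report the Python 3.12 interpreter selected above
   ```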
@@ -372,7 +355,7 @@ docker build --platform linux/amd64 \

## 📜 Roadmap

See the [RAGFlow Roadmap 2026](https://github.com/infiniflow/ragflow/issues/12241).

## 🏄 Community
View File
@@ -1,6 +1,6 @@

<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="520" alt="ragflow logo">
</a>
</div>

@@ -22,7 +22,7 @@

<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Latest Release">

@@ -37,19 +37,13 @@

<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Documentation</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
</h4>

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>

<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>

<details open>
<summary><b>📕 Table of Contents</b></summary>

@@ -73,7 +67,7 @@

## 💡 What is RAGFlow?

[RAGFlow](https://ragflow.io/) is a leading open-source [RAG](https://ragflow.io/basics/what-is-rag) (Retrieval-Augmented Generation) engine that fuses cutting-edge RAG technology with Agent capabilities to create a superior context layer for LLMs. It offers a streamlined RAG workflow adaptable to enterprises of any scale. Powered by a converged [context engine](https://ragflow.io/basics/what-is-agent-context-engine) and prebuilt Agent templates, RAGFlow lets developers turn complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.

## 🎮 Demo

@@ -86,16 +80,13 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).

## 🔥 Latest Updates

- 26-12-2025: Supports the "Memory" feature for AI agents.
- 19-11-2025: Supports Gemini 3 Pro.
- 12-11-2025: Supports data synchronization from Confluence, S3, Notion, Discord, and Google Drive.
- 23-10-2025: Supports MinerU and Docling as document-parsing methods.
- 15-10-2025: Supports orchestrated data pipelines.
- 08-08-2025: Supports OpenAI's latest GPT-5 series.
- 01-08-2025: Supports agentic workflows and MCP.
- 23-05-2025: Adds a Python/JS code-executor component to the Agent.
- 05-05-2025: Supports cross-language queries.
- 19-03-2025: Supports using a multimodal model to understand images inside PDF or DOCX files.
- 18-12-2024: Upgrades the Document Layout Analysis model in DeepDoc.
- 22-08-2024: Supports text-to-SQL conversion via RAG.

@@ -138,7 +129,7 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).

## 🔎 System Architecture

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>

## 🎬 Get Started

@@ -156,92 +147,84 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).

### 🚀 Start the server

1. Ensure `vm.max_map_count` >= 262144:

   > To check the value of `vm.max_map_count`:
   >
   > ```bash
   > $ sysctl vm.max_map_count
   > ```
   >
   > If necessary, reset `vm.max_map_count` to a value of at least 262144:
   >
   > ```bash
   > # In this case, set it to 262144:
   > $ sudo sysctl -w vm.max_map_count=262144
   > ```
   >
   > This change is reset after a system reboot. To make it permanent, add or update the `vm.max_map_count` value in **/etc/sysctl.conf**:
   >
   > ```bash
   > vm.max_map_count=262144
   > ```
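   To apply the **/etc/sysctl.conf** change without a reboot, the kernel settings can be reloaded in place; a sketch assuming a typical Linux host:

   ```bash
   # Reload all sysctl configuration files and apply the persisted value:
   sudo sysctl --system
   # Verify the setting took effect:
   sysctl vm.max_map_count
   ```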
2. Clone the repository:

   ```bash
   $ git clone https://github.com/infiniflow/ragflow.git
   ```

3. Start the server using the pre-built Docker images:

   > [!CAUTION]
   > All Docker images are built for x86 platforms. We do not currently offer Docker images for ARM64.
   > If you are on an ARM64 platform, please use [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.

   > The command below downloads the `v0.23.1` edition of the RAGFlow Docker image. See the table below for descriptions of the different RAGFlow editions. To download a RAGFlow edition other than `v0.23.1`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server.

   ```bash
   $ cd ragflow/docker
   # git checkout v0.23.1
   # Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
   # This step ensures that the entrypoint.sh file in the code matches the Docker image version.
   # Use CPU for DeepDoc tasks:
   $ docker compose -f docker-compose.yml up -d
   # To use GPU to accelerate DeepDoc tasks:
   # sed -i '1i DEVICE=gpu' .env
   # docker compose -f docker-compose.yml up -d
   ```

   > Note: Before `v0.22.0`, we provided both images with embedding models and slim images without embedding models. Details:

   | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable?        |
   |-------------------|-----------------|-----------------------|----------------|
   | v0.21.1           | &approx;9       | ✔️                    | Stable release |
   | v0.21.1-slim      | &approx;2       | ❌                    | Stable release |

   > Starting from `v0.22.0`, we distribute only the slim edition and no longer append the **-slim** suffix to image tags.

4. Check the server status after having started it:

   ```bash
   $ docker logs -f docker-ragflow-cpu-1
   ```

   _The following output confirms a successful launch of the system:_

   ```bash
        ____   ___    ______ ______ __
       / __ \ /   |  / ____// ____// /____  _      __
      / /_/ // /| | / / __ / /_   / // __ \| | /| / /
     / _, _// ___ |/ /_/ // __/  / // /_/ /| |/ |/ /
    /_/ |_|/_/  |_|\____//_/    /_/ \____/ |__/|__/

    * Running on all addresses (0.0.0.0)
   ```

   > If you skip this confirmation step and go straight to RAGFlow, your browser may show a `network abnormal` error because, at that point, RAGFlow may not be fully initialized.

5. In your browser, enter your server's IP address and log in to RAGFlow.

   > With the default settings, you only need to type `http://IP_OF_YOUR_MACHINE` (**without** the port number), since the default HTTP port `80` can be omitted when using the default configuration.

6. In [service_conf.yaml.template](./docker/service_conf.yaml.template), select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.

   > See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.

_The show is on!_
@@ -272,9 +255,9 @@ RAGFlow uses Elasticsearch by default to store full text and vectors. To switch to [Infinity](https://github.com/infiniflow/infinity/):

   ```bash
   $ docker compose -f docker/docker-compose.yml down -v
   ```

   Note: `-v` deletes the container volumes, erasing any existing data.

2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.

3. Start the containers:

@@ -282,25 +265,26 @@

> [!WARNING]
> Switching to Infinity on a Linux/arm64 machine is not yet officially supported.

## 🔧 Build a Docker Image

This image is about 2 GB in size and depends on external LLM and embedding services.

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```

If you are behind a proxy, you can pass proxy arguments:

```bash
docker build --platform linux/amd64 \
  --build-arg http_proxy=http://YOUR_PROXY:PORT \
  --build-arg https_proxy=http://YOUR_PROXY:PORT \
  -f Dockerfile -t infiniflow/ragflow:nightly .
```

## 🔨 Launch the service from source for development

@@ -310,15 +294,17 @@ docker build --platform linux/amd64 \

   ```bash
   pipx install uv pre-commit
   ```

2. Clone the source code and install the Python dependencies:

   ```bash
   git clone https://github.com/infiniflow/ragflow.git
   cd ragflow/
   uv sync --python 3.12 # install RAGFlow dependent python modules
   uv run download_deps.py
   pre-commit install
   ```

3. Start the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:

@@ -330,21 +316,24 @@ docker build --platform linux/amd64 \

   ```
   127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
   ```

4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:

   ```bash
   export HF_ENDPOINT=https://hf-mirror.com
   ```

5. If your operating system does not have jemalloc, install it as follows:

   ```bash
   # ubuntu
   sudo apt-get install libjemalloc-dev
   # centos
   sudo yum install jemalloc
   # mac
   sudo brew install jemalloc
   ```
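   To confirm the library is visible afterwards, a quick check (Linux sketch; on macOS, `brew list jemalloc` serves the same purpose):

   ```bash
   # Should list libjemalloc entries if the installation succeeded:
   ldconfig -p | grep -i jemalloc
   ```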
6. Launch the back-end service:

@@ -352,12 +341,14 @@ docker build --platform linux/amd64 \

   ```bash
   export PYTHONPATH=$(pwd)
   bash docker/launch_backend_service.sh
   ```

7. Install the front-end dependencies:

   ```bash
   cd web
   npm install
   ```

8. Launch the front-end service:

@@ -367,11 +358,13 @@ docker build --platform linux/amd64 \

   _The following output confirms a successful launch of the system:_

   ![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)

9. Stop the RAGFlow front-end and back-end services once development is complete:

   ```bash
   pkill -f "ragflow_server.py|task_executor.py"
   ```

## 📚 Documentation

@@ -385,7 +378,7 @@ docker build --platform linux/amd64 \

## 📜 Roadmap

See the [RAGFlow Roadmap 2026](https://github.com/infiniflow/ragflow/issues/12241)

## 🏄 Community
View File
@@ -1,6 +1,6 @@

<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="350" alt="ragflow logo">
</a>
</div>

@@ -22,7 +22,7 @@

<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">

@@ -37,15 +37,13 @@

<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
</h4>

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>

<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>

@@ -72,7 +70,7 @@

## 💡 What is RAGFlow?

[RAGFlow](https://ragflow.io/) is a leading open-source [RAG](https://ragflow.io/basics/what-is-rag) (Retrieval-Augmented Generation) engine that fuses cutting-edge RAG technology with Agent capabilities to provide a superior context layer for large language models. It offers an end-to-end RAG workflow adaptable to enterprises of any scale and, with its converged [context engine](https://ragflow.io/basics/what-is-agent-context-engine) and prebuilt Agent templates, helps developers turn complex data into trustworthy, production-grade AI systems with extreme efficiency and precision.

## 🎮 Demo

@@ -85,16 +83,13 @@

## 🔥 Latest Updates

- 2025-12-26: Supports the "Memory" feature for AI agents.
- 2025-11-19: Supports Gemini 3 Pro.
- 2025-11-12: Supports data synchronization from Confluence, S3, Notion, Discord, and Google Drive.
- 2025-10-23: Supports MinerU and Docling as document-parsing methods.
- 2025-10-15: Supports orchestrable data pipelines.
- 2025-08-08: Supports OpenAI's latest GPT-5 series models.
- 2025-08-01: Supports agentic workflows and MCP.
- 2025-05-23: Adds a Python/JS code-executor component to the Agent.
- 2025-05-05: Supports cross-language queries.
- 2025-03-19: Supports using multimodal models to parse and describe images in PDF and DOCX files.
- 2024-12-18: Upgrades the document layout analysis model in DeepDoc.
- 2024-08-22: Supports converting natural language to SQL statements via RAG.

@@ -125,7 +120,7 @@

### 🍔 **Compatible with heterogeneous data sources**

- Supports a rich variety of file types, including Word documents, PPT, Excel spreadsheets, txt files, images, PDFs, photocopies, printed copies, structured data, web pages, and more.

### 🛀 **Automated, worry-free RAG workflow end to end**

@@ -137,7 +132,7 @@

## 🔎 System Architecture

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>

## 🎬 Get Started

@@ -175,54 +170,47 @@

   > ```bash
   > vm.max_map_count=262144
   > ```

2. Clone the repository:

   ```bash
   $ git clone https://github.com/infiniflow/ragflow.git
   ```

3. Go to the **docker** folder and start the server using the pre-built Docker images:

   > [!CAUTION]
   > All Docker images are built for x86 platforms. We do not currently offer Docker images for ARM64.
   > If you are on an ARM64 platform, please use [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image suited to your system.

   > Running the commands below automatically downloads the RAGFlow Docker image `v0.23.1`. See the table below for descriptions of the different Docker distributions. To download an image other than `v0.23.1`, update the `RAGFLOW_IMAGE` variable in **docker/.env** before starting the service with `docker compose`.

   ```bash
   $ cd ragflow/docker
   # git checkout v0.23.1
   # Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
   # This step ensures that the entrypoint.sh file in the code matches the Docker image version.
   # Use CPU for DeepDoc tasks:
   $ docker compose -f docker-compose.yml up -d
   # To use GPU to accelerate DeepDoc tasks:
   # sed -i '1i DEVICE=gpu' .env
   # docker compose -f docker-compose.yml up -d
   ```

   > Note: In versions before `v0.22.0`, we provided both images with embedding models and slim images without them. Details:

   | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable?        |
   |-------------------|-----------------|-----------------------|----------------|
   | v0.21.1           | &approx;9       | ✔️                    | Stable release |
   | v0.21.1-slim      | &approx;2       | ❌                    | Stable release |

   > Starting from `v0.22.0`, we release only the slim edition and no longer append the **-slim** suffix to image tags.

   > [!TIP]
   > If you have trouble pulling the Docker image, you can switch to the corresponding Huawei Cloud or Alibaba Cloud mirror in **docker/.env**, following the comments on the `RAGFLOW_IMAGE` variable.
   >
   > - Huawei Cloud image name: `swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow`
   > - Alibaba Cloud image name: `registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow`
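   If pulls from Docker Hub keep failing, pointing `RAGFLOW_IMAGE` at one of the mirrors above is enough; a hedged sketch of the **docker/.env** change:

   ```bash
   # docker/.env (excerpt, illustrative)
   # Use the Alibaba Cloud mirror of the same image tag:
   RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:v0.23.1
   ```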
4. Check the server status again after the server has started:

   ```bash
   $ docker logs -f docker-ragflow-cpu-1
   ```

   _The following output indicates that the server started successfully:_

@@ -237,16 +225,13 @@

   ```
   * Running on all addresses (0.0.0.0)
   ```

   > If you skip this system-confirmation step and log in to RAGFlow right away, your browser may report a `network abnormal` error, because RAGFlow may not have fully started.

5. Enter your server's IP address in your browser and log in to RAGFlow.

   > In the example above, you only need to enter http://IP_OF_YOUR_MACHINE; with an unchanged setup there is no need to enter a port number (the default HTTP serving port is 80).

6. In [service_conf.yaml.template](./docker/service_conf.yaml.template), set the LLM factory in the `user_default_llm` field and fill in the `API_KEY` field with the API key for the model you chose.

   > See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for details.

_The show is on — let the music play!_

@@ -264,7 +249,7 @@

> [./docker/README](./docker/README.md) explains the environment variables and service configuration used by [service_conf.yaml.template](./docker/service_conf.yaml.template).

To change the default HTTP serving port (80), update `80:80` to `<YOUR_SERVING_PORT>:80` in [docker-compose.yml](./docker/docker-compose.yml), as sketched below.
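For illustration, a sketch of what the edited `ports` entry might look like — the surrounding keys are elided and the service name is assumed; consult the actual compose file:

```yaml
# docker-compose.yml (illustrative excerpt): expose RAGFlow on host port 8080
# instead of the default 80.
services:
  ragflow:
    ports:
      - "8080:80"
```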
> All system configuration changes take effect only after a system restart:
>
@ -281,9 +266,10 @@ RAGFlow 預設使用 Elasticsearch 儲存文字和向量資料. 如果要切換
```bash ```bash
$ docker compose -f docker/docker-compose.yml down -v $ docker compose -f docker/docker-compose.yml down -v
``` ```
Note: `-v` 將會刪除 docker 容器的 volumes已有的資料會被清空。 Note: `-v` 將會刪除 docker 容器的 volumes已有的資料會被清空。
2. 設定 **docker/.env** 目錄中的 `DOC_ENGINE` 為 `infinity`. 2. 設定 **docker/.env** 目錄中的 `DOC_ENGINE` 為 `infinity`.
3. 啟動容器: 3. 啟動容器:
```bash ```bash
@ -293,23 +279,24 @@ RAGFlow 預設使用 Elasticsearch 儲存文字和向量資料. 如果要切換
> [!WARNING] > [!WARNING]
> Infinity 目前官方並未正式支援在 Linux/arm64 架構下的機器上運行. > Infinity 目前官方並未正式支援在 Linux/arm64 架構下的機器上運行.
## 🔧 Build a Docker image from source

This Docker image is about 2 GB in size and relies on external LLM and embedding services.

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```

If you are behind a proxy, pass the proxy settings as build arguments:

```bash
docker build --platform linux/amd64 \
  --build-arg http_proxy=http://YOUR_PROXY:PORT \
  --build-arg https_proxy=http://YOUR_PROXY:PORT \
  -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 Launch the service from source

@@ -320,15 +307,17 @@ docker build --platform linux/amd64 \
   pipx install uv pre-commit
   export UV_INDEX=https://mirrors.aliyun.com/pypi/simple
   ```

2. Clone the source code and install the Python dependencies:

   ```bash
   git clone https://github.com/infiniflow/ragflow.git
   cd ragflow/
   uv sync --python 3.12 # install RAGFlow dependent python modules
   uv run download_deps.py
   pre-commit install
   ```

3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) with Docker Compose:

   ```bash
@@ -340,11 +329,13 @@ docker build --platform linux/amd64 \
   ```
   127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
   ```

4. If HuggingFace is not accessible from your network, set the `HF_ENDPOINT` environment variable to a mirror site:

   ```bash
   export HF_ENDPOINT=https://hf-mirror.com
   ```

5. If your operating system does not have jemalloc, install it as follows:

   ```bash
@@ -355,6 +346,7 @@ docker build --platform linux/amd64 \
   # mac
   sudo brew install jemalloc
   ```

6. Launch the backend service:

   ```bash
@@ -362,12 +354,14 @@ docker build --platform linux/amd64 \
   export PYTHONPATH=$(pwd)
   bash docker/launch_backend_service.sh
   ```

7. Install the frontend dependencies:

   ```bash
   cd web
   npm install
   ```

8. Launch the frontend service:

   ```bash
@@ -377,16 +371,15 @@ docker build --platform linux/amd64 \
   ```

   _The following output confirms a successful launch:_

   ![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)

9. To stop the RAGFlow frontend and backend services after development is done:

   ```bash
   pkill -f "ragflow_server.py|task_executor.py"
   ```
## 📚 Documentation

- [Quickstart](https://ragflow.io/docs/dev/)

@@ -399,7 +392,7 @@ docker build --platform linux/amd64 \
## 📜 Roadmap

See the [RAGFlow Roadmap 2026](https://github.com/infiniflow/ragflow/issues/12241).
## 🏄 Community
@@ -1,6 +1,6 @@
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.svg" width="350" alt="ragflow logo">
</a>
</div>
@@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.23.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@@ -37,15 +37,13 @@
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/12241">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/NjYzJD3GM3">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
</h4>

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>

<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@@ -72,7 +70,7 @@
## 💡 What is RAGFlow?

[RAGFlow](https://ragflow.io/) is a leading open-source Retrieval-Augmented Generation ([RAG](https://ragflow.io/basics/what-is-rag)) engine that fuses cutting-edge RAG with Agent capabilities to provide a superior context layer for LLMs. It offers an end-to-end RAG workflow adaptable to enterprises of any scale and, with its converged [context engine](https://ragflow.io/basics/what-is-agent-context-engine) and prebuilt Agent templates, enables developers to turn complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.

## 🎮 Try our demo
@@ -85,16 +83,13 @@
## 🔥 Latest Updates

- 2025-12-26 Supports "memory" for AI agents.
- 2025-11-19 Supports Gemini 3 Pro.
- 2025-11-12 Supports data synchronization from Confluence, S3, Notion, Discord, and Google Drive.
- 2025-10-23 Supports MinerU and Docling as document parsing methods.
- 2025-10-15 Supports orchestrable data pipelines.
- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
- 2025-08-01 Supports agentic workflows and MCP.
- 2025-05-23 Adds a Python/JS code executor component to Agent.
- 2025-05-05 Supports cross-language queries.
- 2025-03-19 Supports using multimodal models to describe images in PDF and DOCX files.
- 2024-12-18 Upgrades the document layout analysis model in DeepDoc.
- 2024-08-22 Supports converting natural language to SQL statements with RAG.
@@ -137,7 +132,7 @@
## 🔎 System Architecture

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>

## 🎬 Get Started
@@ -188,31 +183,23 @@
> Note that all officially provided Docker images are built for x86 architectures; ARM64 Docker images are not provided.
> If your operating system runs on ARM64, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image yourself.

> Running the following commands automatically downloads the RAGFlow Docker image `v0.23.1`. See the table below for descriptions of the different Docker editions. To download a RAGFlow edition different from `v0.23.1`, update the `RAGFLOW_IMAGE` variable in **docker/.env** before starting the services with `docker compose`.

```bash
$ cd ragflow/docker

# Optional: check out a stable release tag (see the releases at https://github.com/infiniflow/ragflow/releases).
# This step keeps entrypoint.sh in the code consistent with the version of the Docker image.
# git checkout v0.23.1

# Use CPU for DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d

# To use GPU to accelerate DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```

> Note: Versions before `v0.22.0` shipped both a full image with embedding models and a slim image without them:

| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable?        |
|-------------------|-----------------|-----------------------|----------------|
| v0.21.1           | &approx;9       | ✔️                    | Stable release |
| v0.21.1-slim      | &approx;2       | ❌                    | Stable release |

> Starting from `v0.22.0`, we only publish slim editions and no longer append the **-slim** suffix to image tags.

> [!TIP]
> If you cannot pull the Docker images, you can switch to the corresponding Huawei Cloud or Alibaba Cloud mirror by following the comments on the `RAGFLOW_IMAGE` variable in **docker/.env**.
@@ -223,7 +210,7 @@
4. Once the server has started, confirm its status again:

   ```bash
   $ docker logs -f docker-ragflow-cpu-1
   ```

   _The following output confirms a successful launch:_

@@ -238,7 +225,7 @@
    * Running on all addresses (0.0.0.0)
   ```

   > If you try to log in to RAGFlow before the output above appears, your browser may prompt a `network abnormal` error, because RAGFlow may not be fully started.

5. In your web browser, enter the IP address of your server and log in to RAGFlow.
   > With the example above, you only need to enter http://IP_OF_YOUR_MACHINE: the default HTTP serving port 80 can be omitted if the configuration is unchanged.
@@ -292,23 +279,24 @@ RAGFlow uses Elasticsearch by default for storing full text and vectors. To switch
> [!WARNING]
> Infinity does not officially support running on Linux/arm64 machines yet.

## 🔧 Build a Docker image from source

This Docker image is about 2 GB in size and relies on external LLM and embedding services.

```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```

If you are behind a proxy, pass the proxy settings as build arguments:

```bash
docker build --platform linux/amd64 \
  --build-arg http_proxy=http://YOUR_PROXY:PORT \
  --build-arg https_proxy=http://YOUR_PROXY:PORT \
  -f Dockerfile -t infiniflow/ragflow:nightly .
```

## 🔨 Launch the service from source
@@ -325,7 +313,7 @@ docker build --platform linux/amd64 \
   ```bash
   git clone https://github.com/infiniflow/ragflow.git
   cd ragflow/
   uv sync --python 3.12 # install RAGFlow dependent python modules
   uv run download_deps.py
   pre-commit install
   ```

@@ -402,7 +390,7 @@
## 📜 Roadmap

See the [RAGFlow Roadmap 2026](https://github.com/infiniflow/ragflow/issues/12241).

## 🏄 Community
@@ -6,8 +6,8 @@ Use this section to tell people about which versions of your project are
currently being supported with security updates.

| Version | Supported          |
|---------|--------------------|
| <=0.7.0 | :white_check_mark: |

## Reporting a Vulnerability
@@ -4,7 +4,7 @@
Admin Service is a dedicated management component designed to monitor, maintain, and administrate the RAGFlow system. It provides comprehensive tools for ensuring system stability, performing operational tasks, and managing users and permissions efficiently.

The service offers real-time monitoring of critical components, including the RAGFlow server, Task Executor processes, and dependent services such as MySQL, Infinity, Elasticsearch, Redis, and MinIO. It automatically checks their health status, resource usage, and uptime, and performs restarts in case of failures to minimize downtime.

For user and system management, it supports listing, creating, modifying, and deleting users and their associated resources like knowledge bases and Agents.
@@ -15,55 +15,22 @@ It consists of a server-side Service and a command-line client (CLI), both imple
- **Admin Service**: A backend service that interfaces with the RAGFlow system to execute administrative operations and monitor its status.
- **Admin CLI**: A command-line interface that allows users to connect to the Admin Service and issue commands for system management.

### Starting the Admin Service

#### Launching from source code

1. Before starting the Admin Service, make sure the RAGFlow system is already running.
2. Launch from source code:
   ```bash
   python admin/server/admin_server.py
   ```
   The service will start and listen for incoming connections from the CLI on the configured port.

#### Using the Docker image

1. Before startup, configure the `docker_compose.yml` file to enable the admin server:
   ```bash
   command:
     - --enable-adminserver
   ```
2. Start the containers; the service will start and listen for incoming connections from the CLI on the configured port.

### Using the Admin CLI

1. Ensure the Admin Service is running.
2. Install ragflow-cli:
   ```bash
   pip install ragflow-cli==0.23.1
   ```
3. Launch the CLI client:
   ```bash
   ragflow-cli -h 127.0.0.1 -p 9381
   ```
   You will be prompted to enter the superuser's password to log in. The default password is admin.

   **Parameters:**
   - -h: RAGFlow admin server host address
   - -p: RAGFlow admin server port
## Supported Commands

@@ -75,7 +42,12 @@ Commands are case-insensitive and must be terminated with a semicolon (`;`).
  - Lists all available services within the RAGFlow system.
- `SHOW SERVICE <id>;`
  - Shows detailed status information for the service identified by `<id>`.
- `STARTUP SERVICE <id>;`
  - Attempts to start the service identified by `<id>`.
- `SHUTDOWN SERVICE <id>;`
  - Attempts to gracefully shut down the service identified by `<id>`.
- `RESTART SERVICE <id>;`
  - Attempts to restart the service identified by `<id>`.
### User Management Commands

@@ -83,17 +55,10 @@
  - Lists all users known to the system.
- `SHOW USER '<username>';`
  - Shows details and permissions for the specified user. The username must be enclosed in single or double quotes.
- `CREATE USER <username> <password>;`
  - Creates a user with the given username and password. The username and password must be enclosed in single or double quotes.
- `DROP USER '<username>';`
  - Removes the specified user from the system. Use with caution.
- `ALTER USER PASSWORD '<username>' '<new_password>';`
  - Changes the password for the specified user.
- `ALTER USER ACTIVE <username> <on/off>;`
  - Sets the specified user to active or inactive.
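Each command is translated by the CLI into a REST call against the Admin Service's HTTP API. As a minimal sketch of the same round trip without the CLI (endpoint, credentials, and encoding mirror `admin/admin_client.py` shown further below; the password is the default and should be changed):

```python
import requests
from requests.auth import HTTPBasicAuth
from api.common.base64 import encode_to_base64  # same helper the CLI uses

# LIST USERS; is equivalent to GET /api/v1/admin/users with HTTP basic auth,
# where the password is sent base64-encoded.
response = requests.get(
    "http://127.0.0.1:9381/api/v1/admin/users",
    auth=HTTPBasicAuth("admin@ragflow.io", encode_to_base64("admin")),
)
print(response.json())  # {"code": 0, ..., "data": [...]} on success
```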
### Data and Agent Commands ### Data and Agent Commands
admin/admin_client.py (new file, 574 lines)

@@ -0,0 +1,574 @@
import argparse
import base64
from Cryptodome.PublicKey import RSA
from Cryptodome.Cipher import PKCS1_v1_5 as Cipher_pkcs1_v1_5
from typing import Dict, List, Any
from lark import Lark, Transformer, Tree
import requests
from requests.auth import HTTPBasicAuth
from api.common.base64 import encode_to_base64
GRAMMAR = r"""
start: command
command: sql_command | meta_command
sql_command: list_services
| show_service
| startup_service
| shutdown_service
| restart_service
| list_users
| show_user
| drop_user
| alter_user
| create_user
| activate_user
| list_datasets
| list_agents
// meta command definition
meta_command: "\\" meta_command_name [meta_args]
meta_command_name: /[a-zA-Z?]+/
meta_args: (meta_arg)+
meta_arg: /[^\\s"']+/ | quoted_string
// command definition
LIST: "LIST"i
SERVICES: "SERVICES"i
SHOW: "SHOW"i
CREATE: "CREATE"i
SERVICE: "SERVICE"i
SHUTDOWN: "SHUTDOWN"i
STARTUP: "STARTUP"i
RESTART: "RESTART"i
USERS: "USERS"i
DROP: "DROP"i
USER: "USER"i
ALTER: "ALTER"i
ACTIVE: "ACTIVE"i
PASSWORD: "PASSWORD"i
DATASETS: "DATASETS"i
OF: "OF"i
AGENTS: "AGENTS"i
list_services: LIST SERVICES ";"
show_service: SHOW SERVICE NUMBER ";"
startup_service: STARTUP SERVICE NUMBER ";"
shutdown_service: SHUTDOWN SERVICE NUMBER ";"
restart_service: RESTART SERVICE NUMBER ";"
list_users: LIST USERS ";"
drop_user: DROP USER quoted_string ";"
alter_user: ALTER USER PASSWORD quoted_string quoted_string ";"
show_user: SHOW USER quoted_string ";"
create_user: CREATE USER quoted_string quoted_string ";"
activate_user: ALTER USER ACTIVE quoted_string status ";"
list_datasets: LIST DATASETS OF quoted_string ";"
list_agents: LIST AGENTS OF quoted_string ";"
identifier: WORD
quoted_string: QUOTED_STRING
status: WORD
QUOTED_STRING: /'[^']+'/ | /"[^"]+"/
WORD: /[a-zA-Z0-9_\-\.]+/
NUMBER: /[0-9]+/
%import common.WS
%ignore WS
"""
class AdminTransformer(Transformer):
def start(self, items):
return items[0]
def command(self, items):
return items[0]
def list_services(self, items):
result = {'type': 'list_services'}
return result
def show_service(self, items):
service_id = int(items[2])
return {"type": "show_service", "number": service_id}
def startup_service(self, items):
service_id = int(items[2])
return {"type": "startup_service", "number": service_id}
def shutdown_service(self, items):
service_id = int(items[2])
return {"type": "shutdown_service", "number": service_id}
def restart_service(self, items):
service_id = int(items[2])
return {"type": "restart_service", "number": service_id}
def list_users(self, items):
return {"type": "list_users"}
def show_user(self, items):
user_name = items[2]
return {"type": "show_user", "username": user_name}
def drop_user(self, items):
user_name = items[2]
return {"type": "drop_user", "username": user_name}
def alter_user(self, items):
user_name = items[3]
new_password = items[4]
return {"type": "alter_user", "username": user_name, "password": new_password}
def create_user(self, items):
user_name = items[2]
password = items[3]
return {"type": "create_user", "username": user_name, "password": password, "role": "user"}
def activate_user(self, items):
user_name = items[3]
activate_status = items[4]
return {"type": "activate_user", "activate_status": activate_status, "username": user_name}
def list_datasets(self, items):
user_name = items[3]
return {"type": "list_datasets", "username": user_name}
def list_agents(self, items):
user_name = items[3]
return {"type": "list_agents", "username": user_name}
def meta_command(self, items):
command_name = str(items[0]).lower()
args = items[1:] if len(items) > 1 else []
# handle quoted parameter
parsed_args = []
for arg in args:
if hasattr(arg, 'value'):
parsed_args.append(arg.value)
else:
parsed_args.append(str(arg))
return {'type': 'meta', 'command': command_name, 'args': parsed_args}
def meta_command_name(self, items):
return items[0]
def meta_args(self, items):
return items
def encrypt(input_string):
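# The secret is first base64-encoded, then RSA-encrypted (PKCS#1 v1.5) with the
# fixed public key below, and the ciphertext is returned base64-encoded; the
# server holds the matching private key.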
pub = '-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArq9XTUSeYr2+N1h3Afl/z8Dse/2yD0ZGrKwx+EEEcdsBLca9Ynmx3nIB5obmLlSfmskLpBo0UACBmB5rEjBp2Q2f3AG3Hjd4B+gNCG6BDaawuDlgANIhGnaTLrIqWrrcm4EMzJOnAOI1fgzJRsOOUEfaS318Eq9OVO3apEyCCt0lOQK6PuksduOjVxtltDav+guVAA068NrPYmRNabVKRNLJpL8w4D44sfth5RvZ3q9t+6RTArpEtc5sh5ChzvqPOzKGMXW83C95TxmXqpbK6olN4RevSfVjEAgCydH6HN6OhtOQEcnrU97r9H0iZOWwbw3pVrZiUkuRD1R56Wzs2wIDAQAB\n-----END PUBLIC KEY-----'
pub_key = RSA.importKey(pub)
cipher = Cipher_pkcs1_v1_5.new(pub_key)
cipher_text = cipher.encrypt(base64.b64encode(input_string.encode('utf-8')))
return base64.b64encode(cipher_text).decode("utf-8")
class AdminCommandParser:
def __init__(self):
self.parser = Lark(GRAMMAR, start='start', parser='lalr', transformer=AdminTransformer())
self.command_history = []
def parse_command(self, command_str: str) -> Dict[str, Any]:
if not command_str.strip():
return {'type': 'empty'}
self.command_history.append(command_str)
try:
result = self.parser.parse(command_str)
return result
except Exception as e:
return {'type': 'error', 'message': f'Parse error: {str(e)}'}
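One subtlety, illustrated with names from this file: because the `sql_command` rule has no transformer callback, a valid command comes back wrapped in a lark `Tree` whose single child is the command dict, while a parse failure comes back as a plain error dict. That is why `execute_command` below unwraps `parsed_command.children[0]`. A quick sanity check:

```python
# The two return shapes of parse_command.
parser = AdminCommandParser()

ok = parser.parse_command("LIST USERS;")
print(ok.children[0])       # {'type': 'list_users'}

bad = parser.parse_command("nonsense")
print(bad)                  # {'type': 'error', 'message': 'Parse error: ...'}
```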
class AdminCLI:
def __init__(self):
self.parser = AdminCommandParser()
self.is_interactive = False
self.admin_account = "admin@ragflow.io"
self.admin_password: str = "admin"
self.host: str = ""
self.port: int = 0
def verify_admin(self, args):
conn_info = self._parse_connection_args(args)
if 'error' in conn_info:
print(f"Error: {conn_info['error']}")
return
self.host = conn_info['host']
self.port = conn_info['port']
print(f"Attempt to access ip: {self.host}, port: {self.port}")
url = f'http://{self.host}:{self.port}/api/v1/admin/auth'
try_count = 0
while True:
try_count += 1
if try_count > 3:
return False
admin_passwd = input(f"password for {self.admin_account}: ").strip()
try:
self.admin_password = encode_to_base64(admin_passwd)
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
if response.status_code == 200:
res_json = response.json()
error_code = res_json.get('code', -1)
if error_code == 0:
print("Authentication successful.")
return True
else:
error_message = res_json.get('message', 'Unknown error')
print(f"Authentication failed: {error_message}, try again")
continue
else:
print(f"Bad responsestatus: {response.status_code}, try again")
except Exception:
print(f"Can't access {self.host}, port: {self.port}")
def _print_table_simple(self, data):
if not data:
print("No data to print")
return
if isinstance(data, dict):
# handle single row data
data = [data]
columns = list(data[0].keys())
col_widths = {}
for col in columns:
max_width = len(str(col))
for item in data:
value_len = len(str(item.get(col, '')))
if value_len > max_width:
max_width = value_len
col_widths[col] = max(2, max_width)
# Generate delimiter
separator = "+" + "+".join(["-" * (col_widths[col] + 2) for col in columns]) + "+"
# Print header
print(separator)
header = "|" + "|".join([f" {col:<{col_widths[col]}} " for col in columns]) + "|"
print(header)
print(separator)
# Print data
for item in data:
row = "|"
for col in columns:
value = str(item.get(col, ''))
if len(value) > col_widths[col]:
value = value[:col_widths[col] - 3] + "..."
row += f" {value:<{col_widths[col]}} |"
print(row)
print(separator)
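# Example: _print_table_simple([{"id": 1, "name": "admin"}]) renders:
#   +----+-------+
#   | id | name  |
#   +----+-------+
#   | 1  | admin |
#   +----+-------+
# Values wider than the computed column width are truncated with "...".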
def run_interactive(self):
self.is_interactive = True
print("RAGFlow Admin command line interface - Type '\\?' for help, '\\q' to quit")
while True:
try:
command = input("admin> ").strip()
if not command:
continue
print(f"command: {command}")
result = self.parser.parse_command(command)
self.execute_command(result)
if isinstance(result, Tree):
continue
if result.get('type') == 'meta' and result.get('command') in ['q', 'quit', 'exit']:
break
except KeyboardInterrupt:
print("\nUse '\\q' to quit")
except EOFError:
print("\nGoodbye!")
break
def run_single_command(self, args):
conn_info = self._parse_connection_args(args)
if 'error' in conn_info:
print(f"Error: {conn_info['error']}")
return
def _parse_connection_args(self, args: List[str]) -> Dict[str, Any]:
parser = argparse.ArgumentParser(description='Admin CLI Client', add_help=False)
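# -h is repurposed as the host flag, so argparse's built-in help must be
# disabled via add_help=False; interactive help is provided by the \? meta command.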
parser.add_argument('-h', '--host', default='localhost', help='Admin service host')
parser.add_argument('-p', '--port', type=int, default=8080, help='Admin service port')
try:
parsed_args, remaining_args = parser.parse_known_args(args)
return {
'host': parsed_args.host,
'port': parsed_args.port,
}
except SystemExit:
return {'error': 'Invalid connection arguments'}
def execute_command(self, parsed_command: Dict[str, Any]):
command_dict: dict
if isinstance(parsed_command, Tree):
command_dict = parsed_command.children[0]
else:
if parsed_command['type'] == 'error':
print(f"Error: {parsed_command['message']}")
return
else:
command_dict = parsed_command
# print(f"Parsed command: {command_dict}")
command_type = command_dict['type']
match command_type:
case 'list_services':
self._handle_list_services(command_dict)
case 'show_service':
self._handle_show_service(command_dict)
case 'restart_service':
self._handle_restart_service(command_dict)
case 'shutdown_service':
self._handle_shutdown_service(command_dict)
case 'startup_service':
self._handle_startup_service(command_dict)
case 'list_users':
self._handle_list_users(command_dict)
case 'show_user':
self._handle_show_user(command_dict)
case 'drop_user':
self._handle_drop_user(command_dict)
case 'alter_user':
self._handle_alter_user(command_dict)
case 'create_user':
self._handle_create_user(command_dict)
case 'activate_user':
self._handle_activate_user(command_dict)
case 'list_datasets':
self._handle_list_datasets(command_dict)
case 'list_agents':
self._handle_list_agents(command_dict)
case 'meta':
self._handle_meta_command(command_dict)
case _:
print(f"Command '{command_type}' would be executed with API")
def _handle_list_services(self, command):
print("Listing all services")
url = f'http://{self.host}:{self.port}/api/v1/admin/services'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all users, code: {res_json['code']}, message: {res_json['message']}")
def _handle_show_service(self, command):
service_id: int = command['number']
print(f"Showing service: {service_id}")
def _handle_restart_service(self, command):
service_id: int = command['number']
print(f"Restart service {service_id}")
def _handle_shutdown_service(self, command):
service_id: int = command['number']
print(f"Shutdown service {service_id}")
def _handle_startup_service(self, command):
service_id: int = command['number']
print(f"Startup service {service_id}")
def _handle_list_users(self, command):
print("Listing all users")
url = f'http://{self.host}:{self.port}/api/v1/admin/users'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all users, code: {res_json['code']}, message: {res_json['message']}")
def _handle_show_user(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
print(f"Showing user: {username}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get user {username}, code: {res_json['code']}, message: {res_json['message']}")
def _handle_drop_user(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
print(f"Drop user: {username}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}'
response = requests.delete(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
print(res_json["message"])
else:
print(f"Fail to drop user, code: {res_json['code']}, message: {res_json['message']}")
def _handle_alter_user(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
password_tree: Tree = command['password']
password: str = password_tree.children[0].strip("'\"")
print(f"Alter user: {username}, password: {password}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}/password'
response = requests.put(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password),
json={'new_password': encrypt(password)})
res_json = response.json()
if response.status_code == 200:
print(res_json["message"])
else:
print(f"Fail to alter password, code: {res_json['code']}, message: {res_json['message']}")
def _handle_create_user(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
password_tree: Tree = command['password']
password: str = password_tree.children[0].strip("'\"")
role: str = command['role']
print(f"Create user: {username}, password: {password}, role: {role}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users'
response = requests.post(
url,
auth=HTTPBasicAuth(self.admin_account, self.admin_password),
json={'username': username, 'password': encrypt(password), 'role': role}
)
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to create user {username}, code: {res_json['code']}, message: {res_json['message']}")
def _handle_activate_user(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
activate_tree: Tree = command['activate_status']
activate_status: str = activate_tree.children[0].strip("'\"")
if activate_status.lower() in ['on', 'off']:
print(f"Alter user {username} activate status, turn {activate_status.lower()}.")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}/activate'
response = requests.put(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password),
json={'activate_status': activate_status})
res_json = response.json()
if response.status_code == 200:
print(res_json["message"])
else:
print(f"Fail to alter activate status, code: {res_json['code']}, message: {res_json['message']}")
else:
print(f"Unknown activate status: {activate_status}.")
def _handle_list_datasets(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
print(f"Listing all datasets of user: {username}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}/datasets'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all datasets of {username}, code: {res_json['code']}, message: {res_json['message']}")
def _handle_list_agents(self, command):
username_tree: Tree = command['username']
username: str = username_tree.children[0].strip("'\"")
print(f"Listing all agents of user: {username}")
url = f'http://{self.host}:{self.port}/api/v1/admin/users/{username}/agents'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all agents of {username}, code: {res_json['code']}, message: {res_json['message']}")
def _handle_meta_command(self, command):
meta_command = command['command']
args = command.get('args', [])
if meta_command in ['?', 'h', 'help']:
self.show_help()
elif meta_command in ['q', 'quit', 'exit']:
print("Goodbye!")
else:
print(f"Meta command '{meta_command}' with args {args}")
def show_help(self):
"""Help info"""
help_text = """
Commands:
LIST SERVICES
SHOW SERVICE <service>
STARTUP SERVICE <service>
SHUTDOWN SERVICE <service>
RESTART SERVICE <service>
LIST USERS
SHOW USER <user>
DROP USER <user>
CREATE USER <user> <password>
ALTER USER PASSWORD <user> <new_password>
ALTER USER ACTIVE <user> <on/off>
LIST DATASETS OF <user>
LIST AGENTS OF <user>
Meta Commands:
\\?, \\h, \\help Show this help
\\q, \\quit, \\exit Quit the CLI
"""
print(help_text)
def main():
import sys
cli = AdminCLI()
if len(sys.argv) == 1 or (len(sys.argv) > 1 and sys.argv[1] == '-'):
print(r"""
____ ___ ______________ ___ __ _
/ __ \/ | / ____/ ____/ /___ _ __ / | ____/ /___ ___ (_)___
/ /_/ / /| |/ / __/ /_ / / __ \ | /| / / / /| |/ __ / __ `__ \/ / __ \
/ _, _/ ___ / /_/ / __/ / / /_/ / |/ |/ / / ___ / /_/ / / / / / / / / / /
/_/ |_/_/ |_\____/_/ /_/\____/|__/|__/ /_/ |_\__,_/_/ /_/ /_/_/_/ /_/
""")
if cli.verify_admin(sys.argv):
cli.run_interactive()
else:
if cli.verify_admin(sys.argv):
cli.run_interactive()
# cli.run_single_command(sys.argv[1:])
if __name__ == '__main__':
main()
admin/admin_server.py (new file, 47 lines)

@@ -0,0 +1,47 @@
import os
import signal
import logging
import time
import threading
import traceback
from werkzeug.serving import run_simple
from flask import Flask
from routes import admin_bp
from api.utils.log_utils import init_root_logger
from api.constants import SERVICE_CONF
from api import settings
from config import load_configurations, SERVICE_CONFIGS
stop_event = threading.Event()
if __name__ == '__main__':
init_root_logger("admin_service")
logging.info(r"""
____ ___ ______________ ___ __ _
/ __ \/ | / ____/ ____/ /___ _ __ / | ____/ /___ ___ (_)___
/ /_/ / /| |/ / __/ /_ / / __ \ | /| / / / /| |/ __ / __ `__ \/ / __ \
/ _, _/ ___ / /_/ / __/ / / /_/ / |/ |/ / / ___ / /_/ / / / / / / / / / /
/_/ |_/_/ |_\____/_/ /_/\____/|__/|__/ /_/ |_\__,_/_/ /_/ /_/_/_/ /_/
""")
app = Flask(__name__)
app.register_blueprint(admin_bp)
settings.init_settings()
SERVICE_CONFIGS.configs = load_configurations(SERVICE_CONF)
try:
logging.info("RAGFlow Admin service start...")
run_simple(
hostname="0.0.0.0",
port=9381,
application=app,
threaded=True,
use_reloader=True,
use_debugger=True,
)
except Exception:
traceback.print_exc()
stop_event.set()
time.sleep(1)
os.kill(os.getpid(), signal.SIGKILL)
admin/auth.py (new file, 57 lines)

@@ -0,0 +1,57 @@
import logging
import uuid
from functools import wraps
from flask import request, jsonify
from exceptions import AdminException
from api.db.init_data import encode_to_base64
from api.db.services import UserService
def check_admin(username: str, password: str):
users = UserService.query(email=username)
if not users:
logging.info(f"Username: {username} is not registered!")
user_info = {
"id": uuid.uuid1().hex,
"password": encode_to_base64("admin"),
"nickname": "admin",
"is_superuser": True,
"email": "admin@ragflow.io",
"creator": "system",
"status": "1",
}
if not UserService.save(**user_info):
raise AdminException("Can't init admin.", 500)
user = UserService.query_user(username, password)
if user:
return True
else:
return False
def login_verify(f):
@wraps(f)
def decorated(*args, **kwargs):
auth = request.authorization
if not auth or 'username' not in auth.parameters or 'password' not in auth.parameters:
return jsonify({
"code": 401,
"message": "Authentication required",
"data": None
}), 200
username = auth.parameters['username']
password = auth.parameters['password']
# TODO: to check the username and password from DB
if check_admin(username, password) is False:
return jsonify({
"code": 403,
"message": "Access denied",
"data": None
}), 200
return f(*args, **kwargs)
return decorated
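For orientation, a minimal sketch of how `login_verify` is meant to guard an admin endpoint; the `/ping` route here is hypothetical, and the real routes live in the `routes` module:

```python
from flask import Flask, jsonify
from auth import login_verify

app = Flask(__name__)

@app.route("/api/v1/admin/ping")
@login_verify  # returns the 401/403 JSON envelope before the handler runs
def ping():
    return jsonify({"code": 0, "message": "pong", "data": None})
```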
(deleted file)

@@ -1,47 +0,0 @@
#!/bin/bash
set -e
echo "🚀 Start building..."
echo "================================"
PROJECT_NAME="ragflow-cli"
RELEASE_DIR="release"
BUILD_DIR="dist"
SOURCE_DIR="src"
PACKAGE_DIR="ragflow_cli"
echo "🧹 Clean old build folder..."
rm -rf release/
echo "📁 Prepare source code..."
mkdir release/$PROJECT_NAME/$SOURCE_DIR -p
cp pyproject.toml release/$PROJECT_NAME/pyproject.toml
cp README.md release/$PROJECT_NAME/README.md
mkdir release/$PROJECT_NAME/$SOURCE_DIR/$PACKAGE_DIR -p
cp ragflow_cli.py release/$PROJECT_NAME/$SOURCE_DIR/$PACKAGE_DIR/ragflow_cli.py
if [ -d "release/$PROJECT_NAME/$SOURCE_DIR" ]; then
echo "✅ source dir: release/$PROJECT_NAME/$SOURCE_DIR"
else
echo "❌ source dir not exist: release/$PROJECT_NAME/$SOURCE_DIR"
exit 1
fi
echo "🔨 Make build file..."
cd release/$PROJECT_NAME
export PYTHONPATH=$(pwd)
python -m build
echo "✅ check build result..."
if [ -d "$BUILD_DIR" ]; then
echo "📦 Package generated:"
ls -la $BUILD_DIR/
else
echo "❌ Build Failed: $BUILD_DIR not exist."
exit 1
fi
echo "🎉 Build finished successfully!"
(deleted file)

@@ -1,182 +0,0 @@
#
# Copyright 2026 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import time
import json
import typing
from typing import Any, Dict, Optional
import requests
# from requests.sessions import HTTPAdapter
class HttpClient:
def __init__(
self,
host: str = "127.0.0.1",
port: int = 9381,
api_version: str = "v1",
api_key: Optional[str] = None,
connect_timeout: float = 5.0,
read_timeout: float = 60.0,
verify_ssl: bool = False,
) -> None:
self.host = host
self.port = port
self.api_version = api_version
self.api_key = api_key
self.login_token: str | None = None
self.connect_timeout = connect_timeout
self.read_timeout = read_timeout
self.verify_ssl = verify_ssl
def api_base(self) -> str:
return f"{self.host}:{self.port}/api/{self.api_version}"
def non_api_base(self) -> str:
return f"{self.host}:{self.port}/{self.api_version}"
def build_url(self, path: str, use_api_base: bool = True) -> str:
base = self.api_base() if use_api_base else self.non_api_base()
if self.verify_ssl:
return f"https://{base}/{path.lstrip('/')}"
else:
return f"http://{base}/{path.lstrip('/')}"
def _headers(self, auth_kind: Optional[str], extra: Optional[Dict[str, str]]) -> Dict[str, str]:
headers = {}
if auth_kind == "api" and self.api_key:
headers["Authorization"] = f"Bearer {self.api_key}"
elif auth_kind == "web" and self.login_token:
headers["Authorization"] = self.login_token
elif auth_kind == "admin" and self.login_token:
headers["Authorization"] = self.login_token
else:
pass
if extra:
headers.update(extra)
return headers
def request(
self,
method: str,
path: str,
*,
use_api_base: bool = True,
auth_kind: Optional[str] = "api",
headers: Optional[Dict[str, str]] = None,
json_body: Optional[Dict[str, Any]] = None,
data: Any = None,
files: Any = None,
params: Optional[Dict[str, Any]] = None,
stream: bool = False,
iterations: int = 1,
) -> requests.Response | dict:
url = self.build_url(path, use_api_base=use_api_base)
merged_headers = self._headers(auth_kind, headers)
# timeout: Tuple[float, float] = (self.connect_timeout, self.read_timeout)
session = requests.Session()
# adapter = HTTPAdapter(pool_connections=100, pool_maxsize=100)
# session.mount("http://", adapter)
http_function = typing.Any
match method:
case "GET":
http_function = session.get
case "POST":
http_function = session.post
case "PUT":
http_function = session.put
case "DELETE":
http_function = session.delete
case "PATCH":
http_function = session.patch
case _:
raise ValueError(f"Invalid HTTP method: {method}")
if iterations > 1:
response_list = []
total_duration = 0.0
for _ in range(iterations):
start_time = time.perf_counter()
response = http_function(url, headers=merged_headers, json=json_body, data=data, stream=stream)
# response = session.get(url, headers=merged_headers, json=json_body, data=data, stream=stream)
# response = requests.request(
# method=method,
# url=url,
# headers=merged_headers,
# json=json_body,
# data=data,
# files=files,
# params=params,
# stream=stream,
# verify=self.verify_ssl,
# )
end_time = time.perf_counter()
total_duration += end_time - start_time
response_list.append(response)
return {"duration": total_duration, "response_list": response_list}
else:
return http_function(url, headers=merged_headers, json=json_body, data=data, stream=stream)
# return session.get(url, headers=merged_headers, json=json_body, data=data, stream=stream)
# return requests.request(
# method=method,
# url=url,
# headers=merged_headers,
# json=json_body,
# data=data,
# files=files,
# params=params,
# stream=stream,
# verify=self.verify_ssl,
# )
def request_json(
self,
method: str,
path: str,
*,
use_api_base: bool = True,
auth_kind: Optional[str] = "api",
headers: Optional[Dict[str, str]] = None,
json_body: Optional[Dict[str, Any]] = None,
data: Any = None,
files: Any = None,
params: Optional[Dict[str, Any]] = None,
stream: bool = False,
) -> Dict[str, Any]:
response = self.request(
method,
path,
use_api_base=use_api_base,
auth_kind=auth_kind,
headers=headers,
json_body=json_body,
data=data,
files=files,
params=params,
stream=stream,
)
try:
return response.json()
except Exception as exc:
raise ValueError(f"Non-JSON response from {path}: {exc}") from exc
@staticmethod
def parse_json_bytes(raw: bytes) -> Dict[str, Any]:
try:
return json.loads(raw.decode("utf-8"))
except Exception as exc:
raise ValueError(f"Invalid JSON payload: {exc}") from exc
(deleted file)

@@ -1,623 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from lark import Transformer
GRAMMAR = r"""
start: command
command: sql_command | meta_command
sql_command: login_user
| ping_server
| list_services
| show_service
| startup_service
| shutdown_service
| restart_service
| register_user
| list_users
| show_user
| drop_user
| alter_user
| create_user
| activate_user
| list_datasets
| list_agents
| create_role
| drop_role
| alter_role
| list_roles
| show_role
| grant_permission
| revoke_permission
| alter_user_role
| show_user_permission
| show_version
| grant_admin
| revoke_admin
| set_variable
| show_variable
| list_variables
| list_configs
| list_environments
| generate_key
| list_keys
| drop_key
| show_current_user
| set_default_llm
| set_default_vlm
| set_default_embedding
| set_default_reranker
| set_default_asr
| set_default_tts
| reset_default_llm
| reset_default_vlm
| reset_default_embedding
| reset_default_reranker
| reset_default_asr
| reset_default_tts
| create_model_provider
| drop_model_provider
| create_user_dataset_with_parser
| create_user_dataset_with_pipeline
| drop_user_dataset
| list_user_datasets
| list_user_dataset_files
| list_user_agents
| list_user_chats
| create_user_chat
| drop_user_chat
| list_user_model_providers
| list_user_default_models
| parse_dataset_docs
| parse_dataset_sync
| parse_dataset_async
| import_docs_into_dataset
| search_on_datasets
| benchmark
// meta command definition
meta_command: "\\" meta_command_name [meta_args]
meta_command_name: /[a-zA-Z?]+/
meta_args: (meta_arg)+
meta_arg: /[^\\s"']+/ | quoted_string
// command definition
LOGIN: "LOGIN"i
REGISTER: "REGISTER"i
LIST: "LIST"i
SERVICES: "SERVICES"i
SHOW: "SHOW"i
CREATE: "CREATE"i
SERVICE: "SERVICE"i
SHUTDOWN: "SHUTDOWN"i
STARTUP: "STARTUP"i
RESTART: "RESTART"i
USERS: "USERS"i
DROP: "DROP"i
USER: "USER"i
ALTER: "ALTER"i
ACTIVE: "ACTIVE"i
ADMIN: "ADMIN"i
PASSWORD: "PASSWORD"i
DATASET: "DATASET"i
DATASETS: "DATASETS"i
OF: "OF"i
AGENTS: "AGENTS"i
ROLE: "ROLE"i
ROLES: "ROLES"i
DESCRIPTION: "DESCRIPTION"i
GRANT: "GRANT"i
REVOKE: "REVOKE"i
ALL: "ALL"i
PERMISSION: "PERMISSION"i
TO: "TO"i
FROM: "FROM"i
FOR: "FOR"i
RESOURCES: "RESOURCES"i
ON: "ON"i
SET: "SET"i
RESET: "RESET"i
VERSION: "VERSION"i
VAR: "VAR"i
VARS: "VARS"i
CONFIGS: "CONFIGS"i
ENVS: "ENVS"i
KEY: "KEY"i
KEYS: "KEYS"i
GENERATE: "GENERATE"i
MODEL: "MODEL"i
MODELS: "MODELS"i
PROVIDER: "PROVIDER"i
PROVIDERS: "PROVIDERS"i
DEFAULT: "DEFAULT"i
CHATS: "CHATS"i
CHAT: "CHAT"i
FILES: "FILES"i
AS: "AS"i
PARSE: "PARSE"i
IMPORT: "IMPORT"i
INTO: "INTO"i
WITH: "WITH"i
PARSER: "PARSER"i
PIPELINE: "PIPELINE"i
SEARCH: "SEARCH"i
CURRENT: "CURRENT"i
LLM: "LLM"i
VLM: "VLM"i
EMBEDDING: "EMBEDDING"i
RERANKER: "RERANKER"i
ASR: "ASR"i
TTS: "TTS"i
ASYNC: "ASYNC"i
SYNC: "SYNC"i
BENCHMARK: "BENCHMARK"i
PING: "PING"i
login_user: LOGIN USER quoted_string ";"
list_services: LIST SERVICES ";"
show_service: SHOW SERVICE NUMBER ";"
startup_service: STARTUP SERVICE NUMBER ";"
shutdown_service: SHUTDOWN SERVICE NUMBER ";"
restart_service: RESTART SERVICE NUMBER ";"
register_user: REGISTER USER quoted_string AS quoted_string PASSWORD quoted_string ";"
list_users: LIST USERS ";"
drop_user: DROP USER quoted_string ";"
alter_user: ALTER USER PASSWORD quoted_string quoted_string ";"
show_user: SHOW USER quoted_string ";"
create_user: CREATE USER quoted_string quoted_string ";"
activate_user: ALTER USER ACTIVE quoted_string status ";"
list_datasets: LIST DATASETS OF quoted_string ";"
list_agents: LIST AGENTS OF quoted_string ";"
create_role: CREATE ROLE identifier [DESCRIPTION quoted_string] ";"
drop_role: DROP ROLE identifier ";"
alter_role: ALTER ROLE identifier SET DESCRIPTION quoted_string ";"
list_roles: LIST ROLES ";"
show_role: SHOW ROLE identifier ";"
grant_permission: GRANT identifier_list ON identifier TO ROLE identifier ";"
revoke_permission: REVOKE identifier_list ON identifier FROM ROLE identifier ";"
alter_user_role: ALTER USER quoted_string SET ROLE identifier ";"
show_user_permission: SHOW USER PERMISSION quoted_string ";"
show_version: SHOW VERSION ";"
grant_admin: GRANT ADMIN quoted_string ";"
revoke_admin: REVOKE ADMIN quoted_string ";"
generate_key: GENERATE KEY FOR USER quoted_string ";"
list_keys: LIST KEYS OF quoted_string ";"
drop_key: DROP KEY quoted_string OF quoted_string ";"
set_variable: SET VAR identifier identifier ";"
show_variable: SHOW VAR identifier ";"
list_variables: LIST VARS ";"
list_configs: LIST CONFIGS ";"
list_environments: LIST ENVS ";"
benchmark: BENCHMARK NUMBER NUMBER user_statement
user_statement: ping_server
| show_current_user
| create_model_provider
| drop_model_provider
| set_default_llm
| set_default_vlm
| set_default_embedding
| set_default_reranker
| set_default_asr
| set_default_tts
| reset_default_llm
| reset_default_vlm
| reset_default_embedding
| reset_default_reranker
| reset_default_asr
| reset_default_tts
| create_user_dataset_with_parser
| create_user_dataset_with_pipeline
| drop_user_dataset
| list_user_datasets
| list_user_dataset_files
| list_user_agents
| list_user_chats
| create_user_chat
| drop_user_chat
| list_user_model_providers
| list_user_default_models
| import_docs_into_dataset
| search_on_datasets
ping_server: PING ";"
show_current_user: SHOW CURRENT USER ";"
create_model_provider: CREATE MODEL PROVIDER quoted_string quoted_string ";"
drop_model_provider: DROP MODEL PROVIDER quoted_string ";"
set_default_llm: SET DEFAULT LLM quoted_string ";"
set_default_vlm: SET DEFAULT VLM quoted_string ";"
set_default_embedding: SET DEFAULT EMBEDDING quoted_string ";"
set_default_reranker: SET DEFAULT RERANKER quoted_string ";"
set_default_asr: SET DEFAULT ASR quoted_string ";"
set_default_tts: SET DEFAULT TTS quoted_string ";"
reset_default_llm: RESET DEFAULT LLM ";"
reset_default_vlm: RESET DEFAULT VLM ";"
reset_default_embedding: RESET DEFAULT EMBEDDING ";"
reset_default_reranker: RESET DEFAULT RERANKER ";"
reset_default_asr: RESET DEFAULT ASR ";"
reset_default_tts: RESET DEFAULT TTS ";"
list_user_datasets: LIST DATASETS ";"
create_user_dataset_with_parser: CREATE DATASET quoted_string WITH EMBEDDING quoted_string PARSER quoted_string ";"
create_user_dataset_with_pipeline: CREATE DATASET quoted_string WITH EMBEDDING quoted_string PIPELINE quoted_string ";"
drop_user_dataset: DROP DATASET quoted_string ";"
list_user_dataset_files: LIST FILES OF DATASET quoted_string ";"
list_user_agents: LIST AGENTS ";"
list_user_chats: LIST CHATS ";"
create_user_chat: CREATE CHAT quoted_string ";"
drop_user_chat: DROP CHAT quoted_string ";"
list_user_model_providers: LIST MODEL PROVIDERS ";"
list_user_default_models: LIST DEFAULT MODELS ";"
import_docs_into_dataset: IMPORT quoted_string INTO DATASET quoted_string ";"
search_on_datasets: SEARCH quoted_string ON DATASETS quoted_string ";"
parse_dataset_docs: PARSE quoted_string OF DATASET quoted_string ";"
parse_dataset_sync: PARSE DATASET quoted_string SYNC ";"
parse_dataset_async: PARSE DATASET quoted_string ASYNC ";"
identifier_list: identifier ("," identifier)*
identifier: WORD
quoted_string: QUOTED_STRING
status: WORD
QUOTED_STRING: /'[^']+'/ | /"[^"]+"/
WORD: /[a-zA-Z0-9_\-\.]+/
NUMBER: /[0-9]+/
%import common.WS
%ignore WS
"""
class RAGFlowCLITransformer(Transformer):
def start(self, items):
return items[0]
def command(self, items):
return items[0]
def login_user(self, items):
email = items[2].children[0].strip("'\"")
return {"type": "login_user", "email": email}
def ping_server(self, items):
return {"type": "ping_server"}
def list_services(self, items):
result = {"type": "list_services"}
return result
def show_service(self, items):
service_id = int(items[2])
return {"type": "show_service", "number": service_id}
def startup_service(self, items):
service_id = int(items[2])
return {"type": "startup_service", "number": service_id}
def shutdown_service(self, items):
service_id = int(items[2])
return {"type": "shutdown_service", "number": service_id}
def restart_service(self, items):
service_id = int(items[2])
return {"type": "restart_service", "number": service_id}
def register_user(self, items):
user_name: str = items[2].children[0].strip("'\"")
nickname: str = items[4].children[0].strip("'\"")
password: str = items[6].children[0].strip("'\"")
return {"type": "register_user", "user_name": user_name, "nickname": nickname, "password": password}
def list_users(self, items):
return {"type": "list_users"}
def show_user(self, items):
user_name = items[2]
return {"type": "show_user", "user_name": user_name}
def drop_user(self, items):
user_name = items[2]
return {"type": "drop_user", "user_name": user_name}
def alter_user(self, items):
user_name = items[3]
new_password = items[4]
return {"type": "alter_user", "user_name": user_name, "password": new_password}
def create_user(self, items):
user_name = items[2]
password = items[3]
return {"type": "create_user", "user_name": user_name, "password": password, "role": "user"}
def activate_user(self, items):
        user_name = items[3]
        activate_status = items[4]
        return {"type": "activate_user", "activate_status": activate_status, "user_name": user_name}

    def list_datasets(self, items):
        user_name = items[3]
        return {"type": "list_datasets", "user_name": user_name}

    def list_agents(self, items):
        user_name = items[3]
        return {"type": "list_agents", "user_name": user_name}

    def create_role(self, items):
        role_name = items[2]
        if len(items) > 4:
            description = items[4]
            return {"type": "create_role", "role_name": role_name, "description": description}
        else:
            return {"type": "create_role", "role_name": role_name}

    def drop_role(self, items):
        role_name = items[2]
        return {"type": "drop_role", "role_name": role_name}

    def alter_role(self, items):
        role_name = items[2]
        description = items[5]
        return {"type": "alter_role", "role_name": role_name, "description": description}

    def list_roles(self, items):
        return {"type": "list_roles"}

    def show_role(self, items):
        role_name = items[2]
        return {"type": "show_role", "role_name": role_name}

    def grant_permission(self, items):
        action_list = items[1]
        resource = items[3]
        role_name = items[6]
        return {"type": "grant_permission", "role_name": role_name, "resource": resource, "actions": action_list}

    def revoke_permission(self, items):
        action_list = items[1]
        resource = items[3]
        role_name = items[6]
        return {"type": "revoke_permission", "role_name": role_name, "resource": resource, "actions": action_list}

    def alter_user_role(self, items):
        user_name = items[2]
        role_name = items[5]
        return {"type": "alter_user_role", "user_name": user_name, "role_name": role_name}

    def show_user_permission(self, items):
        user_name = items[3]
        return {"type": "show_user_permission", "user_name": user_name}

    def show_version(self, items):
        return {"type": "show_version"}

    def grant_admin(self, items):
        user_name = items[2]
        return {"type": "grant_admin", "user_name": user_name}

    def revoke_admin(self, items):
        user_name = items[2]
        return {"type": "revoke_admin", "user_name": user_name}

    def generate_key(self, items):
        user_name = items[4]
        return {"type": "generate_key", "user_name": user_name}

    def list_keys(self, items):
        user_name = items[3]
        return {"type": "list_keys", "user_name": user_name}

    def drop_key(self, items):
        key = items[2]
        user_name = items[4]
        return {"type": "drop_key", "key": key, "user_name": user_name}

    def set_variable(self, items):
        var_name = items[2]
        var_value = items[3]
        return {"type": "set_variable", "var_name": var_name, "var_value": var_value}

    def show_variable(self, items):
        var_name = items[2]
        return {"type": "show_variable", "var_name": var_name}

    def list_variables(self, items):
        return {"type": "list_variables"}

    def list_configs(self, items):
        return {"type": "list_configs"}

    def list_environments(self, items):
        return {"type": "list_environments"}

    def create_model_provider(self, items):
        provider_name = items[3].children[0].strip("'\"")
        provider_key = items[4].children[0].strip("'\"")
        return {"type": "create_model_provider", "provider_name": provider_name, "provider_key": provider_key}

    def drop_model_provider(self, items):
        provider_name = items[3].children[0].strip("'\"")
        return {"type": "drop_model_provider", "provider_name": provider_name}

    def show_current_user(self, items):
        return {"type": "show_current_user"}

    def set_default_llm(self, items):
        llm_id = items[3].children[0].strip("'\"")
        return {"type": "set_default_model", "model_type": "llm_id", "model_id": llm_id}

    def set_default_vlm(self, items):
        vlm_id = items[3].children[0].strip("'\"")
        return {"type": "set_default_model", "model_type": "img2txt_id", "model_id": vlm_id}

    def set_default_embedding(self, items):
        embedding_id = items[3].children[0].strip("'\"")
        return {"type": "set_default_model", "model_type": "embd_id", "model_id": embedding_id}

    def set_default_reranker(self, items):
        reranker_id = items[3].children[0].strip("'\"")
        return {"type": "set_default_model", "model_type": "reranker_id", "model_id": reranker_id}

    def set_default_asr(self, items):
        asr_id = items[3].children[0].strip("'\"")
        return {"type": "set_default_model", "model_type": "asr_id", "model_id": asr_id}

    def set_default_tts(self, items):
        tts_id = items[3].children[0].strip("'\"")
        return {"type": "set_default_model", "model_type": "tts_id", "model_id": tts_id}

    def reset_default_llm(self, items):
        return {"type": "reset_default_model", "model_type": "llm_id"}

    def reset_default_vlm(self, items):
        return {"type": "reset_default_model", "model_type": "img2txt_id"}

    def reset_default_embedding(self, items):
        return {"type": "reset_default_model", "model_type": "embd_id"}

    def reset_default_reranker(self, items):
        return {"type": "reset_default_model", "model_type": "reranker_id"}

    def reset_default_asr(self, items):
        return {"type": "reset_default_model", "model_type": "asr_id"}

    def reset_default_tts(self, items):
        return {"type": "reset_default_model", "model_type": "tts_id"}

    def list_user_datasets(self, items):
        return {"type": "list_user_datasets"}

    def create_user_dataset_with_parser(self, items):
        dataset_name = items[2].children[0].strip("'\"")
        embedding = items[5].children[0].strip("'\"")
        parser_type = items[7].children[0].strip("'\"")
        return {"type": "create_user_dataset", "dataset_name": dataset_name, "embedding": embedding,
                "parser_type": parser_type}

    def create_user_dataset_with_pipeline(self, items):
        dataset_name = items[2].children[0].strip("'\"")
        embedding = items[5].children[0].strip("'\"")
        pipeline = items[7].children[0].strip("'\"")
        return {"type": "create_user_dataset", "dataset_name": dataset_name, "embedding": embedding,
                "pipeline": pipeline}

    def drop_user_dataset(self, items):
        dataset_name = items[2].children[0].strip("'\"")
        return {"type": "drop_user_dataset", "dataset_name": dataset_name}

    def list_user_dataset_files(self, items):
        dataset_name = items[4].children[0].strip("'\"")
        return {"type": "list_user_dataset_files", "dataset_name": dataset_name}

    def list_user_agents(self, items):
        return {"type": "list_user_agents"}

    def list_user_chats(self, items):
        return {"type": "list_user_chats"}

    def create_user_chat(self, items):
        chat_name = items[2].children[0].strip("'\"")
        return {"type": "create_user_chat", "chat_name": chat_name}

    def drop_user_chat(self, items):
        chat_name = items[2].children[0].strip("'\"")
        return {"type": "drop_user_chat", "chat_name": chat_name}

    def list_user_model_providers(self, items):
        return {"type": "list_user_model_providers"}

    def list_user_default_models(self, items):
        return {"type": "list_user_default_models"}

    def parse_dataset_docs(self, items):
        document_list_str = items[1].children[0].strip("'\"")
        document_names = document_list_str.split(",")
        if len(document_names) == 1:
            document_names = document_names[0]
            document_names = document_names.split(" ")
        dataset_name = items[4].children[0].strip("'\"")
        return {"type": "parse_dataset_docs", "dataset_name": dataset_name, "document_names": document_names}

    def parse_dataset_sync(self, items):
        dataset_name = items[2].children[0].strip("'\"")
        return {"type": "parse_dataset", "dataset_name": dataset_name, "method": "sync"}

    def parse_dataset_async(self, items):
        dataset_name = items[2].children[0].strip("'\"")
        return {"type": "parse_dataset", "dataset_name": dataset_name, "method": "async"}

    def import_docs_into_dataset(self, items):
        document_list_str = items[1].children[0].strip("'\"")
        document_paths = document_list_str.split(",")
        if len(document_paths) == 1:
            document_paths = document_paths[0]
            document_paths = document_paths.split(" ")
        dataset_name = items[4].children[0].strip("'\"")
        return {"type": "import_docs_into_dataset", "dataset_name": dataset_name, "document_paths": document_paths}

    def search_on_datasets(self, items):
        question = items[1].children[0].strip("'\"")
        datasets_str = items[4].children[0].strip("'\"")
        datasets = datasets_str.split(",")
        if len(datasets) == 1:
            datasets = datasets[0]
            datasets = datasets.split(" ")
        return {"type": "search_on_datasets", "datasets": datasets, "question": question}

    def benchmark(self, items):
        concurrency: int = int(items[1])
        iterations: int = int(items[2])
        command = items[3].children[0]
        return {"type": "benchmark", "concurrency": concurrency, "iterations": iterations, "command": command}

    def action_list(self, items):
        return items

    def meta_command(self, items):
        command_name = str(items[0]).lower()
        args = items[1:] if len(items) > 1 else []
        # handle quoted parameter
        parsed_args = []
        for arg in args:
            if hasattr(arg, "value"):
                parsed_args.append(arg.value)
            else:
                parsed_args.append(str(arg))
        return {"type": "meta", "command": command_name, "args": parsed_args}

    def meta_command_name(self, items):
        return items[0]

    def meta_args(self, items):
        return items
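
As a quick illustration of the pattern above: each handler receives the children of its rule as items, and because keyword tokens are kept in the parse tree, operands are picked out by position. A minimal, self-contained sketch with a hypothetical one-rule grammar (not the project's real GRAMMAR):

from lark import Lark, Transformer

DEMO_GRAMMAR = r"""
start: list_datasets
list_datasets: "LIST"i "DATASETS"i "OF"i NAME ";"
NAME: /[A-Za-z0-9_@.]+/
%import common.WS
%ignore WS
"""

class DemoTransformer(Transformer):
    def list_datasets(self, items):
        # with keep_all_tokens=True, items is [LIST, DATASETS, OF, NAME, ";"],
        # hence the positional items[3] access, as in the handlers above
        return {"type": "list_datasets", "user_name": str(items[3])}

demo_parser = Lark(DEMO_GRAMMAR, start="start", parser="lalr",
                   keep_all_tokens=True, transformer=DemoTransformer())
print(demo_parser.parse("LIST DATASETS OF alice@ragflow.io;").children[0])
# {'type': 'list_datasets', 'user_name': 'alice@ragflow.io'}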

View File

@ -1,27 +0,0 @@
[project]
name = "ragflow-cli"
version = "0.23.1"
description = "Admin Service's client of [RAGFlow](https://github.com/infiniflow/ragflow). The Admin Service provides user management and system monitoring. "
authors = [{ name = "Lynn", email = "lynn_inf@hotmail.com" }]
license = { text = "Apache License, Version 2.0" }
readme = "README.md"
requires-python = ">=3.12,<3.15"
dependencies = [
"requests>=2.30.0,<3.0.0",
"beartype>=0.20.0,<1.0.0",
"pycryptodomex>=3.10.0",
"lark>=1.1.0",
]
[dependency-groups]
test = [
"pytest>=8.3.5",
"requests>=2.32.3",
"requests-toolbelt>=1.0.0",
]
[tool.setuptools]
py-modules = ["ragflow_cli", "parser"]
[project.scripts]
ragflow-cli = "ragflow_cli:main"
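
The [project.scripts] table is what produces the installed ragflow-cli command: the launcher generated at install time imports the ragflow_cli module and calls its main() function, roughly equivalent to this stub (a sketch, not the generated file itself):

from ragflow_cli import main

if __name__ == "__main__":
    main()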

View File

@ -1,322 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
import argparse
import base64
import getpass
from cmd import Cmd
from typing import Any, Dict, List
import requests
import warnings
from Cryptodome.Cipher import PKCS1_v1_5 as Cipher_pkcs1_v1_5
from Cryptodome.PublicKey import RSA
from lark import Lark, Tree
from parser import GRAMMAR, RAGFlowCLITransformer
from http_client import HttpClient
from ragflow_client import RAGFlowClient, run_command
from user import login_user
warnings.filterwarnings("ignore", category=getpass.GetPassWarning)
def encrypt(input_string):
    pub = "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArq9XTUSeYr2+N1h3Afl/z8Dse/2yD0ZGrKwx+EEEcdsBLca9Ynmx3nIB5obmLlSfmskLpBo0UACBmB5rEjBp2Q2f3AG3Hjd4B+gNCG6BDaawuDlgANIhGnaTLrIqWrrcm4EMzJOnAOI1fgzJRsOOUEfaS318Eq9OVO3apEyCCt0lOQK6PuksduOjVxtltDav+guVAA068NrPYmRNabVKRNLJpL8w4D44sfth5RvZ3q9t+6RTArpEtc5sh5ChzvqPOzKGMXW83C95TxmXqpbK6olN4RevSfVjEAgCydH6HN6OhtOQEcnrU97r9H0iZOWwbw3pVrZiUkuRD1R56Wzs2wIDAQAB\n-----END PUBLIC KEY-----"
    pub_key = RSA.importKey(pub)
    cipher = Cipher_pkcs1_v1_5.new(pub_key)
    cipher_text = cipher.encrypt(base64.b64encode(input_string.encode("utf-8")))
    return base64.b64encode(cipher_text).decode("utf-8")

def encode_to_base64(input_string):
    base64_encoded = base64.b64encode(input_string.encode("utf-8"))
    return base64_encoded.decode("utf-8")

class RAGFlowCLI(Cmd):
    def __init__(self):
        super().__init__()
        self.parser = Lark(GRAMMAR, start="start", parser="lalr", transformer=RAGFlowCLITransformer())
        self.command_history = []
        self.account = "admin@ragflow.io"
        self.account_password: str = "admin"
        self.session = requests.Session()
        self.host: str = ""
        self.port: int = 0
        self.mode: str = "admin"
        self.ragflow_client = None

    intro = r"""Type "\h" for help."""
    prompt = "ragflow> "

    def onecmd(self, command: str) -> bool:
        try:
            result = self.parse_command(command)
            if isinstance(result, dict):
                if "type" in result and result.get("type") == "empty":
                    return False
            self.execute_command(result)
            if isinstance(result, Tree):
                return False
            if result.get("type") == "meta" and result.get("command") in ["q", "quit", "exit"]:
                return True
        except KeyboardInterrupt:
            print("\nUse '\\q' to quit")
        except EOFError:
            print("\nGoodbye!")
            return True
        return False

    def emptyline(self) -> bool:
        return False

    def default(self, line: str) -> bool:
        return self.onecmd(line)

    def parse_command(self, command_str: str) -> dict[str, str]:
        if not command_str.strip():
            return {"type": "empty"}
        self.command_history.append(command_str)
        try:
            result = self.parser.parse(command_str)
            return result
        except Exception as e:
            return {"type": "error", "message": f"Parse error: {str(e)}"}

    def verify_auth(self, arguments: dict, single_command: bool, auth: bool):
        server_type = arguments.get("type", "admin")
        http_client = HttpClient(arguments["host"], arguments["port"])
        if not auth:
            self.ragflow_client = RAGFlowClient(http_client, server_type)
            return True
        user_name = arguments["username"]
        attempt_count = 3
        if single_command:
            attempt_count = 1
        try_count = 0
        while True:
            try_count += 1
            if try_count > attempt_count:
                return False
            if single_command:
                user_password = arguments["password"]
            else:
                user_password = getpass.getpass(f"password for {user_name}: ").strip()
            try:
                token = login_user(http_client, server_type, user_name, user_password)
                http_client.login_token = token
                self.ragflow_client = RAGFlowClient(http_client, server_type)
                return True
            except Exception as e:
                print(str(e))
                print("Can't access server for login (connection failed)")

    def _format_service_detail_table(self, data):
        if isinstance(data, list):
            return data
        if not all([isinstance(v, list) for v in data.values()]):
            # normal table
            return data
        # handle the task_executor heartbeats map, for example
        # {'name': [{'done': 2, 'now': timestamp1}, {'done': 3, 'now': timestamp2}]}
        task_executor_list = []
        for k, v in data.items():
            # display the latest status
            heartbeats = sorted(v, key=lambda x: x["now"], reverse=True)
            task_executor_list.append(
                {
                    "task_executor_name": k,
                    **heartbeats[0],
                }
                if heartbeats
                else {"task_executor_name": k}
            )
        return task_executor_list

    def _print_table_simple(self, data):
        if not data:
            print("No data to print")
            return
        if isinstance(data, dict):
            # handle single-row data
            data = [data]
        columns = list(set().union(*(d.keys() for d in data)))
        columns.sort()
        col_widths = {}

        def get_string_width(text):
            half_width_chars = " !\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\t\n\r"
            width = 0
            for char in text:
                if char in half_width_chars:
                    width += 1
                else:
                    width += 2
            return width

        for col in columns:
            max_width = get_string_width(str(col))
            for item in data:
                value_len = get_string_width(str(item.get(col, "")))
                if value_len > max_width:
                    max_width = value_len
            col_widths[col] = max(2, max_width)
        # Generate delimiter
        separator = "+" + "+".join(["-" * (col_widths[col] + 2) for col in columns]) + "+"
        # Print header
        print(separator)
        header = "|" + "|".join([f" {col:<{col_widths[col]}} " for col in columns]) + "|"
        print(header)
        print(separator)
        # Print data
        for item in data:
            row = "|"
            for col in columns:
                value = str(item.get(col, ""))
                if get_string_width(value) > col_widths[col]:
                    value = value[: col_widths[col] - 3] + "..."
                row += f" {value:<{col_widths[col] - (get_string_width(value) - len(value))}} |"
            print(row)
        print(separator)

    def run_interactive(self, args):
        if self.verify_auth(args, single_command=False, auth=args["auth"]):
            print(r"""
 ____ ___ ______________ ________ ____
/ __ \/ | / ____/ ____/ /___ _ __ / ____/ / / _/
/ /_/ / /| |/ / __/ /_ / / __ \ | /| / / / / / / / /
/ _, _/ ___ / /_/ / __/ / / /_/ / |/ |/ / / /___/ /____/ /
/_/ |_/_/ |_\____/_/ /_/\____/|__/|__/ \____/_____/___/
            """)
            print("RAGFlow command line interface - Type '\\?' for help, '\\q' to quit")
            self.cmdloop()

    def run_single_command(self, args):
        if self.verify_auth(args, single_command=True, auth=args["auth"]):
            command = args["command"]
            result = self.parse_command(command)
            self.execute_command(result)

    def parse_connection_args(self, args: List[str]) -> Dict[str, Any]:
        parser = argparse.ArgumentParser(description="RAGFlow CLI Client", add_help=False)
        parser.add_argument("-h", "--host", default="127.0.0.1", help="Admin or RAGFlow service host")
        parser.add_argument("-p", "--port", type=int, default=9381, help="Admin or RAGFlow service port")
        parser.add_argument("-w", "--password", default="admin", type=str, help="Superuser password")
        parser.add_argument("-t", "--type", default="admin", type=str, help="CLI mode, admin or user")
        parser.add_argument("-u", "--username", default=None,
                            help="Username (email). In admin mode defaults to admin@ragflow.io; in user mode it is required.")
        parser.add_argument("command", nargs="?", help="Single command")
        try:
            parsed_args, remaining_args = parser.parse_known_args(args)
            # Determine username based on mode
            username = parsed_args.username
            if parsed_args.type == "admin":
                if username is None:
                    username = "admin@ragflow.io"
            if remaining_args:
                if remaining_args[0] == "command":
                    command_str = ' '.join(remaining_args[1:]) + ';'
                    auth = True
                    if remaining_args[1] == "register":
                        auth = False
                    else:
                        if username is None:
                            print("Error: username (-u) is required in user mode")
                            return {"error": "Username required"}
                    return {
                        "host": parsed_args.host,
                        "port": parsed_args.port,
                        "password": parsed_args.password,
                        "type": parsed_args.type,
                        "username": username,
                        "command": command_str,
                        "auth": auth
                    }
                else:
                    return {"error": "Invalid command"}
            else:
                auth = True
                if username is None:
                    auth = False
                return {
                    "host": parsed_args.host,
                    "port": parsed_args.port,
                    "type": parsed_args.type,
                    "username": username,
                    "auth": auth
                }
        except SystemExit:
            return {"error": "Invalid connection arguments"}

    def execute_command(self, parsed_command: Dict[str, Any]):
        command_dict: dict
        if isinstance(parsed_command, Tree):
            command_dict = parsed_command.children[0]
        else:
            if parsed_command["type"] == "error":
                print(f"Error: {parsed_command['message']}")
                return
            else:
                command_dict = parsed_command
        # print(f"Parsed command: {command_dict}")
        run_command(self.ragflow_client, command_dict)

def main():
    cli = RAGFlowCLI()
    args = cli.parse_connection_args(sys.argv)
    if "error" in args:
        print("Error: Invalid connection arguments")
        return
    if "command" in args:
        # single-command mode:
        # for user mode, an API key or a password is ok
        # for admin mode, only a password
        if "password" not in args:
            print("Error: password is missing")
            return
        cli.run_single_command(args)
    else:
        cli.run_interactive(args)

if __name__ == "__main__":
    main()
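
The encrypt() helper above wraps the password in three layers: base64-encode the plaintext, RSA/PKCS#1 v1.5-encrypt it with the server's public key, then base64-encode the ciphertext. A round-trip sketch with a throwaway key pair (illustration only; the matching private key really lives on the server):

import base64
from Cryptodome.Cipher import PKCS1_v1_5
from Cryptodome.PublicKey import RSA

key = RSA.generate(2048)                  # demo key pair, not the server's
cipher = PKCS1_v1_5.new(key.publickey())
wrapped = base64.b64encode(cipher.encrypt(base64.b64encode(b"admin"))).decode()

decipher = PKCS1_v1_5.new(key)            # server-side reversal of both layers
inner = decipher.decrypt(base64.b64decode(wrapped), None)
print(base64.b64decode(inner).decode())   # -> admin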

File diff suppressed because it is too large

View File

@ -1,65 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from http_client import HttpClient
class AuthException(Exception):
    def __init__(self, message, code=401):
        super().__init__(message)
        self.code = code
        self.message = message

def encrypt_password(password_plain: str) -> str:
    try:
        from api.utils.crypt import crypt
    except Exception as exc:
        raise AuthException(
            "Password encryption unavailable; install pycryptodomex (uv sync --python 3.12 --group test)."
        ) from exc
    return crypt(password_plain)

def register_user(client: HttpClient, email: str, nickname: str, password: str) -> None:
    password_enc = encrypt_password(password)
    payload = {"email": email, "nickname": nickname, "password": password_enc}
    res = client.request_json("POST", "/user/register", use_api_base=False, auth_kind=None, json_body=payload)
    if res.get("code") == 0:
        return
    msg = res.get("message", "")
    if "has already registered" in msg:
        return
    raise AuthException(f"Register failed: {msg}")

def login_user(client: HttpClient, server_type: str, email: str, password: str) -> str:
    password_enc = encrypt_password(password)
    payload = {"email": email, "password": password_enc}
    if server_type == "admin":
        response = client.request("POST", "/admin/login", use_api_base=True, auth_kind=None, json_body=payload)
    else:
        response = client.request("POST", "/user/login", use_api_base=False, auth_kind=None, json_body=payload)
    try:
        res = response.json()
    except Exception as exc:
        raise AuthException(f"Login failed: invalid JSON response ({exc})") from exc
    if res.get("code") != 0:
        raise AuthException(f"Login failed: {res.get('message')}")
    token = response.headers.get("Authorization")
    if not token:
        raise AuthException("Login failed: missing Authorization header")
    return token
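
Note that login_user() takes the session token from the Authorization response header, not from the JSON body. A minimal sketch of the same handshake using plain requests (BASE is a hypothetical server address, and password_enc is assumed to be encrypted as above):

import requests

BASE = "http://127.0.0.1:9380"  # hypothetical address

def login(email: str, password_enc: str) -> str:
    resp = requests.post(f"{BASE}/user/login", json={"email": email, "password": password_enc})
    body = resp.json()
    if body.get("code") != 0:
        raise RuntimeError(f"Login failed: {body.get('message')}")
    token = resp.headers.get("Authorization")
    if not token:
        raise RuntimeError("Login failed: missing Authorization header")
    return token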

admin/client/uv.lock generated
View File

@ -1,298 +0,0 @@
version = 1
revision = 3
requires-python = ">=3.10, <3.13"
[[package]]
name = "beartype"
version = "0.22.6"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/88/e2/105ceb1704cb80fe4ab3872529ab7b6f365cf7c74f725e6132d0efcf1560/beartype-0.22.6.tar.gz", hash = "sha256:97fbda69c20b48c5780ac2ca60ce3c1bb9af29b3a1a0216898ffabdd523e48f4", size = 1588975, upload-time = "2025-11-20T04:47:14.736Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/98/c9/ceecc71fe2c9495a1d8e08d44f5f31f5bca1350d5b2e27a4b6265424f59e/beartype-0.22.6-py3-none-any.whl", hash = "sha256:0584bc46a2ea2a871509679278cda992eadde676c01356ab0ac77421f3c9a093", size = 1324807, upload-time = "2025-11-20T04:47:11.837Z" },
]
[[package]]
name = "certifi"
version = "2025.11.12"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/a2/8c/58f469717fa48465e4a50c014a0400602d3c437d7c0c468e17ada824da3a/certifi-2025.11.12.tar.gz", hash = "sha256:d8ab5478f2ecd78af242878415affce761ca6bc54a22a27e026d7c25357c3316", size = 160538, upload-time = "2025-11-12T02:54:51.517Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/70/7d/9bc192684cea499815ff478dfcdc13835ddf401365057044fb721ec6bddb/certifi-2025.11.12-py3-none-any.whl", hash = "sha256:97de8790030bbd5c2d96b7ec782fc2f7820ef8dba6db909ccf95449f2d062d4b", size = 159438, upload-time = "2025-11-12T02:54:49.735Z" },
]
[[package]]
name = "charset-normalizer"
version = "3.4.4"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/13/69/33ddede1939fdd074bce5434295f38fae7136463422fe4fd3e0e89b98062/charset_normalizer-3.4.4.tar.gz", hash = "sha256:94537985111c35f28720e43603b8e7b43a6ecfb2ce1d3058bbe955b73404e21a", size = 129418, upload-time = "2025-10-14T04:42:32.879Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/1f/b8/6d51fc1d52cbd52cd4ccedd5b5b2f0f6a11bbf6765c782298b0f3e808541/charset_normalizer-3.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e824f1492727fa856dd6eda4f7cee25f8518a12f3c4a56a74e8095695089cf6d", size = 209709, upload-time = "2025-10-14T04:40:11.385Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/5c/af/1f9d7f7faafe2ddfb6f72a2e07a548a629c61ad510fe60f9630309908fef/charset_normalizer-3.4.4-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4bd5d4137d500351a30687c2d3971758aac9a19208fc110ccb9d7188fbe709e8", size = 148814, upload-time = "2025-10-14T04:40:13.135Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/79/3d/f2e3ac2bbc056ca0c204298ea4e3d9db9b4afe437812638759db2c976b5f/charset_normalizer-3.4.4-cp310-cp310-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:027f6de494925c0ab2a55eab46ae5129951638a49a34d87f4c3eda90f696b4ad", size = 144467, upload-time = "2025-10-14T04:40:14.728Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ec/85/1bf997003815e60d57de7bd972c57dc6950446a3e4ccac43bc3070721856/charset_normalizer-3.4.4-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f820802628d2694cb7e56db99213f930856014862f3fd943d290ea8438d07ca8", size = 162280, upload-time = "2025-10-14T04:40:16.14Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/3e/8e/6aa1952f56b192f54921c436b87f2aaf7c7a7c3d0d1a765547d64fd83c13/charset_normalizer-3.4.4-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:798d75d81754988d2565bff1b97ba5a44411867c0cf32b77a7e8f8d84796b10d", size = 159454, upload-time = "2025-10-14T04:40:17.567Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/36/3b/60cbd1f8e93aa25d1c669c649b7a655b0b5fb4c571858910ea9332678558/charset_normalizer-3.4.4-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9d1bb833febdff5c8927f922386db610b49db6e0d4f4ee29601d71e7c2694313", size = 153609, upload-time = "2025-10-14T04:40:19.08Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/64/91/6a13396948b8fd3c4b4fd5bc74d045f5637d78c9675585e8e9fbe5636554/charset_normalizer-3.4.4-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:9cd98cdc06614a2f768d2b7286d66805f94c48cde050acdbbb7db2600ab3197e", size = 151849, upload-time = "2025-10-14T04:40:20.607Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/b7/7a/59482e28b9981d105691e968c544cc0df3b7d6133152fb3dcdc8f135da7a/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:077fbb858e903c73f6c9db43374fd213b0b6a778106bc7032446a8e8b5b38b93", size = 151586, upload-time = "2025-10-14T04:40:21.719Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/92/59/f64ef6a1c4bdd2baf892b04cd78792ed8684fbc48d4c2afe467d96b4df57/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:244bfb999c71b35de57821b8ea746b24e863398194a4014e4c76adc2bbdfeff0", size = 145290, upload-time = "2025-10-14T04:40:23.069Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/6b/63/3bf9f279ddfa641ffa1962b0db6a57a9c294361cc2f5fcac997049a00e9c/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:64b55f9dce520635f018f907ff1b0df1fdc31f2795a922fb49dd14fbcdf48c84", size = 163663, upload-time = "2025-10-14T04:40:24.17Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ed/09/c9e38fc8fa9e0849b172b581fd9803bdf6e694041127933934184e19f8c3/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:faa3a41b2b66b6e50f84ae4a68c64fcd0c44355741c6374813a800cd6695db9e", size = 151964, upload-time = "2025-10-14T04:40:25.368Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/d2/d1/d28b747e512d0da79d8b6a1ac18b7ab2ecfd81b2944c4c710e166d8dd09c/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:6515f3182dbe4ea06ced2d9e8666d97b46ef4c75e326b79bb624110f122551db", size = 161064, upload-time = "2025-10-14T04:40:26.806Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/bb/9a/31d62b611d901c3b9e5500c36aab0ff5eb442043fb3a1c254200d3d397d9/charset_normalizer-3.4.4-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:cc00f04ed596e9dc0da42ed17ac5e596c6ccba999ba6bd92b0e0aef2f170f2d6", size = 155015, upload-time = "2025-10-14T04:40:28.284Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/1f/f3/107e008fa2bff0c8b9319584174418e5e5285fef32f79d8ee6a430d0039c/charset_normalizer-3.4.4-cp310-cp310-win32.whl", hash = "sha256:f34be2938726fc13801220747472850852fe6b1ea75869a048d6f896838c896f", size = 99792, upload-time = "2025-10-14T04:40:29.613Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/eb/66/e396e8a408843337d7315bab30dbf106c38966f1819f123257f5520f8a96/charset_normalizer-3.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:a61900df84c667873b292c3de315a786dd8dac506704dea57bc957bd31e22c7d", size = 107198, upload-time = "2025-10-14T04:40:30.644Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/b5/58/01b4f815bf0312704c267f2ccb6e5d42bcc7752340cd487bc9f8c3710597/charset_normalizer-3.4.4-cp310-cp310-win_arm64.whl", hash = "sha256:cead0978fc57397645f12578bfd2d5ea9138ea0fac82b2f63f7f7c6877986a69", size = 100262, upload-time = "2025-10-14T04:40:32.108Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ed/27/c6491ff4954e58a10f69ad90aca8a1b6fe9c5d3c6f380907af3c37435b59/charset_normalizer-3.4.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6e1fcf0720908f200cd21aa4e6750a48ff6ce4afe7ff5a79a90d5ed8a08296f8", size = 206988, upload-time = "2025-10-14T04:40:33.79Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/94/59/2e87300fe67ab820b5428580a53cad894272dbb97f38a7a814a2a1ac1011/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5f819d5fe9234f9f82d75bdfa9aef3a3d72c4d24a6e57aeaebba32a704553aa0", size = 147324, upload-time = "2025-10-14T04:40:34.961Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/07/fb/0cf61dc84b2b088391830f6274cb57c82e4da8bbc2efeac8c025edb88772/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:a59cb51917aa591b1c4e6a43c132f0cdc3c76dbad6155df4e28ee626cc77a0a3", size = 142742, upload-time = "2025-10-14T04:40:36.105Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/62/8b/171935adf2312cd745d290ed93cf16cf0dfe320863ab7cbeeae1dcd6535f/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:8ef3c867360f88ac904fd3f5e1f902f13307af9052646963ee08ff4f131adafc", size = 160863, upload-time = "2025-10-14T04:40:37.188Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/09/73/ad875b192bda14f2173bfc1bc9a55e009808484a4b256748d931b6948442/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d9e45d7faa48ee908174d8fe84854479ef838fc6a705c9315372eacbc2f02897", size = 157837, upload-time = "2025-10-14T04:40:38.435Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/6d/fc/de9cce525b2c5b94b47c70a4b4fb19f871b24995c728e957ee68ab1671ea/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:840c25fb618a231545cbab0564a799f101b63b9901f2569faecd6b222ac72381", size = 151550, upload-time = "2025-10-14T04:40:40.053Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/55/c2/43edd615fdfba8c6f2dfbd459b25a6b3b551f24ea21981e23fb768503ce1/charset_normalizer-3.4.4-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ca5862d5b3928c4940729dacc329aa9102900382fea192fc5e52eb69d6093815", size = 149162, upload-time = "2025-10-14T04:40:41.163Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/03/86/bde4ad8b4d0e9429a4e82c1e8f5c659993a9a863ad62c7df05cf7b678d75/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d9c7f57c3d666a53421049053eaacdd14bbd0a528e2186fcb2e672effd053bb0", size = 150019, upload-time = "2025-10-14T04:40:42.276Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/1f/86/a151eb2af293a7e7bac3a739b81072585ce36ccfb4493039f49f1d3cae8c/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:277e970e750505ed74c832b4bf75dac7476262ee2a013f5574dd49075879e161", size = 143310, upload-time = "2025-10-14T04:40:43.439Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/b5/fe/43dae6144a7e07b87478fdfc4dbe9efd5defb0e7ec29f5f58a55aeef7bf7/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:31fd66405eaf47bb62e8cd575dc621c56c668f27d46a61d975a249930dd5e2a4", size = 162022, upload-time = "2025-10-14T04:40:44.547Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/80/e6/7aab83774f5d2bca81f42ac58d04caf44f0cc2b65fc6db2b3b2e8a05f3b3/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:0d3d8f15c07f86e9ff82319b3d9ef6f4bf907608f53fe9d92b28ea9ae3d1fd89", size = 149383, upload-time = "2025-10-14T04:40:46.018Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/4f/e8/b289173b4edae05c0dde07f69f8db476a0b511eac556dfe0d6bda3c43384/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:9f7fcd74d410a36883701fafa2482a6af2ff5ba96b9a620e9e0721e28ead5569", size = 159098, upload-time = "2025-10-14T04:40:47.081Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/d8/df/fe699727754cae3f8478493c7f45f777b17c3ef0600e28abfec8619eb49c/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ebf3e58c7ec8a8bed6d66a75d7fb37b55e5015b03ceae72a8e7c74495551e224", size = 152991, upload-time = "2025-10-14T04:40:48.246Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/1a/86/584869fe4ddb6ffa3bd9f491b87a01568797fb9bd8933f557dba9771beaf/charset_normalizer-3.4.4-cp311-cp311-win32.whl", hash = "sha256:eecbc200c7fd5ddb9a7f16c7decb07b566c29fa2161a16cf67b8d068bd21690a", size = 99456, upload-time = "2025-10-14T04:40:49.376Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/65/f6/62fdd5feb60530f50f7e38b4f6a1d5203f4d16ff4f9f0952962c044e919a/charset_normalizer-3.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:5ae497466c7901d54b639cf42d5b8c1b6a4fead55215500d2f486d34db48d016", size = 106978, upload-time = "2025-10-14T04:40:50.844Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/7a/9d/0710916e6c82948b3be62d9d398cb4fcf4e97b56d6a6aeccd66c4b2f2bd5/charset_normalizer-3.4.4-cp311-cp311-win_arm64.whl", hash = "sha256:65e2befcd84bc6f37095f5961e68a6f077bf44946771354a28ad434c2cce0ae1", size = 99969, upload-time = "2025-10-14T04:40:52.272Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f3/85/1637cd4af66fa687396e757dec650f28025f2a2f5a5531a3208dc0ec43f2/charset_normalizer-3.4.4-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:0a98e6759f854bd25a58a73fa88833fba3b7c491169f86ce1180c948ab3fd394", size = 208425, upload-time = "2025-10-14T04:40:53.353Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/9d/6a/04130023fef2a0d9c62d0bae2649b69f7b7d8d24ea5536feef50551029df/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b5b290ccc2a263e8d185130284f8501e3e36c5e02750fc6b6bdeb2e9e96f1e25", size = 148162, upload-time = "2025-10-14T04:40:54.558Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/78/29/62328d79aa60da22c9e0b9a66539feae06ca0f5a4171ac4f7dc285b83688/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74bb723680f9f7a6234dcf67aea57e708ec1fbdf5699fb91dfd6f511b0a320ef", size = 144558, upload-time = "2025-10-14T04:40:55.677Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/86/bb/b32194a4bf15b88403537c2e120b817c61cd4ecffa9b6876e941c3ee38fe/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f1e34719c6ed0b92f418c7c780480b26b5d9c50349e9a9af7d76bf757530350d", size = 161497, upload-time = "2025-10-14T04:40:57.217Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/19/89/a54c82b253d5b9b111dc74aca196ba5ccfcca8242d0fb64146d4d3183ff1/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2437418e20515acec67d86e12bf70056a33abdacb5cb1655042f6538d6b085a8", size = 159240, upload-time = "2025-10-14T04:40:58.358Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/c0/10/d20b513afe03acc89ec33948320a5544d31f21b05368436d580dec4e234d/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:11d694519d7f29d6cd09f6ac70028dba10f92f6cdd059096db198c283794ac86", size = 153471, upload-time = "2025-10-14T04:40:59.468Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/61/fa/fbf177b55bdd727010f9c0a3c49eefa1d10f960e5f09d1d887bf93c2e698/charset_normalizer-3.4.4-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ac1c4a689edcc530fc9d9aa11f5774b9e2f33f9a0c6a57864e90908f5208d30a", size = 150864, upload-time = "2025-10-14T04:41:00.623Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/05/12/9fbc6a4d39c0198adeebbde20b619790e9236557ca59fc40e0e3cebe6f40/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:21d142cc6c0ec30d2efee5068ca36c128a30b0f2c53c1c07bd78cb6bc1d3be5f", size = 150647, upload-time = "2025-10-14T04:41:01.754Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ad/1f/6a9a593d52e3e8c5d2b167daf8c6b968808efb57ef4c210acb907c365bc4/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:5dbe56a36425d26d6cfb40ce79c314a2e4dd6211d51d6d2191c00bed34f354cc", size = 145110, upload-time = "2025-10-14T04:41:03.231Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/30/42/9a52c609e72471b0fc54386dc63c3781a387bb4fe61c20231a4ebcd58bdd/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:5bfbb1b9acf3334612667b61bd3002196fe2a1eb4dd74d247e0f2a4d50ec9bbf", size = 162839, upload-time = "2025-10-14T04:41:04.715Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/c4/5b/c0682bbf9f11597073052628ddd38344a3d673fda35a36773f7d19344b23/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:d055ec1e26e441f6187acf818b73564e6e6282709e9bcb5b63f5b23068356a15", size = 150667, upload-time = "2025-10-14T04:41:05.827Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/e4/24/a41afeab6f990cf2daf6cb8c67419b63b48cf518e4f56022230840c9bfb2/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:af2d8c67d8e573d6de5bc30cdb27e9b95e49115cd9baad5ddbd1a6207aaa82a9", size = 160535, upload-time = "2025-10-14T04:41:06.938Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/2a/e5/6a4ce77ed243c4a50a1fecca6aaaab419628c818a49434be428fe24c9957/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:780236ac706e66881f3b7f2f32dfe90507a09e67d1d454c762cf642e6e1586e0", size = 154816, upload-time = "2025-10-14T04:41:08.101Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/a8/ef/89297262b8092b312d29cdb2517cb1237e51db8ecef2e9af5edbe7b683b1/charset_normalizer-3.4.4-cp312-cp312-win32.whl", hash = "sha256:5833d2c39d8896e4e19b689ffc198f08ea58116bee26dea51e362ecc7cd3ed26", size = 99694, upload-time = "2025-10-14T04:41:09.23Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/3d/2d/1e5ed9dd3b3803994c155cd9aacb60c82c331bad84daf75bcb9c91b3295e/charset_normalizer-3.4.4-cp312-cp312-win_amd64.whl", hash = "sha256:a79cfe37875f822425b89a82333404539ae63dbdddf97f84dcbc3d339aae9525", size = 107131, upload-time = "2025-10-14T04:41:10.467Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/d0/d9/0ed4c7098a861482a7b6a95603edce4c0d9db2311af23da1fb2b75ec26fc/charset_normalizer-3.4.4-cp312-cp312-win_arm64.whl", hash = "sha256:376bec83a63b8021bb5c8ea75e21c4ccb86e7e45ca4eb81146091b56599b80c3", size = 100390, upload-time = "2025-10-14T04:41:11.915Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/0a/4c/925909008ed5a988ccbb72dcc897407e5d6d3bd72410d69e051fc0c14647/charset_normalizer-3.4.4-py3-none-any.whl", hash = "sha256:7a32c560861a02ff789ad905a2fe94e3f840803362c84fecf1851cb4cf3dc37f", size = 53402, upload-time = "2025-10-14T04:42:31.76Z" },
]
[[package]]
name = "colorama"
version = "0.4.6"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
]
[[package]]
name = "exceptiongroup"
version = "1.3.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/50/79/66800aadf48771f6b62f7eb014e352e5d06856655206165d775e675a02c9/exceptiongroup-1.3.1.tar.gz", hash = "sha256:8b412432c6055b0b7d14c310000ae93352ed6754f70fa8f7c34141f91c4e3219", size = 30371, upload-time = "2025-11-21T23:01:54.787Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/8a/0e/97c33bf5009bdbac74fd2beace167cab3f978feb69cc36f1ef79360d6c4e/exceptiongroup-1.3.1-py3-none-any.whl", hash = "sha256:a7a39a3bd276781e98394987d3a5701d0c4edffb633bb7a5144577f82c773598", size = 16740, upload-time = "2025-11-21T23:01:53.443Z" },
]
[[package]]
name = "idna"
version = "3.11"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/6f/6d/0703ccc57f3a7233505399edb88de3cbd678da106337b9fcde432b65ed60/idna-3.11.tar.gz", hash = "sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902", size = 194582, upload-time = "2025-10-12T14:55:20.501Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
]
[[package]]
name = "iniconfig"
version = "2.3.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
]
[[package]]
name = "lark"
version = "1.3.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/da/34/28fff3ab31ccff1fd4f6c7c7b0ceb2b6968d8ea4950663eadcb5720591a0/lark-1.3.1.tar.gz", hash = "sha256:b426a7a6d6d53189d318f2b6236ab5d6429eaf09259f1ca33eb716eed10d2905", size = 382732, upload-time = "2025-10-27T18:25:56.653Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/82/3d/14ce75ef66813643812f3093ab17e46d3a206942ce7376d31ec2d36229e7/lark-1.3.1-py3-none-any.whl", hash = "sha256:c629b661023a014c37da873b4ff58a817398d12635d3bbb2c5a03be7fe5d1e12", size = 113151, upload-time = "2025-10-27T18:25:54.882Z" },
]
[[package]]
name = "packaging"
version = "25.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/a1/d4/1fc4078c65507b51b96ca8f8c3ba19e6a61c8253c72794544580a7b6c24d/packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f", size = 165727, upload-time = "2025-04-19T11:48:59.673Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484", size = 66469, upload-time = "2025-04-19T11:48:57.875Z" },
]
[[package]]
name = "pluggy"
version = "1.6.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
]
[[package]]
name = "pycryptodomex"
version = "3.23.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/c9/85/e24bf90972a30b0fcd16c73009add1d7d7cd9140c2498a68252028899e41/pycryptodomex-3.23.0.tar.gz", hash = "sha256:71909758f010c82bc99b0abf4ea12012c98962fbf0583c2164f8b84533c2e4da", size = 4922157, upload-time = "2025-05-17T17:23:41.434Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/dd/9c/1a8f35daa39784ed8adf93a694e7e5dc15c23c741bbda06e1d45f8979e9e/pycryptodomex-3.23.0-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:06698f957fe1ab229a99ba2defeeae1c09af185baa909a31a5d1f9d42b1aaed6", size = 2499240, upload-time = "2025-05-17T17:22:46.953Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/7a/62/f5221a191a97157d240cf6643747558759126c76ee92f29a3f4aee3197a5/pycryptodomex-3.23.0-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:b2c2537863eccef2d41061e82a881dcabb04944c5c06c5aa7110b577cc487545", size = 1644042, upload-time = "2025-05-17T17:22:49.098Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/8c/fd/5a054543c8988d4ed7b612721d7e78a4b9bf36bc3c5ad45ef45c22d0060e/pycryptodomex-3.23.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:43c446e2ba8df8889e0e16f02211c25b4934898384c1ec1ec04d7889c0333587", size = 2186227, upload-time = "2025-05-17T17:22:51.139Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/c8/a9/8862616a85cf450d2822dbd4fff1fcaba90877907a6ff5bc2672cafe42f8/pycryptodomex-3.23.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f489c4765093fb60e2edafdf223397bc716491b2b69fe74367b70d6999257a5c", size = 2272578, upload-time = "2025-05-17T17:22:53.676Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/46/9f/bda9c49a7c1842820de674ab36c79f4fbeeee03f8ff0e4f3546c3889076b/pycryptodomex-3.23.0-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bdc69d0d3d989a1029df0eed67cc5e8e5d968f3724f4519bd03e0ec68df7543c", size = 2312166, upload-time = "2025-05-17T17:22:56.585Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/03/cc/870b9bf8ca92866ca0186534801cf8d20554ad2a76ca959538041b7a7cf4/pycryptodomex-3.23.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:6bbcb1dd0f646484939e142462d9e532482bc74475cecf9c4903d4e1cd21f003", size = 2185467, upload-time = "2025-05-17T17:22:59.237Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/96/e3/ce9348236d8e669fea5dd82a90e86be48b9c341210f44e25443162aba187/pycryptodomex-3.23.0-cp37-abi3-musllinux_1_2_i686.whl", hash = "sha256:8a4fcd42ccb04c31268d1efeecfccfd1249612b4de6374205376b8f280321744", size = 2346104, upload-time = "2025-05-17T17:23:02.112Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/a5/e9/e869bcee87beb89040263c416a8a50204f7f7a83ac11897646c9e71e0daf/pycryptodomex-3.23.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:55ccbe27f049743a4caf4f4221b166560d3438d0b1e5ab929e07ae1702a4d6fd", size = 2271038, upload-time = "2025-05-17T17:23:04.872Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/8d/67/09ee8500dd22614af5fbaa51a4aee6e342b5fa8aecf0a6cb9cbf52fa6d45/pycryptodomex-3.23.0-cp37-abi3-win32.whl", hash = "sha256:189afbc87f0b9f158386bf051f720e20fa6145975f1e76369303d0f31d1a8d7c", size = 1771969, upload-time = "2025-05-17T17:23:07.115Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/69/96/11f36f71a865dd6df03716d33bd07a67e9d20f6b8d39820470b766af323c/pycryptodomex-3.23.0-cp37-abi3-win_amd64.whl", hash = "sha256:52e5ca58c3a0b0bd5e100a9fbc8015059b05cffc6c66ce9d98b4b45e023443b9", size = 1803124, upload-time = "2025-05-17T17:23:09.267Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f9/93/45c1cdcbeb182ccd2e144c693eaa097763b08b38cded279f0053ed53c553/pycryptodomex-3.23.0-cp37-abi3-win_arm64.whl", hash = "sha256:02d87b80778c171445d67e23d1caef279bf4b25c3597050ccd2e13970b57fd51", size = 1707161, upload-time = "2025-05-17T17:23:11.414Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f3/b8/3e76d948c3c4ac71335bbe75dac53e154b40b0f8f1f022dfa295257a0c96/pycryptodomex-3.23.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:ebfff755c360d674306e5891c564a274a47953562b42fb74a5c25b8fc1fb1cb5", size = 1627695, upload-time = "2025-05-17T17:23:17.38Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/6a/cf/80f4297a4820dfdfd1c88cf6c4666a200f204b3488103d027b5edd9176ec/pycryptodomex-3.23.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eca54f4bb349d45afc17e3011ed4264ef1cc9e266699874cdd1349c504e64798", size = 1675772, upload-time = "2025-05-17T17:23:19.202Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/d1/42/1e969ee0ad19fe3134b0e1b856c39bd0b70d47a4d0e81c2a8b05727394c9/pycryptodomex-3.23.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2596e643d4365e14d0879dc5aafe6355616c61c2176009270f3048f6d9a61f", size = 1668083, upload-time = "2025-05-17T17:23:21.867Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/6e/c3/1de4f7631fea8a992a44ba632aa40e0008764c0fb9bf2854b0acf78c2cf2/pycryptodomex-3.23.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fdfac7cda115bca3a5abb2f9e43bc2fb66c2b65ab074913643803ca7083a79ea", size = 1706056, upload-time = "2025-05-17T17:23:24.031Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f2/5f/af7da8e6f1e42b52f44a24d08b8e4c726207434e2593732d39e7af5e7256/pycryptodomex-3.23.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:14c37aaece158d0ace436f76a7bb19093db3b4deade9797abfc39ec6cd6cc2fe", size = 1806478, upload-time = "2025-05-17T17:23:26.066Z" },
]
[[package]]
name = "pygments"
version = "2.19.2"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
]
[[package]]
name = "pytest"
version = "9.0.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "exceptiongroup", marker = "python_full_version < '3.11'" },
{ name = "iniconfig" },
{ name = "packaging" },
{ name = "pluggy" },
{ name = "pygments" },
{ name = "tomli", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/07/56/f013048ac4bc4c1d9be45afd4ab209ea62822fb1598f40687e6bf45dcea4/pytest-9.0.1.tar.gz", hash = "sha256:3e9c069ea73583e255c3b21cf46b8d3c56f6e3a1a8f6da94ccb0fcf57b9d73c8", size = 1564125, upload-time = "2025-11-12T13:05:09.333Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/0b/8b/6300fb80f858cda1c51ffa17075df5d846757081d11ab4aa35cef9e6258b/pytest-9.0.1-py3-none-any.whl", hash = "sha256:67be0030d194df2dfa7b556f2e56fb3c3315bd5c8822c6951162b92b32ce7dad", size = 373668, upload-time = "2025-11-12T13:05:07.379Z" },
]
[[package]]
name = "ragflow-cli"
version = "0.23.1"
source = { virtual = "." }
dependencies = [
{ name = "beartype" },
{ name = "lark" },
{ name = "pycryptodomex" },
{ name = "requests" },
]
[package.dev-dependencies]
test = [
{ name = "pytest" },
{ name = "requests" },
{ name = "requests-toolbelt" },
]
[package.metadata]
requires-dist = [
{ name = "beartype", specifier = ">=0.20.0,<1.0.0" },
{ name = "lark", specifier = ">=1.1.0" },
{ name = "pycryptodomex", specifier = ">=3.10.0" },
{ name = "requests", specifier = ">=2.30.0,<3.0.0" },
]
[package.metadata.requires-dev]
test = [
{ name = "pytest", specifier = ">=8.3.5" },
{ name = "requests", specifier = ">=2.32.3" },
{ name = "requests-toolbelt", specifier = ">=1.0.0" },
]
[[package]]
name = "requests"
version = "2.32.5"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "certifi" },
{ name = "charset-normalizer" },
{ name = "idna" },
{ name = "urllib3" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/c9/74/b3ff8e6c8446842c3f5c837e9c3dfcfe2018ea6ecef224c710c85ef728f4/requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf", size = 134517, upload-time = "2025-08-18T20:46:02.573Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6", size = 64738, upload-time = "2025-08-18T20:46:00.542Z" },
]
[[package]]
name = "requests-toolbelt"
version = "1.0.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "requests" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/f3/61/d7545dafb7ac2230c70d38d31cbfe4cc64f7144dc41f6e4e4b78ecd9f5bb/requests-toolbelt-1.0.0.tar.gz", hash = "sha256:7681a0a3d047012b5bdc0ee37d7f8f07ebe76ab08caeccfc3921ce23c88d5bc6", size = 206888, upload-time = "2023-05-01T04:11:33.229Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/3f/51/d4db610ef29373b879047326cbf6fa98b6c1969d6f6dc423279de2b1be2c/requests_toolbelt-1.0.0-py2.py3-none-any.whl", hash = "sha256:cccfdd665f0a24fcf4726e690f65639d272bb0637b9b92dfd91a5568ccf6bd06", size = 54481, upload-time = "2023-05-01T04:11:28.427Z" },
]
[[package]]
name = "tomli"
version = "2.3.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/52/ed/3f73f72945444548f33eba9a87fc7a6e969915e7b1acc8260b30e1f76a2f/tomli-2.3.0.tar.gz", hash = "sha256:64be704a875d2a59753d80ee8a533c3fe183e3f06807ff7dc2232938ccb01549", size = 17392, upload-time = "2025-10-08T22:01:47.119Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/b3/2e/299f62b401438d5fe1624119c723f5d877acc86a4c2492da405626665f12/tomli-2.3.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:88bd15eb972f3664f5ed4b57c1634a97153b4bac4479dcb6a495f41921eb7f45", size = 153236, upload-time = "2025-10-08T22:01:00.137Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/86/7f/d8fffe6a7aefdb61bced88fcb5e280cfd71e08939da5894161bd71bea022/tomli-2.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:883b1c0d6398a6a9d29b508c331fa56adbcdff647f6ace4dfca0f50e90dfd0ba", size = 148084, upload-time = "2025-10-08T22:01:01.63Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/47/5c/24935fb6a2ee63e86d80e4d3b58b222dafaf438c416752c8b58537c8b89a/tomli-2.3.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d1381caf13ab9f300e30dd8feadb3de072aeb86f1d34a8569453ff32a7dea4bf", size = 234832, upload-time = "2025-10-08T22:01:02.543Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/89/da/75dfd804fc11e6612846758a23f13271b76d577e299592b4371a4ca4cd09/tomli-2.3.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a0e285d2649b78c0d9027570d4da3425bdb49830a6156121360b3f8511ea3441", size = 242052, upload-time = "2025-10-08T22:01:03.836Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/70/8c/f48ac899f7b3ca7eb13af73bacbc93aec37f9c954df3c08ad96991c8c373/tomli-2.3.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:0a154a9ae14bfcf5d8917a59b51ffd5a3ac1fd149b71b47a3a104ca4edcfa845", size = 239555, upload-time = "2025-10-08T22:01:04.834Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ba/28/72f8afd73f1d0e7829bfc093f4cb98ce0a40ffc0cc997009ee1ed94ba705/tomli-2.3.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:74bf8464ff93e413514fefd2be591c3b0b23231a77f901db1eb30d6f712fc42c", size = 245128, upload-time = "2025-10-08T22:01:05.84Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/b6/eb/a7679c8ac85208706d27436e8d421dfa39d4c914dcf5fa8083a9305f58d9/tomli-2.3.0-cp311-cp311-win32.whl", hash = "sha256:00b5f5d95bbfc7d12f91ad8c593a1659b6387b43f054104cda404be6bda62456", size = 96445, upload-time = "2025-10-08T22:01:06.896Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/0a/fe/3d3420c4cb1ad9cb462fb52967080575f15898da97e21cb6f1361d505383/tomli-2.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:4dc4ce8483a5d429ab602f111a93a6ab1ed425eae3122032db7e9acf449451be", size = 107165, upload-time = "2025-10-08T22:01:08.107Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ff/b7/40f36368fcabc518bb11c8f06379a0fd631985046c038aca08c6d6a43c6e/tomli-2.3.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:d7d86942e56ded512a594786a5ba0a5e521d02529b3826e7761a05138341a2ac", size = 154891, upload-time = "2025-10-08T22:01:09.082Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f9/3f/d9dd692199e3b3aab2e4e4dd948abd0f790d9ded8cd10cbaae276a898434/tomli-2.3.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:73ee0b47d4dad1c5e996e3cd33b8a76a50167ae5f96a2607cbe8cc773506ab22", size = 148796, upload-time = "2025-10-08T22:01:10.266Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/60/83/59bff4996c2cf9f9387a0f5a3394629c7efa5ef16142076a23a90f1955fa/tomli-2.3.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:792262b94d5d0a466afb5bc63c7daa9d75520110971ee269152083270998316f", size = 242121, upload-time = "2025-10-08T22:01:11.332Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/45/e5/7c5119ff39de8693d6baab6c0b6dcb556d192c165596e9fc231ea1052041/tomli-2.3.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4f195fe57ecceac95a66a75ac24d9d5fbc98ef0962e09b2eddec5d39375aae52", size = 250070, upload-time = "2025-10-08T22:01:12.498Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/45/12/ad5126d3a278f27e6701abde51d342aa78d06e27ce2bb596a01f7709a5a2/tomli-2.3.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e31d432427dcbf4d86958c184b9bfd1e96b5b71f8eb17e6d02531f434fd335b8", size = 245859, upload-time = "2025-10-08T22:01:13.551Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/fb/a1/4d6865da6a71c603cfe6ad0e6556c73c76548557a8d658f9e3b142df245f/tomli-2.3.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:7b0882799624980785240ab732537fcfc372601015c00f7fc367c55308c186f6", size = 250296, upload-time = "2025-10-08T22:01:14.614Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/a0/b7/a7a7042715d55c9ba6e8b196d65d2cb662578b4d8cd17d882d45322b0d78/tomli-2.3.0-cp312-cp312-win32.whl", hash = "sha256:ff72b71b5d10d22ecb084d345fc26f42b5143c5533db5e2eaba7d2d335358876", size = 97124, upload-time = "2025-10-08T22:01:15.629Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/06/1e/f22f100db15a68b520664eb3328fb0ae4e90530887928558112c8d1f4515/tomli-2.3.0-cp312-cp312-win_amd64.whl", hash = "sha256:1cb4ed918939151a03f33d4242ccd0aa5f11b3547d0cf30f7c74a408a5b99878", size = 107698, upload-time = "2025-10-08T22:01:16.51Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/77/b8/0135fadc89e73be292b473cb820b4f5a08197779206b33191e801feeae40/tomli-2.3.0-py3-none-any.whl", hash = "sha256:e95b1af3c5b07d9e643909b5abbec77cd9f1217e6d0bca72b0234736b9fb1f1b", size = 14408, upload-time = "2025-10-08T22:01:46.04Z" },
]
[[package]]
name = "typing-extensions"
version = "4.15.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
]
[[package]]
name = "urllib3"
version = "2.5.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/15/22/9ee70a2574a4f4599c47dd506532914ce044817c7752a79b6a51286319bc/urllib3-2.5.0.tar.gz", hash = "sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760", size = 393185, upload-time = "2025-06-18T14:07:41.644Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/a7/c2/fe1e52489ae3122415c51f387e221dd0773709bad6c6cdaa599e8a2c5185/urllib3-2.5.0-py3-none-any.whl", hash = "sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc", size = 129795, upload-time = "2025-06-18T14:07:40.39Z" },
]

View File

@@ -1,46 +1,14 @@
-#
-# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
 import logging
 import threading
 from enum import Enum
 from pydantic import BaseModel
 from typing import Any
-from common.config_utils import read_config
+from api.utils.configs import read_config
 from urllib.parse import urlparse
-class BaseConfig(BaseModel):
-    id: int
-    name: str
-    host: str
-    port: int
-    service_type: str
-    detail_func_name: str
-    def to_dict(self) -> dict[str, Any]:
-        return {'id': self.id, 'name': self.name, 'host': self.host, 'port': self.port,
-                'service_type': self.service_type}
 class ServiceConfigs:
-    configs = list[BaseConfig]
     def __init__(self):
         self.configs = []
         self.lock = threading.Lock()
@@ -58,6 +26,17 @@ class ServiceType(Enum):
     FILE_STORE = "file_store"
+class BaseConfig(BaseModel):
+    id: int
+    name: str
+    host: str
+    port: int
+    service_type: str
+    def to_dict(self) -> dict[str, Any]:
+        return {'id': self.id, 'name': self.name, 'host': self.host, 'port': self.port, 'service_type': self.service_type}
 class MetaConfig(BaseConfig):
     meta_type: str
@@ -183,13 +162,11 @@ class RAGFlowServerConfig(BaseConfig):
 class TaskExecutorConfig(BaseConfig):
-    message_queue_type: str
     def to_dict(self) -> dict[str, Any]:
         result = super().to_dict()
         if 'extra' not in result:
             result['extra'] = dict()
-        result['extra']['message_queue_type'] = self.message_queue_type
         return result
@@ -227,14 +204,12 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
     ragflow_count = 0
     id_count = 0
     for k, v in raw_configs.items():
-        match k:
+        match (k):
             case "ragflow":
                 name: str = f'ragflow_{ragflow_count}'
                 host: str = v['host']
                 http_port: int = v['http_port']
-                config = RAGFlowServerConfig(id=id_count, name=name, host=host, port=http_port,
-                                             service_type="ragflow_server",
-                                             detail_func_name="check_ragflow_server_alive")
+                config = RAGFlowServerConfig(id=id_count, name=name, host=host, port=http_port, service_type="ragflow_server")
                 configurations.append(config)
                 id_count += 1
             case "es":
@@ -247,8 +222,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
                 password: str = v.get('password')
                 config = ElasticsearchConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval",
                                              retrieval_type="elasticsearch",
-                                             username=username, password=password,
-                                             detail_func_name="get_es_cluster_stats")
+                                             username=username, password=password)
                 configurations.append(config)
                 id_count += 1
@@ -259,9 +233,8 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
                 host = parts[0]
                 port = int(parts[1])
                 database: str = v.get('db_name', 'default_db')
-                config = InfinityConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval",
-                                        retrieval_type="infinity",
-                                        db_name=database, detail_func_name="get_infinity_status")
+                config = InfinityConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval", retrieval_type="infinity",
+                                        db_name=database)
                 configurations.append(config)
                 id_count += 1
             case "minio":
@@ -272,9 +245,8 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
                 port = int(parts[1])
                 user = v.get('user')
                 password = v.get('password')
-                config = MinioConfig(id=id_count, name=name, host=host, port=port, user=user, password=password,
-                                     service_type="file_store",
-                                     store_type="minio", detail_func_name="check_minio_alive")
+                config = MinioConfig(id=id_count, name=name, host=host, port=port, user=user, password=password, service_type="file_store",
+                                     store_type="minio")
                 configurations.append(config)
                 id_count += 1
             case "redis":
@@ -286,7 +258,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
                 password = v.get('password')
                 db: int = v.get('db')
                 config = RedisConfig(id=id_count, name=name, host=host, port=port, password=password, database=db,
-                                     service_type="message_queue", mq_type="redis", detail_func_name="get_redis_info")
+                                     service_type="message_queue", mq_type="redis")
                 configurations.append(config)
                 id_count += 1
             case "mysql":
@@ -296,20 +268,11 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
                 username = v.get('user')
                 password = v.get('password')
                 config = MySQLConfig(id=id_count, name=name, host=host, port=port, username=username, password=password,
-                                     service_type="meta_data", meta_type="mysql", detail_func_name="get_mysql_status")
+                                     service_type="meta_data", meta_type="mysql")
                 configurations.append(config)
                 id_count += 1
             case "admin":
                 pass
-            case "task_executor":
-                name: str = 'task_executor'
-                host: str = v.get('host', '')
-                port: int = v.get('port', 0)
-                message_queue_type: str = v.get('message_queue_type')
-                config = TaskExecutorConfig(id=id_count, name=name, host=host, port=port, message_queue_type=message_queue_type,
-                                            service_type="task_executor", detail_func_name="check_task_executor_alive")
-                configurations.append(config)
-                id_count += 1
             case _:
                 logging.warning(f"Unknown configuration key: {k}")
                 continue
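
For orientation, here is a minimal sketch of how the simplified configuration classes behave after this change. The ServiceConfigs container and the YAML loading are elided, and treating RAGFlowServerConfig as a plain BaseConfig subclass is an assumption for illustration:

from typing import Any
from pydantic import BaseModel

class BaseConfig(BaseModel):
    id: int
    name: str
    host: str
    port: int
    service_type: str

    def to_dict(self) -> dict[str, Any]:
        return {'id': self.id, 'name': self.name, 'host': self.host, 'port': self.port, 'service_type': self.service_type}

class RAGFlowServerConfig(BaseConfig):  # assumed plain subclass for this sketch
    pass

# mirrors the "ragflow" branch of load_configurations after the refactor
config = RAGFlowServerConfig(id=0, name='ragflow_0', host='0.0.0.0', port=9380, service_type='ragflow_server')
print(config.to_dict())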

15
admin/responses.py Normal file
View File

@ -0,0 +1,15 @@
from flask import jsonify
def success_response(data=None, message="Success", code=0):
return jsonify({
"code": code,
"message": message,
"data": data
}), 200
def error_response(message="Error", code=-1, data=None):
return jsonify({
"code": code,
"message": message,
"data": data
}), 400
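
Both helpers return a (jsonify(...), status) tuple, which Flask accepts directly as a view return value. A minimal usage sketch follows; the route and app wiring here are hypothetical, not part of the diff:

from flask import Flask
from responses import success_response, error_response

app = Flask(__name__)

@app.route('/api/v1/admin/echo', methods=['GET'])  # hypothetical route for illustration
def echo():
    try:
        return success_response({"pong": True}, "OK")
    except Exception as e:
        return error_response(str(e), 500)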

190
admin/routes.py Normal file
View File

@ -0,0 +1,190 @@
from flask import Blueprint, request
from auth import login_verify
from responses import success_response, error_response
from services import UserMgr, ServiceMgr, UserServiceMgr
from exceptions import AdminException
admin_bp = Blueprint('admin', __name__, url_prefix='/api/v1/admin')
@admin_bp.route('/auth', methods=['GET'])
@login_verify
def auth_admin():
try:
return success_response(None, "Admin is authorized", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users', methods=['GET'])
@login_verify
def list_users():
try:
users = UserMgr.get_all_users()
return success_response(users, "Get all users", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users', methods=['POST'])
@login_verify
def create_user():
try:
data = request.get_json()
if not data or 'username' not in data or 'password' not in data:
return error_response("Username and password are required", 400)
username = data['username']
password = data['password']
role = data.get('role', 'user')
res = UserMgr.create_user(username, password, role)
if res["success"]:
user_info = res["user_info"]
user_info.pop("password") # do not return password
return success_response(user_info, "User created successfully")
else:
return error_response("create user failed")
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e))
@admin_bp.route('/users/<username>', methods=['DELETE'])
@login_verify
def delete_user(username):
try:
res = UserMgr.delete_user(username)
if res["success"]:
return success_response(None, res["message"])
else:
return error_response(res["message"])
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/password', methods=['PUT'])
@login_verify
def change_password(username):
try:
data = request.get_json()
if not data or 'new_password' not in data:
return error_response("New password is required", 400)
new_password = data['new_password']
msg = UserMgr.update_user_password(username, new_password)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/activate', methods=['PUT'])
@login_verify
def alter_user_activate_status(username):
try:
data = request.get_json()
if not data or 'activate_status' not in data:
return error_response("Activation status is required", 400)
activate_status = data['activate_status']
msg = UserMgr.update_user_activate_status(username, activate_status)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>', methods=['GET'])
@login_verify
def get_user_details(username):
try:
user_details = UserMgr.get_user_details(username)
return success_response(user_details)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/datasets', methods=['GET'])
@login_verify
def get_user_datasets(username):
try:
datasets_list = UserServiceMgr.get_user_datasets(username)
return success_response(datasets_list)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/agents', methods=['GET'])
@login_verify
def get_user_agents(username):
try:
agents_list = UserServiceMgr.get_user_agents(username)
return success_response(agents_list)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services', methods=['GET'])
@login_verify
def get_services():
try:
services = ServiceMgr.get_all_services()
return success_response(services, "Get all services", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/service_types/<service_type>', methods=['GET'])
@login_verify
def get_services_by_type(service_type):
try:
services = ServiceMgr.get_services_by_type(service_type)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services/<service_id>', methods=['GET'])
@login_verify
def get_service(service_id):
try:
services = ServiceMgr.get_service_details(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services/<service_id>', methods=['DELETE'])
@login_verify
def shutdown_service(service_id):
try:
services = ServiceMgr.shutdown_service(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/services/<service_id>', methods=['PUT'])
@login_verify
def restart_service(service_id):
try:
services = ServiceMgr.restart_service(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
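
Since login_verify (defined in auth.py, shown further down) reads request.authorization, these endpoints expect HTTP Basic credentials rather than a session token. A client-side sketch, assuming the requests package and illustrative credentials:

import requests  # assumed third-party client; not part of this service

resp = requests.get(
    "http://127.0.0.1:9381/api/v1/admin/users",    # port 9381 per admin_server.py below
    auth=("admin@ragflow.io", "<admin-password>"), # consumed by login_verify via request.authorization
)
print(resp.json())  # e.g. {"code": 0, "message": "Get all users", "data": [...]}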

View File

@@ -1,84 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import time
start_ts = time.time()
import os
import signal
import logging
import threading
import traceback
import faulthandler
from flask import Flask
from flask_login import LoginManager
from werkzeug.serving import run_simple
from routes import admin_bp
from common.log_utils import init_root_logger
from common.constants import SERVICE_CONF
from common.config_utils import show_configs
from common import settings
from config import load_configurations, SERVICE_CONFIGS
from auth import init_default_admin, setup_auth
from flask_session import Session
from common.versions import get_ragflow_version
stop_event = threading.Event()
if __name__ == '__main__':
faulthandler.enable()
init_root_logger("admin_service")
logging.info(r"""
____ ___ ______________ ___ __ _
/ __ \/ | / ____/ ____/ /___ _ __ / | ____/ /___ ___ (_)___
/ /_/ / /| |/ / __/ /_ / / __ \ | /| / / / /| |/ __ / __ `__ \/ / __ \
/ _, _/ ___ / /_/ / __/ / / /_/ / |/ |/ / / ___ / /_/ / / / / / / / / / /
/_/ |_/_/ |_\____/_/ /_/\____/|__/|__/ /_/ |_\__,_/_/ /_/ /_/_/_/ /_/
""")
app = Flask(__name__)
app.register_blueprint(admin_bp)
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
app.config["MAX_CONTENT_LENGTH"] = int(
os.environ.get("MAX_CONTENT_LENGTH", 1024 * 1024 * 1024)
)
Session(app)
logging.info(f'RAGFlow version: {get_ragflow_version()}')
show_configs()
login_manager = LoginManager()
login_manager.init_app(app)
settings.init_settings()
setup_auth(login_manager)
init_default_admin()
SERVICE_CONFIGS.configs = load_configurations(SERVICE_CONF)
try:
logging.info(f"RAGFlow admin is ready after {time.time() - start_ts}s initialization.")
run_simple(
hostname="0.0.0.0",
port=9381,
application=app,
threaded=True,
use_reloader=False,
use_debugger=True,
)
except Exception:
traceback.print_exc()
stop_event.set()
time.sleep(1)
os.kill(os.getpid(), signal.SIGKILL)
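
Stripped of the RAGFlow-specific wiring (sessions, settings, default admin), the startup shape is a plain Flask blueprint served by werkzeug's run_simple. A runnable sketch under those assumptions, with a stand-in blueprint:

from flask import Blueprint, Flask
from werkzeug.serving import run_simple

bp = Blueprint('demo', __name__, url_prefix='/api/v1/demo')  # stand-in blueprint

@bp.route('/ping')
def ping():
    return {"code": 0, "message": "PONG", "data": None}

app = Flask(__name__)
app.register_blueprint(bp)

if __name__ == '__main__':
    # mirrors the admin service: one process, threaded handlers, no reloader
    run_simple(hostname='0.0.0.0', port=9381, application=app, threaded=True, use_reloader=False)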

View File

@@ -1,188 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import uuid
from functools import wraps
from datetime import datetime
from flask import jsonify, request
from flask_login import current_user, login_user
from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer
from api.common.exceptions import AdminException, UserNotFoundError
from api.common.base64 import encode_to_base64
from api.db.services import UserService
from common.constants import ActiveEnum, StatusEnum
from api.utils.crypt import decrypt
from common.misc_utils import get_uuid
from common.time_utils import current_timestamp, datetime_format, get_format_time
from common.connection_utils import sync_construct_response
from common import settings
def setup_auth(login_manager):
@login_manager.request_loader
def load_user(web_request):
jwt = Serializer(secret_key=settings.SECRET_KEY)
authorization = web_request.headers.get("Authorization")
if authorization:
try:
access_token = str(jwt.loads(authorization))
if not access_token or not access_token.strip():
logging.warning("Authentication attempt with empty access token")
return None
# Access tokens should be UUIDs (32 hex characters)
if len(access_token.strip()) < 32:
logging.warning(f"Authentication attempt with invalid token format: {len(access_token)} chars")
return None
user = UserService.query(
access_token=access_token, status=StatusEnum.VALID.value
)
if user:
if not user[0].access_token or not user[0].access_token.strip():
logging.warning(f"User {user[0].email} has empty access_token in database")
return None
return user[0]
else:
return None
except Exception as e:
logging.warning(f"load_user got exception {e}")
return None
else:
return None
def init_default_admin():
# Verify that at least one active admin user exists. If not, create a default one.
users = UserService.query(is_superuser=True)
if not users:
default_admin = {
"id": uuid.uuid1().hex,
"password": encode_to_base64("admin"),
"nickname": "admin",
"is_superuser": True,
"email": "admin@ragflow.io",
"creator": "system",
"status": "1",
}
if not UserService.save(**default_admin):
raise AdminException("Can't init admin.", 500)
elif not any([u.is_active == ActiveEnum.ACTIVE.value for u in users]):
raise AdminException("No active admin. Please update 'is_active' in db manually.", 500)
def check_admin_auth(func):
@wraps(func)
def wrapper(*args, **kwargs):
user = UserService.filter_by_id(current_user.id)
if not user:
raise UserNotFoundError(current_user.email)
if not user.is_superuser:
raise AdminException("Not admin", 403)
if user.is_active == ActiveEnum.INACTIVE.value:
raise AdminException(f"User {current_user.email} inactive", 403)
return func(*args, **kwargs)
return wrapper
def login_admin(email: str, password: str):
"""
:param email: admin email
:param password: string before decrypt
"""
users = UserService.query(email=email)
if not users:
raise UserNotFoundError(email)
psw = decrypt(password)
user = UserService.query_user(email, psw)
if not user:
raise AdminException("Email and password do not match!")
if not user.is_superuser:
raise AdminException("Not admin", 403)
if user.is_active == ActiveEnum.INACTIVE.value:
raise AdminException(f"User {email} inactive", 403)
resp = user.to_json()
user.access_token = get_uuid()
login_user(user)
user.update_time = current_timestamp()
user.update_date = datetime_format(datetime.now())
user.last_login_time = get_format_time()
user.save()
msg = "Welcome back!"
return sync_construct_response(data=resp, auth=user.get_id(), message=msg)
def check_admin(username: str, password: str):
users = UserService.query(email=username)
if not users:
logging.info(f"Username: {username} is not registered!")
user_info = {
"id": uuid.uuid1().hex,
"password": encode_to_base64("admin"),
"nickname": "admin",
"is_superuser": True,
"email": "admin@ragflow.io",
"creator": "system",
"status": "1",
}
if not UserService.save(**user_info):
raise AdminException("Can't init admin.", 500)
user = UserService.query_user(username, password)
if user:
return True
else:
return False
def login_verify(f):
@wraps(f)
def decorated(*args, **kwargs):
auth = request.authorization
if not auth or 'username' not in auth.parameters or 'password' not in auth.parameters:
return jsonify({
"code": 401,
"message": "Authentication required",
"data": None
}), 200
username = auth.parameters['username']
password = auth.parameters['password']
try:
if not check_admin(username, password):
return jsonify({
"code": 500,
"message": "Access denied",
"data": None
}), 200
except Exception:
logging.exception("An error occurred during admin login verification.")
return jsonify({
"code": 500,
"message": "An internal server error occurred."
}), 200
return f(*args, **kwargs)
return decorated
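
The Authorization header checked in load_user is an itsdangerous-signed wrapper around the stored access token, not a standards-compliant JWT despite the variable name. A round-trip sketch with an illustrative secret and token:

from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer

SECRET_KEY = "change-me"  # stands in for settings.SECRET_KEY
jwt = Serializer(secret_key=SECRET_KEY)

access_token = "0123456789abcdef0123456789abcdef"  # 32-char token, as load_user expects
authorization = jwt.dumps(access_token)            # value the client sends as the Authorization header

assert str(jwt.loads(authorization)) == access_token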

View File

@@ -1,76 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from typing import Dict, Any
from api.common.exceptions import AdminException
class RoleMgr:
@staticmethod
def create_role(role_name: str, description: str):
error_msg = f"not implement: create role: {role_name}, description: {description}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def update_role_description(role_name: str, description: str) -> Dict[str, Any]:
error_msg = f"not implement: update role: {role_name} with description: {description}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def delete_role(role_name: str) -> Dict[str, Any]:
error_msg = f"not implement: drop role: {role_name}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def list_roles() -> Dict[str, Any]:
error_msg = "not implement: list roles"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def get_role_permission(role_name: str) -> Dict[str, Any]:
error_msg = f"not implement: show role {role_name}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def grant_role_permission(role_name: str, actions: list, resource: str) -> Dict[str, Any]:
error_msg = f"not implement: grant role {role_name} actions: {actions} on {resource}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def revoke_role_permission(role_name: str, actions: list, resource: str) -> Dict[str, Any]:
error_msg = f"not implement: revoke role {role_name} actions: {actions} on {resource}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def update_user_role(user_name: str, role_name: str) -> Dict[str, Any]:
error_msg = f"not implement: update user role: {user_name} to role {role_name}"
logging.error(error_msg)
raise AdminException(error_msg)
@staticmethod
def get_user_permission(user_name: str) -> Dict[str, Any]:
error_msg = f"not implement: get user permission: {user_name}"
logging.error(error_msg)
raise AdminException(error_msg)
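
Every RoleMgr method is a stub, so any role endpoint currently surfaces an AdminException that the routes translate into the standard error envelope. A short sketch, assuming AdminException stringifies to its message:

from roles import RoleMgr
from api.common.exceptions import AdminException

try:
    RoleMgr.list_roles()
except AdminException as e:
    print(e)  # "not implemented: list roles"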

View File

@@ -1,654 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import secrets
import logging
from typing import Any
from common.time_utils import current_timestamp, datetime_format
from datetime import datetime
from flask import Blueprint, Response, request
from flask_login import current_user, login_required, logout_user
from auth import login_verify, login_admin, check_admin_auth
from responses import success_response, error_response
from services import UserMgr, ServiceMgr, UserServiceMgr, SettingsMgr, ConfigMgr, EnvironmentsMgr, SandboxMgr
from roles import RoleMgr
from api.common.exceptions import AdminException
from common.versions import get_ragflow_version
from api.utils.api_utils import generate_confirmation_token
admin_bp = Blueprint("admin", __name__, url_prefix="/api/v1/admin")
@admin_bp.route("/ping", methods=["GET"])
def ping():
return success_response("PONG")
@admin_bp.route("/login", methods=["POST"])
def login():
if not request.json:
return error_response("Authorize admin failed.", 400)
try:
email = request.json.get("email", "")
password = request.json.get("password", "")
return login_admin(email, password)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/logout", methods=["GET"])
@login_required
def logout():
try:
current_user.access_token = f"INVALID_{secrets.token_hex(16)}"
current_user.save()
logout_user()
return success_response(True)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/auth", methods=["GET"])
@login_verify
def auth_admin():
try:
return success_response(None, "Admin is authorized", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users", methods=["GET"])
@login_required
@check_admin_auth
def list_users():
try:
users = UserMgr.get_all_users()
return success_response(users, "Get all users", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users", methods=["POST"])
@login_required
@check_admin_auth
def create_user():
try:
data = request.get_json()
if not data or "username" not in data or "password" not in data:
return error_response("Username and password are required", 400)
username = data["username"]
password = data["password"]
role = data.get("role", "user")
res = UserMgr.create_user(username, password, role)
if res["success"]:
user_info = res["user_info"]
user_info.pop("password") # do not return password
return success_response(user_info, "User created successfully")
else:
return error_response("create user failed")
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e))
@admin_bp.route("/users/<username>", methods=["DELETE"])
@login_required
@check_admin_auth
def delete_user(username):
try:
res = UserMgr.delete_user(username)
if res["success"]:
return success_response(None, res["message"])
else:
return error_response(res["message"])
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<username>/password", methods=["PUT"])
@login_required
@check_admin_auth
def change_password(username):
try:
data = request.get_json()
if not data or "new_password" not in data:
return error_response("New password is required", 400)
new_password = data["new_password"]
msg = UserMgr.update_user_password(username, new_password)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<username>/activate", methods=["PUT"])
@login_required
@check_admin_auth
def alter_user_activate_status(username):
try:
data = request.get_json()
if not data or "activate_status" not in data:
return error_response("Activation status is required", 400)
activate_status = data["activate_status"]
msg = UserMgr.update_user_activate_status(username, activate_status)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<username>/admin", methods=["PUT"])
@login_required
@check_admin_auth
def grant_admin(username):
try:
if current_user.email == username:
return error_response(f"can't grant current user: {username}", 409)
msg = UserMgr.grant_admin(username)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<username>/admin", methods=["DELETE"])
@login_required
@check_admin_auth
def revoke_admin(username):
try:
if current_user.email == username:
return error_response(f"can't grant current user: {username}", 409)
msg = UserMgr.revoke_admin(username)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<username>", methods=["GET"])
@login_required
@check_admin_auth
def get_user_details(username):
try:
user_details = UserMgr.get_user_details(username)
return success_response(user_details)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<username>/datasets", methods=["GET"])
@login_required
@check_admin_auth
def get_user_datasets(username):
try:
datasets_list = UserServiceMgr.get_user_datasets(username)
return success_response(datasets_list)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<username>/agents", methods=["GET"])
@login_required
@check_admin_auth
def get_user_agents(username):
try:
agents_list = UserServiceMgr.get_user_agents(username)
return success_response(agents_list)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/services", methods=["GET"])
@login_required
@check_admin_auth
def get_services():
try:
services = ServiceMgr.get_all_services()
return success_response(services, "Get all services", 0)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/service_types/<service_type>", methods=["GET"])
@login_required
@check_admin_auth
def get_services_by_type(service_type):
try:
services = ServiceMgr.get_services_by_type(service_type)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/services/<service_id>", methods=["GET"])
@login_required
@check_admin_auth
def get_service(service_id):
try:
services = ServiceMgr.get_service_details(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/services/<service_id>", methods=["DELETE"])
@login_required
@check_admin_auth
def shutdown_service(service_id):
try:
services = ServiceMgr.shutdown_service(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/services/<service_id>", methods=["PUT"])
@login_required
@check_admin_auth
def restart_service(service_id):
try:
services = ServiceMgr.restart_service(service_id)
return success_response(services)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/roles", methods=["POST"])
@login_required
@check_admin_auth
def create_role():
try:
data = request.get_json()
if not data or "role_name" not in data:
return error_response("Role name is required", 400)
role_name: str = data["role_name"]
description: str = data.get("description", "")
res = RoleMgr.create_role(role_name, description)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/roles/<role_name>", methods=["PUT"])
@login_required
@check_admin_auth
def update_role(role_name: str):
try:
data = request.get_json()
if not data or "description" not in data:
return error_response("Role description is required", 400)
description: str = data["description"]
res = RoleMgr.update_role_description(role_name, description)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/roles/<role_name>", methods=["DELETE"])
@login_required
@check_admin_auth
def delete_role(role_name: str):
try:
res = RoleMgr.delete_role(role_name)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/roles", methods=["GET"])
@login_required
@check_admin_auth
def list_roles():
try:
res = RoleMgr.list_roles()
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/roles/<role_name>/permission", methods=["GET"])
@login_required
@check_admin_auth
def get_role_permission(role_name: str):
try:
res = RoleMgr.get_role_permission(role_name)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/roles/<role_name>/permission", methods=["POST"])
@login_required
@check_admin_auth
def grant_role_permission(role_name: str):
try:
data = request.get_json()
if not data or "actions" not in data or "resource" not in data:
return error_response("Permission is required", 400)
actions: list = data["actions"]
resource: str = data["resource"]
res = RoleMgr.grant_role_permission(role_name, actions, resource)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/roles/<role_name>/permission", methods=["DELETE"])
@login_required
@check_admin_auth
def revoke_role_permission(role_name: str):
try:
data = request.get_json()
if not data or "actions" not in data or "resource" not in data:
return error_response("Permission is required", 400)
actions: list = data["actions"]
resource: str = data["resource"]
res = RoleMgr.revoke_role_permission(role_name, actions, resource)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<user_name>/role", methods=["PUT"])
@login_required
@check_admin_auth
def update_user_role(user_name: str):
try:
data = request.get_json()
if not data or "role_name" not in data:
return error_response("Role name is required", 400)
role_name: str = data["role_name"]
res = RoleMgr.update_user_role(user_name, role_name)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<user_name>/permission", methods=["GET"])
@login_required
@check_admin_auth
def get_user_permission(user_name: str):
try:
res = RoleMgr.get_user_permission(user_name)
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/variables", methods=["PUT"])
@login_required
@check_admin_auth
def set_variable():
try:
data = request.get_json()
if not data and "var_name" not in data:
return error_response("Var name is required", 400)
if "var_value" not in data:
return error_response("Var value is required", 400)
var_name: str = data["var_name"]
var_value: str = data["var_value"]
SettingsMgr.update_by_name(var_name, var_value)
return success_response(None, "Set variable successfully")
except AdminException as e:
return error_response(str(e), 400)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/variables", methods=["GET"])
@login_required
@check_admin_auth
def get_variable():
try:
if request.content_length is None or request.content_length == 0:
# list variables
res = list(SettingsMgr.get_all())
return success_response(res)
# get var
data = request.get_json()
if not data and "var_name" not in data:
return error_response("Var name is required", 400)
var_name: str = data["var_name"]
res = SettingsMgr.get_by_name(var_name)
return success_response(res)
except AdminException as e:
return error_response(str(e), 400)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/configs", methods=["GET"])
@login_required
@check_admin_auth
def get_config():
try:
res = list(ConfigMgr.get_all())
return success_response(res)
except AdminException as e:
return error_response(str(e), 400)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/environments", methods=["GET"])
@login_required
@check_admin_auth
def get_environments():
try:
res = list(EnvironmentsMgr.get_all())
return success_response(res)
except AdminException as e:
return error_response(str(e), 400)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<username>/keys", methods=["POST"])
@login_required
@check_admin_auth
def generate_user_api_key(username: str) -> tuple[Response, int]:
try:
user_details: list[dict[str, Any]] = UserMgr.get_user_details(username)
if not user_details:
return error_response("User not found!", 404)
tenants: list[dict[str, Any]] = UserServiceMgr.get_user_tenants(username)
if not tenants:
return error_response("Tenant not found!", 404)
tenant_id: str = tenants[0]["tenant_id"]
key: str = generate_confirmation_token()
obj: dict[str, Any] = {
"tenant_id": tenant_id,
"token": key,
"beta": generate_confirmation_token().replace("ragflow-", "")[:32],
"create_time": current_timestamp(),
"create_date": datetime_format(datetime.now()),
"update_time": None,
"update_date": None,
}
if not UserMgr.save_api_key(obj):
return error_response("Failed to generate API key!", 500)
return success_response(obj, "API key generated successfully")
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<username>/keys", methods=["GET"])
@login_required
@check_admin_auth
def get_user_api_keys(username: str) -> tuple[Response, int]:
try:
api_keys: list[dict[str, Any]] = UserMgr.get_user_api_key(username)
return success_response(api_keys, "Get user API keys")
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/users/<username>/keys/<key>", methods=["DELETE"])
@login_required
@check_admin_auth
def delete_user_api_key(username: str, key: str) -> tuple[Response, int]:
try:
deleted = UserMgr.delete_api_key(username, key)
if deleted:
return success_response(None, "API key deleted successfully")
else:
return error_response("API key not found or could not be deleted", 404)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/version", methods=["GET"])
@login_required
@check_admin_auth
def show_version():
try:
res = {"version": get_ragflow_version()}
return success_response(res)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/sandbox/providers", methods=["GET"])
@login_required
@check_admin_auth
def list_sandbox_providers():
"""List all available sandbox providers."""
try:
res = SandboxMgr.list_providers()
return success_response(res)
except AdminException as e:
return error_response(str(e), 400)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/sandbox/providers/<provider_id>/schema", methods=["GET"])
@login_required
@check_admin_auth
def get_sandbox_provider_schema(provider_id: str):
"""Get configuration schema for a specific provider."""
try:
res = SandboxMgr.get_provider_config_schema(provider_id)
return success_response(res)
except AdminException as e:
return error_response(str(e), 400)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/sandbox/config", methods=["GET"])
@login_required
@check_admin_auth
def get_sandbox_config():
"""Get current sandbox configuration."""
try:
res = SandboxMgr.get_config()
return success_response(res)
except AdminException as e:
return error_response(str(e), 400)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route("/sandbox/config", methods=["POST"])
@login_required
@check_admin_auth
def set_sandbox_config():
"""Set sandbox provider configuration."""
try:
data = request.get_json()
if not data:
logging.error("set_sandbox_config: Request body is required")
return error_response("Request body is required", 400)
provider_type = data.get("provider_type")
if not provider_type:
logging.error("set_sandbox_config: provider_type is required")
return error_response("provider_type is required", 400)
config = data.get("config", {})
set_active = data.get("set_active", True) # Default to True for backward compatibility
logging.info(f"set_sandbox_config: provider_type={provider_type}, set_active={set_active}")
logging.info(f"set_sandbox_config: config keys={list(config.keys())}")
res = SandboxMgr.set_config(provider_type, config, set_active)
return success_response(res, "Sandbox configuration updated successfully")
except AdminException as e:
logging.exception("set_sandbox_config AdminException")
return error_response(str(e), 400)
except Exception as e:
logging.exception("set_sandbox_config unexpected error")
return error_response(str(e), 500)
@admin_bp.route("/sandbox/test", methods=["POST"])
@login_required
@check_admin_auth
def test_sandbox_connection():
"""Test connection to sandbox provider."""
try:
data = request.get_json()
if not data:
return error_response("Request body is required", 400)
provider_type = data.get("provider_type")
if not provider_type:
return error_response("provider_type is required", 400)
config = data.get("config", {})
res = SandboxMgr.test_connection(provider_type, config)
return success_response(res)
except AdminException as e:
return error_response(str(e), 400)
except Exception as e:
return error_response(str(e), 500)
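
For the sandbox endpoints, the request body mirrors the arguments of SandboxMgr.set_config in services.py below. A client-side sketch; the requests package, the token header, and the api_key field are illustrative assumptions (the real field names come from E2BProvider.get_config_schema()):

import requests

payload = {
    "provider_type": "e2b",
    "config": {"api_key": "e2b_..."},  # hypothetical field for this sketch
    "set_active": False,               # update the stored config without switching the active provider
}
resp = requests.post(
    "http://127.0.0.1:9381/api/v1/admin/sandbox/config",
    json=payload,
    headers={"Authorization": "<token from /login>"},
)
print(resp.json())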

View File

@@ -1,723 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import os
import logging
import re
from typing import Any
from werkzeug.security import check_password_hash
from common.constants import ActiveEnum
from api.db.services import UserService
from api.db.joint_services.user_account_service import create_new_user, delete_user_data
from api.db.services.canvas_service import UserCanvasService
from api.db.services.user_service import TenantService, UserTenantService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.system_settings_service import SystemSettingsService
from api.db.services.api_service import APITokenService
from api.db.db_models import APIToken
from api.utils.crypt import decrypt
from api.utils import health_utils
from api.common.exceptions import AdminException, UserAlreadyExistsError, UserNotFoundError
from config import SERVICE_CONFIGS
class UserMgr:
@staticmethod
def get_all_users():
users = UserService.get_all_users()
result = []
for user in users:
result.append(
{
"email": user.email,
"nickname": user.nickname,
"create_date": user.create_date,
"is_active": user.is_active,
"is_superuser": user.is_superuser,
}
)
return result
@staticmethod
def get_user_details(username):
# use email to query
users = UserService.query_user_by_email(username)
result = []
for user in users:
result.append(
{
"avatar": user.avatar,
"email": user.email,
"language": user.language,
"last_login_time": user.last_login_time,
"is_active": user.is_active,
"is_anonymous": user.is_anonymous,
"login_channel": user.login_channel,
"status": user.status,
"is_superuser": user.is_superuser,
"create_date": user.create_date,
"update_date": user.update_date,
}
)
return result
@staticmethod
def create_user(username, password, role="user") -> dict:
# Validate the email address
if not re.match(r"^[\w\._-]+@([\w_-]+\.)+[\w-]{2,}$", username):
raise AdminException(f"Invalid email address: {username}!")
# Check if the email address is already used
if UserService.query(email=username):
raise UserAlreadyExistsError(username)
# Construct user info data
user_info_dict = {
"email": username,
"nickname": "", # ask user to edit it manually in settings.
"password": decrypt(password),
"login_channel": "password",
"is_superuser": role == "admin",
}
return create_new_user(user_info_dict)
@staticmethod
def delete_user(username):
# use email to delete
user_list = UserService.query_user_by_email(username)
if not user_list:
raise UserNotFoundError(username)
if len(user_list) > 1:
raise AdminException(f"Exist more than 1 user: {username}!")
usr = user_list[0]
return delete_user_data(usr.id)
@staticmethod
def update_user_password(username, new_password) -> str:
# use email to find user. check exist and unique.
user_list = UserService.query_user_by_email(username)
if not user_list:
raise UserNotFoundError(username)
elif len(user_list) > 1:
raise AdminException(f"Exist more than 1 user: {username}!")
# check new_password different from old.
usr = user_list[0]
psw = decrypt(new_password)
if check_password_hash(usr.password, psw):
return "Same password, no need to update!"
# update password
UserService.update_user_password(usr.id, psw)
return "Password updated successfully!"
@staticmethod
def update_user_activate_status(username, activate_status: str):
# use email to find user. check exist and unique.
user_list = UserService.query_user_by_email(username)
if not user_list:
raise UserNotFoundError(username)
elif len(user_list) > 1:
raise AdminException(f"Exist more than 1 user: {username}!")
# check activate status different from new
usr = user_list[0]
# normalize activate_status before handling
_activate_status = activate_status.lower()
target_status = {
"on": ActiveEnum.ACTIVE.value,
"off": ActiveEnum.INACTIVE.value,
}.get(_activate_status)
if not target_status:
raise AdminException(f"Invalid activate_status: {activate_status}")
if target_status == usr.is_active:
return f"User activate status is already {_activate_status}!"
# update is_active
UserService.update_user(usr.id, {"is_active": target_status})
return f"Turn {_activate_status} user activate status successfully!"
@staticmethod
def get_user_api_key(username: str) -> list[dict[str, Any]]:
# use email to find user. check exist and unique.
user_list: list[Any] = UserService.query_user_by_email(username)
if not user_list:
raise UserNotFoundError(username)
elif len(user_list) > 1:
raise AdminException(f"More than one user with username '{username}' found!")
usr: Any = user_list[0]
# tenant_id is typically the same as user_id for the owner tenant
tenant_id: str = usr.id
# Query all API keys for this tenant
api_keys: Any = APITokenService.query(tenant_id=tenant_id)
result: list[dict[str, Any]] = []
for key in api_keys:
result.append(key.to_dict())
return result
@staticmethod
def save_api_key(api_key: dict[str, Any]) -> bool:
return APITokenService.save(**api_key)
@staticmethod
def delete_api_key(username: str, key: str) -> bool:
# use email to find user. check exist and unique.
user_list: list[Any] = UserService.query_user_by_email(username)
if not user_list:
raise UserNotFoundError(username)
elif len(user_list) > 1:
raise AdminException(f"Exist more than 1 user: {username}!")
usr: Any = user_list[0]
# tenant_id is typically the same as user_id for the owner tenant
tenant_id: str = usr.id
# Delete the API key
deleted_count: int = APITokenService.filter_delete([APIToken.tenant_id == tenant_id, APIToken.token == key])
return deleted_count > 0
@staticmethod
def grant_admin(username: str):
# use email to find user. check exist and unique.
user_list = UserService.query_user_by_email(username)
if not user_list:
raise UserNotFoundError(username)
elif len(user_list) > 1:
raise AdminException(f"Exist more than 1 user: {username}!")
# check activate status different from new
usr = user_list[0]
if usr.is_superuser:
return f"{usr} is already superuser!"
# update is_active
UserService.update_user(usr.id, {"is_superuser": True})
return "Grant successfully!"
@staticmethod
def revoke_admin(username: str):
# use email to find user. check exist and unique.
user_list = UserService.query_user_by_email(username)
if not user_list:
raise UserNotFoundError(username)
elif len(user_list) > 1:
raise AdminException(f"Exist more than 1 user: {username}!")
# check activate status different from new
usr = user_list[0]
if not usr.is_superuser:
return f"{usr} isn't superuser, yet!"
# update is_active
UserService.update_user(usr.id, {"is_superuser": False})
return "Revoke successfully!"
class UserServiceMgr:
@staticmethod
def get_user_datasets(username):
# use email to find user.
user_list = UserService.query_user_by_email(username)
if not user_list:
raise UserNotFoundError(username)
elif len(user_list) > 1:
raise AdminException(f"Exist more than 1 user: {username}!")
# find tenants
usr = user_list[0]
tenants = TenantService.get_joined_tenants_by_user_id(usr.id)
tenant_ids = [m["tenant_id"] for m in tenants]
# filter permitted kb and owned kb
return KnowledgebaseService.get_all_kb_by_tenant_ids(tenant_ids, usr.id)
@staticmethod
def get_user_agents(username):
# use email to find user.
user_list = UserService.query_user_by_email(username)
if not user_list:
raise UserNotFoundError(username)
elif len(user_list) > 1:
raise AdminException(f"Exist more than 1 user: {username}!")
# find tenants
usr = user_list[0]
tenants = TenantService.get_joined_tenants_by_user_id(usr.id)
tenant_ids = [m["tenant_id"] for m in tenants]
# filter permitted agents and owned agents
res = UserCanvasService.get_all_agents_by_tenant_ids(tenant_ids, usr.id)
return [{"title": r["title"], "permission": r["permission"], "canvas_category": r["canvas_category"].split("_")[0], "avatar": r["avatar"]} for r in res]
@staticmethod
def get_user_tenants(email: str) -> list[dict[str, Any]]:
users: list[Any] = UserService.query_user_by_email(email)
if not users:
raise UserNotFoundError(email)
user: Any = users[0]
tenants: list[dict[str, Any]] = UserTenantService.get_tenants_by_user_id(user.id)
return tenants
class ServiceMgr:
@staticmethod
def get_all_services():
doc_engine = os.getenv("DOC_ENGINE", "elasticsearch")
result = []
configs = SERVICE_CONFIGS.configs
for service_id, config in enumerate(configs):
config_dict = config.to_dict()
if config_dict["service_type"] == "retrieval":
if config_dict["extra"]["retrieval_type"] != doc_engine:
continue
try:
service_detail = ServiceMgr.get_service_details(service_id)
if "status" in service_detail:
config_dict["status"] = service_detail["status"]
else:
config_dict["status"] = "timeout"
except Exception as e:
logging.warning(f"Can't get service details, error: {e}")
config_dict["status"] = "timeout"
if not config_dict["host"]:
config_dict["host"] = "-"
if not config_dict["port"]:
config_dict["port"] = "-"
result.append(config_dict)
return result
@staticmethod
def get_services_by_type(service_type_str: str):
raise AdminException("get_services_by_type: not implemented")
@staticmethod
def get_service_details(service_id: int):
service_idx = int(service_id)
configs = SERVICE_CONFIGS.configs
if service_idx < 0 or service_idx >= len(configs):
raise AdminException(f"invalid service_index: {service_idx}")
service_config = configs[service_idx]
# exclude retrieval service if retrieval_type is not matched
doc_engine = os.getenv("DOC_ENGINE", "elasticsearch")
if service_config.service_type == "retrieval":
if service_config.retrieval_type != doc_engine:
raise AdminException(f"invalid service_index: {service_idx}")
service_info = {"name": service_config.name, "detail_func_name": service_config.detail_func_name}
detail_func = getattr(health_utils, service_info.get("detail_func_name"))
res = detail_func()
res.update({"service_name": service_info.get("name")})
return res
@staticmethod
def shutdown_service(service_id: int):
raise AdminException("shutdown_service: not implemented")
@staticmethod
def restart_service(service_id: int):
raise AdminException("restart_service: not implemented")
class SettingsMgr:
@staticmethod
def get_all():
settings = SystemSettingsService.get_all()
result = []
for setting in settings:
result.append(
{
"name": setting.name,
"source": setting.source,
"data_type": setting.data_type,
"value": setting.value,
}
)
return result
@staticmethod
def get_by_name(name: str):
settings = SystemSettingsService.get_by_name(name)
if len(settings) == 0:
raise AdminException(f"Can't get setting: {name}")
result = []
for setting in settings:
result.append(
{
"name": setting.name,
"source": setting.source,
"data_type": setting.data_type,
"value": setting.value,
}
)
return result
@staticmethod
def update_by_name(name: str, value: str):
settings = SystemSettingsService.get_by_name(name)
if len(settings) == 1:
setting = settings[0]
setting.value = value
setting_dict = setting.to_dict()
SystemSettingsService.update_by_name(name, setting_dict)
elif len(settings) > 1:
raise AdminException(f"Can't update more than 1 setting: {name}")
else:
# Create new setting if it doesn't exist
# Determine data_type based on name and value
if name.startswith("sandbox."):
data_type = "json"
elif name.endswith(".enabled"):
data_type = "boolean"
else:
data_type = "string"
new_setting = {
"name": name,
"value": str(value),
"source": "admin",
"data_type": data_type,
}
SystemSettingsService.save(**new_setting)
class ConfigMgr:
@staticmethod
def get_all():
result = []
configs = SERVICE_CONFIGS.configs
for config in configs:
config_dict = config.to_dict()
result.append(config_dict)
return result
class EnvironmentsMgr:
@staticmethod
def get_all():
result = []
env_kv = {"env": "DOC_ENGINE", "value": os.getenv("DOC_ENGINE")}
result.append(env_kv)
env_kv = {"env": "DEFAULT_SUPERUSER_EMAIL", "value": os.getenv("DEFAULT_SUPERUSER_EMAIL", "admin@ragflow.io")}
result.append(env_kv)
env_kv = {"env": "DB_TYPE", "value": os.getenv("DB_TYPE", "mysql")}
result.append(env_kv)
env_kv = {"env": "DEVICE", "value": os.getenv("DEVICE", "cpu")}
result.append(env_kv)
env_kv = {"env": "STORAGE_IMPL", "value": os.getenv("STORAGE_IMPL", "MINIO")}
result.append(env_kv)
return result
class SandboxMgr:
"""Manager for sandbox provider configuration and operations."""
# Provider registry with metadata
PROVIDER_REGISTRY = {
"self_managed": {
"name": "Self-Managed",
"description": "On-premise deployment using Daytona/Docker",
"tags": ["self-hosted", "low-latency", "secure"],
},
"aliyun_codeinterpreter": {
"name": "Aliyun Code Interpreter",
"description": "Aliyun Function Compute Code Interpreter - Code execution in serverless microVMs",
"tags": ["saas", "cloud", "scalable", "aliyun"],
},
"e2b": {
"name": "E2B",
"description": "E2B Cloud - Code Execution Sandboxes",
"tags": ["saas", "fast", "global"],
},
}
@staticmethod
def list_providers():
"""List all available sandbox providers."""
result = []
for provider_id, metadata in SandboxMgr.PROVIDER_REGISTRY.items():
result.append({
"id": provider_id,
**metadata
})
return result
@staticmethod
def get_provider_config_schema(provider_id: str):
"""Get configuration schema for a specific provider."""
from agent.sandbox.providers import (
SelfManagedProvider,
AliyunCodeInterpreterProvider,
E2BProvider,
)
schemas = {
"self_managed": SelfManagedProvider.get_config_schema(),
"aliyun_codeinterpreter": AliyunCodeInterpreterProvider.get_config_schema(),
"e2b": E2BProvider.get_config_schema(),
}
if provider_id not in schemas:
raise AdminException(f"Unknown provider: {provider_id}")
return schemas.get(provider_id, {})
@staticmethod
def get_config():
"""Get current sandbox configuration."""
try:
# Get active provider type
provider_type_settings = SystemSettingsService.get_by_name("sandbox.provider_type")
if not provider_type_settings:
# Return default config if not set
provider_type = "self_managed"
else:
provider_type = provider_type_settings[0].value
# Get provider-specific config
provider_config_settings = SystemSettingsService.get_by_name(f"sandbox.{provider_type}")
if not provider_config_settings:
provider_config = {}
else:
try:
provider_config = json.loads(provider_config_settings[0].value)
except json.JSONDecodeError:
provider_config = {}
return {
"provider_type": provider_type,
"config": provider_config,
}
except Exception as e:
raise AdminException(f"Failed to get sandbox config: {str(e)}")
    @staticmethod
    def set_config(provider_type: str, config: dict, set_active: bool = True):
        """
        Set the sandbox provider configuration.

        Args:
            provider_type: Provider identifier (e.g., "self_managed", "e2b")
            config: Provider configuration dictionary
            set_active: If True, also update the active provider. If False,
                only update the configuration without switching providers.
                Default: True

        Returns:
            Dictionary with the updated provider_type and config
        """
        from agent.sandbox.providers import (
            SelfManagedProvider,
            AliyunCodeInterpreterProvider,
            E2BProvider,
        )
        try:
            # Validate the provider type
            if provider_type not in SandboxMgr.PROVIDER_REGISTRY:
                raise AdminException(f"Unknown provider type: {provider_type}")
            # Get the provider schema for validation
            schema = SandboxMgr.get_provider_config_schema(provider_type)
            # Validate the config against the schema
            for field_name, field_schema in schema.items():
                if field_schema.get("required", False) and field_name not in config:
                    raise AdminException(f"Required field '{field_name}' is missing")
                # Bind field_type before both checks below so the range check
                # can't see a stale value from a previous loop iteration.
                field_type = field_schema.get("type")
                # Type validation
                if field_name in config:
                    if field_type == "integer":
                        if not isinstance(config[field_name], int):
                            raise AdminException(f"Field '{field_name}' must be an integer")
                    elif field_type == "string":
                        if not isinstance(config[field_name], str):
                            raise AdminException(f"Field '{field_name}' must be a string")
                    elif field_type == "bool":
                        if not isinstance(config[field_name], bool):
                            raise AdminException(f"Field '{field_name}' must be a boolean")
                # Range validation for integers
                if field_type == "integer" and field_name in config:
                    min_val = field_schema.get("min")
                    max_val = field_schema.get("max")
                    if min_val is not None and config[field_name] < min_val:
                        raise AdminException(f"Field '{field_name}' must be >= {min_val}")
                    if max_val is not None and config[field_name] > max_val:
                        raise AdminException(f"Field '{field_name}' must be <= {max_val}")
            # Provider-specific custom validation
            provider_classes = {
                "self_managed": SelfManagedProvider,
                "aliyun_codeinterpreter": AliyunCodeInterpreterProvider,
                "e2b": E2BProvider,
            }
            provider = provider_classes[provider_type]()
            is_valid, error_msg = provider.validate_config(config)
            if not is_valid:
                raise AdminException(f"Provider validation failed: {error_msg}")
            # Update provider_type only if set_active is True
            if set_active:
                SettingsMgr.update_by_name("sandbox.provider_type", provider_type)
            # Always update the provider config
            config_json = json.dumps(config)
            SettingsMgr.update_by_name(f"sandbox.{provider_type}", config_json)
            return {"provider_type": provider_type, "config": config}
        except AdminException:
            raise
        except Exception as e:
            raise AdminException(f"Failed to set sandbox config: {str(e)}")
    @staticmethod
    def test_connection(provider_type: str, config: dict):
        """
        Test the connection to a sandbox provider by executing a simple Python script.

        This creates a temporary sandbox instance and runs test code to verify that:
        - the connection credentials are valid,
        - a sandbox can be created,
        - code execution works correctly.

        Args:
            provider_type: Provider identifier
            config: Provider configuration dictionary

        Returns:
            dict with test results, including stdout, stderr, exit_code and execution_time
        """
        try:
            from agent.sandbox.providers import (
                SelfManagedProvider,
                AliyunCodeInterpreterProvider,
                E2BProvider,
            )
            # Instantiate the provider based on its type
            provider_classes = {
                "self_managed": SelfManagedProvider,
                "aliyun_codeinterpreter": AliyunCodeInterpreterProvider,
                "e2b": E2BProvider,
            }
            if provider_type not in provider_classes:
                raise AdminException(f"Unknown provider type: {provider_type}")
            provider = provider_classes[provider_type]()
            # Initialize with the given config
            if not provider.initialize(config):
                raise AdminException(f"Failed to initialize provider '{provider_type}'")
            # Create a temporary sandbox instance for testing
            instance = provider.create_instance(template="python")
            if not instance or instance.status != "READY":
                raise AdminException(f"Failed to create sandbox instance. Status: {instance.status if instance else 'None'}")
            # Simple test code that exercises basic Python functionality
            test_code = """
# Test basic Python functionality
import sys
import json
import math
print("Python version:", sys.version)
print("Platform:", sys.platform)
# Test basic calculations
result = 2 + 2
print(f"2 + 2 = {result}")
# Test JSON operations
data = {"test": "data", "value": 123}
print(f"JSON dump: {json.dumps(data)}")
# Test math operations
print(f"Math.sqrt(16) = {math.sqrt(16)}")
# Test error handling
try:
    x = 1 / 1
    print("Division test: OK")
except Exception as e:
    print(f"Error: {e}")
# Return success indicator
print("TEST_PASSED")
"""
            # Execute the test code with a timeout
            execution_result = provider.execute_code(
                instance_id=instance.instance_id,
                code=test_code,
                language="python",
                timeout=10  # 10-second timeout
            )
            # Clean up the test instance (if the provider supports it)
            try:
                if hasattr(provider, 'terminate_instance'):
                    provider.terminate_instance(instance.instance_id)
                    logging.info(f"Cleaned up test instance {instance.instance_id}")
                else:
                    logging.warning(f"Provider {provider_type} does not support terminate_instance, test instance may leak")
            except Exception as cleanup_error:
                logging.warning(f"Failed to clean up test instance {instance.instance_id}: {cleanup_error}")
            # Build a detailed result message
            success = execution_result.exit_code == 0 and "TEST_PASSED" in execution_result.stdout
            message_parts = [
                f"Test {'PASSED' if success else 'FAILED'}",
                f"Exit code: {execution_result.exit_code}",
                f"Execution time: {execution_result.execution_time:.2f}s"
            ]
            if execution_result.stdout.strip():
                stdout_preview = execution_result.stdout.strip()[:200]
                message_parts.append(f"Output: {stdout_preview}...")
            if execution_result.stderr.strip():
                stderr_preview = execution_result.stderr.strip()[:200]
                message_parts.append(f"Errors: {stderr_preview}...")
            message = " | ".join(message_parts)
            return {
                "success": success,
                "message": message,
                "details": {
                    "exit_code": execution_result.exit_code,
                    "execution_time": execution_result.execution_time,
                    "stdout": execution_result.stdout,
                    "stderr": execution_result.stderr,
                }
            }
        except AdminException:
            raise
        except Exception as e:
            import traceback
            error_details = traceback.format_exc()
            raise AdminException(f"Connection test failed: {str(e)}\n\nStack trace:\n{error_details}")

admin/services.py (new file, 175 lines)

@@ -0,0 +1,175 @@
import re
from werkzeug.security import check_password_hash
from api.db import ActiveEnum
from api.db.services import UserService
from api.db.joint_services.user_account_service import create_new_user, delete_user_data
from api.db.services.canvas_service import UserCanvasService
from api.db.services.user_service import TenantService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils.crypt import decrypt
from exceptions import AdminException, UserAlreadyExistsError, UserNotFoundError
from config import SERVICE_CONFIGS


class UserMgr:
    @staticmethod
    def get_all_users():
        users = UserService.get_all_users()
        result = []
        for user in users:
            result.append({'email': user.email, 'nickname': user.nickname, 'create_date': user.create_date, 'is_active': user.is_active})
        return result

    @staticmethod
    def get_user_details(username):
        # Users are looked up by email.
        users = UserService.query_user_by_email(username)
        result = []
        for user in users:
            result.append({
                'email': user.email,
                'language': user.language,
                'last_login_time': user.last_login_time,
                'is_authenticated': user.is_authenticated,
                'is_active': user.is_active,
                'is_anonymous': user.is_anonymous,
                'login_channel': user.login_channel,
                'status': user.status,
                'is_superuser': user.is_superuser,
                'create_date': user.create_date,
                'update_date': user.update_date
            })
        return result

    @staticmethod
    def create_user(username, password, role="user") -> dict:
        # Validate the email address
        if not re.match(r"^[\w\._-]+@([\w_-]+\.)+[\w-]{2,}$", username):
            raise AdminException(f"Invalid email address: {username}!")
        # Check whether the email address is already in use
        if UserService.query(email=username):
            raise UserAlreadyExistsError(username)
        # Construct the user info data
        user_info_dict = {
            "email": username,
            "nickname": "",  # ask the user to edit it manually in settings
            "password": decrypt(password),
            "login_channel": "password",
            "is_superuser": role == "admin",
        }
        return create_new_user(user_info_dict)

    @staticmethod
    def delete_user(username):
        # Users are deleted by email.
        user_list = UserService.query_user_by_email(username)
        if not user_list:
            raise UserNotFoundError(username)
        if len(user_list) > 1:
            raise AdminException(f"More than one user exists: {username}!")
        usr = user_list[0]
        return delete_user_data(usr.id)

    @staticmethod
    def update_user_password(username, new_password) -> str:
        # Find the user by email; the account must exist and be unique.
        user_list = UserService.query_user_by_email(username)
        if not user_list:
            raise UserNotFoundError(username)
        elif len(user_list) > 1:
            raise AdminException(f"More than one user exists: {username}!")
        # Ensure new_password differs from the old one.
        usr = user_list[0]
        psw = decrypt(new_password)
        if check_password_hash(usr.password, psw):
            return "Same password, no need to update!"
        # Update the password.
        UserService.update_user_password(usr.id, psw)
        return "Password updated successfully!"

    @staticmethod
    def update_user_activate_status(username, activate_status: str):
        # Find the user by email; the account must exist and be unique.
        user_list = UserService.query_user_by_email(username)
        if not user_list:
            raise UserNotFoundError(username)
        elif len(user_list) > 1:
            raise AdminException(f"More than one user exists: {username}!")
        usr = user_list[0]
        # Normalize activate_status before handling it.
        _activate_status = activate_status.lower()
        target_status = {
            'on': ActiveEnum.ACTIVE.value,
            'off': ActiveEnum.INACTIVE.value,
        }.get(_activate_status)
        if not target_status:
            raise AdminException(f"Invalid activate_status: {activate_status}")
        if target_status == usr.is_active:
            return f"User activate status is already {_activate_status}!"
        # Update is_active.
        UserService.update_user(usr.id, {"is_active": target_status})
        return f"Turned user activate status {_activate_status} successfully!"

class UserServiceMgr:
    @staticmethod
    def get_user_datasets(username):
        # Find the user by email.
        user_list = UserService.query_user_by_email(username)
        if not user_list:
            raise UserNotFoundError(username)
        elif len(user_list) > 1:
            raise AdminException(f"More than one user exists: {username}!")
        # Find the joined tenants.
        usr = user_list[0]
        tenants = TenantService.get_joined_tenants_by_user_id(usr.id)
        tenant_ids = [m["tenant_id"] for m in tenants]
        # Filter permitted and owned knowledge bases.
        return KnowledgebaseService.get_all_kb_by_tenant_ids(tenant_ids, usr.id)

    @staticmethod
    def get_user_agents(username):
        # Find the user by email.
        user_list = UserService.query_user_by_email(username)
        if not user_list:
            raise UserNotFoundError(username)
        elif len(user_list) > 1:
            raise AdminException(f"More than one user exists: {username}!")
        # Find the joined tenants.
        usr = user_list[0]
        tenants = TenantService.get_joined_tenants_by_user_id(usr.id)
        tenant_ids = [m["tenant_id"] for m in tenants]
        # Filter permitted and owned agents.
        res = UserCanvasService.get_all_agents_by_tenant_ids(tenant_ids, usr.id)
        return [{
            'title': r['title'],
            'permission': r['permission'],
            'canvas_type': r['canvas_type'],
            'canvas_category': r['canvas_category']
        } for r in res]

class ServiceMgr:
    @staticmethod
    def get_all_services():
        result = []
        configs = SERVICE_CONFIGS.configs
        for config in configs:
            result.append(config.to_dict())
        return result

    @staticmethod
    def get_services_by_type(service_type_str: str):
        raise AdminException("get_services_by_type: not implemented")

    @staticmethod
    def get_service_details(service_id: int):
        raise AdminException("get_service_details: not implemented")

    @staticmethod
    def shutdown_service(service_id: int):
        raise AdminException("shutdown_service: not implemented")

    @staticmethod
    def restart_service(service_id: int):
        raise AdminException("restart_service: not implemented")

@@ -13,3 +13,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
+from beartype.claw import beartype_this_package
+beartype_this_package()
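
For context, beartype_this_package() installs an import hook that applies runtime type checking to every annotated callable in the package. A minimal sketch of the effect, using beartype's decorator directly (illustrative, not repository code):

from beartype import beartype

@beartype
def add(x: int, y: int) -> int:
    return x + y

add(1, 2)    # OK
add("1", 2)  # raises a beartype type-checking violation at call time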


@@ -13,10 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-import asyncio
 import base64
-import inspect
-import binascii
 import json
 import logging
 import re
@@ -29,11 +26,7 @@ from typing import Any, Union, Tuple
 from agent.component import component_class
 from agent.component.base import ComponentBase
 from api.db.services.file_service import FileService
-from api.db.services.llm_service import LLMBundle
-from api.db.services.task_service import has_canceled
-from common.constants import LLMType
-from common.misc_utils import get_uuid, hash_str2int
-from common.exceptions import TaskCanceledException
+from api.utils import get_uuid, hash_str2int
 from rag.prompts.generator import chunks_format
 from rag.utils.redis_conn import REDIS_CONN
@@ -85,12 +78,14 @@ class Graph:
         self.dsl = json.loads(dsl)
         self._tenant_id = tenant_id
         self.task_id = task_id if task_id else get_uuid()
-        self._thread_pool = ThreadPoolExecutor(max_workers=5)
         self.load()

     def load(self):
         self.components = self.dsl["components"]
         cpn_nms = set([])
+        for k, cpn in self.components.items():
+            cpn_nms.add(cpn["obj"]["component_name"])
         for k, cpn in self.components.items():
             cpn_nms.add(cpn["obj"]["component_name"])
             param = component_class(cpn["obj"]["component_name"] + "Param")()
@@ -131,7 +126,6 @@ class Graph:
             self.components[k]["obj"].reset()
         try:
             REDIS_CONN.delete(f"{self.task_id}-logs")
-            REDIS_CONN.delete(f"{self.task_id}-cancel")
         except Exception as e:
             logging.exception(e)
@@ -159,33 +153,6 @@ class Graph:
     def get_tenant_id(self):
         return self._tenant_id

-    def get_value_with_variable(self, value: str) -> Any:
-        pat = re.compile(r"\{* *\{([a-zA-Z:0-9]+@[A-Za-z0-9_.-]+|sys\.[A-Za-z0-9_.]+|env\.[A-Za-z0-9_.]+)\} *\}*")
-        out_parts = []
-        last = 0
-        for m in pat.finditer(value):
-            out_parts.append(value[last:m.start()])
-            key = m.group(1)
-            v = self.get_variable_value(key)
-            if v is None:
-                rep = ""
-            elif isinstance(v, partial):
-                buf = []
-                for chunk in v():
-                    buf.append(chunk)
-                rep = "".join(buf)
-            elif isinstance(v, str):
-                rep = v
-            else:
-                rep = json.dumps(v, ensure_ascii=False)
-            out_parts.append(rep)
-            last = m.end()
-        out_parts.append(value[last:])
-        return "".join(out_parts)

     def get_variable_value(self, exp: str) -> Any:
         exp = exp.strip("{").strip("}").strip(" ").strip("{").strip("}")
         if exp.find("@") < 0:
@@ -194,122 +161,33 @@ class Graph:
         cpn = self.get_component(cpn_id)
         if not cpn:
             raise Exception(f"Can't find variable: '{cpn_id}@{var_nm}'")
-        parts = var_nm.split(".", 1)
-        root_key = parts[0]
-        rest = parts[1] if len(parts) > 1 else ""
-        root_val = cpn["obj"].output(root_key)
-        if not rest:
-            return root_val
-        return self.get_variable_param_value(root_val, rest)
+        return cpn["obj"].output(var_nm)

-    def get_variable_param_value(self, obj: Any, path: str) -> Any:
-        cur = obj
-        if not path:
-            return cur
-        for key in path.split('.'):
-            if cur is None:
-                return None
-            if isinstance(cur, str):
-                try:
-                    cur = json.loads(cur)
-                except Exception:
-                    return None
-            if isinstance(cur, dict):
-                cur = cur.get(key)
-                continue
-            if isinstance(cur, (list, tuple)):
-                try:
-                    idx = int(key)
-                    cur = cur[idx]
-                except Exception:
-                    return None
-                continue
-            cur = getattr(cur, key, None)
-        return cur
-
-    def set_variable_value(self, exp: str, value):
-        exp = exp.strip("{").strip("}").strip(" ").strip("{").strip("}")
-        if exp.find("@") < 0:
-            self.globals[exp] = value
-            return
-        cpn_id, var_nm = exp.split("@")
-        cpn = self.get_component(cpn_id)
-        if not cpn:
-            raise Exception(f"Can't find variable: '{cpn_id}@{var_nm}'")
-        parts = var_nm.split(".", 1)
-        root_key = parts[0]
-        rest = parts[1] if len(parts) > 1 else ""
-        if not rest:
-            cpn["obj"].set_output(root_key, value)
-            return
-        root_val = cpn["obj"].output(root_key)
-        if not root_val:
-            root_val = {}
-        cpn["obj"].set_output(root_key, self.set_variable_param_value(root_val, rest, value))
-
-    def set_variable_param_value(self, obj: Any, path: str, value) -> Any:
-        cur = obj
-        keys = path.split('.')
-        if not path:
-            return value
-        for key in keys:
-            if key not in cur or not isinstance(cur[key], dict):
-                cur[key] = {}
-            cur = cur[key]
-        cur[keys[-1]] = value
-        return obj
-
-    def is_canceled(self) -> bool:
-        return has_canceled(self.task_id)
-
-    def cancel_task(self) -> bool:
-        try:
-            REDIS_CONN.set(f"{self.task_id}-cancel", "x")
-        except Exception as e:
-            logging.exception(e)
-            return False
-        return True

 class Canvas(Graph):
-    def __init__(self, dsl: str, tenant_id=None, task_id=None, canvas_id=None):
+    def __init__(self, dsl: str, tenant_id=None, task_id=None):
         self.globals = {
             "sys.query": "",
             "sys.user_id": tenant_id,
             "sys.conversation_turns": 0,
-            "sys.files": [],
-            "sys.history": []
+            "sys.files": []
         }
-        self.variables = {}
         super().__init__(dsl, tenant_id, task_id)
-        self._id = canvas_id

     def load(self):
         super().load()
         self.history = self.dsl["history"]
         if "globals" in self.dsl:
             self.globals = self.dsl["globals"]
-            if "sys.history" not in self.globals:
-                self.globals["sys.history"] = []
         else:
             self.globals = {
                 "sys.query": "",
                 "sys.user_id": "",
                 "sys.conversation_turns": 0,
-                "sys.files": [],
-                "sys.history": []
+                "sys.files": []
             }
-        if "variables" in self.dsl:
-            self.variables = self.dsl["variables"]
-        else:
-            self.variables = {}
         self.retrieval = self.dsl["retrieval"]
         self.memory = self.dsl.get("memory", [])
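
The removed get_variable_param_value() walks a dotted path through nested data, JSON-decoding strings along the way; a compact runnable version of the same idea:

import json

def dig(obj, path: str):
    cur = obj
    for key in path.split("."):
        if cur is None:
            return None
        if isinstance(cur, str):
            # Strings may hold serialized JSON; decode before descending.
            try:
                cur = json.loads(cur)
            except Exception:
                return None
        if isinstance(cur, dict):
            cur = cur.get(key)
        elif isinstance(cur, (list, tuple)):
            try:
                cur = cur[int(key)]
            except Exception:
                return None
        else:
            cur = getattr(cur, key, None)
    return cur

assert dig({"a": '[{"b": 42}]'}, "a.0.b") == 42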
@@ -325,69 +203,33 @@ class Canvas(Graph):
         self.history = []
         self.retrieval = []
         self.memory = []
-        print(self.variables)
-        for k in self.globals.keys():
-            if k.startswith("sys."):
-                if isinstance(self.globals[k], str):
-                    self.globals[k] = ""
-                elif isinstance(self.globals[k], int):
-                    self.globals[k] = 0
-                elif isinstance(self.globals[k], float):
-                    self.globals[k] = 0
-                elif isinstance(self.globals[k], list):
-                    self.globals[k] = []
-                elif isinstance(self.globals[k], dict):
-                    self.globals[k] = {}
-                else:
-                    self.globals[k] = None
-            if k.startswith("env."):
-                key = k[4:]
-                if key in self.variables:
-                    variable = self.variables[key]
-                    if variable["type"] == "string":
-                        self.globals[k] = ""
-                        variable["value"] = ""
-                    elif variable["type"] == "number":
-                        self.globals[k] = 0
-                        variable["value"] = 0
-                    elif variable["type"] == "boolean":
-                        self.globals[k] = False
-                        variable["value"] = False
-                    elif variable["type"] == "object":
-                        self.globals[k] = {}
-                        variable["value"] = {}
-                    elif variable["type"].startswith("array"):
-                        self.globals[k] = []
-                        variable["value"] = []
-                    else:
-                        self.globals[k] = ""
-                else:
-                    self.globals[k] = ""
+        for k in self.globals.keys():
+            if isinstance(self.globals[k], str):
+                self.globals[k] = ""
+            elif isinstance(self.globals[k], int):
+                self.globals[k] = 0
+            elif isinstance(self.globals[k], float):
+                self.globals[k] = 0
+            elif isinstance(self.globals[k], list):
+                self.globals[k] = []
+            elif isinstance(self.globals[k], dict):
+                self.globals[k] = {}
+            else:
+                self.globals[k] = None

-    async def run(self, **kwargs):
+    def run(self, **kwargs):
         st = time.perf_counter()
-        self._loop = asyncio.get_running_loop()
         self.message_id = get_uuid()
         created_at = int(time.time())
         self.add_user_input(kwargs.get("query"))
         for k, cpn in self.components.items():
             self.components[k]["obj"].reset(True)
-        if kwargs.get("webhook_payload"):
-            for k, cpn in self.components.items():
-                if self.components[k]["obj"].component_name.lower() == "begin" and self.components[k]["obj"]._param.mode == "Webhook":
-                    payload = kwargs.get("webhook_payload", {})
-                    if "input" in payload:
-                        self.components[k]["obj"].set_input_value("request", payload["input"])
-                    for kk, vv in payload.items():
-                        if kk == "input":
-                            continue
-                        self.components[k]["obj"].set_output(kk, vv)
         for k in kwargs.keys():
             if k in ["query", "user_id", "files"] and kwargs[k]:
                 if k == "files":
-                    self.globals[f"sys.{k}"] = await self.get_files_async(kwargs[k])
+                    self.globals[f"sys.{k}"] = self.get_files(kwargs[k])
                 else:
                     self.globals[f"sys.{k}"] = kwargs[k]
         if not self.globals["sys.conversation_turns"]:
@@ -409,62 +251,20 @@ class Canvas(Graph):
         self.path.append("begin")
         self.retrieval.append({"chunks": [], "doc_aggs": []})
-        if self.is_canceled():
-            msg = f"Task {self.task_id} has been canceled before starting."
-            logging.info(msg)
-            raise TaskCanceledException(msg)
         yield decorate("workflow_started", {"inputs": kwargs.get("inputs")})
         self.retrieval.append({"chunks": {}, "doc_aggs": {}})

-        async def _run_batch(f, t):
-            if self.is_canceled():
-                msg = f"Task {self.task_id} has been canceled during batch execution."
-                logging.info(msg)
-                raise TaskCanceledException(msg)
-            loop = asyncio.get_running_loop()
-            tasks = []
-            max_concurrency = getattr(self._thread_pool, "_max_workers", 5)
-            sem = asyncio.Semaphore(max_concurrency)
-            async def _invoke_one(cpn_obj, sync_fn, call_kwargs, use_async: bool):
-                async with sem:
-                    if use_async:
-                        await cpn_obj.invoke_async(**(call_kwargs or {}))
-                        return
-                    await loop.run_in_executor(self._thread_pool, partial(sync_fn, **(call_kwargs or {})))
-            i = f
-            while i < t:
-                cpn = self.get_component_obj(self.path[i])
-                task_fn = None
-                call_kwargs = None
-                if cpn.component_name.lower() in ["begin", "userfillup"]:
-                    call_kwargs = {"inputs": kwargs.get("inputs", {})}
-                    task_fn = cpn.invoke
-                    i += 1
-                else:
-                    for _, ele in cpn.get_input_elements().items():
-                        if isinstance(ele, dict) and ele.get("_cpn_id") and ele.get("_cpn_id") not in self.path[:i] and self.path[0].lower().find("userfillup") < 0:
-                            self.path.pop(i)
-                            t -= 1
-                            break
-                    else:
-                        call_kwargs = cpn.get_input()
-                        task_fn = cpn.invoke
-                        i += 1
-                if task_fn is None:
-                    continue
-                fn_invoke_async = getattr(cpn, "_invoke_async", None)
-                use_async = (fn_invoke_async and asyncio.iscoroutinefunction(fn_invoke_async)) or asyncio.iscoroutinefunction(getattr(cpn, "_invoke", None))
-                tasks.append(asyncio.create_task(_invoke_one(cpn, task_fn, call_kwargs, use_async)))
-            if tasks:
-                await asyncio.gather(*tasks)
+        def _run_batch(f, t):
+            with ThreadPoolExecutor(max_workers=5) as executor:
+                thr = []
+                for i in range(f, t):
+                    cpn = self.get_component_obj(self.path[i])
+                    if cpn.component_name.lower() in ["begin", "userfillup"]:
+                        thr.append(executor.submit(cpn.invoke, inputs=kwargs.get("inputs", {})))
+                    else:
+                        thr.append(executor.submit(cpn.invoke, **cpn.get_input()))
+                for t in thr:
+                    t.result()

         def _node_finished(cpn_obj):
             return decorate("node_finished", {
@@ -481,7 +281,6 @@ class Canvas(Graph):
         self.error = ""
         idx = len(self.path) - 1
         partials = []
-        tts_mdl = None
         while idx < len(self.path):
             to = len(self.path)
             for i in range(idx, to):
@@ -492,72 +291,31 @@ class Canvas(Graph):
                     "component_type": self.get_component_type(self.path[i]),
                     "thoughts": self.get_component_thoughts(self.path[i])
                 })
-            await _run_batch(idx, to)
-            to = len(self.path)
+            _run_batch(idx, to)

-            # post-processing of components invocation
+            # post processing of components invocation
             for i in range(idx, to):
                 cpn = self.get_component(self.path[i])
                 cpn_obj = self.get_component_obj(self.path[i])
                 if cpn_obj.component_name.lower() == "message":
-                    if cpn_obj.get_param("auto_play"):
-                        tts_mdl = LLMBundle(self._tenant_id, LLMType.TTS)
                     if isinstance(cpn_obj.output("content"), partial):
                         _m = ""
-                        buff_m = ""
-                        stream = cpn_obj.output("content")()
-                        async def _process_stream(m):
-                            nonlocal buff_m, _m, tts_mdl
-                            if not m:
-                                return
-                            if m == "<think>":
-                                return decorate("message", {"content": "", "start_to_think": True})
-                            elif m == "</think>":
-                                return decorate("message", {"content": "", "end_to_think": True})
-                            else:
-                                buff_m += m
-                                _m += m
-                                if len(buff_m) > 16:
-                                    ev = decorate(
-                                        "message",
-                                        {
-                                            "content": m,
-                                            "audio_binary": self.tts(tts_mdl, buff_m)
-                                        }
-                                    )
-                                    buff_m = ""
-                                    return ev
-                                return decorate("message", {"content": m})
-                        if inspect.isasyncgen(stream):
-                            async for m in stream:
-                                ev = await _process_stream(m)
-                                if ev:
-                                    yield ev
-                        else:
-                            for m in stream:
-                                ev = await _process_stream(m)
-                                if ev:
-                                    yield ev
-                        if buff_m:
-                            yield decorate("message", {"content": "", "audio_binary": self.tts(tts_mdl, buff_m)})
-                            buff_m = ""
+                        for m in cpn_obj.output("content")():
+                            if not m:
+                                continue
+                            if m == "<think>":
+                                yield decorate("message", {"content": "", "start_to_think": True})
+                            elif m == "</think>":
+                                yield decorate("message", {"content": "", "end_to_think": True})
+                            else:
+                                yield decorate("message", {"content": m})
+                                _m += m
                         cpn_obj.set_output("content", _m)
                         cite = re.search(r"\[ID:[ 0-9]+\]", _m)
                     else:
                         yield decorate("message", {"content": cpn_obj.output("content")})
                         cite = re.search(r"\[ID:[ 0-9]+\]", cpn_obj.output("content"))
-                    message_end = {}
-                    if cpn_obj.get_param("status"):
-                        message_end["status"] = cpn_obj.get_param("status")
-                    if isinstance(cpn_obj.output("attachment"), dict):
-                        message_end["attachment"] = cpn_obj.output("attachment")
-                    if cite:
-                        message_end["reference"] = self.get_reference()
-                    yield decorate("message_end", message_end)
+                    yield decorate("message_end", {"reference": self.get_reference() if cite else None})

                 while partials:
                     _cpn_obj = self.get_component_obj(partials[0])
@@ -578,7 +336,7 @@ class Canvas(Graph):
                 else:
                     self.error = cpn_obj.error()

-                if cpn_obj.component_name.lower() not in ("iteration", "loop"):
+                if cpn_obj.component_name.lower() != "iteration":
                     if isinstance(cpn_obj.output("content"), partial):
                         if self.error:
                             cpn_obj.set_output("content", None)
@@ -603,16 +361,14 @@ class Canvas(Graph):
                 for cpn_id in cpn_ids:
                     _append_path(cpn_id)

-                if cpn_obj.component_name.lower() in ("iterationitem", "loopitem") and cpn_obj.end():
+                if cpn_obj.component_name.lower() == "iterationitem" and cpn_obj.end():
                     iter = cpn_obj.get_parent()
                     yield _node_finished(iter)
                     _extend_path(self.get_component(cpn["parent_id"])["downstream"])
                 elif cpn_obj.component_name.lower() in ["categorize", "switch"]:
                     _extend_path(cpn_obj.output("_next"))
-                elif cpn_obj.component_name.lower() in ("iteration", "loop"):
+                elif cpn_obj.component_name.lower() == "iteration":
                     _append_path(cpn_obj.get_start())
-                elif cpn_obj.component_name.lower() == "exitloop" and cpn_obj.get_parent().component_name.lower() == "loop":
-                    _extend_path(self.get_component(cpn["parent_id"])["downstream"])
                 elif not cpn["downstream"] and cpn_obj.get_parent():
                     _append_path(cpn_obj.get_parent().get_start())
                 else:
@@ -631,13 +387,13 @@ class Canvas(Graph):
             for c in path:
                 o = self.get_component_obj(c)
                 if o.component_name.lower() == "userfillup":
-                    o.invoke()
                     another_inputs.update(o.get_input_elements())
                     if o.get_param("enable_tips"):
-                        tips = o.output("tips")
+                        tips = o.get_param("tips")
             self.path = path
             yield decorate("user_inputs", {"inputs": another_inputs, "tips": tips})
             return

         self.path = self.path[:idx]
         if not self.error:
             yield decorate("workflow_finished",
@@ -648,15 +404,6 @@ class Canvas(Graph):
                                "created_at": st,
                            })
             self.history.append(("assistant", self.get_component_obj(self.path[-1]).output()))
-            self.globals["sys.history"].append(f"{self.history[-1][0]}: {self.history[-1][1]}")
-        elif "Task has been canceled" in self.error:
-            yield decorate("workflow_finished",
-                           {
-                               "inputs": kwargs.get("inputs"),
-                               "outputs": "Task has been canceled",
-                               "elapsed_time": time.perf_counter() - st,
-                               "created_at": st,
-                           })

     def is_reff(self, exp: str) -> bool:
         exp = exp.strip("{").strip("}")
@@ -669,50 +416,6 @@ class Canvas(Graph):
             return False
         return True

-    def tts(self, tts_mdl, text):
-        def clean_tts_text(text: str) -> str:
-            if not text:
-                return ""
-            text = text.encode("utf-8", "ignore").decode("utf-8", "ignore")
-            text = re.sub(r"[\x00-\x08\x0B-\x0C\x0E-\x1F\x7F]", "", text)
-            emoji_pattern = re.compile(
-                "[\U0001F600-\U0001F64F"
-                "\U0001F300-\U0001F5FF"
-                "\U0001F680-\U0001F6FF"
-                "\U0001F1E0-\U0001F1FF"
-                "\U00002700-\U000027BF"
-                "\U0001F900-\U0001F9FF"
-                "\U0001FA70-\U0001FAFF"
-                "\U0001FAD0-\U0001FAFF]+",
-                flags=re.UNICODE
-            )
-            text = emoji_pattern.sub("", text)
-            text = re.sub(r"\s+", " ", text).strip()
-            MAX_LEN = 500
-            if len(text) > MAX_LEN:
-                text = text[:MAX_LEN]
-            return text
-
-        if not tts_mdl or not text:
-            return None
-        text = clean_tts_text(text)
-        if not text:
-            return None
-        bin = b""
-        try:
-            for chunk in tts_mdl.tts(text):
-                bin += chunk
-        except Exception as e:
-            logging.error(f"TTS failed: {e}, text={text!r}")
-            return None
-        return binascii.hexlify(bin).decode("utf-8")

     def get_history(self, window_size):
         convs = []
         if window_size <= 0:
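
The removed tts() helper sanitizes the text, then hex-encodes the synthesized audio for transport in the event payload; the encoding step in isolation (the byte chunks stand in for tts_mdl.tts() output):

import binascii

audio_chunks = [b"\x00\x01", b"\x02"]  # stand-in for chunks from tts_mdl.tts(text)
hex_audio = binascii.hexlify(b"".join(audio_chunks)).decode("utf-8")
assert hex_audio == "000102"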
@@ -726,7 +429,6 @@ class Canvas(Graph):
     def add_user_input(self, question):
         self.history.append(("user", question))
-        self.globals["sys.history"].append(f"{self.history[-1][0]}: {self.history[-1][1]}")

     def get_prologue(self):
         return self.components["begin"]["obj"]._param.prologue
@@ -734,9 +436,6 @@ class Canvas(Graph):
     def get_mode(self):
         return self.components["begin"]["obj"]._param.mode

-    def get_sys_query(self):
-        return self.globals.get("sys.query", "")

     def set_global_param(self, **kwargs):
         self.globals.update(kwargs)
@@ -746,33 +445,20 @@ class Canvas(Graph):
     def get_component_input_elements(self, cpnnm):
         return self.components[cpnnm]["obj"].get_input_elements()

-    async def get_files_async(self, files: Union[None, list[dict]]) -> list[str]:
+    def get_files(self, files: Union[None, list[dict]]) -> list[str]:
         if not files:
             return []

         def image_to_base64(file):
             return "data:{};base64,{}".format(file["mime_type"],
                                               base64.b64encode(FileService.get_blob(file["created_by"], file["id"])).decode("utf-8"))

-        def parse_file(file):
-            blob = FileService.get_blob(file["created_by"], file["id"])
-            return FileService.parse(file["name"], blob, True, file["created_by"])
-        loop = asyncio.get_running_loop()
-        tasks = []
+        exe = ThreadPoolExecutor(max_workers=5)
+        threads = []
         for file in files:
             if file["mime_type"].find("image") >= 0:
-                tasks.append(loop.run_in_executor(self._thread_pool, image_to_base64, file))
+                threads.append(exe.submit(image_to_base64, file))
                 continue
-            tasks.append(loop.run_in_executor(self._thread_pool, parse_file, file))
-        return await asyncio.gather(*tasks)
-
-    def get_files(self, files: Union[None, list[dict]]) -> list[str]:
-        """
-        Synchronous wrapper for get_files_async, used by sync component invoke paths.
-        """
-        loop = getattr(self, "_loop", None)
-        if loop and loop.is_running():
-            return asyncio.run_coroutine_threadsafe(self.get_files_async(files), loop).result()
-        return asyncio.run(self.get_files_async(files))
+            threads.append(exe.submit(FileService.parse, file["name"], FileService.get_blob(file["created_by"], file["id"]), True, file["created_by"]))
+        return [th.result() for th in threads]

     def tool_use_callback(self, agent_id: str, func_name: str, params: dict, result: Any, elapsed_time=None):
         agent_ids = agent_id.split("-->")
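
The removed synchronous get_files() bridges sync callers into an already-running event loop before falling back to asyncio.run(); the general pattern (names illustrative):

import asyncio

async def fetch():
    return "data"

def fetch_sync(loop=None):
    if loop and loop.is_running():
        # Called from a worker thread while the loop runs elsewhere.
        return asyncio.run_coroutine_threadsafe(fetch(), loop).result()
    return asyncio.run(fetch())

print(fetch_sync())  # -> data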
@@ -826,3 +512,4 @@ class Canvas(Graph):
     def get_component_thoughts(self, cpn_id) -> str:
         return self.components.get(cpn_id)["obj"].thoughts()


@@ -13,6 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
 import os
 import importlib
 import inspect
@@ -49,10 +50,9 @@ del _package_path, _import_submodules, _extract_classes_from_module
 def component_class(class_name):
-    for module_name in ["agent.component", "agent.tools", "rag.flow"]:
+    for mdl in ["agent.component", "agent.tools", "rag.flow"]:
         try:
-            return getattr(importlib.import_module(module_name), class_name)
+            return getattr(importlib.import_module(mdl), class_name)
         except Exception:
-            # logging.warning(f"Can't import module: {module_name}, error: {e}")
             pass
     assert False, f"Can't import {class_name}"


@@ -13,11 +13,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-import asyncio
-import json
 import logging
 import os
 import re
+from concurrent.futures import ThreadPoolExecutor
 from copy import deepcopy
 from functools import partial
 from typing import Any
@@ -28,10 +27,10 @@ from agent.tools.base import LLMToolPluginCallSession, ToolParamBase, ToolBase,
 from api.db.services.llm_service import LLMBundle
 from api.db.services.tenant_llm_service import TenantLLMService
 from api.db.services.mcp_server_service import MCPServerService
-from common.connection_utils import timeout
-from rag.prompts.generator import next_step_async, COMPLETE_TASK, \
-    citation_prompt, kb_prompt, citation_plus, full_question, message_fit_in, structured_output_prompt
-from common.mcp_tool_call_conn import MCPToolCallSession, mcp_tool_metadata_to_openai_tool
+from api.utils.api_utils import timeout
+from rag.prompts.generator import next_step, COMPLETE_TASK, analyze_task, \
+    citation_prompt, reflect, rank_memories, kb_prompt, citation_plus, full_question, message_fit_in
+from rag.utils.mcp_tool_call_conn import MCPToolCallSession, mcp_tool_metadata_to_openai_tool
 from agent.component.llm import LLMParam, LLM
@@ -84,11 +83,9 @@ class Agent(LLM, ToolBase):
     def __init__(self, canvas, id, param: LLMParam):
         LLM.__init__(self, canvas, id, param)
         self.tools = {}
-        for idx, cpn in enumerate(self._param.tools):
+        for cpn in self._param.tools:
             cpn = self._load_tool_obj(cpn)
-            original_name = cpn.get_meta()["function"]["name"]
-            indexed_name = f"{original_name}_{idx}"
-            self.tools[indexed_name] = cpn
+            self.tools[cpn.get_meta()["function"]["name"]] = cpn

         self.chat_mdl = LLMBundle(self._canvas.get_tenant_id(), TenantLLMService.llm_id2llm_type(self._param.llm_id), self._param.llm_id,
                                   max_retries=self._param.max_retries,
@@ -96,12 +93,7 @@ class Agent(LLM, ToolBase):
                                   max_rounds=self._param.max_rounds,
                                   verbose_tool_use=True
                                   )
-        self.tool_meta = []
-        for indexed_name, tool_obj in self.tools.items():
-            original_meta = tool_obj.get_meta()
-            indexed_meta = deepcopy(original_meta)
-            indexed_meta["function"]["name"] = indexed_name
-            self.tool_meta.append(indexed_meta)
+        self.tool_meta = [v.get_meta() for _, v in self.tools.items()]

         for mcp in self._param.mcp:
             _, mcp_server = MCPServerService.get_by_id(mcp["mcp_id"])
@@ -115,8 +107,7 @@ class Agent(LLM, ToolBase):
     def _load_tool_obj(self, cpn: dict) -> object:
         from agent.component import component_class
-        tool_name = cpn["component_name"]
-        param = component_class(tool_name + "Param")()
+        param = component_class(cpn["component_name"] + "Param")()
         param.update(cpn["params"])
         try:
             param.check()
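
The removed __init__ variant suffixes each tool name with its position so that two tools of the same type can coexist in the registry; the renaming in isolation:

tools = ["search", "search", "calculator"]  # hypothetical duplicate tool names
indexed = {f"{name}_{idx}": name for idx, name in enumerate(tools)}
assert set(indexed) == {"search_0", "search_1", "calculator_2"}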
@@ -146,37 +137,8 @@ class Agent(LLM, ToolBase):
             res.update(cpn.get_input_form())
         return res

-    def _get_output_schema(self):
-        try:
-            cand = self._param.outputs.get("structured")
-        except Exception:
-            return None
-        if isinstance(cand, dict):
-            if isinstance(cand.get("properties"), dict) and len(cand["properties"]) > 0:
-                return cand
-            for k in ("schema", "structured"):
-                if isinstance(cand.get(k), dict) and isinstance(cand[k].get("properties"), dict) and len(cand[k]["properties"]) > 0:
-                    return cand[k]
-        return None
-
-    async def _force_format_to_schema_async(self, text: str, schema_prompt: str) -> str:
-        fmt_msgs = [
-            {"role": "system", "content": schema_prompt + "\nIMPORTANT: Output ONLY valid JSON. No markdown, no extra text."},
-            {"role": "user", "content": text},
-        ]
-        _, fmt_msgs = message_fit_in(fmt_msgs, int(self.chat_mdl.max_length * 0.97))
-        return await self._generate_async(fmt_msgs)
-
-    def _invoke(self, **kwargs):
-        return asyncio.run(self._invoke_async(**kwargs))
-
     @timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 20*60)))
-    async def _invoke_async(self, **kwargs):
-        if self.check_if_canceled("Agent processing"):
-            return
+    def _invoke(self, **kwargs):
         if kwargs.get("user_prompt"):
             usr_pmt = ""
             if kwargs.get("reasoning"):
@@ -190,29 +152,20 @@ class Agent(LLM, ToolBase):
             self._param.prompts = [{"role": "user", "content": usr_pmt}]

         if not self.tools:
-            if self.check_if_canceled("Agent processing"):
-                return
-            return await LLM._invoke_async(self, **kwargs)
+            return LLM._invoke(self, **kwargs)

         prompt, msg, user_defined_prompt = self._prepare_prompt_variables()
-        output_schema = self._get_output_schema()
-        schema_prompt = ""
-        if output_schema:
-            schema = json.dumps(output_schema, ensure_ascii=False, indent=2)
-            schema_prompt = structured_output_prompt(schema)

         downstreams = self._canvas.get_component(self._id)["downstream"] if self._canvas.get_component(self._id) else []
         ex = self.exception_handler()
-        if any([self._canvas.get_component_obj(cid).component_name.lower() == "message" for cid in downstreams]) and not (ex and ex["goto"]) and not output_schema:
-            self.set_output("content", partial(self.stream_output_with_tools_async, prompt, deepcopy(msg), user_defined_prompt))
+        if any([self._canvas.get_component_obj(cid).component_name.lower() == "message" for cid in downstreams]) and not self._param.output_structure and not (ex and ex["goto"]):
+            self.set_output("content", partial(self.stream_output_with_tools, prompt, msg, user_defined_prompt))
             return

         _, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
         use_tools = []
         ans = ""
-        async for delta_ans, _tk in self._react_with_tools_streamly_async_simple(prompt, msg, use_tools, user_defined_prompt, schema_prompt=schema_prompt):
-            if self.check_if_canceled("Agent processing"):
-                return
+        for delta_ans, tk in self._react_with_tools_streamly(prompt, msg, use_tools, user_defined_prompt):
             ans += delta_ans

         if ans.find("**ERROR**") >= 0:
@@ -223,48 +176,22 @@ class Agent(LLM, ToolBase):
             self.set_output("_ERROR", ans)
             return

-        if output_schema:
-            error = ""
-            for _ in range(self._param.max_retries + 1):
-                try:
-                    def clean_formated_answer(ans: str) -> str:
-                        ans = re.sub(r"^.*</think>", "", ans, flags=re.DOTALL)
-                        ans = re.sub(r"^.*```json", "", ans, flags=re.DOTALL)
-                        return re.sub(r"```\n*$", "", ans, flags=re.DOTALL)
-                    obj = json_repair.loads(clean_formated_answer(ans))
-                    self.set_output("structured", obj)
-                    if use_tools:
-                        self.set_output("use_tools", use_tools)
-                    return obj
-                except Exception:
-                    error = "The answer cannot be parsed as JSON"
-                    ans = await self._force_format_to_schema_async(ans, schema_prompt)
-                    if ans.find("**ERROR**") >= 0:
-                        continue
-            self.set_output("_ERROR", error)
-            return

         self.set_output("content", ans)
         if use_tools:
             self.set_output("use_tools", use_tools)
         return ans

-    async def stream_output_with_tools_async(self, prompt, msg, user_defined_prompt={}):
+    def stream_output_with_tools(self, prompt, msg, user_defined_prompt={}):
         _, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
         answer_without_toolcall = ""
         use_tools = []
-        async for delta_ans, _ in self._react_with_tools_streamly_async_simple(prompt, msg, use_tools, user_defined_prompt):
-            if self.check_if_canceled("Agent streaming"):
-                return
+        for delta_ans, _ in self._react_with_tools_streamly(prompt, msg, use_tools, user_defined_prompt):
             if delta_ans.find("**ERROR**") >= 0:
                 if self.get_exception_default_value():
                     self.set_output("content", self.get_exception_default_value())
                     yield self.get_exception_default_value()
                 else:
                     self.set_output("_ERROR", delta_ans)
                 return
             answer_without_toolcall += delta_ans
             yield delta_ans
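
The removed structured-output path strips <think> blocks and Markdown fences before parsing the model's JSON; the cleanup in isolation (json.loads suffices for this well-formed sample, where the real code uses json_repair):

import json
import re

def clean_formated_answer(ans: str) -> str:
    ans = re.sub(r"^.*</think>", "", ans, flags=re.DOTALL)
    ans = re.sub(r"^.*```json", "", ans, flags=re.DOTALL)
    return re.sub(r"```\n*$", "", ans, flags=re.DOTALL)

raw = '<think>plan...</think>```json\n{"ok": true}```'
assert json.loads(clean_formated_answer(raw)) == {"ok": True}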
@@ -272,58 +199,55 @@ class Agent(LLM, ToolBase):
         if use_tools:
             self.set_output("use_tools", use_tools)

-    async def _react_with_tools_streamly_async_simple(self, prompt, history: list[dict], use_tools, user_defined_prompt={}, schema_prompt: str = ""):
+    def _gen_citations(self, text):
+        retrievals = self._canvas.get_reference()
+        retrievals = {"chunks": list(retrievals["chunks"].values()), "doc_aggs": list(retrievals["doc_aggs"].values())}
+        formated_refer = kb_prompt(retrievals, self.chat_mdl.max_length, True)
+        for delta_ans in self._generate_streamly([{"role": "system", "content": citation_plus("\n\n".join(formated_refer))},
+                                                  {"role": "user", "content": text}
+                                                  ]):
+            yield delta_ans
+
+    def _react_with_tools_streamly(self, prompt, history: list[dict], use_tools, user_defined_prompt={}):
         token_count = 0
         tool_metas = self.tool_meta
         hist = deepcopy(history)
         last_calling = ""
         if len(hist) > 3:
             st = timer()
-            user_request = await full_question(messages=history, chat_mdl=self.chat_mdl)
+            user_request = full_question(messages=history, chat_mdl=self.chat_mdl)
             self.callback("Multi-turn conversation optimization", {}, user_request, elapsed_time=timer()-st)
         else:
             user_request = history[-1]["content"]

-        def build_task_desc(prompt: str, user_request: str, user_defined_prompt: dict | None = None) -> str:
-            """Build a minimal task_desc by concatenating prompt, query, and tool schemas."""
-            user_defined_prompt = user_defined_prompt or {}
-            task_desc = (
-                "### Agent Prompt\n"
-                f"{prompt}\n\n"
-                "### User Request\n"
-                f"{user_request}\n\n"
-            )
-            if user_defined_prompt:
-                udp_json = json.dumps(user_defined_prompt, ensure_ascii=False, indent=2)
-                task_desc += "\n### User Defined Prompts\n" + udp_json + "\n"
-            return task_desc
-
-        async def use_tool_async(name, args):
-            nonlocal hist, use_tools, last_calling
+        def use_tool(name, args):
+            nonlocal hist, use_tools, token_count, last_calling, user_request
             logging.info(f"{last_calling=} == {name=}")
+            # Summarize of function calling
+            # if all([
+            #     isinstance(self.toolcall_session.get_tool_obj(name), Agent),
+            #     last_calling,
+            #     last_calling != name
+            # ]):
+            #     self.toolcall_session.get_tool_obj(name).add2system_prompt(f"The chat history with other agents are as following: \n" + self.get_useful_memory(user_request, str(args["user_prompt"]), user_defined_prompt))
             last_calling = name
-            tool_response = await self.toolcall_session.tool_call_async(name, args)
+            tool_response = self.toolcall_session.tool_call(name, args)
             use_tools.append({
                 "name": name,
                 "arguments": args,
                 "results": tool_response
             })
+            # self.callback("add_memory", {}, "...")
+            # self.add_memory(hist[-2]["content"], hist[-1]["content"], name, args, str(tool_response), user_defined_prompt)
             return name, tool_response

-        async def complete():
+        def complete():
             nonlocal hist
             need2cite = self._param.cite and self._canvas.get_reference()["chunks"] and self._id.find("-->") < 0
-            if schema_prompt:
-                need2cite = False
             cited = False
-            if hist and hist[0]["role"] == "system":
-                if schema_prompt:
-                    hist[0]["content"] += "\n" + schema_prompt
-                if need2cite and len(hist) < 7:
+            if hist[0]["role"] == "system" and need2cite:
+                if len(hist) < 7:
                     hist[0]["content"] += citation_prompt()
                     cited = True
             yield "", token_count
@@ -332,7 +256,7 @@ class Agent(LLM, ToolBase):
             if len(hist) > 12:
                 _hist = [hist[0], hist[1], *hist[-10:]]
             entire_txt = ""
-            async for delta_ans in self._generate_streamly(_hist):
+            for delta_ans in self._generate_streamly(_hist):
                 if not need2cite or cited:
                     yield delta_ans, 0
                 entire_txt += delta_ans
@@ -341,29 +265,12 @@ class Agent(LLM, ToolBase):
             st = timer()
             txt = ""
-            async for delta_ans in self._gen_citations_async(entire_txt):
-                if self.check_if_canceled("Agent streaming"):
-                    return
+            for delta_ans in self._gen_citations(entire_txt):
                 yield delta_ans, 0
                 txt += delta_ans
             self.callback("gen_citations", {}, txt, elapsed_time=timer()-st)

-        def build_observation(tool_call_res: list[tuple]) -> str:
-            """
-            Build an Observation from tool call results.
-            No LLM involved.
-            """
-            if not tool_call_res:
-                return ""
-            lines = ["Observation:"]
-            for name, result in tool_call_res:
-                lines.append(f"[{name} result]")
-                lines.append(str(result))
-            return "\n".join(lines)

         def append_user_content(hist, content):
             if hist[-1]["role"] == "user":
                 hist[-1]["content"] += content
@@ -371,14 +278,12 @@ class Agent(LLM, ToolBase):
             else:
                 hist.append({"role": "user", "content": content})

         st = timer()
-        task_desc = build_task_desc(prompt, user_request, user_defined_prompt)
+        task_desc = analyze_task(self.chat_mdl, prompt, user_request, tool_metas, user_defined_prompt)
         self.callback("analyze_task", {}, task_desc, elapsed_time=timer()-st)
         for _ in range(self._param.max_rounds + 1):
-            if self.check_if_canceled("Agent streaming"):
-                return
-            response, tk = await next_step_async(self.chat_mdl, hist, tool_metas, task_desc, user_defined_prompt)
+            response, tk = next_step(self.chat_mdl, hist, tool_metas, task_desc, user_defined_prompt)
             # self.callback("next_step", {}, str(response)[:256]+"...")
-            token_count += tk or 0
+            token_count += tk
             hist.append({"role": "assistant", "content": response})
             try:
                 functions = json_repair.loads(re.sub(r"```.*", "", response))
@@ -387,24 +292,23 @@ class Agent(LLM, ToolBase):
                 for f in functions:
                     if not isinstance(f, dict):
                         raise TypeError(f"An object type should be returned, but `{f}`")
-                tool_tasks = []
-                for func in functions:
-                    name = func["name"]
-                    args = func["arguments"]
-                    if name == COMPLETE_TASK:
-                        append_user_content(hist, f"Respond with a formal answer. FORGET(DO NOT mention) about `{COMPLETE_TASK}`. The language for the response MUST be as the same as the first user request.\n")
-                        async for txt, tkcnt in complete():
-                            yield txt, tkcnt
-                        return
-                    tool_tasks.append(asyncio.create_task(use_tool_async(name, args)))
-                results = await asyncio.gather(*tool_tasks) if tool_tasks else []
-                st = timer()
-                reflection = build_observation(results)
-                append_user_content(hist, reflection)
-                self.callback("reflection", {}, str(reflection), elapsed_time=timer()-st)
+                with ThreadPoolExecutor(max_workers=5) as executor:
+                    thr = []
+                    for func in functions:
+                        name = func["name"]
+                        args = func["arguments"]
+                        if name == COMPLETE_TASK:
+                            append_user_content(hist, f"Respond with a formal answer. FORGET(DO NOT mention) about `{COMPLETE_TASK}`. The language for the response MUST be as the same as the first user request.\n")
+                            for txt, tkcnt in complete():
+                                yield txt, tkcnt
+                            return
+                        thr.append(executor.submit(use_tool, name, args))
+                    st = timer()
+                    reflection = reflect(self.chat_mdl, hist, [th.result() for th in thr], user_defined_prompt)
+                    append_user_content(hist, reflection)
+                    self.callback("reflection", {}, str(reflection), elapsed_time=timer()-st)
             except Exception as e:
                 logging.exception(msg=f"Wrong JSON argument format in LLM ReAct response: {e}")
@@ -424,163 +328,21 @@ class Agent(LLM, ToolBase):
 6. Focus on delivering VALUE with the information already gathered

 Respond immediately with your final comprehensive answer.
 """
-        if self.check_if_canceled("Agent final instruction"):
-            return
         append_user_content(hist, final_instruction)
-        async for txt, tkcnt in complete():
+        for txt, tkcnt in complete():
             yield txt, tkcnt
-    # async def _react_with_tools_streamly_async(self, prompt, history: list[dict], use_tools, user_defined_prompt={}, schema_prompt: str = ""):
-    #     token_count = 0
-    #     tool_metas = self.tool_meta
-    #     hist = deepcopy(history)
-    #     last_calling = ""
-    #     if len(hist) > 3:
-    #         st = timer()
-    #         user_request = await full_question(messages=history, chat_mdl=self.chat_mdl)
-    #         self.callback("Multi-turn conversation optimization", {}, user_request, elapsed_time=timer()-st)
-    #     else:
-    #         user_request = history[-1]["content"]
-    #     async def use_tool_async(name, args):
-    #         nonlocal hist, use_tools, last_calling
-    #         logging.info(f"{last_calling=} == {name=}")
-    #         last_calling = name
-    #         tool_response = await self.toolcall_session.tool_call_async(name, args)
-    #         use_tools.append({
-    #             "name": name,
-    #             "arguments": args,
-    #             "results": tool_response
-    #         })
-    #         # self.callback("add_memory", {}, "...")
-    #         # self.add_memory(hist[-2]["content"], hist[-1]["content"], name, args, str(tool_response), user_defined_prompt)
-    #         return name, tool_response
-    #     async def complete():
-    #         nonlocal hist
-    #         need2cite = self._param.cite and self._canvas.get_reference()["chunks"] and self._id.find("-->") < 0
-    #         if schema_prompt:
-    #             need2cite = False
-    #         cited = False
-    #         if hist and hist[0]["role"] == "system":
-    #             if schema_prompt:
-    #                 hist[0]["content"] += "\n" + schema_prompt
-    #             if need2cite and len(hist) < 7:
-    #                 hist[0]["content"] += citation_prompt()
-    #                 cited = True
-    #         yield "", token_count
-    #         _hist = hist
-    #         if len(hist) > 12:
-    #             _hist = [hist[0], hist[1], *hist[-10:]]
-    #         entire_txt = ""
-    #         async for delta_ans in self._generate_streamly(_hist):
-    #             if not need2cite or cited:
-    #                 yield delta_ans, 0
-    #             entire_txt += delta_ans
-    #         if not need2cite or cited:
-    #             return
-    #         st = timer()
-    #         txt = ""
-    #         async for delta_ans in self._gen_citations_async(entire_txt):
-    #             if self.check_if_canceled("Agent streaming"):
-    #                 return
-    #             yield delta_ans, 0
-    #             txt += delta_ans
-    #         self.callback("gen_citations", {}, txt, elapsed_time=timer()-st)
-    #     def append_user_content(hist, content):
-    #         if hist[-1]["role"] == "user":
-    #             hist[-1]["content"] += content
-    #         else:
-    #             hist.append({"role": "user", "content": content})
-    #     st = timer()
-    #     task_desc = await analyze_task_async(self.chat_mdl, prompt, user_request, tool_metas, user_defined_prompt)
-    #     self.callback("analyze_task", {}, task_desc, elapsed_time=timer()-st)
-    #     for _ in range(self._param.max_rounds + 1):
-    #         if self.check_if_canceled("Agent streaming"):
-    #             return
-    #         response, tk = await next_step_async(self.chat_mdl, hist, tool_metas, task_desc, user_defined_prompt)
-    #         # self.callback("next_step", {}, str(response)[:256]+"...")
-    #         token_count += tk or 0
-    #         hist.append({"role": "assistant", "content": response})
-    #         try:
-    #             functions = json_repair.loads(re.sub(r"```.*", "", response))
-    #             if not isinstance(functions, list):
-    #                 raise TypeError(f"List should be returned, but `{functions}`")
-    #             for f in functions:
-    #                 if not isinstance(f, dict):
-    #                     raise TypeError(f"An object type should be returned, but `{f}`")
-    #             tool_tasks = []
-    #             for func in functions:
-    #                 name = func["name"]
-    #                 args = func["arguments"]
-    #                 if name == COMPLETE_TASK:
-    #                     append_user_content(hist, f"Respond with a formal answer. FORGET(DO NOT mention) about `{COMPLETE_TASK}`. The language for the response MUST be as the same as the first user request.\n")
-    #                     async for txt, tkcnt in complete():
-    #                         yield txt, tkcnt
-    #                     return
-    #                 tool_tasks.append(asyncio.create_task(use_tool_async(name, args)))
-    #             results = await asyncio.gather(*tool_tasks) if tool_tasks else []
-    #             st = timer()
-    #             reflection = await reflect_async(self.chat_mdl, hist, results, user_defined_prompt)
-    #             append_user_content(hist, reflection)
-    #             self.callback("reflection", {}, str(reflection), elapsed_time=timer()-st)
-    #         except Exception as e:
-    #             logging.exception(msg=f"Wrong JSON argument format in LLM ReAct response: {e}")
-    #             e = f"\nTool call error, please correct the input parameter of response format and call it again.\n *** Exception ***\n{e}"
-    #             append_user_content(hist, str(e))
-    #     logging.warning(f"Exceed max rounds: {self._param.max_rounds}")
-    #     final_instruction = f"""
-    # {user_request}
-    # IMPORTANT: You have reached the conversation limit. Based on ALL the information and research you have gathered so far, please provide a DIRECT and COMPREHENSIVE final answer to the original request.
-    # Instructions:
-    # 1. SYNTHESIZE all information collected during this conversation
-    # 2. Provide a COMPLETE response using existing data - do not suggest additional research
-    # 3. Structure your response as a FINAL DELIVERABLE, not a plan
-    # 4. If information is incomplete, state what you found and provide the best analysis possible with available data
-    # 5. DO NOT mention conversation limits or suggest further steps
-    # 6. Focus on delivering VALUE with the information already gathered
-    # Respond immediately with your final comprehensive answer.
-    # """
-    #     if self.check_if_canceled("Agent final instruction"):
-    #         return
-    #     append_user_content(hist, final_instruction)
-    #     async for txt, tkcnt in complete():
-    #         yield txt, tkcnt
+    def get_useful_memory(self, goal: str, sub_goal: str, topn=3, user_defined_prompt: dict = {}) -> str:
+        # self.callback("get_useful_memory", {"topn": 3}, "...")
+        mems = self._canvas.get_memory()
+        rank = rank_memories(self.chat_mdl, goal, sub_goal, [summ for (user, assist, summ) in mems], user_defined_prompt)
+        try:
+            rank = json_repair.loads(re.sub(r"```.*", "", rank))[:topn]
+            mems = [mems[r] for r in rank]
+            return "\n\n".join([f"User: {u}\nAgent: {a}" for u, a, _ in mems])
+        except Exception as e:
+            logging.exception(e)
+
+        return "Error occurred."
async def _gen_citations_async(self, text):
retrievals = self._canvas.get_reference()
retrievals = {"chunks": list(retrievals["chunks"].values()), "doc_aggs": list(retrievals["doc_aggs"].values())}
formated_refer = kb_prompt(retrievals, self.chat_mdl.max_length, True)
async for delta_ans in self._generate_streamly([{"role": "system", "content": citation_plus("\n\n".join(formated_refer))},
{"role": "user", "content": text}
]):
yield delta_ans
def reset(self, only_output=False):
"""
Reset all tools if they have a reset method. This avoids errors for tools like MCPToolCallSession.
"""
for k in self._param.outputs.keys():
self._param.outputs[k]["value"] = None
for k, cpn in self.tools.items():
if hasattr(cpn, "reset") and callable(cpn.reset):
cpn.reset()
if only_output:
return
for k in self._param.inputs.keys():
self._param.inputs[k]["value"] = None
self._param.debug_inputs = {}
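Editor's note: a minimal, self-contained sketch of the memory-ranking flow that get_useful_memory implements above. rank_memories is not reproduced here; the assumption (based on how its output is parsed) is that it returns a JSON list of memory indices, possibly wrapped in code fences.

import re
import json_repair

def pick_top_memories(mems, rank_json: str, topn: int = 3) -> str:
    # mems: list of (user, assistant, summary) tuples, as get_memory() returns them
    idx = json_repair.loads(re.sub(r"```.*", "", rank_json))[:topn]
    chosen = [mems[i] for i in idx]
    return "\n\n".join(f"User: {u}\nAgent: {a}" for u, a, _ in chosen)

# pick_top_memories([("hi", "hello", "greeting"), ("2+2?", "4", "math")], "[1, 0]", topn=1)
# -> "User: 2+2?\nAgent: 4"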


@@ -14,7 +14,6 @@
 # limitations under the License.
 #
-import asyncio
 import re
 import time
 from abc import ABC
@@ -24,13 +23,11 @@ import os
 import logging
 from typing import Any, List, Union

 import pandas as pd
+import trio

 from agent import settings
-from common.connection_utils import timeout
-from common.misc_utils import thread_pool_exec
+from api.utils.api_utils import timeout

 _FEEDED_DEPRECATED_PARAMS = "_feeded_deprecated_params"
 _DEPRECATED_PARAMS = "_deprecated_params"
 _USER_FEEDED_PARAMS = "_user_feeded_params"
@@ -100,7 +97,7 @@ class ComponentParamBase(ABC):
     def _recursive_convert_obj_to_dict(obj):
         ret_dict = {}
         if isinstance(obj, dict):
-            for k, v in obj.items():
+            for k,v in obj.items():
                 if isinstance(v, dict) or (v and type(v).__name__ not in dir(builtins)):
                     ret_dict[k] = _recursive_convert_obj_to_dict(v)
                 else:
@@ -256,65 +253,96 @@ class ComponentParamBase(ABC):
             self._validate_param(attr, validation_json)

     @staticmethod
-    def check_string(param, description):
+    def check_string(param, descr):
         if type(param).__name__ not in ["str"]:
-            raise ValueError(description + " {} not supported, should be string type".format(param))
+            raise ValueError(
+                descr + " {} not supported, should be string type".format(param)
+            )

     @staticmethod
-    def check_empty(param, description):
+    def check_empty(param, descr):
         if not param:
-            raise ValueError(description + " does not support empty value.")
+            raise ValueError(
+                descr + " does not support empty value."
+            )

     @staticmethod
-    def check_positive_integer(param, description):
+    def check_positive_integer(param, descr):
         if type(param).__name__ not in ["int", "long"] or param <= 0:
-            raise ValueError(description + " {} not supported, should be positive integer".format(param))
+            raise ValueError(
+                descr + " {} not supported, should be positive integer".format(param)
+            )

     @staticmethod
-    def check_positive_number(param, description):
+    def check_positive_number(param, descr):
         if type(param).__name__ not in ["float", "int", "long"] or param <= 0:
-            raise ValueError(description + " {} not supported, should be positive numeric".format(param))
+            raise ValueError(
+                descr + " {} not supported, should be positive numeric".format(param)
+            )

     @staticmethod
-    def check_nonnegative_number(param, description):
+    def check_nonnegative_number(param, descr):
         if type(param).__name__ not in ["float", "int", "long"] or param < 0:
-            raise ValueError(description + " {} not supported, should be non-negative numeric".format(param))
+            raise ValueError(
+                descr
+                + " {} not supported, should be non-negative numeric".format(param)
+            )

     @staticmethod
-    def check_decimal_float(param, description):
+    def check_decimal_float(param, descr):
         if type(param).__name__ not in ["float", "int"] or param < 0 or param > 1:
-            raise ValueError(description + " {} not supported, should be a float number in range [0, 1]".format(param))
+            raise ValueError(
+                descr
+                + " {} not supported, should be a float number in range [0, 1]".format(
+                    param
+                )
+            )

     @staticmethod
-    def check_boolean(param, description):
+    def check_boolean(param, descr):
         if type(param).__name__ != "bool":
-            raise ValueError(description + " {} not supported, should be bool type".format(param))
+            raise ValueError(
+                descr + " {} not supported, should be bool type".format(param)
+            )

     @staticmethod
-    def check_open_unit_interval(param, description):
+    def check_open_unit_interval(param, descr):
         if type(param).__name__ not in ["float"] or param <= 0 or param >= 1:
-            raise ValueError(description + " should be a numeric number between 0 and 1 exclusively")
+            raise ValueError(
+                descr + " should be a numeric number between 0 and 1 exclusively"
+            )

     @staticmethod
-    def check_valid_value(param, description, valid_values):
+    def check_valid_value(param, descr, valid_values):
         if param not in valid_values:
-            raise ValueError(description + " {} is not supported, it should be in {}".format(param, valid_values))
+            raise ValueError(
+                descr
+                + " {} is not supported, it should be in {}".format(param, valid_values)
+            )

     @staticmethod
-    def check_defined_type(param, description, types):
+    def check_defined_type(param, descr, types):
         if type(param).__name__ not in types:
-            raise ValueError(description + " {} not supported, should be one of {}".format(param, types))
+            raise ValueError(
+                descr + " {} not supported, should be one of {}".format(param, types)
+            )

     @staticmethod
-    def check_and_change_lower(param, valid_list, description=""):
+    def check_and_change_lower(param, valid_list, descr=""):
         if type(param).__name__ != "str":
-            raise ValueError(description + " {} not supported, should be one of {}".format(param, valid_list))
+            raise ValueError(
+                descr
+                + " {} not supported, should be one of {}".format(param, valid_list)
+            )
         lower_param = param.lower()
         if lower_param in valid_list:
             return lower_param
         else:
-            raise ValueError(description + " {} not supported, should be one of {}".format(param, valid_list))
+            raise ValueError(
+                descr
+                + " {} not supported, should be one of {}".format(param, valid_list)
+            )

     @staticmethod
     def _greater_equal_than(value, limit):
@@ -346,16 +374,16 @@ class ComponentParamBase(ABC):
     def _not_in(value, wrong_value_list):
         return value not in wrong_value_list

-    def _warn_deprecated_param(self, param_name, description):
+    def _warn_deprecated_param(self, param_name, descr):
         if self._deprecated_params_set.get(param_name):
             logging.warning(
-                f"{description} {param_name} is deprecated and ignored in this version."
+                f"{descr} {param_name} is deprecated and ignored in this version."
             )

-    def _warn_to_deprecate_param(self, param_name, description, new_param):
+    def _warn_to_deprecate_param(self, param_name, descr, new_param):
         if self._deprecated_params_set.get(param_name):
             logging.warning(
-                f"{description} {param_name} will be deprecated in future release; "
+                f"{descr} {param_name} will be deprecated in future release; "
                 f"please use {new_param} instead."
             )
             return True
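Editor's note: the check_* validators above are meant to be called from a subclass's check(); a small illustrative parameter class follows (TopNParam is hypothetical, not a component from this PR).

class TopNParam(ComponentParamBase):
    def __init__(self):
        super().__init__()
        self.top_n = 5         # must be a positive integer
        self.similarity = 0.2  # must sit in [0, 1]

    def check(self):
        self.check_positive_integer(self.top_n, "[TopNParam] top_n")
        self.check_decimal_float(self.similarity, "[TopNParam] similarity")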
@@ -364,8 +392,8 @@ class ComponentParamBase(ABC):
 class ComponentBase(ABC):
     component_name: str
-    thread_limiter = asyncio.Semaphore(int(os.environ.get("MAX_CONCURRENT_CHATS", 10)))
-    variable_ref_patt = r"\{* *\{([a-zA-Z:0-9]+@[A-Za-z0-9_.-]+|sys\.[A-Za-z0-9_.]+|env\.[A-Za-z0-9_.]+)\} *\}*"
+    thread_limiter = trio.CapacityLimiter(int(os.environ.get('MAX_CONCURRENT_CHATS', 10)))
+    variable_ref_patt = r"\{* *\{([a-zA-Z:0-9]+@[A-Za-z:0-9_.-]+|sys\.[a-z_]+)\} *\}*"

     def __str__(self):
         """
@@ -379,31 +407,16 @@ class ComponentBase(ABC):
             "params": {}
         }}""".format(self.component_name,
                      self._param
                      )

     def __init__(self, canvas, id, param: ComponentParamBase):
         from agent.canvas import Graph  # Local import to avoid cyclic dependency
         assert isinstance(canvas, Graph), "canvas must be an instance of Canvas"
         self._canvas = canvas
         self._id = id
         self._param = param
         self._param.check()

-    def is_canceled(self) -> bool:
-        return self._canvas.is_canceled()
-
-    def check_if_canceled(self, message: str = "") -> bool:
-        if self.is_canceled():
-            task_id = getattr(self._canvas, 'task_id', 'unknown')
-            log_message = f"Task {task_id} has been canceled"
-            if message:
-                log_message += f" during {message}"
-            logging.info(log_message)
-            self.set_output("_ERROR", "Task has been canceled")
-            return True
-        return False
-
     def invoke(self, **kwargs) -> dict[str, Any]:
         self.set_output("_created_time", time.perf_counter())
         try:
@@ -418,42 +431,14 @@ class ComponentBase(ABC):
         self.set_output("_elapsed_time", time.perf_counter() - self.output("_created_time"))
         return self.output()

-    async def invoke_async(self, **kwargs) -> dict[str, Any]:
-        """
-        Async wrapper for component invocation.
-        Prefers coroutine `_invoke_async` if present; otherwise falls back to `_invoke`.
-        Handles timing and error recording consistently with `invoke`.
-        """
-        self.set_output("_created_time", time.perf_counter())
-        try:
-            if self.check_if_canceled("Component processing"):
-                return
-            fn_async = getattr(self, "_invoke_async", None)
-            if fn_async and asyncio.iscoroutinefunction(fn_async):
-                await fn_async(**kwargs)
-            elif asyncio.iscoroutinefunction(self._invoke):
-                await self._invoke(**kwargs)
-            else:
-                await thread_pool_exec(self._invoke, **kwargs)
-        except Exception as e:
-            if self.get_exception_default_value():
-                self.set_exception_default_value()
-            else:
-                self.set_output("_ERROR", str(e))
-            logging.exception(e)
-        self._param.debug_inputs = {}
-        self.set_output("_elapsed_time", time.perf_counter() - self.output("_created_time"))
-        return self.output()
-
-    @timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10 * 60)))
+    @timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
     def _invoke(self, **kwargs):
         raise NotImplementedError()

-    def output(self, var_nm: str = None) -> Union[dict[str, Any], Any]:
+    def output(self, var_nm: str=None) -> Union[dict[str, Any], Any]:
         if var_nm:
             return self._param.outputs.get(var_nm, {}).get("value", "")
-        return {k: o.get("value") for k, o in self._param.outputs.items()}
+        return {k: o.get("value") for k,o in self._param.outputs.items()}

     def set_output(self, key: str, value: Any):
         if key not in self._param.outputs:
@@ -464,18 +449,15 @@ class ComponentBase(ABC):
         return self._param.outputs.get("_ERROR", {}).get("value")

     def reset(self, only_output=False):
-        outputs: dict = self._param.outputs  # for better performance
-        for k in outputs.keys():
-            outputs[k]["value"] = None
+        for k in self._param.outputs.keys():
+            self._param.outputs[k]["value"] = None
         if only_output:
             return
-        inputs: dict = self._param.inputs  # for better performance
-        for k in inputs.keys():
-            inputs[k]["value"] = None
+        for k in self._param.inputs.keys():
+            self._param.inputs[k]["value"] = None
         self._param.debug_inputs = {}

-    def get_input(self, key: str = None) -> Union[Any, dict[str, Any]]:
+    def get_input(self, key: str=None) -> Union[Any, dict[str, Any]]:
         if key:
             return self._param.inputs.get(key, {}).get("value")
@@ -499,13 +481,13 @@ class ComponentBase(ABC):
     def get_input_elements_from_text(self, txt: str) -> dict[str, dict[str, str]]:
         res = {}
-        for r in re.finditer(self.variable_ref_patt, txt, flags=re.IGNORECASE | re.DOTALL):
+        for r in re.finditer(self.variable_ref_patt, txt, flags=re.IGNORECASE|re.DOTALL):
             exp = r.group(1)
-            cpn_id, var_nm = exp.split("@") if exp.find("@") > 0 else ("", exp)
+            cpn_id, var_nm = exp.split("@") if exp.find("@")>0 else ("", exp)
             res[exp] = {
-                "name": (self._canvas.get_component_name(cpn_id) + f"@{var_nm}") if cpn_id else exp,
+                "name": (self._canvas.get_component_name(cpn_id) +f"@{var_nm}") if cpn_id else exp,
                 "value": self._canvas.get_variable_value(exp),
-                "_retrieval": self._canvas.get_variable_value(f"{cpn_id}@_references") if cpn_id else None,
+                "_retrival": self._canvas.get_variable_value(f"{cpn_id}@_references") if cpn_id else None,
                 "_cpn_id": cpn_id
             }
         return res
@@ -532,7 +514,6 @@ class ComponentBase(ABC):
     def get_param(self, name):
         if hasattr(self._param, name):
             return getattr(self._param, name)
-        return None

     def debug(self, **kwargs):
         return self._invoke(**kwargs)
@@ -540,7 +521,7 @@ class ComponentBase(ABC):
     def get_parent(self) -> Union[object, None]:
         pid = self._canvas.get_component(self._id).get("parent_id")
         if not pid:
-            return None
+            return
         return self._canvas.get_component(pid)["obj"]

     def get_upstream(self) -> List[str]:
@@ -556,7 +537,6 @@ class ComponentBase(ABC):
         for n, v in kv.items():
             def repl(_match, val=v):
                 return str(val) if val is not None else ""
             content = re.sub(
                 r"\{%s\}" % re.escape(n),
                 repl,
@@ -566,7 +546,7 @@ class ComponentBase(ABC):
     def exception_handler(self):
         if not self._param.exception_method:
-            return None
+            return
         return {
             "goto": self._param.exception_goto,
             "default_value": self._param.exception_default_value


@@ -14,7 +14,6 @@
 # limitations under the License.
 #
 from agent.component.fillup import UserFillUpParam, UserFillUp
-from api.db.services.file_service import FileService


 class BeginParam(UserFillUpParam):
@@ -28,7 +27,7 @@ class BeginParam(UserFillUpParam):
         self.prologue = "Hi! I'm your smart assistant. What can I do for you?"

     def check(self):
-        self.check_valid_value(self.mode, "The 'mode' should be either `conversational` or `task`", ["conversational", "task","Webhook"])
+        self.check_valid_value(self.mode, "The 'mode' should be either `conversational` or `task`", ["conversational", "task"])

     def get_input_form(self) -> dict[str, dict]:
         return getattr(self, "inputs")
@@ -38,18 +37,12 @@ class Begin(UserFillUp):
     component_name = "Begin"

     def _invoke(self, **kwargs):
-        if self.check_if_canceled("Begin processing"):
-            return
         for k, v in kwargs.get("inputs", {}).items():
-            if self.check_if_canceled("Begin processing"):
-                return
             if isinstance(v, dict) and v.get("type", "").lower().find("file") >=0:
                 if v.get("optional") and v.get("value", None) is None:
                     v = None
                 else:
-                    v = FileService.get_files([v["value"]])
+                    v = self._canvas.get_files([v["value"]])
             else:
                 v = v.get("value")
             self.set_output(k, v)
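Editor's note: the shape Begin._invoke expects in kwargs["inputs"], distilled from the branch logic above; the key names and values are illustrative.

inputs = {
    "resume": {"type": "file", "optional": True, "value": None},  # optional file, left empty
    "question": {"type": "line", "value": "What is RAGFlow?"},    # plain value
}
for k, v in inputs.items():
    if isinstance(v, dict) and v.get("type", "").lower().find("file") >= 0:
        v = None if (v.get("optional") and v.get("value") is None) else "<file content>"
    else:
        v = v.get("value")
    print(k, v)  # resume None / question What is RAGFlow?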


@@ -13,16 +13,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-import asyncio
 import logging
 import os
 import re
 from abc import ABC

-from common.constants import LLMType
+from api.db import LLMType
 from api.db.services.llm_service import LLMBundle
 from agent.component.llm import LLMParam, LLM
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
 from rag.llm.chat_model import ERROR_PREFIX
@@ -97,30 +96,17 @@ Here's description of each category:
 class Categorize(LLM, ABC):
     component_name = "Categorize"

-    def get_input_elements(self) -> dict[str, dict]:
-        query_key = self._param.query or "sys.query"
-        elements = self.get_input_elements_from_text(f"{{{query_key}}}")
-        if not elements:
-            logging.warning(f"[Categorize] input element not detected for query key: {query_key}")
-        return elements
-
     @timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
-    async def _invoke_async(self, **kwargs):
-        if self.check_if_canceled("Categorize processing"):
-            return
+    def _invoke(self, **kwargs):
         msg = self._canvas.get_history(self._param.message_history_window_size)
         if not msg:
             msg = [{"role": "user", "content": ""}]
-        query_key = self._param.query or "sys.query"
-        if query_key in kwargs:
-            query_value = kwargs[query_key]
-        else:
-            query_value = self._canvas.get_variable_value(query_key)
-        if query_value is None:
-            query_value = ""
-        msg[-1]["content"] = query_value
-        self.set_input_value(query_key, msg[-1]["content"])
+        if kwargs.get("sys.query"):
+            msg[-1]["content"] = kwargs["sys.query"]
+            self.set_input_value("sys.query", kwargs["sys.query"])
+        else:
+            msg[-1]["content"] = self._canvas.get_variable_value(self._param.query)
+            self.set_input_value(self._param.query, msg[-1]["content"])

         self._param.update_prompt()
         chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
@@ -128,18 +114,10 @@ class Categorize(LLM, ABC):
 ---- Real Data ----
 {}
 """.format(" | ".join(["{}: \"{}\"".format(c["role"].upper(), re.sub(r"\n", "", c["content"], flags=re.DOTALL)) for c in msg]))
-        if self.check_if_canceled("Categorize processing"):
-            return
-        ans = await chat_mdl.async_chat(self._param.sys_prompt, [{"role": "user", "content": user_prompt}], self._param.gen_conf())
+        ans = chat_mdl.chat(self._param.sys_prompt, [{"role": "user", "content": user_prompt}], self._param.gen_conf())
         logging.info(f"input: {user_prompt}, answer: {str(ans)}")
         if ERROR_PREFIX in ans:
             raise Exception(ans)
-        if self.check_if_canceled("Categorize processing"):
-            return

         # Count the number of times each category appears in the answer.
         category_counts = {}
         for c in self._param.category_description.keys():
@@ -147,7 +125,7 @@ class Categorize(LLM, ABC):
             category_counts[c] = count

         cpn_ids = list(self._param.category_description.items())[-1][1]["to"]
-        max_category = list(self._param.category_description.keys())[-1]
+        max_category = list(self._param.category_description.keys())[0]
         if any(category_counts.values()):
             max_category = max(category_counts.items(), key=lambda x: x[1])[0]
             cpn_ids = self._param.category_description[max_category]["to"]
@@ -155,9 +133,5 @@ class Categorize(LLM, ABC):
         self.set_output("category_name", max_category)
         self.set_output("_next", cpn_ids)

-    @timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
-    def _invoke(self, **kwargs):
-        return asyncio.run(self._invoke_async(**kwargs))
-
     def thoughts(self) -> str:
         return "Which should it falls into {}? ...".format(",".join([f"`{c}`" for c, _ in self._param.category_description.items()]))


@ -1,218 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import ast
import os
from agent.component.base import ComponentBase, ComponentParamBase
from api.utils.api_utils import timeout
class DataOperationsParam(ComponentParamBase):
"""
Define the Data Operations component parameters.
"""
def __init__(self):
super().__init__()
self.query = []
self.operations = "literal_eval"
self.select_keys = []
self.filter_values=[]
self.updates=[]
self.remove_keys=[]
self.rename_keys=[]
self.outputs = {
"result": {
"value": [],
"type": "Array of Object"
}
}
def check(self):
self.check_valid_value(self.operations, "Support operations", ["select_keys", "literal_eval","combine","filter_values","append_or_update","remove_keys","rename_keys"])
class DataOperations(ComponentBase,ABC):
component_name = "DataOperations"
def get_input_form(self) -> dict[str, dict]:
return {
k: {"name": o.get("name", ""), "type": "line"}
for input_item in (self._param.query or [])
for k, o in self.get_input_elements_from_text(input_item).items()
}
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs):
self.input_objects=[]
inputs = getattr(self._param, "query", None)
if not isinstance(inputs, (list, tuple)):
inputs = [inputs]
for input_ref in inputs:
input_object=self._canvas.get_variable_value(input_ref)
self.set_input_value(input_ref, input_object)
if input_object is None:
continue
if isinstance(input_object,dict):
self.input_objects.append(input_object)
elif isinstance(input_object,list):
self.input_objects.extend(x for x in input_object if isinstance(x, dict))
else:
continue
if self._param.operations == "select_keys":
self._select_keys()
elif self._param.operations == "recursive_eval":
self._literal_eval()
elif self._param.operations == "combine":
self._combine()
elif self._param.operations == "filter_values":
self._filter_values()
elif self._param.operations == "append_or_update":
self._append_or_update()
elif self._param.operations == "remove_keys":
self._remove_keys()
else:
self._rename_keys()
def _select_keys(self):
filter_criteria: list[str] = self._param.select_keys
results = [{key: value for key, value in data_dict.items() if key in filter_criteria} for data_dict in self.input_objects]
self.set_output("result", results)
def _recursive_eval(self, data):
    if isinstance(data, dict):
        return {k: self._recursive_eval(v) for k, v in data.items()}
    if isinstance(data, list):
        return [self._recursive_eval(item) for item in data]
if isinstance(data, str):
try:
if (
data.strip().startswith(("{", "[", "(", "'", '"'))
or data.strip().lower() in ("true", "false", "none")
or data.strip().replace(".", "").isdigit()
):
return ast.literal_eval(data)
except (ValueError, SyntaxError, TypeError, MemoryError):
return data
else:
return data
return data
def _literal_eval(self):
self.set_output("result", self._recursive_eval(self.input_objects))
def _combine(self):
result={}
for obj in self.input_objects:
for key, value in obj.items():
if key not in result:
result[key] = value
elif isinstance(result[key], list):
if isinstance(value, list):
result[key].extend(value)
else:
result[key].append(value)
else:
result[key] = (
[result[key], value] if not isinstance(value, list) else [result[key], *value]
)
self.set_output("result", result)
def norm(self,v):
s = "" if v is None else str(v)
return s
def match_rule(self, obj, rule):
key = rule.get("key")
op = (rule.get("operator") or "equals").lower()
target = self.norm(rule.get("value"))
target = self._canvas.get_value_with_variable(target) or target
if key not in obj:
return False
val = obj.get(key, None)
v = self.norm(val)
if op == "=":
return v == target
if op == "":
return v != target
if op == "contains":
return target in v
if op == "start with":
return v.startswith(target)
if op == "end with":
return v.endswith(target)
return False
def _filter_values(self):
results=[]
rules = (getattr(self._param, "filter_values", None) or [])
for obj in self.input_objects:
if not rules:
results.append(obj)
continue
if all(self.match_rule(obj, r) for r in rules):
results.append(obj)
self.set_output("result", results)
def _append_or_update(self):
results=[]
updates = getattr(self._param, "updates", []) or []
for obj in self.input_objects:
new_obj = dict(obj)
for item in updates:
if not isinstance(item, dict):
continue
k = (item.get("key") or "").strip()
if not k:
continue
new_obj[k] = self._canvas.get_value_with_variable(item.get("value")) or item.get("value")
results.append(new_obj)
self.set_output("result", results)
def _remove_keys(self):
results = []
remove_keys = getattr(self._param, "remove_keys", []) or []
for obj in (self.input_objects or []):
new_obj = dict(obj)
for k in remove_keys:
if not isinstance(k, str):
continue
new_obj.pop(k, None)
results.append(new_obj)
self.set_output("result", results)
def _rename_keys(self):
results = []
rename_pairs = getattr(self._param, "rename_keys", []) or []
for obj in (self.input_objects or []):
new_obj = dict(obj)
for pair in rename_pairs:
if not isinstance(pair, dict):
continue
old = (pair.get("old_key") or "").strip()
new = (pair.get("new_key") or "").strip()
if not old or not new or old == new:
continue
if old in new_obj:
new_obj[new] = new_obj.pop(old)
results.append(new_obj)
self.set_output("result", results)
def thoughts(self) -> str:
return "DataOperation in progress"

File diff suppressed because it is too large

@ -1,401 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
ExcelProcessor Component
A component for reading, processing, and generating Excel files in RAGFlow agents.
Supports multiple Excel file inputs, data transformation, and Excel output generation.
"""
import logging
import os
from abc import ABC
from io import BytesIO
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
from api.db.services.file_service import FileService
from api.utils.api_utils import timeout
from common import settings
from common.misc_utils import get_uuid
class ExcelProcessorParam(ComponentParamBase):
"""
Define the ExcelProcessor component parameters.
"""
def __init__(self):
super().__init__()
# Input configuration
self.input_files = [] # Variable references to uploaded files
self.operation = "read" # read, merge, transform, output
# Processing options
self.sheet_selection = "all" # all, first, or comma-separated sheet names
self.merge_strategy = "concat" # concat, join
self.join_on = "" # Column name for join operations
# Transform options (for LLM-guided transformations)
self.transform_instructions = ""
self.transform_data = "" # Variable reference to transformation data
# Output options
self.output_format = "xlsx" # xlsx, csv
self.output_filename = "output"
# Component outputs
self.outputs = {
"data": {
"type": "object",
"value": {}
},
"summary": {
"type": "str",
"value": ""
},
"markdown": {
"type": "str",
"value": ""
}
}
def check(self):
self.check_valid_value(
self.operation,
"[ExcelProcessor] Operation",
["read", "merge", "transform", "output"]
)
self.check_valid_value(
self.output_format,
"[ExcelProcessor] Output format",
["xlsx", "csv"]
)
return True
class ExcelProcessor(ComponentBase, ABC):
"""
Excel processing component for RAGFlow agents.
Operations:
- read: Parse Excel files into structured data
- merge: Combine multiple Excel files
- transform: Apply data transformations based on instructions
- output: Generate Excel file output
"""
component_name = "ExcelProcessor"
def get_input_form(self) -> dict[str, dict]:
"""Define input form for the component."""
res = {}
for ref in (self._param.input_files or []):
for k, o in self.get_input_elements_from_text(ref).items():
res[k] = {"name": o.get("name", ""), "type": "file"}
if self._param.transform_data:
for k, o in self.get_input_elements_from_text(self._param.transform_data).items():
res[k] = {"name": o.get("name", ""), "type": "object"}
return res
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs):
if self.check_if_canceled("ExcelProcessor processing"):
return
operation = self._param.operation.lower()
if operation == "read":
self._read_excels()
elif operation == "merge":
self._merge_excels()
elif operation == "transform":
self._transform_data()
elif operation == "output":
self._output_excel()
else:
self.set_output("summary", f"Unknown operation: {operation}")
def _get_file_content(self, file_ref: str) -> tuple[bytes, str]:
"""
Get file content from a variable reference.
Returns (content_bytes, filename).
"""
value = self._canvas.get_variable_value(file_ref)
if value is None:
return None, None
# Handle different value formats
if isinstance(value, dict):
# File reference from Begin/UserFillUp component
file_id = value.get("id") or value.get("file_id")
created_by = value.get("created_by") or self._canvas.get_tenant_id()
filename = value.get("name") or value.get("filename", "unknown.xlsx")
if file_id:
content = FileService.get_blob(created_by, file_id)
return content, filename
elif isinstance(value, list) and len(value) > 0:
# List of file references - return first
return self._get_file_content_from_list(value[0])
elif isinstance(value, str):
# Could be base64 encoded or a path
if value.startswith("data:"):
import base64
# Extract base64 content
_, encoded = value.split(",", 1)
return base64.b64decode(encoded), "uploaded.xlsx"
return None, None
def _get_file_content_from_list(self, item) -> tuple[bytes, str]:
    """Extract file content from a list item (a dict carrying file metadata)."""
    if isinstance(item, dict):
        # The dict is file metadata itself, not a variable reference, so resolve it directly.
        file_id = item.get("id") or item.get("file_id")
        filename = item.get("name") or item.get("filename", "unknown.xlsx")
        if file_id:
            return FileService.get_blob(item.get("created_by") or self._canvas.get_tenant_id(), file_id), filename
    return None, None
def _parse_excel_to_dataframes(self, content: bytes, filename: str) -> dict[str, pd.DataFrame]:
"""Parse Excel content into a dictionary of DataFrames (one per sheet)."""
try:
excel_file = BytesIO(content)
if filename.lower().endswith(".csv"):
df = pd.read_csv(excel_file)
return {"Sheet1": df}
else:
# Read all sheets
xlsx = pd.ExcelFile(excel_file, engine='openpyxl')
sheet_selection = self._param.sheet_selection
if sheet_selection == "all":
sheets_to_read = xlsx.sheet_names
elif sheet_selection == "first":
sheets_to_read = [xlsx.sheet_names[0]] if xlsx.sheet_names else []
else:
# Comma-separated sheet names
requested = [s.strip() for s in sheet_selection.split(",")]
sheets_to_read = [s for s in requested if s in xlsx.sheet_names]
dfs = {}
for sheet in sheets_to_read:
dfs[sheet] = pd.read_excel(xlsx, sheet_name=sheet)
return dfs
except Exception as e:
logging.error(f"Error parsing Excel file {filename}: {e}")
return {}
def _read_excels(self):
"""Read and parse Excel files into structured data."""
all_data = {}
summaries = []
markdown_parts = []
for file_ref in (self._param.input_files or []):
if self.check_if_canceled("ExcelProcessor reading"):
return
# Get variable value
value = self._canvas.get_variable_value(file_ref)
self.set_input_value(file_ref, str(value)[:200] if value else "")
if value is None:
continue
# Handle file content
content, filename = self._get_file_content(file_ref)
if content is None:
continue
# Parse Excel
dfs = self._parse_excel_to_dataframes(content, filename)
for sheet_name, df in dfs.items():
key = f"{filename}_{sheet_name}" if len(dfs) > 1 else filename
all_data[key] = df.to_dict(orient="records")
# Build summary
summaries.append(f"**{key}**: {len(df)} rows, {len(df.columns)} columns ({', '.join(df.columns.tolist()[:5])}{'...' if len(df.columns) > 5 else ''})")
# Build markdown table
markdown_parts.append(f"### {key}\n\n{df.head(10).to_markdown(index=False)}\n")
# Set outputs
self.set_output("data", all_data)
self.set_output("summary", "\n".join(summaries) if summaries else "No Excel files found")
self.set_output("markdown", "\n\n".join(markdown_parts) if markdown_parts else "No data")
def _merge_excels(self):
"""Merge multiple Excel files/sheets into one."""
all_dfs = []
for file_ref in (self._param.input_files or []):
if self.check_if_canceled("ExcelProcessor merging"):
return
value = self._canvas.get_variable_value(file_ref)
self.set_input_value(file_ref, str(value)[:200] if value else "")
if value is None:
continue
content, filename = self._get_file_content(file_ref)
if content is None:
continue
dfs = self._parse_excel_to_dataframes(content, filename)
all_dfs.extend(dfs.values())
if not all_dfs:
self.set_output("data", {})
self.set_output("summary", "No data to merge")
return
# Merge strategy
if self._param.merge_strategy == "concat":
merged_df = pd.concat(all_dfs, ignore_index=True)
elif self._param.merge_strategy == "join" and self._param.join_on:
# Join on specified column
merged_df = all_dfs[0]
for df in all_dfs[1:]:
merged_df = merged_df.merge(df, on=self._param.join_on, how="outer")
else:
merged_df = pd.concat(all_dfs, ignore_index=True)
self.set_output("data", {"merged": merged_df.to_dict(orient="records")})
self.set_output("summary", f"Merged {len(all_dfs)} sources into {len(merged_df)} rows, {len(merged_df.columns)} columns")
self.set_output("markdown", merged_df.head(20).to_markdown(index=False))
def _transform_data(self):
"""Apply transformations to data based on instructions or input data."""
# Get the data to transform
transform_ref = self._param.transform_data
if not transform_ref:
self.set_output("summary", "No transform data reference provided")
return
data = self._canvas.get_variable_value(transform_ref)
self.set_input_value(transform_ref, str(data)[:300] if data else "")
if data is None:
self.set_output("summary", "Transform data is empty")
return
# Convert to DataFrame
if isinstance(data, dict):
# Could be {"sheet": [rows]} format
if all(isinstance(v, list) for v in data.values()):
# Multiple sheets
all_markdown = []
for sheet_name, rows in data.items():
df = pd.DataFrame(rows)
all_markdown.append(f"### {sheet_name}\n\n{df.to_markdown(index=False)}")
self.set_output("data", data)
self.set_output("markdown", "\n\n".join(all_markdown))
else:
df = pd.DataFrame([data])
self.set_output("data", df.to_dict(orient="records"))
self.set_output("markdown", df.to_markdown(index=False))
elif isinstance(data, list):
df = pd.DataFrame(data)
self.set_output("data", df.to_dict(orient="records"))
self.set_output("markdown", df.to_markdown(index=False))
else:
self.set_output("data", {"raw": str(data)})
self.set_output("markdown", str(data))
self.set_output("summary", "Transformed data ready for processing")
def _output_excel(self):
"""Generate Excel file output from data."""
# Get data from transform_data reference
transform_ref = self._param.transform_data
if not transform_ref:
self.set_output("summary", "No data reference for output")
return
data = self._canvas.get_variable_value(transform_ref)
self.set_input_value(transform_ref, str(data)[:300] if data else "")
if data is None:
self.set_output("summary", "No data to output")
return
try:
# Prepare DataFrames
if isinstance(data, dict):
if all(isinstance(v, list) for v in data.values()):
# Multi-sheet format
dfs = {k: pd.DataFrame(v) for k, v in data.items()}
else:
dfs = {"Sheet1": pd.DataFrame([data])}
elif isinstance(data, list):
dfs = {"Sheet1": pd.DataFrame(data)}
else:
self.set_output("summary", "Invalid data format for Excel output")
return
# Generate output
doc_id = get_uuid()
if self._param.output_format == "csv":
# For CSV, only output first sheet
first_df = list(dfs.values())[0]
binary_content = first_df.to_csv(index=False).encode("utf-8")
filename = f"{self._param.output_filename}.csv"
else:
# Excel output
excel_io = BytesIO()
with pd.ExcelWriter(excel_io, engine='openpyxl') as writer:
for sheet_name, df in dfs.items():
# Sanitize sheet name (max 31 chars, no special chars)
safe_name = sheet_name[:31].replace("/", "_").replace("\\", "_")
df.to_excel(writer, sheet_name=safe_name, index=False)
excel_io.seek(0)
binary_content = excel_io.read()
filename = f"{self._param.output_filename}.xlsx"
# Store file
settings.STORAGE_IMPL.put(self._canvas._tenant_id, doc_id, binary_content)
# Set attachment output
self.set_output("attachment", {
"doc_id": doc_id,
"format": self._param.output_format,
"file_name": filename
})
total_rows = sum(len(df) for df in dfs.values())
self.set_output("summary", f"Generated {filename} with {len(dfs)} sheet(s), {total_rows} total rows")
self.set_output("data", {k: v.to_dict(orient="records") for k, v in dfs.items()})
logging.info(f"ExcelProcessor: Generated {filename} as {doc_id}")
except Exception as e:
logging.error(f"ExcelProcessor output error: {e}")
self.set_output("summary", f"Error generating output: {str(e)}")
def thoughts(self) -> str:
"""Return component thoughts for UI display."""
op = self._param.operation
if op == "read":
return "Reading Excel files..."
elif op == "merge":
return "Merging Excel data..."
elif op == "transform":
return "Transforming data..."
elif op == "output":
return "Generating Excel output..."
return "Processing Excel..."


@ -1,32 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
from agent.component.base import ComponentBase, ComponentParamBase
class ExitLoopParam(ComponentParamBase, ABC):
def check(self):
return True
class ExitLoop(ComponentBase, ABC):
component_name = "ExitLoop"
def _invoke(self, **kwargs):
pass
def thoughts(self) -> str:
return ""


@@ -13,12 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-import json
-import re
-from functools import partial
-
-from agent.component.base import ComponentParamBase, ComponentBase
-from api.db.services.file_service import FileService
+from agent.component.base import ComponentBase, ComponentParamBase


 class UserFillUpParam(ComponentParamBase):
@@ -36,42 +31,10 @@ class UserFillUp(ComponentBase):
     component_name = "UserFillUp"

     def _invoke(self, **kwargs):
-        if self.check_if_canceled("UserFillUp processing"):
-            return
-        if self._param.enable_tips:
-            content = self._param.tips
-            for k, v in self.get_input_elements_from_text(self._param.tips).items():
-                v = v["value"]
-                ans = ""
-                if isinstance(v, partial):
-                    for t in v():
-                        ans += t
-                elif isinstance(v, list):
-                    ans = ",".join([str(vv) for vv in v])
-                elif not isinstance(v, str):
-                    try:
-                        ans = json.dumps(v, ensure_ascii=False)
-                    except Exception:
-                        pass
-                else:
-                    ans = v
-                if not ans:
-                    ans = ""
-                content = re.sub(r"\{%s\}"%k, ans, content)
-            self.set_output("tips", content)
         for k, v in kwargs.get("inputs", {}).items():
-            if self.check_if_canceled("UserFillUp processing"):
-                return
-            if isinstance(v, dict) and v.get("type", "").lower().find("file") >=0:
-                if v.get("optional") and v.get("value", None) is None:
-                    v = None
-                else:
-                    v = FileService.get_files([v["value"]])
-            else:
-                v = v.get("value")
             self.set_output(k, v)

     def thoughts(self) -> str:
         return "Waiting for your input..."


@@ -19,12 +19,11 @@ import os
 import re
 import time
 from abc import ABC

 import requests

-from agent.component.base import ComponentBase, ComponentParamBase
-from common.connection_utils import timeout
+from api.utils.api_utils import timeout
 from deepdoc.parser import HtmlParser
+from agent.component.base import ComponentBase, ComponentParamBase


 class InvokeParam(ComponentParamBase):
@@ -44,11 +43,11 @@ class InvokeParam(ComponentParamBase):
         self.datatype = "json"  # New parameter to determine data posting type

     def check(self):
-        self.check_valid_value(self.method.lower(), "Type of content from the crawler", ["get", "post", "put"])
+        self.check_valid_value(self.method.lower(), "Type of content from the crawler", ['get', 'post', 'put'])
         self.check_empty(self.url, "End point URL")
         self.check_positive_integer(self.timeout, "Timeout time in second")
         self.check_boolean(self.clean_html, "Clean HTML")
-        self.check_valid_value(self.datatype.lower(), "Data post type", ["json", "formdata"])  # Check for valid datapost value
+        self.check_valid_value(self.datatype.lower(), "Data post type", ['json', 'formdata'])  # Check for valid datapost value


 class Invoke(ComponentBase, ABC):
@@ -56,9 +55,6 @@ class Invoke(ComponentBase, ABC):
     @timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 3)))
     def _invoke(self, **kwargs):
-        if self.check_if_canceled("Invoke processing"):
-            return
         args = {}
         for para in self._param.variables:
             if para.get("value"):
@@ -67,18 +63,6 @@ class Invoke(ComponentBase, ABC):
                 args[para["key"]] = self._canvas.get_variable_value(para["ref"])

         url = self._param.url.strip()
-
-        def replace_variable(match):
-            var_name = match.group(1)
-            try:
-                value = self._canvas.get_variable_value(var_name)
-                return str(value or "")
-            except Exception:
-                return ""
-
-        # {base_url} or {component_id@variable_name}
-        url = re.sub(r"\{([a-zA-Z_][a-zA-Z0-9_.@-]*)\}", replace_variable, url)
-
         if url.find("http") != 0:
             url = "http://" + url
@@ -91,35 +75,52 @@ class Invoke(ComponentBase, ABC):
             proxies = {"http": self._param.proxy, "https": self._param.proxy}

         last_e = ""
-        for _ in range(self._param.max_retries + 1):
-            if self.check_if_canceled("Invoke processing"):
-                return
+        for _ in range(self._param.max_retries+1):
             try:
-                if method == "get":
-                    response = requests.get(url=url, params=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
+                if method == 'get':
+                    response = requests.get(url=url,
+                                            params=args,
+                                            headers=headers,
+                                            proxies=proxies,
+                                            timeout=self._param.timeout)
                     if self._param.clean_html:
                         sections = HtmlParser()(None, response.content)
                         self.set_output("result", "\n".join(sections))
                     else:
                         self.set_output("result", response.text)

-                if method == "put":
-                    if self._param.datatype.lower() == "json":
-                        response = requests.put(url=url, json=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
+                if method == 'put':
+                    if self._param.datatype.lower() == 'json':
+                        response = requests.put(url=url,
+                                                json=args,
+                                                headers=headers,
+                                                proxies=proxies,
+                                                timeout=self._param.timeout)
                     else:
-                        response = requests.put(url=url, data=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
+                        response = requests.put(url=url,
+                                                data=args,
+                                                headers=headers,
+                                                proxies=proxies,
+                                                timeout=self._param.timeout)
                     if self._param.clean_html:
                         sections = HtmlParser()(None, response.content)
                         self.set_output("result", "\n".join(sections))
                     else:
                         self.set_output("result", response.text)

-                if method == "post":
-                    if self._param.datatype.lower() == "json":
-                        response = requests.post(url=url, json=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
+                if method == 'post':
+                    if self._param.datatype.lower() == 'json':
+                        response = requests.post(url=url,
+                                                 json=args,
+                                                 headers=headers,
+                                                 proxies=proxies,
+                                                 timeout=self._param.timeout)
                     else:
-                        response = requests.post(url=url, data=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
+                        response = requests.post(url=url,
+                                                 data=args,
+                                                 headers=headers,
+                                                 proxies=proxies,
+                                                 timeout=self._param.timeout)
                     if self._param.clean_html:
                         self.set_output("result", "\n".join(sections))
                     else:
@@ -127,9 +128,6 @@ class Invoke(ComponentBase, ABC):
                 return self.output("result")
             except Exception as e:
-                if self.check_if_canceled("Invoke processing"):
-                    return
                 last_e = e
                 logging.exception(f"Http request error: {e}")
                 time.sleep(self._param.delay_after_error)
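Editor's note: the retry pattern above, reduced to a skeleton; max_retries and delay_after_error mirror the component parameters.

import time
import requests

def fetch_with_retry(url, max_retries=2, delay_after_error=1.0, timeout=3):
    last_e = None
    for _ in range(max_retries + 1):
        try:
            return requests.get(url, timeout=timeout).text
        except Exception as e:
            last_e = e
            time.sleep(delay_after_error)
    raise Exception(f"Http request error: {last_e}")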


@@ -16,13 +16,6 @@
 from abc import ABC

 from agent.component.base import ComponentBase, ComponentParamBase
"""
class VariableModel(BaseModel):
data_type: Annotated[Literal["string", "number", "Object", "Boolean", "Array<string>", "Array<number>", "Array<object>", "Array<boolean>"], Field(default="Array<string>")]
input_mode: Annotated[Literal["constant", "variable"], Field(default="constant")]
value: Annotated[Any, Field(default=None)]
model_config = ConfigDict(extra="forbid")
"""
 class IterationParam(ComponentParamBase):
     """
@@ -32,7 +25,6 @@ class IterationParam(ComponentParamBase):
     def __init__(self):
         super().__init__()
         self.items_ref = ""
-        self.variable={}

     def get_input_form(self) -> dict[str, dict]:
         return {
@@ -57,9 +49,6 @@ class Iteration(ComponentBase, ABC):
         return cid

     def _invoke(self, **kwargs):
-        if self.check_if_canceled("Iteration processing"):
-            return
         arr = self._canvas.get_variable_value(self._param.items_ref)
         if not isinstance(arr, list):
             self.set_output("_ERROR", self._param.items_ref + " must be an array, but its type is "+str(type(arr)))


@@ -33,9 +33,6 @@ class IterationItem(ComponentBase, ABC):
         self._idx = 0

     def _invoke(self, **kwargs):
-        if self.check_if_canceled("IterationItem processing"):
-            return
         parent = self.get_parent()
         arr = self._canvas.get_variable_value(parent._param.items_ref)
         if not isinstance(arr, list):
@@ -43,17 +40,12 @@ class IterationItem(ComponentBase, ABC):
             raise Exception(parent._param.items_ref + " must be an array, but its type is "+str(type(arr)))

         if self._idx > 0:
-            if self.check_if_canceled("IterationItem processing"):
-                return
             self.output_collation()

         if self._idx >= len(arr):
             self._idx = -1
             return

-        if self.check_if_canceled("IterationItem processing"):
-            return
-
         self.set_output("item", arr[self._idx])
         self.set_output("index", self._idx)
@@ -88,4 +80,4 @@ class IterationItem(ComponentBase, ABC):
         return self._idx == -1

     def thoughts(self) -> str:
         return "Next turn..."


@ -1,168 +0,0 @@
from abc import ABC
import os
from agent.component.base import ComponentBase, ComponentParamBase
from api.utils.api_utils import timeout
class ListOperationsParam(ComponentParamBase):
"""
Define the List Operations component parameters.
"""
def __init__(self):
super().__init__()
self.query = ""
self.operations = "topN"
self.n=0
self.sort_method = "asc"
self.filter = {
"operator": "=",
"value": ""
}
self.outputs = {
"result": {
"value": [],
"type": "Array of ?"
},
"first": {
"value": "",
"type": "?"
},
"last": {
"value": "",
"type": "?"
}
}
def check(self):
self.check_empty(self.query, "query")
self.check_valid_value(self.operations, "Support operations", ["topN","head","tail","filter","sort","drop_duplicates"])
def get_input_form(self) -> dict[str, dict]:
return {}
class ListOperations(ComponentBase,ABC):
component_name = "ListOperations"
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs):
self.input_objects=[]
inputs = getattr(self._param, "query", None)
self.inputs = self._canvas.get_variable_value(inputs)
if not isinstance(self.inputs, list):
raise TypeError("The input of List Operations should be an array.")
self.set_input_value(inputs, self.inputs)
if self._param.operations == "topN":
self._topN()
elif self._param.operations == "head":
self._head()
elif self._param.operations == "tail":
self._tail()
elif self._param.operations == "filter":
self._filter()
elif self._param.operations == "sort":
self._sort()
elif self._param.operations == "drop_duplicates":
self._drop_duplicates()
def _coerce_n(self):
try:
return int(getattr(self._param, "n", 0))
except Exception:
return 0
def _set_outputs(self, outputs):
self._param.outputs["result"]["value"] = outputs
self._param.outputs["first"]["value"] = outputs[0] if outputs else None
self._param.outputs["last"]["value"] = outputs[-1] if outputs else None
def _topN(self):
n = self._coerce_n()
if n < 1:
outputs = []
else:
n = min(n, len(self.inputs))
outputs = self.inputs[:n]
self._set_outputs(outputs)
def _head(self):
n = self._coerce_n()
if 1 <= n <= len(self.inputs):
outputs = [self.inputs[n - 1]]
else:
outputs = []
self._set_outputs(outputs)
def _tail(self):
n = self._coerce_n()
if 1 <= n <= len(self.inputs):
outputs = [self.inputs[-n]]
else:
outputs = []
self._set_outputs(outputs)
def _filter(self):
self._set_outputs([i for i in self.inputs if self._eval(self._norm(i),self._param.filter["operator"],self._param.filter["value"])])
def _norm(self,v):
s = "" if v is None else str(v)
return s
def _eval(self, v, operator, value):
if operator == "=":
return v == value
elif operator == "":
return v != value
elif operator == "contains":
return value in v
elif operator == "start with":
return v.startswith(value)
elif operator == "end with":
return v.endswith(value)
else:
return False
def _sort(self):
items = self.inputs or []
method = getattr(self._param, "sort_method", "asc") or "asc"
reverse = method == "desc"
if not items:
self._set_outputs([])
return
first = items[0]
if isinstance(first, dict):
outputs = sorted(
items,
key=lambda x: self._hashable(x),
reverse=reverse,
)
else:
outputs = sorted(items, reverse=reverse)
self._set_outputs(outputs)
def _drop_duplicates(self):
seen = set()
outs = []
for item in self.inputs:
k = self._hashable(item)
if k in seen:
continue
seen.add(k)
outs.append(item)
self._set_outputs(outs)
def _hashable(self,x):
if isinstance(x, dict):
return tuple(sorted((k, self._hashable(v)) for k, v in x.items()))
if isinstance(x, (list, tuple)):
return tuple(self._hashable(v) for v in x)
if isinstance(x, set):
return tuple(sorted(self._hashable(v) for v in x))
return x
def thoughts(self) -> str:
return "ListOperation in progress"

View File

@@ -13,21 +13,20 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import asyncio
import json import json
import logging import logging
import os import os
import re import re
from copy import deepcopy from copy import deepcopy
from typing import Any, AsyncGenerator from typing import Any, Generator
import json_repair import json_repair
from functools import partial from functools import partial
from common.constants import LLMType from api.db import LLMType
from api.db.services.llm_service import LLMBundle from api.db.services.llm_service import LLMBundle
from api.db.services.tenant_llm_service import TenantLLMService from api.db.services.tenant_llm_service import TenantLLMService
from agent.component.base import ComponentBase, ComponentParamBase from agent.component.base import ComponentBase, ComponentParamBase
from common.connection_utils import timeout from api.utils.api_utils import timeout
from rag.prompts.generator import tool_call_summary, message_fit_in, citation_prompt, structured_output_prompt from rag.prompts.generator import tool_call_summary, message_fit_in, citation_prompt
class LLMParam(ComponentParamBase): class LLMParam(ComponentParamBase):
@@ -56,6 +55,7 @@ class LLMParam(ComponentParamBase):
self.check_nonnegative_number(int(self.max_tokens), "[Agent] Max tokens") self.check_nonnegative_number(int(self.max_tokens), "[Agent] Max tokens")
self.check_decimal_float(float(self.top_p), "[Agent] Top P") self.check_decimal_float(float(self.top_p), "[Agent] Top P")
self.check_empty(self.llm_id, "[Agent] LLM") self.check_empty(self.llm_id, "[Agent] LLM")
self.check_empty(self.sys_prompt, "[Agent] System prompt")
self.check_empty(self.prompts, "[Agent] User prompt") self.check_empty(self.prompts, "[Agent] User prompt")
def gen_conf(self): def gen_conf(self):
@@ -166,67 +166,25 @@ class LLM(ComponentBase):
sys_prompt = re.sub(rf"<{tag}>(.*?)</{tag}>", "", sys_prompt, flags=re.DOTALL|re.IGNORECASE) sys_prompt = re.sub(rf"<{tag}>(.*?)</{tag}>", "", sys_prompt, flags=re.DOTALL|re.IGNORECASE)
return pts, sys_prompt return pts, sys_prompt
async def _generate_async(self, msg: list[dict], **kwargs) -> str: def _generate(self, msg:list[dict], **kwargs) -> str:
if not self.imgs: if not self.imgs:
return await self.chat_mdl.async_chat(msg[0]["content"], msg[1:], self._param.gen_conf(), **kwargs) return self.chat_mdl.chat(msg[0]["content"], msg[1:], self._param.gen_conf(), **kwargs)
return await self.chat_mdl.async_chat(msg[0]["content"], msg[1:], self._param.gen_conf(), images=self.imgs, **kwargs) return self.chat_mdl.chat(msg[0]["content"], msg[1:], self._param.gen_conf(), images=self.imgs, **kwargs)
async def _generate_streamly(self, msg: list[dict], **kwargs) -> AsyncGenerator[str, None]: def _generate_streamly(self, msg:list[dict], **kwargs) -> Generator[str, None, None]:
async def delta_wrapper(txt_iter): ans = ""
ans = ""
last_idx = 0
endswith_think = False
def delta(txt):
nonlocal ans, last_idx, endswith_think
delta_ans = txt[last_idx:]
ans = txt
if delta_ans.find("<think>") == 0:
last_idx += len("<think>")
return "<think>"
elif delta_ans.find("<think>") > 0:
delta_ans = txt[last_idx:last_idx + delta_ans.find("<think>")]
last_idx += delta_ans.find("<think>")
return delta_ans
elif delta_ans.endswith("</think>"):
endswith_think = True
elif endswith_think:
endswith_think = False
return "</think>"
last_idx = len(ans)
if ans.endswith("</think>"):
last_idx -= len("</think>")
return re.sub(r"(<think>|</think>)", "", delta_ans)
async for t in txt_iter:
yield delta(t)
if not self.imgs:
async for t in delta_wrapper(self.chat_mdl.async_chat_streamly(msg[0]["content"], msg[1:], self._param.gen_conf(), **kwargs)):
yield t
return
async for t in delta_wrapper(self.chat_mdl.async_chat_streamly(msg[0]["content"], msg[1:], self._param.gen_conf(), images=self.imgs, **kwargs)):
yield t
async def _stream_output_async(self, prompt, msg):
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
answer = ""
last_idx = 0 last_idx = 0
endswith_think = False endswith_think = False
def delta(txt): def delta(txt):
nonlocal answer, last_idx, endswith_think nonlocal ans, last_idx, endswith_think
delta_ans = txt[last_idx:] delta_ans = txt[last_idx:]
answer = txt ans = txt
if delta_ans.find("<think>") == 0: if delta_ans.find("<think>") == 0:
last_idx += len("<think>") last_idx += len("<think>")
return "<think>" return "<think>"
elif delta_ans.find("<think>") > 0: elif delta_ans.find("<think>") > 0:
delta_ans = txt[last_idx:last_idx + delta_ans.find("<think>")] delta_ans = txt[last_idx:last_idx+delta_ans.find("<think>")]
last_idx += delta_ans.find("<think>") last_idx += delta_ans.find("<think>")
return delta_ans return delta_ans
elif delta_ans.endswith("</think>"): elif delta_ans.endswith("</think>"):
@@ -235,36 +193,20 @@ class LLM(ComponentBase):
endswith_think = False endswith_think = False
return "</think>" return "</think>"
last_idx = len(answer) last_idx = len(ans)
if answer.endswith("</think>"): if ans.endswith("</think>"):
last_idx -= len("</think>") last_idx -= len("</think>")
return re.sub(r"(<think>|</think>)", "", delta_ans) return re.sub(r"(<think>|</think>)", "", delta_ans)
stream_kwargs = {"images": self.imgs} if self.imgs else {} if not self.imgs:
async for ans in self.chat_mdl.async_chat_streamly(msg[0]["content"], msg[1:], self._param.gen_conf(), **stream_kwargs): for txt in self.chat_mdl.chat_streamly(msg[0]["content"], msg[1:], self._param.gen_conf(), **kwargs):
if self.check_if_canceled("LLM streaming"): yield delta(txt)
return else:
for txt in self.chat_mdl.chat_streamly(msg[0]["content"], msg[1:], self._param.gen_conf(), images=self.imgs, **kwargs):
if isinstance(ans, int): yield delta(txt)
continue
if ans.find("**ERROR**") >= 0:
if self.get_exception_default_value():
self.set_output("content", self.get_exception_default_value())
yield self.get_exception_default_value()
else:
self.set_output("_ERROR", ans)
return
yield delta(ans)
self.set_output("content", answer)
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))) @timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
async def _invoke_async(self, **kwargs): def _invoke(self, **kwargs):
if self.check_if_canceled("LLM processing"):
return
def clean_formated_answer(ans: str) -> str: def clean_formated_answer(ans: str) -> str:
ans = re.sub(r"^.*</think>", "", ans, flags=re.DOTALL) ans = re.sub(r"^.*</think>", "", ans, flags=re.DOTALL)
ans = re.sub(r"^.*```json", "", ans, flags=re.DOTALL) ans = re.sub(r"^.*```json", "", ans, flags=re.DOTALL)
@@ -272,34 +214,24 @@ class LLM(ComponentBase):
prompt, msg, _ = self._prepare_prompt_variables() prompt, msg, _ = self._prepare_prompt_variables()
error: str = "" error: str = ""
output_structure = None
try:
output_structure = self._param.outputs["structured"]
except Exception:
pass
if output_structure and isinstance(output_structure, dict) and output_structure.get("properties") and len(output_structure["properties"]) > 0:
schema = json.dumps(output_structure, ensure_ascii=False, indent=2)
prompt_with_schema = prompt + structured_output_prompt(schema)
for _ in range(self._param.max_retries + 1):
if self.check_if_canceled("LLM processing"):
return
_, msg_fit = message_fit_in( if self._param.output_structure:
[{"role": "system", "content": prompt_with_schema}, *deepcopy(msg)], prompt += "\nThe output MUST follow this JSON format:\n"+json.dumps(self._param.output_structure, ensure_ascii=False, indent=2)
int(self.chat_mdl.max_length * 0.97), prompt += "\nRedundant information is FORBIDDEN."
) for _ in range(self._param.max_retries+1):
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
error = "" error = ""
ans = await self._generate_async(msg_fit) ans = self._generate(msg)
msg_fit.pop(0) msg.pop(0)
if ans.find("**ERROR**") >= 0: if ans.find("**ERROR**") >= 0:
logging.error(f"LLM response error: {ans}") logging.error(f"LLM response error: {ans}")
error = ans error = ans
continue continue
try: try:
self.set_output("structured", json_repair.loads(clean_formated_answer(ans))) self.set_output("structured_content", json_repair.loads(clean_formated_answer(ans)))
return return
except Exception: except Exception:
msg_fit.append({"role": "user", "content": "The answer can't not be parsed as JSON"}) msg.append({"role": "user", "content": "The answer can't not be parsed as JSON"})
error = "The answer can't not be parsed as JSON" error = "The answer can't not be parsed as JSON"
if error: if error:
self.set_output("_ERROR", error) self.set_output("_ERROR", error)
@@ -307,23 +239,15 @@ class LLM(ComponentBase):
downstreams = self._canvas.get_component(self._id)["downstream"] if self._canvas.get_component(self._id) else [] downstreams = self._canvas.get_component(self._id)["downstream"] if self._canvas.get_component(self._id) else []
ex = self.exception_handler() ex = self.exception_handler()
if any([self._canvas.get_component_obj(cid).component_name.lower() == "message" for cid in downstreams]) and not ( if any([self._canvas.get_component_obj(cid).component_name.lower()=="message" for cid in downstreams]) and not self._param.output_structure and not (ex and ex["goto"]):
ex and ex["goto"] self.set_output("content", partial(self._stream_output, prompt, msg))
):
self.set_output("content", partial(self._stream_output_async, prompt, deepcopy(msg)))
return return
error = "" for _ in range(self._param.max_retries+1):
for _ in range(self._param.max_retries + 1): _, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
if self.check_if_canceled("LLM processing"):
return
_, msg_fit = message_fit_in(
[{"role": "system", "content": prompt}, *deepcopy(msg)], int(self.chat_mdl.max_length * 0.97)
)
error = "" error = ""
ans = await self._generate_async(msg_fit) ans = self._generate(msg)
msg_fit.pop(0) msg.pop(0)
if ans.find("**ERROR**") >= 0: if ans.find("**ERROR**") >= 0:
logging.error(f"LLM response error: {ans}") logging.error(f"LLM response error: {ans}")
error = ans error = ans
@@ -337,15 +261,26 @@ class LLM(ComponentBase):
else: else:
self.set_output("_ERROR", error) self.set_output("_ERROR", error)
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))) def _stream_output(self, prompt, msg):
def _invoke(self, **kwargs): _, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
return asyncio.run(self._invoke_async(**kwargs)) answer = ""
for ans in self._generate_streamly(msg):
if ans.find("**ERROR**") >= 0:
if self.get_exception_default_value():
self.set_output("content", self.get_exception_default_value())
yield self.get_exception_default_value()
else:
self.set_output("_ERROR", ans)
return
yield ans
answer += ans
self.set_output("content", answer)
async def add_memory(self, user:str, assist:str, func_name: str, params: dict, results: str, user_defined_prompt:dict={}): def add_memory(self, user:str, assist:str, func_name: str, params: dict, results: str, user_defined_prompt:dict={}):
summ = await tool_call_summary(self.chat_mdl, func_name, params, results, user_defined_prompt) summ = tool_call_summary(self.chat_mdl, func_name, params, results, user_defined_prompt)
logging.info(f"[MEMORY]: {summ}") logging.info(f"[MEMORY]: {summ}")
self._canvas.add_memory(user, assist, summ) self._canvas.add_memory(user, assist, summ)
def thoughts(self) -> str: def thoughts(self) -> str:
_, msg,_ = self._prepare_prompt_variables() _, msg,_ = self._prepare_prompt_variables()
return "⌛Give me a moment—starting from: \n\n" + re.sub(r"(User's query:|[\\]+)", '', msg[-1]['content'], flags=re.DOTALL) + "\n\nIll figure out our best next move." return "⌛Give me a moment—starting from: \n\n" + re.sub(r"(User's query:|[\\]+)", '', msg[-1]['content'], flags=re.DOTALL) + "\n\nIll figure out our best next move."

View File

@@ -1,80 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
from agent.component.base import ComponentBase, ComponentParamBase
class LoopParam(ComponentParamBase):
"""
Define the Loop component parameters.
"""
def __init__(self):
super().__init__()
self.loop_variables = []
self.loop_termination_condition=[]
self.maximum_loop_count = 0
def get_input_form(self) -> dict[str, dict]:
return {
"items": {
"type": "json",
"name": "Items"
}
}
def check(self):
return True
class Loop(ComponentBase, ABC):
component_name = "Loop"
def get_start(self):
for cid in self._canvas.components.keys():
if self._canvas.get_component(cid)["obj"].component_name.lower() != "loopitem":
continue
if self._canvas.get_component(cid)["parent_id"] == self._id:
return cid
def _invoke(self, **kwargs):
if self.check_if_canceled("Loop processing"):
return
for item in self._param.loop_variables:
if any([not item.get("variable"), not item.get("input_mode"), not item.get("value"),not item.get("type")]):
assert "Loop Variable is not complete."
if item["input_mode"]=="variable":
self.set_output(item["variable"],self._canvas.get_variable_value(item["value"]))
elif item["input_mode"]=="constant":
self.set_output(item["variable"],item["value"])
else:
if item["type"] == "number":
self.set_output(item["variable"], 0)
elif item["type"] == "string":
self.set_output(item["variable"], "")
elif item["type"] == "boolean":
self.set_output(item["variable"], False)
elif item["type"].startswith("object"):
self.set_output(item["variable"], {})
elif item["type"].startswith("array"):
self.set_output(item["variable"], [])
else:
self.set_output(item["variable"], "")
def thoughts(self) -> str:
return "Loop from canvas."

View File

@@ -1,167 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
from agent.component.base import ComponentBase, ComponentParamBase
class LoopItemParam(ComponentParamBase):
"""
Define the LoopItem component parameters.
"""
def check(self):
return True
class LoopItem(ComponentBase, ABC):
component_name = "LoopItem"
def __init__(self, canvas, id, param: ComponentParamBase):
super().__init__(canvas, id, param)
self._idx = 0
def _invoke(self, **kwargs):
if self.check_if_canceled("LoopItem processing"):
return
parent = self.get_parent()
maximum_loop_count = parent._param.maximum_loop_count
if self._idx >= maximum_loop_count:
self._idx = -1
return
if self._idx > 0:
if self.check_if_canceled("LoopItem processing"):
return
self._idx += 1
def evaluate_condition(self,var, operator, value):
if isinstance(var, str):
if operator == "contains":
return value in var
elif operator == "not contains":
return value not in var
elif operator == "start with":
return var.startswith(value)
elif operator == "end with":
return var.endswith(value)
elif operator == "is":
return var == value
elif operator == "is not":
return var != value
elif operator == "empty":
return var == ""
elif operator == "not empty":
return var != ""
elif isinstance(var, (int, float)):
if operator == "=":
return var == value
elif operator == "":
return var != value
elif operator == ">":
return var > value
elif operator == "<":
return var < value
elif operator == "":
return var >= value
elif operator == "":
return var <= value
elif operator == "empty":
return var is None
elif operator == "not empty":
return var is not None
elif isinstance(var, bool):
if operator == "is":
return var is value
elif operator == "is not":
return var is not value
elif operator == "empty":
return var is None
elif operator == "not empty":
return var is not None
elif isinstance(var, dict):
if operator == "empty":
return len(var) == 0
elif operator == "not empty":
return len(var) > 0
elif isinstance(var, list):
if operator == "contains":
return value in var
elif operator == "not contains":
return value not in var
elif operator == "is":
return var == value
elif operator == "is not":
return var != value
elif operator == "empty":
return len(var) == 0
elif operator == "not empty":
return len(var) > 0
elif var is None:
if operator == "empty":
return True
return False
raise Exception(f"Invalid operator: {operator}")
def end(self):
if self._idx == -1:
return True
parent = self.get_parent()
logical_operator = parent._param.logical_operator if hasattr(parent._param, "logical_operator") else "and"
conditions = []
for item in parent._param.loop_termination_condition:
if not item.get("variable") or not item.get("operator"):
raise ValueError("Loop condition is incomplete.")
var = self._canvas.get_variable_value(item["variable"])
operator = item["operator"]
input_mode = item.get("input_mode", "constant")
if input_mode == "variable":
value = self._canvas.get_variable_value(item.get("value", ""))
elif input_mode == "constant":
value = item.get("value", "")
else:
raise ValueError("Invalid input mode.")
conditions.append(self.evaluate_condition(var, operator, value))
should_end = (
all(conditions) if logical_operator == "and"
else any(conditions) if logical_operator == "or"
else None
)
if should_end is None:
raise ValueError("Invalid logical operator,should be 'and' or 'or'.")
if should_end:
self._idx = -1
return True
return False
def next(self):
if self._idx == -1:
self._idx = 0
else:
self._idx += 1
if self._idx >= len(self._items):
self._idx = -1
return False
def thoughts(self) -> str:
return "Next turn..."

View File

@@ -13,27 +13,17 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import asyncio
import nest_asyncio
nest_asyncio.apply()
import inspect
import json import json
import os import os
import random import random
import re import re
import logging
import tempfile
from functools import partial from functools import partial
from typing import Any from typing import Any
from agent.component.base import ComponentBase, ComponentParamBase from agent.component.base import ComponentBase, ComponentParamBase
from jinja2 import Template as Jinja2Template from jinja2 import Template as Jinja2Template
from common.connection_utils import timeout from api.utils.api_utils import timeout
from common.misc_utils import get_uuid
from common import settings
from api.db.joint_services.memory_message_service import queue_save_to_memory_task
class MessageParam(ComponentParamBase): class MessageParam(ComponentParamBase):
@@ -44,8 +34,6 @@ class MessageParam(ComponentParamBase):
super().__init__() super().__init__()
self.content = [] self.content = []
self.stream = True self.stream = True
self.output_format = None # default output format
self.auto_play = False
self.outputs = { self.outputs = {
"content": { "content": {
"type": "str" "type": "str"
@@ -61,9 +49,6 @@
class Message(ComponentBase): class Message(ComponentBase):
component_name = "Message" component_name = "Message"
def get_input_elements(self) -> dict[str, Any]:
return self.get_input_elements_from_text("".join(self._param.content))
def get_kwargs(self, script:str, kwargs:dict = {}, delimiter:str=None) -> tuple[str, dict[str, str | list | Any]]: def get_kwargs(self, script:str, kwargs:dict = {}, delimiter:str=None) -> tuple[str, dict[str, str | list | Any]]:
for k,v in self.get_input_elements_from_text(script).items(): for k,v in self.get_input_elements_from_text(script).items():
if k in kwargs: if k in kwargs:
@@ -73,12 +58,8 @@ class Message(ComponentBase):
v = "" v = ""
ans = "" ans = ""
if isinstance(v, partial): if isinstance(v, partial):
iter_obj = v() for t in v():
if inspect.isasyncgen(iter_obj): ans += t
ans = asyncio.run(self._consume_async_gen(iter_obj))
else:
for t in iter_obj:
ans += t
elif isinstance(v, list) and delimiter: elif isinstance(v, list) and delimiter:
ans = delimiter.join([str(vv) for vv in v]) ans = delimiter.join([str(vv) for vv in v])
elif not isinstance(v, str): elif not isinstance(v, str):
@@ -100,20 +81,11 @@ class Message(ComponentBase):
_kwargs[_n] = v _kwargs[_n] = v
return script, _kwargs return script, _kwargs
async def _consume_async_gen(self, agen): def _stream(self, rand_cnt:str):
buf = ""
async for t in agen:
buf += t
return buf
async def _stream(self, rand_cnt:str):
s = 0 s = 0
all_content = "" all_content = ""
cache = {} cache = {}
for r in re.finditer(self.variable_ref_patt, rand_cnt, flags=re.DOTALL): for r in re.finditer(self.variable_ref_patt, rand_cnt, flags=re.DOTALL):
if self.check_if_canceled("Message streaming"):
return
all_content += rand_cnt[s: r.start()] all_content += rand_cnt[s: r.start()]
yield rand_cnt[s: r.start()] yield rand_cnt[s: r.start()]
s = r.end() s = r.end()
@@ -124,51 +96,30 @@ class Message(ComponentBase):
continue continue
v = self._canvas.get_variable_value(exp) v = self._canvas.get_variable_value(exp)
if v is None: if not v:
v = "" v = ""
if isinstance(v, partial): if isinstance(v, partial):
cnt = "" cnt = ""
iter_obj = v() for t in v():
if inspect.isasyncgen(iter_obj): all_content += t
async for t in iter_obj: cnt += t
if self.check_if_canceled("Message streaming"): yield t
return
all_content += t
cnt += t
yield t
else:
for t in iter_obj:
if self.check_if_canceled("Message streaming"):
return
all_content += t
cnt += t
yield t
self.set_input_value(exp, cnt)
continue continue
elif inspect.isawaitable(v):
v = await v
elif not isinstance(v, str): elif not isinstance(v, str):
try: try:
v = json.dumps(v, ensure_ascii=False) v = json.dumps(v, ensure_ascii=False, indent=2)
except Exception: except Exception:
v = str(v) v = str(v)
yield v yield v
self.set_input_value(exp, v)
all_content += v all_content += v
cache[exp] = v cache[exp] = v
if s < len(rand_cnt): if s < len(rand_cnt):
if self.check_if_canceled("Message streaming"):
return
all_content += rand_cnt[s: ] all_content += rand_cnt[s: ]
yield rand_cnt[s: ] yield rand_cnt[s: ]
self.set_output("content", all_content) self.set_output("content", all_content)
self._convert_content(all_content)
await self._save_to_memory(all_content)
def _is_jinjia2(self, content:str) -> bool: def _is_jinjia2(self, content:str) -> bool:
patt = [ patt = [
@@ -178,9 +129,6 @@ class Message(ComponentBase):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))) @timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs): def _invoke(self, **kwargs):
if self.check_if_canceled("Message processing"):
return
rand_cnt = random.choice(self._param.content) rand_cnt = random.choice(self._param.content)
if self._param.stream and not self._is_jinjia2(rand_cnt): if self._param.stream and not self._is_jinjia2(rand_cnt):
self.set_output("content", partial(self._stream, rand_cnt)) self.set_output("content", partial(self._stream, rand_cnt))
@@ -193,248 +141,10 @@ class Message(ComponentBase):
except Exception: except Exception:
pass pass
if self.check_if_canceled("Message processing"):
return
for n, v in kwargs.items(): for n, v in kwargs.items():
content = re.sub(n, v, content) content = re.sub(n, v, content)
self.set_output("content", content) self.set_output("content", content)
self._convert_content(content)
self._save_to_memory(content)
def thoughts(self) -> str: def thoughts(self) -> str:
return "" return ""
def _parse_markdown_table_lines(self, table_lines: list):
"""
Parse a list of Markdown table lines into a pandas DataFrame.
Args:
table_lines: List of strings, each representing a row in the Markdown table
(excluding separator lines like |---|---|)
Returns:
pandas DataFrame with the table data, or None if parsing fails
"""
import pandas as pd
if not table_lines:
return None
rows = []
headers = None
for line in table_lines:
# Split by | and clean up
cells = [cell.strip() for cell in line.split('|')]
# Remove empty first and last elements from split (caused by leading/trailing |)
cells = [c for c in cells if c]
if headers is None:
headers = cells
else:
rows.append(cells)
if headers and rows:
# Ensure all rows have same number of columns as headers
normalized_rows = []
for row in rows:
while len(row) < len(headers):
row.append('')
normalized_rows.append(row[:len(headers)])
return pd.DataFrame(normalized_rows, columns=headers)
return None
def _convert_content(self, content):
if not self._param.output_format:
return
import pypandoc
doc_id = get_uuid()
if self._param.output_format.lower() not in {"markdown", "html", "pdf", "docx", "xlsx"}:
self._param.output_format = "markdown"
try:
if self._param.output_format in {"markdown", "html"}:
if isinstance(content, str):
converted = pypandoc.convert_text(
content,
to=self._param.output_format,
format="markdown",
)
else:
converted = pypandoc.convert_file(
content,
to=self._param.output_format,
format="markdown",
)
binary_content = converted.encode("utf-8")
elif self._param.output_format == "xlsx":
import pandas as pd
from io import BytesIO
# Debug: log the content being parsed
logging.info(f"XLSX Parser: Content length={len(content) if content else 0}, first 500 chars: {content[:500] if content else 'None'}")
# Try to parse ALL Markdown tables from the content
# Each table will be written to a separate sheet
tables = [] # List of (sheet_name, dataframe)
if isinstance(content, str):
lines = content.strip().split('\n')
logging.info(f"XLSX Parser: Total lines={len(lines)}, lines starting with '|': {sum(1 for line in lines if line.strip().startswith('|'))}")
current_table_lines = []
current_table_title = None
pending_title = None
in_table = False
table_count = 0
for i, line in enumerate(lines):
stripped = line.strip()
# Check for potential table title (lines before a table)
# Look for patterns like "Table 1:", "## Table", or markdown headers
if not in_table and stripped and not stripped.startswith('|'):
# Check if this could be a table title
lower_stripped = stripped.lower()
if (lower_stripped.startswith('table') or
stripped.startswith('#') or
':' in stripped):
pending_title = stripped.lstrip('#').strip()
if stripped.startswith('|') and '|' in stripped[1:]:
# Check if this is a separator line (|---|---|)
cleaned = stripped.replace(' ', '').replace('|', '').replace('-', '').replace(':', '')
if cleaned == '':
continue # Skip separator line
if not in_table:
# Starting a new table
in_table = True
current_table_lines = []
current_table_title = pending_title
pending_title = None
current_table_lines.append(stripped)
elif in_table and not stripped.startswith('|'):
# End of current table - save it
if current_table_lines:
df = self._parse_markdown_table_lines(current_table_lines)
if df is not None and not df.empty:
table_count += 1
# Generate sheet name
if current_table_title:
# Clean and truncate title for sheet name
sheet_name = current_table_title[:31]
sheet_name = sheet_name.replace('/', '_').replace('\\', '_').replace('*', '').replace('?', '').replace('[', '').replace(']', '').replace(':', '')
else:
sheet_name = f"Table_{table_count}"
tables.append((sheet_name, df))
# Reset for next table
in_table = False
current_table_lines = []
current_table_title = None
# Check if this line could be a title for the next table
if stripped:
lower_stripped = stripped.lower()
if (lower_stripped.startswith('table') or
stripped.startswith('#') or
':' in stripped):
pending_title = stripped.lstrip('#').strip()
# Don't forget the last table if content ends with a table
if in_table and current_table_lines:
df = self._parse_markdown_table_lines(current_table_lines)
if df is not None and not df.empty:
table_count += 1
if current_table_title:
sheet_name = current_table_title[:31]
sheet_name = sheet_name.replace('/', '_').replace('\\', '_').replace('*', '').replace('?', '').replace('[', '').replace(']', '').replace(':', '')
else:
sheet_name = f"Table_{table_count}"
tables.append((sheet_name, df))
# Fallback: if no tables found, create single sheet with content
if not tables:
df = pd.DataFrame({"Content": [content if content else ""]})
tables = [("Data", df)]
# Write all tables to Excel, each in a separate sheet
excel_io = BytesIO()
with pd.ExcelWriter(excel_io, engine='openpyxl') as writer:
used_names = set()
for sheet_name, df in tables:
# Ensure unique sheet names
original_name = sheet_name
counter = 1
while sheet_name in used_names:
suffix = f"_{counter}"
sheet_name = original_name[:31-len(suffix)] + suffix
counter += 1
used_names.add(sheet_name)
df.to_excel(writer, sheet_name=sheet_name, index=False)
excel_io.seek(0)
binary_content = excel_io.read()
logging.info(f"Generated Excel with {len(tables)} sheet(s): {[t[0] for t in tables]}")
else: # pdf, docx
with tempfile.NamedTemporaryFile(suffix=f".{self._param.output_format}", delete=False) as tmp:
tmp_name = tmp.name
try:
if isinstance(content, str):
pypandoc.convert_text(
content,
to=self._param.output_format,
format="markdown",
outputfile=tmp_name,
)
else:
pypandoc.convert_file(
content,
to=self._param.output_format,
format="markdown",
outputfile=tmp_name,
)
with open(tmp_name, "rb") as f:
binary_content = f.read()
finally:
if os.path.exists(tmp_name):
os.remove(tmp_name)
settings.STORAGE_IMPL.put(self._canvas._tenant_id, doc_id, binary_content)
self.set_output("attachment", {
"doc_id":doc_id,
"format":self._param.output_format,
"file_name":f"{doc_id[:8]}.{self._param.output_format}"})
logging.info(f"Converted content uploaded as {doc_id} (format={self._param.output_format})")
except Exception as e:
logging.error(f"Error converting content to {self._param.output_format}: {e}")
async def _save_to_memory(self, content):
if not hasattr(self._param, "memory_ids") or not self._param.memory_ids:
return True, "No memory selected."
message_dict = {
"user_id": self._canvas._tenant_id,
"agent_id": self._canvas._id,
"session_id": self._canvas.task_id,
"user_input": self._canvas.get_sys_query(),
"agent_response": content
}
return await queue_save_to_memory_task(self._param.memory_ids, message_dict)
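_parse_markdown_table_lines above receives table rows with the |---|---| separator already filtered out by the caller, splits each row on '|', and promotes the first row to headers. A usage-style sketch (requires pandas; illustrative, not code from the patch):

import pandas as pd

lines = ["| name | score |", "| a | 1 |", "| b | 2 |"]
headers, rows = None, []
for line in lines:
    # Split on '|', strip whitespace, and drop the empty edge cells.
    cells = [c for c in (cell.strip() for cell in line.split("|")) if c]
    if headers is None:
        headers = cells
    else:
        rows.append(cells)
df = pd.DataFrame(rows, columns=headers)
assert list(df.columns) == ["name", "score"] and len(df) == 2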

View File

@@ -16,11 +16,9 @@
import os import os
import re import re
from abc import ABC from abc import ABC
from typing import Any
from jinja2 import Template as Jinja2Template from jinja2 import Template as Jinja2Template
from agent.component.base import ComponentParamBase from agent.component.base import ComponentParamBase
from common.connection_utils import timeout from api.utils.api_utils import timeout
from .message import Message from .message import Message
@@ -45,9 +43,6 @@ class StringTransformParam(ComponentParamBase):
class StringTransform(Message, ABC): class StringTransform(Message, ABC):
component_name = "StringTransform" component_name = "StringTransform"
def get_input_elements(self) -> dict[str, Any]:
return self.get_input_elements_from_text(self._param.script)
def get_input_form(self) -> dict[str, dict]: def get_input_form(self) -> dict[str, dict]:
if self._param.method == "split": if self._param.method == "split":
return { return {
@@ -63,24 +58,17 @@ class StringTransform(Message, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))) @timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs): def _invoke(self, **kwargs):
if self.check_if_canceled("StringTransform processing"):
return
if self._param.method == "split": if self._param.method == "split":
self._split(kwargs.get("line")) self._split(kwargs.get("line"))
else: else:
self._merge(kwargs) self._merge(kwargs)
def _split(self, line:str|None = None): def _split(self, line:str|None = None):
if self.check_if_canceled("StringTransform split processing"):
return
var = self._canvas.get_variable_value(self._param.split_ref) if not line else line var = self._canvas.get_variable_value(self._param.split_ref) if not line else line
if not var: if not var:
var = "" var = ""
assert isinstance(var, str), "The input variable is not a string: {}".format(type(var)) assert isinstance(var, str), "The input variable is not a string: {}".format(type(var))
self.set_input_value(self._param.split_ref, var) self.set_input_value(self._param.split_ref, var)
res = [] res = []
for i,s in enumerate(re.split(r"(%s)"%("|".join([re.escape(d) for d in self._param.delimiters])), var, flags=re.DOTALL)): for i,s in enumerate(re.split(r"(%s)"%("|".join([re.escape(d) for d in self._param.delimiters])), var, flags=re.DOTALL)):
if i % 2 == 1: if i % 2 == 1:
@@ -89,9 +77,6 @@ class StringTransform(Message, ABC):
self.set_output("result", res) self.set_output("result", res)
def _merge(self, kwargs:dict[str, str] = {}): def _merge(self, kwargs:dict[str, str] = {}):
if self.check_if_canceled("StringTransform merge processing"):
return
script = self._param.script script = self._param.script
script, kwargs = self.get_kwargs(script, kwargs, self._param.delimiters[0]) script, kwargs = self.get_kwargs(script, kwargs, self._param.delimiters[0])
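_split above splits on a capturing group, so re.split keeps the delimiters themselves; they land at odd indices, which is what the `i % 2 == 1` test distinguishes. A minimal sketch (illustrative, not code from the patch):

import re

delimiters = [",", ";"]
patt = r"(%s)" % "|".join(re.escape(d) for d in delimiters)
parts = re.split(patt, "a,b;c", flags=re.DOTALL)
assert parts == ["a", ",", "b", ";", "c"]
# Even indices are the segments, odd indices are the matched delimiters.
segments = [s for i, s in enumerate(parts) if i % 2 == 0]
assert segments == ["a", "b", "c"]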

View File

@@ -19,7 +19,7 @@ from abc import ABC
from typing import Any from typing import Any
from agent.component.base import ComponentBase, ComponentParamBase from agent.component.base import ComponentBase, ComponentParamBase
from common.connection_utils import timeout from api.utils.api_utils import timeout
class SwitchParam(ComponentParamBase): class SwitchParam(ComponentParamBase):
@@ -63,18 +63,9 @@ class Switch(ComponentBase, ABC):
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 3))) @timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 3)))
def _invoke(self, **kwargs): def _invoke(self, **kwargs):
if self.check_if_canceled("Switch processing"):
return
for cond in self._param.conditions: for cond in self._param.conditions:
if self.check_if_canceled("Switch processing"):
return
res = [] res = []
for item in cond["items"]: for item in cond["items"]:
if self.check_if_canceled("Switch processing"):
return
if not item["cpn_id"]: if not item["cpn_id"]:
continue continue
cpn_v = self._canvas.get_variable_value(item["cpn_id"]) cpn_v = self._canvas.get_variable_value(item["cpn_id"])
@@ -137,4 +128,4 @@ class Switch(ComponentBase, ABC):
raise ValueError('Not supported operator: ' + operator) raise ValueError('Not supported operator: ' + operator)
def thoughts(self) -> str: def thoughts(self) -> str:
return "Im weighing a few options and will pick the next step shortly." return "Im weighing a few options and will pick the next step shortly."

View File

@@ -1,84 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any
import os
from common.connection_utils import timeout
from agent.component.base import ComponentBase, ComponentParamBase
class VariableAggregatorParam(ComponentParamBase):
"""
Parameters for VariableAggregator
- groups: list of dicts {"group_name": str, "variables": [variable selectors]}
"""
def __init__(self):
super().__init__()
# each group expects: {"group_name": str, "variables": List[str]}
self.groups = []
def check(self):
self.check_empty(self.groups, "[VariableAggregator] groups")
for g in self.groups:
if not g.get("group_name"):
raise ValueError("[VariableAggregator] group_name can not be empty!")
if not g.get("variables"):
raise ValueError(
f"[VariableAggregator] variables of group `{g.get('group_name')}` can not be empty"
)
if not isinstance(g.get("variables"), list):
raise ValueError(
f"[VariableAggregator] variables of group `{g.get('group_name')}` should be a list of strings"
)
def get_input_form(self) -> dict[str, dict]:
return {
"variables": {
"name": "Variables",
"type": "list",
}
}
class VariableAggregator(ComponentBase):
component_name = "VariableAggregator"
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 3)))
def _invoke(self, **kwargs):
# Group mode: for each group, pick the first available variable
for group in self._param.groups:
gname = group.get("group_name")
# record candidate selectors within this group
self.set_input_value(f"{gname}.variables", list(group.get("variables", [])))
for selector in group.get("variables", []):
val = self._canvas.get_variable_value(selector['value'])
if val:
self.set_output(gname, val)
break
@staticmethod
def _to_object(value: Any) -> Any:
# Try to convert value to serializable object if it has to_object()
try:
return value.to_object() # type: ignore[attr-defined]
except Exception:
return value
def thoughts(self) -> str:
return "Aggregating variables from canvas and grouping as configured."

View File

@@ -1,192 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import os
import numbers
from agent.component.base import ComponentBase, ComponentParamBase
from api.utils.api_utils import timeout
class VariableAssignerParam(ComponentParamBase):
"""
Define the Variable Assigner component parameters.
"""
def __init__(self):
super().__init__()
self.variables=[]
def check(self):
return True
def get_input_form(self) -> dict[str, dict]:
return {
"items": {
"type": "json",
"name": "Items"
}
}
class VariableAssigner(ComponentBase,ABC):
component_name = "VariableAssigner"
@timeout(int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
def _invoke(self, **kwargs):
if not isinstance(self._param.variables,list):
return
else:
for item in self._param.variables:
if any([not item.get("variable"), not item.get("operator"), not item.get("parameter")]):
assert "Variable is not complete."
variable=item["variable"]
operator=item["operator"]
parameter=item["parameter"]
variable_value=self._canvas.get_variable_value(variable)
new_variable=self._operate(variable_value,operator,parameter)
self._canvas.set_variable_value(variable, new_variable)
def _operate(self,variable,operator,parameter):
if operator == "overwrite":
return self._overwrite(parameter)
elif operator == "clear":
return self._clear(variable)
elif operator == "set":
return self._set(variable,parameter)
elif operator == "append":
return self._append(variable,parameter)
elif operator == "extend":
return self._extend(variable,parameter)
elif operator == "remove_first":
return self._remove_first(variable)
elif operator == "remove_last":
return self._remove_last(variable)
elif operator == "+=":
return self._add(variable,parameter)
elif operator == "-=":
return self._subtract(variable,parameter)
elif operator == "*=":
return self._multiply(variable,parameter)
elif operator == "/=":
return self._divide(variable,parameter)
else:
return
def _overwrite(self,parameter):
return self._canvas.get_variable_value(parameter)
def _clear(self,variable):
if isinstance(variable,list):
return []
elif isinstance(variable,str):
return ""
elif isinstance(variable,dict):
return {}
elif isinstance(variable,int):
return 0
elif isinstance(variable,float):
return 0.0
elif isinstance(variable,bool):
return False
else:
return None
def _set(self,variable,parameter):
if variable is None:
return self._canvas.get_value_with_variable(parameter)
elif isinstance(variable,str):
return self._canvas.get_value_with_variable(parameter)
elif isinstance(variable,bool):
return parameter
elif isinstance(variable,int):
return parameter
elif isinstance(variable,float):
return parameter
else:
return parameter
def _append(self,variable,parameter):
parameter=self._canvas.get_variable_value(parameter)
if variable is None:
variable=[]
if not isinstance(variable,list):
return "ERROR:VARIABLE_NOT_LIST"
elif len(variable)!=0 and not isinstance(parameter,type(variable[0])):
return "ERROR:PARAMETER_NOT_LIST_ELEMENT_TYPE"
else:
variable.append(parameter)
return variable
def _extend(self,variable,parameter):
parameter=self._canvas.get_variable_value(parameter)
if variable is None:
variable=[]
if not isinstance(variable,list):
return "ERROR:VARIABLE_NOT_LIST"
elif not isinstance(parameter,list):
return "ERROR:PARAMETER_NOT_LIST"
elif len(variable)!=0 and len(parameter)!=0 and not isinstance(parameter[0],type(variable[0])):
return "ERROR:PARAMETER_NOT_LIST_ELEMENT_TYPE"
else:
return variable + parameter
def _remove_first(self,variable):
if len(variable)==0:
return variable
if not isinstance(variable,list):
return "ERROR:VARIABLE_NOT_LIST"
else:
return variable[1:]
def _remove_last(self,variable):
if len(variable)==0:
return variable
if not isinstance(variable,list):
return "ERROR:VARIABLE_NOT_LIST"
else:
return variable[:-1]
def is_number(self, value):
if isinstance(value, bool):
return False
return isinstance(value, numbers.Number)
def _add(self,variable,parameter):
if self.is_number(variable) and self.is_number(parameter):
return variable + parameter
else:
return "ERROR:VARIABLE_NOT_NUMBER or PARAMETER_NOT_NUMBER"
def _subtract(self,variable,parameter):
if self.is_number(variable) and self.is_number(parameter):
return variable - parameter
else:
return "ERROR:VARIABLE_NOT_NUMBER or PARAMETER_NOT_NUMBER"
def _multiply(self,variable,parameter):
if self.is_number(variable) and self.is_number(parameter):
return variable * parameter
else:
return "ERROR:VARIABLE_NOT_NUMBER or PARAMETER_NOT_NUMBER"
def _divide(self,variable,parameter):
if self.is_number(variable) and self.is_number(parameter):
if parameter==0:
return "ERROR:DIVIDE_BY_ZERO"
else:
return variable/parameter
else:
return "ERROR:VARIABLE_NOT_NUMBER or PARAMETER_NOT_NUMBER"
def thoughts(self) -> str:
return "Assign variables from canvas."

View File

@@ -1,239 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Sandbox client for agent components.
This module provides a unified interface for agent components to interact
with the configured sandbox provider.
"""
import json
import logging
from typing import Dict, Any, Optional
from api.db.services.system_settings_service import SystemSettingsService
from agent.sandbox.providers import ProviderManager
from agent.sandbox.providers.base import ExecutionResult
logger = logging.getLogger(__name__)
# Global provider manager instance
_provider_manager: Optional[ProviderManager] = None
def get_provider_manager() -> ProviderManager:
"""
Get the global provider manager instance.
Returns:
ProviderManager instance with active provider loaded
"""
global _provider_manager
if _provider_manager is not None:
return _provider_manager
# Initialize provider manager with system settings
_provider_manager = ProviderManager()
_load_provider_from_settings()
return _provider_manager
def _load_provider_from_settings() -> None:
"""
Load sandbox provider from system settings and configure the provider manager.
This function reads the system settings to determine which provider is active
and initializes it with the appropriate configuration.
"""
global _provider_manager
if _provider_manager is None:
return
try:
# Get active provider type
provider_type_settings = SystemSettingsService.get_by_name("sandbox.provider_type")
if not provider_type_settings:
raise RuntimeError(
"Sandbox provider type not configured. Please set 'sandbox.provider_type' in system settings."
)
provider_type = provider_type_settings[0].value
# Get provider configuration
provider_config_settings = SystemSettingsService.get_by_name(f"sandbox.{provider_type}")
if not provider_config_settings:
logger.warning(f"No configuration found for provider: {provider_type}")
config = {}
else:
try:
config = json.loads(provider_config_settings[0].value)
except json.JSONDecodeError as e:
logger.error(f"Failed to parse sandbox config for {provider_type}: {e}")
config = {}
# Import and instantiate the provider
from agent.sandbox.providers import (
SelfManagedProvider,
AliyunCodeInterpreterProvider,
E2BProvider,
)
provider_classes = {
"self_managed": SelfManagedProvider,
"aliyun_codeinterpreter": AliyunCodeInterpreterProvider,
"e2b": E2BProvider,
}
if provider_type not in provider_classes:
logger.error(f"Unknown provider type: {provider_type}")
return
provider_class = provider_classes[provider_type]
provider = provider_class()
# Initialize the provider
if not provider.initialize(config):
logger.error(f"Failed to initialize sandbox provider: {provider_type}. Config keys: {list(config.keys())}")
return
# Set the active provider
_provider_manager.set_provider(provider_type, provider)
logger.info(f"Sandbox provider '{provider_type}' initialized successfully")
except Exception as e:
logger.error(f"Failed to load sandbox provider from settings: {e}")
import traceback
traceback.print_exc()
def reload_provider() -> None:
"""
Reload the sandbox provider from system settings.
Use this function when sandbox settings have been updated.
"""
global _provider_manager
_provider_manager = None
_load_provider_from_settings()
def execute_code(
code: str,
language: str = "python",
timeout: int = 30,
arguments: Optional[Dict[str, Any]] = None
) -> ExecutionResult:
"""
Execute code in the configured sandbox.
This is the main entry point for agent components to execute code.
Args:
code: Source code to execute
language: Programming language (python, nodejs, javascript)
timeout: Maximum execution time in seconds
arguments: Optional arguments dict to pass to main() function
Returns:
ExecutionResult containing stdout, stderr, exit_code, and metadata
Raises:
RuntimeError: If no provider is configured or execution fails
"""
provider_manager = get_provider_manager()
if not provider_manager.is_configured():
raise RuntimeError(
"No sandbox provider configured. Please configure sandbox settings in the admin panel."
)
provider = provider_manager.get_provider()
# Create a sandbox instance
instance = provider.create_instance(template=language)
try:
# Execute the code
result = provider.execute_code(
instance_id=instance.instance_id,
code=code,
language=language,
timeout=timeout,
arguments=arguments
)
return result
finally:
# Clean up the instance
try:
provider.destroy_instance(instance.instance_id)
except Exception as e:
logger.warning(f"Failed to destroy sandbox instance {instance.instance_id}: {e}")
def health_check() -> bool:
"""
Check if the sandbox provider is healthy.
Returns:
True if provider is configured and healthy, False otherwise
"""
try:
provider_manager = get_provider_manager()
if not provider_manager.is_configured():
return False
provider = provider_manager.get_provider()
return provider.health_check()
except Exception as e:
logger.error(f"Sandbox health check failed: {e}")
return False
def get_provider_info() -> Dict[str, Any]:
"""
Get information about the current sandbox provider.
Returns:
Dictionary with provider information:
- provider_type: Type of the active provider
- configured: Whether provider is configured
- healthy: Whether provider is healthy
"""
try:
provider_manager = get_provider_manager()
return {
"provider_type": provider_manager.get_provider_name(),
"configured": provider_manager.is_configured(),
"healthy": health_check(),
}
except Exception as e:
logger.error(f"Failed to get provider info: {e}")
return {
"provider_type": None,
"configured": False,
"healthy": False,
}
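Per the docstrings above, execute_code creates a throwaway sandbox instance, runs the code, and destroys the instance in a finally block. A usage sketch (the import path and the presence of a configured provider are assumptions, not confirmed by the patch):

from agent.sandbox.client import execute_code, health_check  # assumed module path

if health_check():
    result = execute_code("print('hi')", language="python", timeout=10)
    # ExecutionResult carries stdout, stderr, exit_code, and metadata.
    print(result.stdout, result.stderr, result.exit_code)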

View File

@@ -1,37 +0,0 @@
FROM python:3.11-slim-bookworm
RUN grep -rl 'deb.debian.org' /etc/apt/ | xargs sed -i 's|http[s]*://deb.debian.org|https://mirrors.tuna.tsinghua.edu.cn|g' && \
apt-get update && \
apt-get install -y curl gcc && \
rm -rf /var/lib/apt/lists/*
ARG TARGETARCH
ARG TARGETVARIANT
RUN set -eux; \
case "${TARGETARCH}${TARGETVARIANT}" in \
amd64) DOCKER_ARCH=x86_64 ;; \
arm64) DOCKER_ARCH=aarch64 ;; \
armv7) DOCKER_ARCH=armhf ;; \
armv6) DOCKER_ARCH=armel ;; \
arm64v8) DOCKER_ARCH=aarch64 ;; \
arm64v7) DOCKER_ARCH=armhf ;; \
arm*) DOCKER_ARCH=armhf ;; \
ppc64le) DOCKER_ARCH=ppc64le ;; \
s390x) DOCKER_ARCH=s390x ;; \
*) echo "Unsupported architecture: ${TARGETARCH}${TARGETVARIANT}" && exit 1 ;; \
esac; \
echo "Downloading Docker for architecture: ${DOCKER_ARCH}"; \
curl -fsSL "https://download.docker.com/linux/static/stable/${DOCKER_ARCH}/docker-29.1.0.tgz" | \
tar xz -C /usr/local/bin --strip-components=1 docker/docker; \
ln -sf /usr/local/bin/docker /usr/bin/docker
COPY --from=ghcr.io/astral-sh/uv:0.7.5 /uv /uvx /bin/
ENV UV_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple
WORKDIR /app
COPY . .
RUN uv pip install --system -r requirements.txt
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "9385"]

View File

@@ -1,43 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Sandbox providers package.
This package contains:
- base.py: Base interface for all sandbox providers
- manager.py: Provider manager for managing active provider
- self_managed.py: Self-managed provider implementation (wraps existing executor_manager)
- aliyun_codeinterpreter.py: Aliyun Code Interpreter provider implementation
Official Documentation: https://help.aliyun.com/zh/functioncompute/fc/sandbox-sandbox-code-interepreter
- e2b.py: E2B provider implementation
"""
from .base import SandboxProvider, SandboxInstance, ExecutionResult
from .manager import ProviderManager
from .self_managed import SelfManagedProvider
from .aliyun_codeinterpreter import AliyunCodeInterpreterProvider
from .e2b import E2BProvider
__all__ = [
"SandboxProvider",
"SandboxInstance",
"ExecutionResult",
"ProviderManager",
"SelfManagedProvider",
"AliyunCodeInterpreterProvider",
"E2BProvider",
]

View File

@@ -1,512 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Aliyun Code Interpreter provider implementation.
This provider integrates with Aliyun Function Compute Code Interpreter service
for secure code execution in serverless microVMs using the official agentrun-sdk.
Official Documentation: https://help.aliyun.com/zh/functioncompute/fc/sandbox-sandbox-code-interepreter
Official SDK: https://github.com/Serverless-Devs/agentrun-sdk-python
https://api.aliyun.com/api/AgentRun/2025-09-10/CreateTemplate?lang=PYTHON
https://api.aliyun.com/api/AgentRun/2025-09-10/CreateSandbox?lang=PYTHON
"""
import logging
import os
import time
from typing import Dict, Any, List, Optional
from datetime import datetime, timezone
from agentrun.sandbox import TemplateType, CodeLanguage, Template, TemplateInput, Sandbox
from agentrun.utils.config import Config
from agentrun.utils.exception import ServerError
from .base import SandboxProvider, SandboxInstance, ExecutionResult
logger = logging.getLogger(__name__)
class AliyunCodeInterpreterProvider(SandboxProvider):
"""
Aliyun Code Interpreter provider implementation.
This provider uses the official agentrun-sdk to interact with
Aliyun Function Compute's Code Interpreter service.
"""
def __init__(self):
self.access_key_id: Optional[str] = None
self.access_key_secret: Optional[str] = None
self.account_id: Optional[str] = None
self.region: str = "cn-hangzhou"
self.template_name: str = ""
self.timeout: int = 30
self._initialized: bool = False
self._config: Optional[Config] = None
def initialize(self, config: Dict[str, Any]) -> bool:
"""
Initialize the provider with Aliyun credentials.
Args:
config: Configuration dictionary with keys:
- access_key_id: Aliyun AccessKey ID
- access_key_secret: Aliyun AccessKey Secret
- account_id: Aliyun primary account ID
- region: Region (default: "cn-hangzhou")
- template_name: Optional sandbox template name
- timeout: Request timeout in seconds (default: 30, max 30)
Returns:
True if initialization successful, False otherwise
"""
# Get values from config or environment variables
access_key_id = config.get("access_key_id") or os.getenv("AGENTRUN_ACCESS_KEY_ID")
access_key_secret = config.get("access_key_secret") or os.getenv("AGENTRUN_ACCESS_KEY_SECRET")
account_id = config.get("account_id") or os.getenv("AGENTRUN_ACCOUNT_ID")
region = config.get("region") or os.getenv("AGENTRUN_REGION", "cn-hangzhou")
self.access_key_id = access_key_id
self.access_key_secret = access_key_secret
self.account_id = account_id
self.region = region
self.template_name = config.get("template_name", "")
self.timeout = min(config.get("timeout", 30), 30) # Max 30 seconds
logger.info(f"Aliyun Code Interpreter: Initializing with account_id={self.account_id}, region={self.region}")
# Validate required fields
if not self.access_key_id or not self.access_key_secret:
logger.error("Aliyun Code Interpreter: Missing access_key_id or access_key_secret")
return False
if not self.account_id:
logger.error("Aliyun Code Interpreter: Missing account_id (主账号ID)")
return False
# Create SDK configuration
try:
logger.info(f"Aliyun Code Interpreter: Creating Config object with account_id={self.account_id}")
self._config = Config(
access_key_id=self.access_key_id,
access_key_secret=self.access_key_secret,
account_id=self.account_id,
region_id=self.region,
timeout=self.timeout,
)
logger.info("Aliyun Code Interpreter: Config object created successfully")
# Verify connection with health check
if not self.health_check():
logger.error(f"Aliyun Code Interpreter: Health check failed for region {self.region}")
return False
self._initialized = True
logger.info(f"Aliyun Code Interpreter: Initialized successfully for region {self.region}")
return True
except Exception as e:
logger.error(f"Aliyun Code Interpreter: Initialization failed - {str(e)}")
return False
def create_instance(self, template: str = "python") -> SandboxInstance:
"""
Create a new sandbox instance in Aliyun Code Interpreter.
Args:
template: Programming language (python, javascript)
Returns:
SandboxInstance object
Raises:
RuntimeError: If instance creation fails
"""
if not self._initialized or not self._config:
raise RuntimeError("Provider not initialized. Call initialize() first.")
# Normalize language
language = self._normalize_language(template)
try:
# Get or create template
if self.template_name:
# Use existing template
template_name = self.template_name
else:
# Try to get default template, or create one if it doesn't exist
default_template_name = f"ragflow-{language}-default"
try:
# Check if template exists
Template.get_by_name(default_template_name, config=self._config)
template_name = default_template_name
except Exception:
# Create default template if it doesn't exist
template_input = TemplateInput(
template_name=default_template_name,
template_type=TemplateType.CODE_INTERPRETER,
)
Template.create(template_input, config=self._config)
template_name = default_template_name
# Create sandbox directly
sandbox = Sandbox.create(
template_type=TemplateType.CODE_INTERPRETER,
template_name=template_name,
sandbox_idle_timeout_seconds=self.timeout,
config=self._config,
)
instance_id = sandbox.sandbox_id
return SandboxInstance(
instance_id=instance_id,
provider="aliyun_codeinterpreter",
status="READY",
metadata={
"language": language,
"region": self.region,
"account_id": self.account_id,
"template_name": template_name,
"created_at": datetime.now(timezone.utc).isoformat(),
},
)
except ServerError as e:
raise RuntimeError(f"Failed to create sandbox instance: {str(e)}")
except Exception as e:
raise RuntimeError(f"Unexpected error creating instance: {str(e)}")
def execute_code(self, instance_id: str, code: str, language: str, timeout: int = 10, arguments: Optional[Dict[str, Any]] = None) -> ExecutionResult:
"""
Execute code in the Aliyun Code Interpreter instance.
Args:
instance_id: ID of the sandbox instance
code: Source code to execute
language: Programming language (python, javascript)
timeout: Maximum execution time in seconds (max 30)
arguments: Optional arguments dict to pass to main() function
Returns:
ExecutionResult containing stdout, stderr, exit_code, and metadata
Raises:
RuntimeError: If execution fails
TimeoutError: If execution exceeds timeout
"""
if not self._initialized or not self._config:
raise RuntimeError("Provider not initialized. Call initialize() first.")
# Normalize language
normalized_lang = self._normalize_language(language)
# Enforce 30-second hard limit
timeout = min(timeout or self.timeout, 30)
try:
# Connect to existing sandbox instance
sandbox = Sandbox.connect(sandbox_id=instance_id, config=self._config)
# Convert language string to CodeLanguage enum
code_language = CodeLanguage.PYTHON if normalized_lang == "python" else CodeLanguage.JAVASCRIPT
# Wrap code to call main() function
# Matches self_managed provider behavior: call main(**arguments)
if normalized_lang == "python":
# Build arguments string for main() call
if arguments:
import json as json_module
args_json = json_module.dumps(arguments)
wrapped_code = f'''{code}
if __name__ == "__main__":
import json
result = main(**{args_json})
print(json.dumps(result) if isinstance(result, dict) else result)
'''
else:
wrapped_code = f'''{code}
if __name__ == "__main__":
import json
result = main()
print(json.dumps(result) if isinstance(result, dict) else result)
'''
else: # javascript
if arguments:
import json as json_module
args_json = json_module.dumps(arguments)
wrapped_code = f'''{code}
// Call main and output result
const result = main({args_json});
console.log(typeof result === 'object' ? JSON.stringify(result) : String(result));
'''
else:
wrapped_code = f'''{code}
// Call main and output result
const result = main();
console.log(typeof result === 'object' ? JSON.stringify(result) : String(result));
'''
logger.debug(f"Aliyun Code Interpreter: Wrapped code (first 200 chars): {wrapped_code[:200]}")
start_time = time.time()
# Execute code using SDK's simplified execute endpoint
logger.info(f"Aliyun Code Interpreter: Executing code (language={normalized_lang}, timeout={timeout})")
logger.debug(f"Aliyun Code Interpreter: Original code (first 200 chars): {code[:200]}")
result = sandbox.context.execute(
code=wrapped_code,
language=code_language,
timeout=timeout,
)
execution_time = time.time() - start_time
logger.info(f"Aliyun Code Interpreter: Execution completed in {execution_time:.2f}s")
logger.debug(f"Aliyun Code Interpreter: Raw SDK result: {result}")
# Parse execution result
results = result.get("results", []) if isinstance(result, dict) else []
logger.info(f"Aliyun Code Interpreter: Parsed {len(results)} result items")
# Extract stdout and stderr from results
stdout_parts = []
stderr_parts = []
exit_code = 0
execution_status = "ok"
for item in results:
result_type = item.get("type", "")
text = item.get("text", "")
if result_type == "stdout":
stdout_parts.append(text)
elif result_type == "stderr":
stderr_parts.append(text)
exit_code = 1 # Error occurred
elif result_type == "endOfExecution":
execution_status = item.get("status", "ok")
if execution_status != "ok":
exit_code = 1
elif result_type == "error":
stderr_parts.append(text)
exit_code = 1
stdout = "\n".join(stdout_parts)
stderr = "\n".join(stderr_parts)
logger.info(f"Aliyun Code Interpreter: stdout length={len(stdout)}, stderr length={len(stderr)}, exit_code={exit_code}")
if stdout:
logger.debug(f"Aliyun Code Interpreter: stdout (first 200 chars): {stdout[:200]}")
if stderr:
logger.debug(f"Aliyun Code Interpreter: stderr (first 200 chars): {stderr[:200]}")
return ExecutionResult(
stdout=stdout,
stderr=stderr,
exit_code=exit_code,
execution_time=execution_time,
metadata={
"instance_id": instance_id,
"language": normalized_lang,
"context_id": result.get("contextId") if isinstance(result, dict) else None,
"timeout": timeout,
},
)
except ServerError as e:
if "timeout" in str(e).lower():
raise TimeoutError(f"Execution timed out after {timeout} seconds")
raise RuntimeError(f"Failed to execute code: {str(e)}")
except Exception as e:
raise RuntimeError(f"Unexpected error during execution: {str(e)}")
def destroy_instance(self, instance_id: str) -> bool:
"""
Destroy an Aliyun Code Interpreter instance.
Args:
instance_id: ID of the instance to destroy
Returns:
True if destruction successful, False otherwise
"""
if not self._initialized or not self._config:
raise RuntimeError("Provider not initialized. Call initialize() first.")
try:
# Delete sandbox by ID directly
Sandbox.delete_by_id(sandbox_id=instance_id)
logger.info(f"Successfully destroyed sandbox instance {instance_id}")
return True
except ServerError as e:
logger.error(f"Failed to destroy instance {instance_id}: {str(e)}")
return False
except Exception as e:
logger.error(f"Unexpected error destroying instance {instance_id}: {str(e)}")
return False
def health_check(self) -> bool:
"""
Check if the Aliyun Code Interpreter service is accessible.
Returns:
True if provider is healthy, False otherwise
"""
if not self._initialized and not (self.access_key_id and self.account_id):
return False
try:
# Try to list templates to verify connection
templates = Template.list(config=self._config)
return templates is not None
except Exception as e:
logger.warning(f"Aliyun Code Interpreter health check failed: {str(e)}")
# If we get any response (even an error), the service is reachable
return "connection" not in str(e).lower()
def get_supported_languages(self) -> List[str]:
"""
Get list of supported programming languages.
Returns:
List of language identifiers
"""
return ["python", "javascript"]
@staticmethod
def get_config_schema() -> Dict[str, Dict]:
"""
Return configuration schema for Aliyun Code Interpreter provider.
Returns:
Dictionary mapping field names to their schema definitions
"""
return {
"access_key_id": {
"type": "string",
"required": True,
"label": "Access Key ID",
"placeholder": "LTAI5t...",
"description": "Aliyun AccessKey ID for authentication",
"secret": False,
},
"access_key_secret": {
"type": "string",
"required": True,
"label": "Access Key Secret",
"placeholder": "••••••••••••••••",
"description": "Aliyun AccessKey Secret for authentication",
"secret": True,
},
"account_id": {
"type": "string",
"required": True,
"label": "Account ID",
"placeholder": "1234567890...",
"description": "Aliyun primary account ID (主账号ID), required for API calls",
},
"region": {
"type": "string",
"required": False,
"label": "Region",
"default": "cn-hangzhou",
"description": "Aliyun region for Code Interpreter service",
"options": ["cn-hangzhou", "cn-beijing", "cn-shanghai", "cn-shenzhen", "cn-guangzhou"],
},
"template_name": {
"type": "string",
"required": False,
"label": "Template Name",
"placeholder": "my-interpreter",
"description": "Optional sandbox template name for pre-configured environments",
},
"timeout": {
"type": "integer",
"required": False,
"label": "Execution Timeout (seconds)",
"default": 30,
"min": 1,
"max": 30,
"description": "Code execution timeout (max 30 seconds - hard limit)",
},
}
def validate_config(self, config: Dict[str, Any]) -> tuple[bool, Optional[str]]:
"""
Validate Aliyun-specific configuration.
Args:
config: Configuration dictionary to validate
Returns:
Tuple of (is_valid, error_message)
"""
# Validate access key format
access_key_id = config.get("access_key_id", "")
if access_key_id and not access_key_id.startswith("LTAI"):
return False, "Invalid AccessKey ID format (should start with 'LTAI')"
# Validate account ID
account_id = config.get("account_id", "")
if not account_id:
return False, "Account ID is required"
# Validate region
valid_regions = ["cn-hangzhou", "cn-beijing", "cn-shanghai", "cn-shenzhen", "cn-guangzhou"]
region = config.get("region", "cn-hangzhou")
if region and region not in valid_regions:
return False, f"Invalid region. Must be one of: {', '.join(valid_regions)}"
# Validate timeout range (max 30 seconds)
timeout = config.get("timeout", 30)
if isinstance(timeout, int) and (timeout < 1 or timeout > 30):
return False, "Timeout must be between 1 and 30 seconds"
return True, None
def _normalize_language(self, language: str) -> str:
"""
Normalize language identifier to Aliyun format.
Args:
language: Language identifier (python, python3, javascript, nodejs)
Returns:
Normalized language identifier
"""
if not language:
return "python"
lang_lower = language.lower()
if lang_lower in ("python", "python3"):
return "python"
elif lang_lower in ("javascript", "nodejs"):
return "javascript"
else:
return language
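
For orientation, here is a minimal sketch of driving this provider end to end (the credential values are placeholders, error handling is omitted, and the import path follows the unit tests later in this changeset):

```python
from agent.sandbox.providers.aliyun_codeinterpreter import AliyunCodeInterpreterProvider

provider = AliyunCodeInterpreterProvider()
# initialize() returns False rather than raising when credentials are missing or invalid
if provider.initialize({
    "access_key_id": "LTAI5t...",        # placeholder AccessKey ID
    "access_key_secret": "...",          # placeholder AccessKey Secret
    "account_id": "1234567890123456",    # placeholder primary account ID
    "region": "cn-hangzhou",
}):
    instance = provider.create_instance("python")
    # execute_code() wraps the snippet so that main() is called, as shown above
    result = provider.execute_code(
        instance_id=instance.instance_id,
        code="def main():\n    return {'ok': True}",
        language="python",
        timeout=10,
    )
    print(result.stdout, result.exit_code)
    provider.destroy_instance(instance.instance_id)
```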

View File

@ -1,212 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Base interface for sandbox providers.
Each sandbox provider (self-managed, SaaS) implements this interface
to provide code execution capabilities.
"""
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, Any, Optional, List
@dataclass
class SandboxInstance:
"""Represents a sandbox execution instance"""
instance_id: str
provider: str
status: str # running, stopped, error
metadata: Dict[str, Any]
def __post_init__(self):
if self.metadata is None:
self.metadata = {}
@dataclass
class ExecutionResult:
"""Result of code execution in a sandbox"""
stdout: str
stderr: str
exit_code: int
execution_time: float # in seconds
metadata: Dict[str, Any]
def __post_init__(self):
if self.metadata is None:
self.metadata = {}
class SandboxProvider(ABC):
"""
Base interface for all sandbox providers.
Each provider implementation (self-managed, Aliyun OpenSandbox, E2B, etc.)
must implement these methods to provide code execution capabilities.
"""
@abstractmethod
def initialize(self, config: Dict[str, Any]) -> bool:
"""
Initialize the provider with configuration.
Args:
config: Provider-specific configuration dictionary
Returns:
True if initialization successful, False otherwise
"""
pass
@abstractmethod
def create_instance(self, template: str = "python") -> SandboxInstance:
"""
Create a new sandbox instance.
Args:
template: Programming language/template for the instance
(e.g., "python", "nodejs", "bash")
Returns:
SandboxInstance object representing the created instance
Raises:
RuntimeError: If instance creation fails
"""
pass
@abstractmethod
def execute_code(
self,
instance_id: str,
code: str,
language: str,
timeout: int = 10,
arguments: Optional[Dict[str, Any]] = None
) -> ExecutionResult:
"""
Execute code in a sandbox instance.
Args:
instance_id: ID of the sandbox instance
code: Source code to execute
language: Programming language (python, javascript, etc.)
timeout: Maximum execution time in seconds
arguments: Optional arguments dict to pass to main() function
Returns:
ExecutionResult containing stdout, stderr, exit_code, and metadata
Raises:
RuntimeError: If execution fails
TimeoutError: If execution exceeds timeout
"""
pass
@abstractmethod
def destroy_instance(self, instance_id: str) -> bool:
"""
Destroy a sandbox instance.
Args:
instance_id: ID of the instance to destroy
Returns:
True if destruction successful, False otherwise
Raises:
RuntimeError: If destruction fails
"""
pass
@abstractmethod
def health_check(self) -> bool:
"""
Check if the provider is healthy and accessible.
Returns:
True if provider is healthy, False otherwise
"""
pass
@abstractmethod
def get_supported_languages(self) -> List[str]:
"""
Get list of supported programming languages.
Returns:
List of language identifiers (e.g., ["python", "javascript", "go"])
"""
pass
@staticmethod
def get_config_schema() -> Dict[str, Dict]:
"""
Return configuration schema for this provider.
The schema defines what configuration fields are required/optional,
their types, validation rules, and UI labels.
Returns:
Dictionary mapping field names to their schema definitions.
Example:
{
"endpoint": {
"type": "string",
"required": True,
"label": "API Endpoint",
"placeholder": "http://localhost:9385"
},
"timeout": {
"type": "integer",
"default": 30,
"label": "Timeout (seconds)",
"min": 5,
"max": 300
}
}
"""
return {}
def validate_config(self, config: Dict[str, Any]) -> tuple[bool, Optional[str]]:
"""
Validate provider-specific configuration.
This method allows providers to implement custom validation logic beyond
the basic schema validation. Override this method to add provider-specific
checks like URL format validation, API key format validation, etc.
Args:
config: Configuration dictionary to validate
Returns:
Tuple of (is_valid, error_message):
- is_valid: True if configuration is valid, False otherwise
- error_message: Error message if invalid, None if valid
Example:
            >>> def validate_config(self, config):
            ...     endpoint = config.get("endpoint", "")
            ...     if not endpoint.startswith(("http://", "https://")):
            ...         return False, "Endpoint must start with http:// or https://"
            ...     return True, None
"""
# Default implementation: no custom validation
return True, None
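
To make the contract concrete, here is a minimal illustrative provider written against this interface (an echo backend invented purely for demonstration, not part of the codebase):

```python
from typing import Any, Dict, List, Optional

from .base import SandboxProvider, SandboxInstance, ExecutionResult


class EchoProvider(SandboxProvider):
    """Toy provider that 'executes' code by echoing it back."""

    def initialize(self, config: Dict[str, Any]) -> bool:
        return True

    def create_instance(self, template: str = "python") -> SandboxInstance:
        return SandboxInstance(instance_id="echo-1", provider="echo", status="running", metadata={})

    def execute_code(self, instance_id: str, code: str, language: str,
                     timeout: int = 10, arguments: Optional[Dict[str, Any]] = None) -> ExecutionResult:
        # No real execution: the "result" is simply the submitted code
        return ExecutionResult(stdout=code, stderr="", exit_code=0, execution_time=0.0, metadata={})

    def destroy_instance(self, instance_id: str) -> bool:
        return True

    def health_check(self) -> bool:
        return True

    def get_supported_languages(self) -> List[str]:
        return ["python"]
```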

View File

@ -1,233 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
E2B provider implementation.
This provider integrates with E2B Cloud for cloud-based code execution
using Firecracker microVMs.
"""
import uuid
from typing import Dict, Any, List, Optional
from .base import SandboxProvider, SandboxInstance, ExecutionResult
class E2BProvider(SandboxProvider):
"""
E2B provider implementation.
This provider uses E2B Cloud service for secure code execution
in Firecracker microVMs.
"""
def __init__(self):
self.api_key: str = ""
self.region: str = "us"
self.timeout: int = 30
self._initialized: bool = False
def initialize(self, config: Dict[str, Any]) -> bool:
"""
Initialize the provider with E2B credentials.
Args:
config: Configuration dictionary with keys:
- api_key: E2B API key
- region: Region (us, eu) (default: "us")
- timeout: Request timeout in seconds (default: 30)
Returns:
True if initialization successful, False otherwise
"""
self.api_key = config.get("api_key", "")
self.region = config.get("region", "us")
self.timeout = config.get("timeout", 30)
# Validate required fields
if not self.api_key:
return False
# TODO: Implement actual E2B API client initialization
# For now, we'll mark as initialized but actual API calls will fail
self._initialized = True
return True
def create_instance(self, template: str = "python") -> SandboxInstance:
"""
Create a new sandbox instance in E2B.
Args:
template: Programming language template (python, nodejs, go, bash)
Returns:
SandboxInstance object
Raises:
RuntimeError: If instance creation fails
"""
if not self._initialized:
raise RuntimeError("Provider not initialized. Call initialize() first.")
# Normalize language
language = self._normalize_language(template)
# TODO: Implement actual E2B API call
# POST /sandbox with template
instance_id = str(uuid.uuid4())
return SandboxInstance(
instance_id=instance_id,
provider="e2b",
status="running",
metadata={
"language": language,
"region": self.region,
}
)
def execute_code(
self,
instance_id: str,
code: str,
language: str,
        timeout: int = 10,
        arguments: Optional[Dict[str, Any]] = None
) -> ExecutionResult:
"""
Execute code in the E2B instance.
Args:
instance_id: ID of the sandbox instance
code: Source code to execute
language: Programming language (python, nodejs, go, bash)
            timeout: Maximum execution time in seconds
            arguments: Optional arguments dict to pass to main() (unused in this stub)
Returns:
ExecutionResult containing stdout, stderr, exit_code, and metadata
Raises:
RuntimeError: If execution fails
TimeoutError: If execution exceeds timeout
"""
if not self._initialized:
raise RuntimeError("Provider not initialized. Call initialize() first.")
# TODO: Implement actual E2B API call
# POST /sandbox/{sandboxID}/execute
raise RuntimeError(
"E2B provider is not yet fully implemented. "
"Please use the self-managed provider or implement the E2B API integration. "
"See https://github.com/e2b-dev/e2b for API documentation."
)
def destroy_instance(self, instance_id: str) -> bool:
"""
Destroy an E2B instance.
Args:
instance_id: ID of the instance to destroy
Returns:
True if destruction successful, False otherwise
"""
if not self._initialized:
raise RuntimeError("Provider not initialized. Call initialize() first.")
# TODO: Implement actual E2B API call
# DELETE /sandbox/{sandboxID}
return True
def health_check(self) -> bool:
"""
Check if the E2B service is accessible.
Returns:
True if provider is healthy, False otherwise
"""
if not self._initialized:
return False
# TODO: Implement actual E2B health check API call
# GET /healthz or similar
# For now, return True if initialized with API key
return bool(self.api_key)
def get_supported_languages(self) -> List[str]:
"""
Get list of supported programming languages.
Returns:
List of language identifiers
"""
return ["python", "nodejs", "javascript", "go", "bash"]
@staticmethod
def get_config_schema() -> Dict[str, Dict]:
"""
Return configuration schema for E2B provider.
Returns:
Dictionary mapping field names to their schema definitions
"""
return {
"api_key": {
"type": "string",
"required": True,
"label": "API Key",
"placeholder": "e2b_sk_...",
"description": "E2B API key for authentication",
"secret": True,
},
"region": {
"type": "string",
"required": False,
"label": "Region",
"default": "us",
"description": "E2B service region (us or eu)",
},
"timeout": {
"type": "integer",
"required": False,
"label": "Request Timeout (seconds)",
"default": 30,
"min": 5,
"max": 300,
"description": "API request timeout for code execution",
}
}
def _normalize_language(self, language: str) -> str:
"""
Normalize language identifier to E2B template format.
Args:
language: Language identifier
Returns:
Normalized language identifier
"""
if not language:
return "python"
lang_lower = language.lower()
if lang_lower in ("python", "python3"):
return "python"
elif lang_lower in ("javascript", "nodejs"):
return "nodejs"
else:
return language

View File

@ -1,78 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Provider manager for sandbox providers.
Since sandbox configuration is global (system-level), we only use one
active provider at a time. This manager is a thin wrapper that holds a reference
to the currently active provider.
"""
from typing import Optional
from .base import SandboxProvider
class ProviderManager:
"""
Manages the currently active sandbox provider.
With global configuration, there's only one active provider at a time.
This manager simply holds a reference to that provider.
"""
def __init__(self):
"""Initialize an empty provider manager."""
self.current_provider: Optional[SandboxProvider] = None
self.current_provider_name: Optional[str] = None
def set_provider(self, name: str, provider: SandboxProvider):
"""
Set the active provider.
Args:
name: Provider identifier (e.g., "self_managed", "e2b")
provider: Provider instance
"""
self.current_provider = provider
self.current_provider_name = name
def get_provider(self) -> Optional[SandboxProvider]:
"""
Get the active provider.
Returns:
Currently active SandboxProvider instance, or None if not set
"""
return self.current_provider
def get_provider_name(self) -> Optional[str]:
"""
Get the active provider name.
Returns:
Provider name (e.g., "self_managed"), or None if not set
"""
return self.current_provider_name
def is_configured(self) -> bool:
"""
Check if a provider is configured.
Returns:
True if a provider is set, False otherwise
"""
return self.current_provider is not None
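
A short usage sketch (the import paths are assumptions based on the package layout; any initialized provider works):

```python
from agent.sandbox.providers.provider_manager import ProviderManager  # assumed module path
from agent.sandbox.providers.self_managed import SelfManagedProvider  # assumed module path

manager = ProviderManager()
provider = SelfManagedProvider()
if provider.initialize({"endpoint": "http://localhost:9385"}):
    manager.set_provider("self_managed", provider)

if manager.is_configured():
    print(f"Active provider: {manager.get_provider_name()}")
    active = manager.get_provider()
```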

View File

@ -1,359 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Self-managed sandbox provider implementation.
This provider wraps the existing executor_manager HTTP API which manages
a pool of Docker containers with gVisor for secure code execution.
"""
import base64
import time
import uuid
from typing import Dict, Any, List, Optional
import requests
from .base import SandboxProvider, SandboxInstance, ExecutionResult
class SelfManagedProvider(SandboxProvider):
"""
    Self-managed sandbox provider backed by the executor_manager's Docker/gVisor container pool.
This provider communicates with the executor_manager HTTP API
which manages a pool of containers for code execution.
"""
def __init__(self):
self.endpoint: str = "http://localhost:9385"
self.timeout: int = 30
self.max_retries: int = 3
self.pool_size: int = 10
self._initialized: bool = False
def initialize(self, config: Dict[str, Any]) -> bool:
"""
Initialize the provider with configuration.
Args:
config: Configuration dictionary with keys:
- endpoint: HTTP endpoint (default: "http://localhost:9385")
- timeout: Request timeout in seconds (default: 30)
- max_retries: Maximum retry attempts (default: 3)
- pool_size: Container pool size for info (default: 10)
Returns:
True if initialization successful, False otherwise
"""
self.endpoint = config.get("endpoint", "http://localhost:9385")
self.timeout = config.get("timeout", 30)
self.max_retries = config.get("max_retries", 3)
self.pool_size = config.get("pool_size", 10)
# Validate endpoint is accessible
if not self.health_check():
# Try to fall back to SANDBOX_HOST from settings if we are using localhost
if "localhost" in self.endpoint or "127.0.0.1" in self.endpoint:
try:
from api import settings
if settings.SANDBOX_HOST and settings.SANDBOX_HOST not in self.endpoint:
original_endpoint = self.endpoint
self.endpoint = f"http://{settings.SANDBOX_HOST}:9385"
if self.health_check():
import logging
logging.warning(f"Sandbox self_managed: Connected using settings.SANDBOX_HOST fallback: {self.endpoint} (original: {original_endpoint})")
self._initialized = True
return True
else:
self.endpoint = original_endpoint # Restore if fallback also fails
except ImportError:
pass
return False
self._initialized = True
return True
def create_instance(self, template: str = "python") -> SandboxInstance:
"""
Create a new sandbox instance.
Note: For self-managed provider, instances are managed internally
by the executor_manager's container pool. This method returns
a logical instance handle.
Args:
template: Programming language (python, nodejs)
Returns:
SandboxInstance object
Raises:
RuntimeError: If instance creation fails
"""
if not self._initialized:
raise RuntimeError("Provider not initialized. Call initialize() first.")
# Normalize language
language = self._normalize_language(template)
# The executor_manager manages instances internally via container pool
# We create a logical instance ID for tracking
instance_id = str(uuid.uuid4())
return SandboxInstance(
instance_id=instance_id,
provider="self_managed",
status="running",
metadata={
"language": language,
"endpoint": self.endpoint,
"pool_size": self.pool_size,
}
)
def execute_code(
self,
instance_id: str,
code: str,
language: str,
timeout: int = 10,
arguments: Optional[Dict[str, Any]] = None
) -> ExecutionResult:
"""
Execute code in the sandbox.
Args:
instance_id: ID of the sandbox instance (not used for self-managed)
code: Source code to execute
language: Programming language (python, nodejs, javascript)
timeout: Maximum execution time in seconds
arguments: Optional arguments dict to pass to main() function
Returns:
ExecutionResult containing stdout, stderr, exit_code, and metadata
Raises:
RuntimeError: If execution fails
TimeoutError: If execution exceeds timeout
"""
if not self._initialized:
raise RuntimeError("Provider not initialized. Call initialize() first.")
# Normalize language
normalized_lang = self._normalize_language(language)
# Prepare request
code_b64 = base64.b64encode(code.encode("utf-8")).decode("utf-8")
payload = {
"code_b64": code_b64,
"language": normalized_lang,
"arguments": arguments or {}
}
url = f"{self.endpoint}/run"
exec_timeout = timeout or self.timeout
start_time = time.time()
try:
response = requests.post(
url,
json=payload,
timeout=exec_timeout,
headers={"Content-Type": "application/json"}
)
execution_time = time.time() - start_time
if response.status_code != 200:
raise RuntimeError(
f"HTTP {response.status_code}: {response.text}"
)
result = response.json()
return ExecutionResult(
stdout=result.get("stdout", ""),
stderr=result.get("stderr", ""),
exit_code=result.get("exit_code", 0),
execution_time=execution_time,
metadata={
"status": result.get("status"),
"time_used_ms": result.get("time_used_ms"),
"memory_used_kb": result.get("memory_used_kb"),
"detail": result.get("detail"),
"instance_id": instance_id,
}
)
except requests.Timeout:
execution_time = time.time() - start_time
raise TimeoutError(
f"Execution timed out after {exec_timeout} seconds"
)
except requests.RequestException as e:
raise RuntimeError(f"HTTP request failed: {str(e)}")
def destroy_instance(self, instance_id: str) -> bool:
"""
Destroy a sandbox instance.
Note: For self-managed provider, instances are returned to the
internal pool automatically by executor_manager after execution.
This is a no-op for tracking purposes.
Args:
instance_id: ID of the instance to destroy
Returns:
True (always succeeds for self-managed)
"""
# The executor_manager manages container lifecycle internally
# Container is returned to pool after execution
return True
def health_check(self) -> bool:
"""
Check if the provider is healthy and accessible.
Returns:
True if provider is healthy, False otherwise
"""
try:
url = f"{self.endpoint}/healthz"
response = requests.get(url, timeout=5)
return response.status_code == 200
except Exception:
return False
def get_supported_languages(self) -> List[str]:
"""
Get list of supported programming languages.
Returns:
List of language identifiers
"""
return ["python", "nodejs", "javascript"]
@staticmethod
def get_config_schema() -> Dict[str, Dict]:
"""
Return configuration schema for self-managed provider.
Returns:
Dictionary mapping field names to their schema definitions
"""
return {
"endpoint": {
"type": "string",
"required": True,
"label": "Executor Manager Endpoint",
"placeholder": "http://localhost:9385",
"default": "http://localhost:9385",
"description": "HTTP endpoint of the executor_manager service"
},
"timeout": {
"type": "integer",
"required": False,
"label": "Request Timeout (seconds)",
"default": 30,
"min": 5,
"max": 300,
"description": "HTTP request timeout for code execution"
},
"max_retries": {
"type": "integer",
"required": False,
"label": "Max Retries",
"default": 3,
"min": 0,
"max": 10,
"description": "Maximum number of retry attempts for failed requests"
},
"pool_size": {
"type": "integer",
"required": False,
"label": "Container Pool Size",
"default": 10,
"min": 1,
"max": 100,
"description": "Size of the container pool (configured in executor_manager)"
}
}
def _normalize_language(self, language: str) -> str:
"""
Normalize language identifier to executor_manager format.
Args:
language: Language identifier (python, python3, nodejs, javascript)
Returns:
Normalized language identifier
"""
if not language:
return "python"
lang_lower = language.lower()
if lang_lower in ("python", "python3"):
return "python"
elif lang_lower in ("javascript", "nodejs"):
return "nodejs"
else:
return language
def validate_config(self, config: dict) -> tuple[bool, Optional[str]]:
"""
Validate self-managed provider configuration.
Performs custom validation beyond the basic schema validation,
such as checking URL format.
Args:
config: Configuration dictionary to validate
Returns:
Tuple of (is_valid, error_message)
"""
# Validate endpoint URL format
endpoint = config.get("endpoint", "")
if endpoint:
# Check if it's a valid HTTP/HTTPS URL or localhost
import re
            url_pattern = r'^https?://'
if not re.match(url_pattern, endpoint):
return False, f"Invalid endpoint format: {endpoint}. Must start with http:// or https://"
# Validate pool_size is positive
pool_size = config.get("pool_size", 10)
if isinstance(pool_size, int) and pool_size <= 0:
return False, "Pool size must be greater than 0"
# Validate timeout is reasonable
timeout = config.get("timeout", 30)
if isinstance(timeout, int) and (timeout < 1 or timeout > 600):
return False, "Timeout must be between 1 and 600 seconds"
# Validate max_retries
max_retries = config.get("max_retries", 3)
if isinstance(max_retries, int) and (max_retries < 0 or max_retries > 10):
return False, "Max retries must be between 0 and 10"
return True, None
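
A minimal usage sketch for this provider (module path assumed from the package layout; it requires a running executor_manager at the configured endpoint):

```python
from agent.sandbox.providers.self_managed import SelfManagedProvider  # assumed module path

provider = SelfManagedProvider()
# initialize() performs a health check against /healthz before succeeding
if provider.initialize({"endpoint": "http://localhost:9385", "timeout": 30}):
    result = provider.execute_code(
        instance_id="ignored-for-self-managed",  # instances are pooled internally
        code="def main():\n    return {'ok': True}",
        language="python",
        timeout=10,
    )
    print(result.stdout, result.metadata.get("time_used_ms"))
```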

View File

@ -1,261 +0,0 @@
# Aliyun Code Interpreter Provider - Using the Official SDK
## Important Changes
### Official Resources
- **Code Interpreter API**: https://help.aliyun.com/zh/functioncompute/fc/sandbox-sandbox-code-interepreter
- **Official SDK**: https://github.com/Serverless-Devs/agentrun-sdk-python
- **SDK Documentation**: https://docs.agent.run
## Benefits of Using the Official SDK
Migrating from hand-rolled HTTP requests to the official SDK (`agentrun-sdk`) brings the following benefits:
### 1. **Automatic Request Signing**
- The SDK signs Aliyun API requests automatically (no need to hand-build the `Authorization` header)
- Supports multiple credential types: AccessKey and STS Token
- Reads credentials from environment variables automatically (see the sketch below)
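
As a sketch of what that looks like here (mirroring the `initialize()` code in this changeset; the provider reads the `AGENTRUN_*` variables):

```python
import os

from agentrun.utils.config import Config

# Credentials may come from the environment instead of explicit config keys
config = Config(
    access_key_id=os.getenv("AGENTRUN_ACCESS_KEY_ID"),
    access_key_secret=os.getenv("AGENTRUN_ACCESS_KEY_SECRET"),
    account_id=os.getenv("AGENTRUN_ACCOUNT_ID"),
    region_id=os.getenv("AGENTRUN_REGION", "cn-hangzhou"),
    timeout=30,
)
```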
### 2. **A Simpler API**
```python
# Old implementation (hand-rolled HTTP request)
response = requests.post(
    f"{DATA_ENDPOINT}/sandboxes/{sandbox_id}/execute",
    headers={"X-Acs-Parent-Id": account_id},
    json={"code": code, "language": "python"}
)
# New implementation (using the SDK)
sandbox = CodeInterpreterSandbox(template_name="python-sandbox", config=config)
result = sandbox.context.execute(code="print('hello')")
```
### 3. **Better Error Handling**
- Structured exception types (`ServerError`)
- Built-in retry mechanism
- Detailed error messages (see the sketch below)
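
A sketch of the structured error handling (mirroring the provider code in this changeset; connecting to an existing sandbox via `Sandbox.connect` is the pattern the provider itself uses):

```python
from agentrun.sandbox import CodeLanguage, Sandbox
from agentrun.utils.exception import ServerError

sandbox = Sandbox.connect(sandbox_id="sb-...", config=config)  # placeholder sandbox ID
try:
    result = sandbox.context.execute(code="print('hello')", language=CodeLanguage.PYTHON, timeout=30)
except ServerError as e:
    # The provider maps timeout-flavoured server errors to TimeoutError
    if "timeout" in str(e).lower():
        raise TimeoutError(f"Execution timed out: {e}") from e
    raise RuntimeError(f"Execution failed: {e}") from e
```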
## Main Changes
### 1. File Renames
| Old file name | New file name | Notes |
|---------|---------|------|
| `aliyun_opensandbox.py` | `aliyun_codeinterpreter.py` | Provider implementation |
| `test_aliyun_provider.py` | `test_aliyun_codeinterpreter.py` | Unit tests |
| `test_aliyun_integration.py` | `test_aliyun_codeinterpreter_integration.py` | Integration tests |
### 2. Configuration Field Changes
#### Old configuration (OpenSandbox):
```json
{
  "access_key_id": "LTAI5t...",
  "access_key_secret": "...",
  "region": "cn-hangzhou",
  "workspace_id": "ws-xxxxx"
}
```
#### New configuration (Code Interpreter):
```json
{
  "access_key_id": "LTAI5t...",
  "access_key_secret": "...",
  "account_id": "1234567890...", // New: Aliyun primary account ID (required)
  "region": "cn-hangzhou",
  "template_name": "python-sandbox", // New: sandbox template name
  "timeout": 30 // 30 seconds max (hard limit)
}
```
### 3. Key Differences
| Feature | OpenSandbox | Code Interpreter |
|------|-------------|-----------------|
| **API endpoint** | `opensandbox.{region}.aliyuncs.com` | `agentrun.{region}.aliyuncs.com` (control plane) |
| **API version** | `2024-01-01` | `2025-09-10` |
| **Authentication** | AccessKey | AccessKey + primary account ID |
| **Request headers** | Standard signature | Requires the `X-Acs-Parent-Id` header |
| **Timeout limit** | Configurable | **30 seconds max** (hard limit) |
| **Contexts** | Not supported | Supported (Jupyter kernel) |
### 4. API Call Changes
#### Old implementation (hypothetical OpenSandbox):
```python
# Single endpoint
API_ENDPOINT = "https://opensandbox.cn-hangzhou.aliyuncs.com"
# Simple request/response
response = requests.post(
    f"{API_ENDPOINT}/execute",
    json={"code": "print('hello')", "language": "python"}
)
```
#### New implementation (Code Interpreter):
```python
# Control-plane API - manages the sandbox lifecycle
CONTROL_ENDPOINT = "https://agentrun.cn-hangzhou.aliyuncs.com/2025-09-10"
# Data-plane API - executes code
DATA_ENDPOINT = "https://{account_id}.agentrun-data.cn-hangzhou.aliyuncs.com"
# Create a sandbox (control plane)
response = requests.post(
    f"{CONTROL_ENDPOINT}/sandboxes",
    headers={"X-Acs-Parent-Id": account_id},
    json={"templateName": "python-sandbox"}
)
# Execute code (data plane)
response = requests.post(
    f"{DATA_ENDPOINT}/sandboxes/{sandbox_id}/execute",
    headers={"X-Acs-Parent-Id": account_id},
    json={"code": "print('hello')", "language": "python", "timeout": 30}
)
```
### 5. Migration Steps
#### Step 1: Update the configuration
If you were previously using `aliyun_opensandbox`:
**Old configuration**:
```json
{
  "name": "sandbox.provider_type",
  "value": "aliyun_opensandbox"
}
```
**New configuration**:
```json
{
  "name": "sandbox.provider_type",
  "value": "aliyun_codeinterpreter"
}
```
#### Step 2: Add the required account_id
Get the primary account ID from the avatar menu in the top-right corner of the Aliyun console:
1. Log in to the [Aliyun console](https://ram.console.aliyun.com/manage/ak)
2. Click the avatar in the top-right corner
3. Copy the primary account ID (a 16-digit number)
#### Step 3: Update environment variables
```bash
# New required environment variable (the provider reads the AGENTRUN_* names)
export AGENTRUN_ACCOUNT_ID="1234567890123456"
# The other environment variables follow the same pattern
export AGENTRUN_ACCESS_KEY_ID="LTAI5t..."
export AGENTRUN_ACCESS_KEY_SECRET="..."
export AGENTRUN_REGION="cn-hangzhou"
```
#### Step 4: Run the tests
```bash
# Unit tests (no real credentials needed)
pytest agent/sandbox/tests/test_aliyun_codeinterpreter.py -v
# Integration tests (real credentials required)
pytest agent/sandbox/tests/test_aliyun_codeinterpreter_integration.py -v -m integration
```
## File Change Checklist
### ✅ Done
- [x] Created `aliyun_codeinterpreter.py` - new provider implementation
- [x] Updated `sandbox_spec.md` - specification document
- [x] Updated `admin/services.py` - service manager
- [x] Updated `providers/__init__.py` - package exports
- [x] Created `test_aliyun_codeinterpreter.py` - unit tests
- [x] Created `test_aliyun_codeinterpreter_integration.py` - integration tests
### 📝 Optional Cleanup
If you want to remove the old OpenSandbox implementation:
```bash
# Remove the old files (optional)
rm agent/sandbox/providers/aliyun_opensandbox.py
rm agent/sandbox/tests/test_aliyun_provider.py
rm agent/sandbox/tests/test_aliyun_integration.py
```
**Note**: Keeping the old files does not affect the new functionality; they are simply redundant code.
## API Reference
### Control-Plane API (Sandbox Management)
| Endpoint | Method | Description |
|------|------|------|
| `/sandboxes` | POST | Create a sandbox instance |
| `/sandboxes/{id}/stop` | POST | Stop an instance |
| `/sandboxes/{id}` | DELETE | Delete an instance |
| `/templates` | GET | List templates |
### Data-Plane API (Code Execution)
| Endpoint | Method | Description |
|------|------|------|
| `/sandboxes/{id}/execute` | POST | Execute code (simplified) |
| `/sandboxes/{id}/contexts` | POST | Create a context |
| `/sandboxes/{id}/contexts/{ctx_id}/execute` | POST | Execute within a context |
| `/sandboxes/{id}/health` | GET | Health check |
| `/sandboxes/{id}/files` | GET/POST | Read/write files |
| `/sandboxes/{id}/processes/cmd` | POST | Run shell commands |
## FAQ
### Q: Why is account_id required?
**A**: The Code Interpreter API authenticates with the `X-Acs-Parent-Id` request header, which carries the Aliyun primary account ID. It is a mandatory parameter of the Aliyun Code Interpreter API.
### Q: Can the 30-second timeout be bypassed?
**A**: No. It is a **hard limit** of Aliyun Code Interpreter and cannot be bypassed through configuration or request parameters. If your code needs more than 30 seconds, consider:
1. Optimizing the code
2. Processing data in batches
3. Using contexts to carry state across executions
### Q: Does the old OpenSandbox configuration still work?
**A**: No. OpenSandbox and Code Interpreter are different services with incompatible APIs. You must migrate to the new configuration format.
### Q: How do I find the Aliyun primary account ID?
**A**:
1. Log in to the Aliyun console
2. Click the avatar in the top-right corner
3. The "primary account ID" is shown in the pop-up panel
### Q: Does the migration affect existing functionality?
**A**:
- **Self-managed provider (self_managed)**: unaffected
- **E2B provider**: unaffected
- **Aliyun provider**: needs a configuration update and re-testing
## Related Documentation
- [Official documentation](https://help.aliyun.com/zh/functioncompute/fc/sandbox-sandbox-code-interepreter)
- [Sandbox specification](../docs/develop/sandbox_spec.md)
- [Testing guide](./README.md)
- [Quick start](./QUICKSTART.md)
## Support
If you run into problems:
1. Check the official documentation
2. Verify your configuration
3. Inspect the error messages in the test output
4. Contact the RAGFlow team

View File

@ -1,178 +0,0 @@
# Aliyun OpenSandbox Provider - Quick Testing Guide
## Test Overview
### 1. Unit Tests (No Real Credentials Required)
The unit tests use mocks and do **not** require real Aliyun credentials; they can be run at any time.
```bash
# Run the Aliyun provider unit tests
pytest agent/sandbox/tests/test_aliyun_provider.py -v
# Expected output:
# test_aliyun_provider.py::TestAliyunOpenSandboxProvider::test_provider_initialization PASSED
# test_aliyun_provider.py::TestAliyunOpenSandboxProvider::test_initialize_success PASSED
# ...
# ========================= 48 passed in 2.34s ==========================
```
### 2. Integration Tests (Real Credentials Required)
The integration tests call the real Aliyun API and require configured credentials.
#### Step 1: Configure environment variables
```bash
export ALIYUN_ACCESS_KEY_ID="LTAI5t..." # Replace with a real Access Key ID
export ALIYUN_ACCESS_KEY_SECRET="..." # Replace with a real Access Key Secret
export ALIYUN_REGION="cn-hangzhou" # Optional, defaults to cn-hangzhou
```
#### Step 2: Run the integration tests
```bash
# Run all integration tests
pytest agent/sandbox/tests/test_aliyun_integration.py -v -m integration
# Run a specific test
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_health_check -v
```
#### Step 3: Expected output
```
test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_initialize_provider PASSED
test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_health_check PASSED
test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_execute_python_code PASSED
...
========================== 10 passed in 15.67s ==========================
```
### 3. Test Scenarios
#### Basic functionality
```bash
# Health check
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_health_check -v
# Create an instance
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_create_python_instance -v
# Execute code
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_execute_python_code -v
# Destroy an instance
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_destroy_instance -v
```
#### Error handling
```bash
# Code execution errors
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_execute_python_code_with_error -v
# Timeout handling
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_execute_python_code_timeout -v
```
#### Real-world scenarios
```bash
# Data-processing workflow
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunRealWorldScenarios::test_data_processing_workflow -v
# String manipulation
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunRealWorldScenarios::test_string_manipulation -v
# Multiple executions
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunRealWorldScenarios::test_multiple_executions_same_instance -v
```
## FAQ
### Q: What if I have no credentials?
**A:** Run the unit tests; they need no real credentials:
```bash
pytest agent/sandbox/tests/test_aliyun_provider.py -v
```
### Q: How do I skip the integration tests?
**A:** Use a pytest marker filter:
```bash
# Run only the unit tests, skipping the integration tests
pytest agent/sandbox/tests/ -v -m "not integration"
```
### Q: What if the integration tests fail?
**A:** Check the following:
1. **Are the credentials correct?**
```bash
echo $ALIYUN_ACCESS_KEY_ID
echo $ALIYUN_ACCESS_KEY_SECRET
```
2. **Is the network connection working?**
```bash
curl -I https://opensandbox.cn-hangzhou.aliyuncs.com
```
3. **Do you have access to the OpenSandbox service?**
- Log in to the Aliyun console
- Check that the OpenSandbox service is enabled
- Check the AccessKey permissions
4. **Inspect the detailed error output**
```bash
pytest agent/sandbox/tests/test_aliyun_integration.py -v -s
```
### Q: What if the tests time out?
**A:** Increase the timeout or check the network:
```bash
# Use a longer timeout
pytest agent/sandbox/tests/test_aliyun_integration.py -v --timeout=60
```
## Test Command Cheat Sheet
| Command | Description | Credentials required |
|------|------|---------|
| `pytest agent/sandbox/tests/test_aliyun_provider.py -v` | Unit tests | ❌ |
| `pytest agent/sandbox/tests/test_aliyun_integration.py -v` | Integration tests | ✅ |
| `pytest agent/sandbox/tests/ -v -m "not integration"` | Unit tests only | ❌ |
| `pytest agent/sandbox/tests/ -v -m integration` | Integration tests only | ✅ |
| `pytest agent/sandbox/tests/ -v` | All tests | Partially |
## Getting Aliyun Credentials
1. Visit the [Aliyun console](https://ram.console.aliyun.com/manage/ak)
2. Create an AccessKey
3. Save the AccessKey ID and AccessKey Secret
4. Set the environment variables
⚠️ **Security notes:**
- Never hard-code credentials in source code
- Use environment variables or configuration files
- Rotate AccessKeys regularly
- Restrict AccessKey permissions
## Next Steps
1. ✅ **Run the unit tests** - verify the code logic
2. 🔧 **Configure credentials** - set the environment variables
3. 🚀 **Run the integration tests** - exercise the real API
4. 📊 **Review the results** - make sure every test passes
5. 🎯 **Integrate with the system** - configure the provider via the admin API
## Need Help?
- Read the [full documentation](README.md)
- Check the [sandbox specification](../../../../../docs/develop/sandbox_spec.md)
- Contact the RAGFlow team

View File

@ -1,213 +0,0 @@
# Sandbox Provider Tests
This directory contains tests for the RAGFlow sandbox provider system.
## Test Structure
```
tests/
├── pytest.ini # Pytest configuration
├── test_providers.py # Unit tests for all providers (mocked)
├── test_aliyun_provider.py # Unit tests for Aliyun provider (mocked)
├── test_aliyun_integration.py # Integration tests for Aliyun (real API)
└── sandbox_security_tests_full.py # Security tests for self-managed provider
```
## Test Types
### 1. Unit Tests (No Credentials Required)
Unit tests use mocks and don't require any external services or credentials.
**Files:**
- `test_providers.py` - Tests for base provider interface and manager
- `test_aliyun_provider.py` - Tests for Aliyun provider with mocked API calls
**Run unit tests:**
```bash
# Run all unit tests
pytest agent/sandbox/tests/test_providers.py -v
pytest agent/sandbox/tests/test_aliyun_provider.py -v
# Run specific test
pytest agent/sandbox/tests/test_aliyun_provider.py::TestAliyunOpenSandboxProvider::test_initialize_success -v
# Run all unit tests (skip integration)
pytest agent/sandbox/tests/ -v -m "not integration"
```
### 2. Integration Tests (Real Credentials Required)
Integration tests make real API calls to the Aliyun OpenSandbox service.
**Files:**
- `test_aliyun_integration.py` - Tests with real Aliyun API calls
**Setup environment variables:**
```bash
export ALIYUN_ACCESS_KEY_ID="LTAI5t..."
export ALIYUN_ACCESS_KEY_SECRET="..."
export ALIYUN_REGION="cn-hangzhou" # Optional, defaults to cn-hangzhou
export ALIYUN_WORKSPACE_ID="ws-..." # Optional
```
**Run integration tests:**
```bash
# Run only integration tests
pytest agent/sandbox/tests/test_aliyun_integration.py -v -m integration
# Run all tests including integration
pytest agent/sandbox/tests/ -v
# Run specific integration test
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_health_check -v
```
### 3. Security Tests
Security tests validate the security features of the self-managed sandbox provider.
**Files:**
- `sandbox_security_tests_full.py` - Comprehensive security tests
**Run security tests:**
```bash
# Run all security tests
pytest agent/sandbox/tests/sandbox_security_tests_full.py -v
# Run specific security test
pytest agent/sandbox/tests/sandbox_security_tests_full.py -k "test_dangerous_imports" -v
```
## Test Commands
### Quick Test Commands
```bash
# Run all sandbox tests (unit only, fast)
pytest agent/sandbox/tests/ -v -m "not integration" --tb=short
# Run tests with coverage
pytest agent/sandbox/tests/ -v --cov=agent.sandbox --cov-report=term-missing -m "not integration"
# Run tests and stop on first failure
pytest agent/sandbox/tests/ -v -x -m "not integration"
# Run tests in parallel (requires pytest-xdist)
pytest agent/sandbox/tests/ -v -n auto -m "not integration"
```
### Aliyun Provider Testing
```bash
# 1. Run unit tests (no credentials needed)
pytest agent/sandbox/tests/test_aliyun_provider.py -v
# 2. Set up credentials for integration tests
export ALIYUN_ACCESS_KEY_ID="your-key-id"
export ALIYUN_ACCESS_KEY_SECRET="your-secret"
export ALIYUN_REGION="cn-hangzhou"
# 3. Run integration tests (makes real API calls)
pytest agent/sandbox/tests/test_aliyun_integration.py -v
# 4. Test specific scenarios
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_execute_python_code -v
pytest agent/sandbox/tests/test_aliyun_integration.py::TestAliyunRealWorldScenarios -v
```
## Understanding Test Results
### Unit Test Output
```
agent/sandbox/tests/test_aliyun_provider.py::TestAliyunOpenSandboxProvider::test_initialize_success PASSED
agent/sandbox/tests/test_aliyun_provider.py::TestAliyunOpenSandboxProvider::test_create_instance_python PASSED
...
========================== 48 passed in 2.34s ===========================
```
### Integration Test Output
```
agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_health_check PASSED
agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_create_python_instance PASSED
agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_execute_python_code PASSED
...
========================== 10 passed in 15.67s ===========================
```
**Note:** Integration tests will be skipped if credentials are not set:
```
agent/sandbox/tests/test_aliyun_integration.py::TestAliyunOpenSandboxIntegration::test_health_check SKIPPED
...
========================== 10 skipped, 48 passed in 0.12s ===========================
```
## Troubleshooting
### Integration Tests Fail
1. **Check credentials:**
```bash
echo $ALIYUN_ACCESS_KEY_ID
echo $ALIYUN_ACCESS_KEY_SECRET
```
2. **Check network connectivity:**
```bash
curl -I https://opensandbox.cn-hangzhou.aliyuncs.com
```
3. **Verify permissions:**
- Make sure your Aliyun account has the OpenSandbox service enabled
- Check that your AccessKey has the required permissions
4. **Check region:**
- Verify the region is correct for your account
- Try different regions: cn-hangzhou, cn-beijing, cn-shanghai, etc.
### Tests Timeout
If tests timeout, increase the timeout in the test configuration or run with a longer timeout:
```bash
pytest agent/sandbox/tests/test_aliyun_integration.py -v --timeout=60
```
### Mock Tests Fail
If unit tests fail, it's likely a code issue, not a credentials issue:
1. Check the test error message
2. Review the code changes
3. Run with verbose output: `pytest -vv`
## Contributing
When adding new providers:
1. **Create unit tests** in `test_{provider}_provider.py` with mocks
2. **Create integration tests** in `test_{provider}_integration.py` with real API calls
3. **Add markers** to distinguish test types
4. **Update this README** with provider-specific testing instructions
Example:
```python
@pytest.mark.integration
def test_new_provider_real_api():
"""Test with real API calls."""
# Your test here
```
## Continuous Integration
In CI/CD pipelines:
```yaml
# Run unit tests only (fast, no credentials)
pytest agent/sandbox/tests/ -v -m "not integration"
# Run integration tests if credentials available
if [ -n "$ALIYUN_ACCESS_KEY_ID" ]; then
pytest agent/sandbox/tests/test_aliyun_integration.py -v -m integration
fi
```

View File

@ -1,33 +0,0 @@
[pytest]
# Pytest configuration for sandbox tests
# Test discovery patterns
python_files = test_*.py
python_classes = Test*
python_functions = test_*
# Markers for different test types
markers =
integration: Tests that require external services (Aliyun API, etc.)
unit: Fast tests that don't require external services
slow: Tests that take a long time to run
# Test paths
testpaths = .
# Minimum version
minversion = 7.0
# Output options
addopts =
-v
--strict-markers
--tb=short
--disable-warnings
# Log options
log_cli = false
log_cli_level = INFO
# Coverage options (if using pytest-cov)
# addopts = --cov=agent.sandbox --cov-report=html --cov-report=term

View File

@ -1,329 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Unit tests for Aliyun Code Interpreter provider.
These tests use mocks and don't require real Aliyun credentials.
Official Documentation: https://help.aliyun.com/zh/functioncompute/fc/sandbox-sandbox-code-interepreter
Official SDK: https://github.com/Serverless-Devs/agentrun-sdk-python
"""
import pytest
from unittest.mock import patch, MagicMock
from agent.sandbox.providers.base import SandboxProvider
from agent.sandbox.providers.aliyun_codeinterpreter import AliyunCodeInterpreterProvider
class TestAliyunCodeInterpreterProvider:
"""Test AliyunCodeInterpreterProvider implementation."""
def test_provider_initialization(self):
"""Test provider initialization."""
provider = AliyunCodeInterpreterProvider()
        assert provider.access_key_id is None
        assert provider.access_key_secret is None
        assert provider.account_id is None
assert provider.region == "cn-hangzhou"
assert provider.template_name == ""
assert provider.timeout == 30
assert not provider._initialized
@patch("agent.sandbox.providers.aliyun_codeinterpreter.Template")
def test_initialize_success(self, mock_template):
"""Test successful initialization."""
# Mock health check response
mock_template.list.return_value = []
provider = AliyunCodeInterpreterProvider()
result = provider.initialize(
{
"access_key_id": "LTAI5tXXXXXXXXXX",
"access_key_secret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"account_id": "1234567890123456",
"region": "cn-hangzhou",
"template_name": "python-sandbox",
"timeout": 20,
}
)
assert result is True
assert provider.access_key_id == "LTAI5tXXXXXXXXXX"
assert provider.access_key_secret == "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
assert provider.account_id == "1234567890123456"
assert provider.region == "cn-hangzhou"
assert provider.template_name == "python-sandbox"
assert provider.timeout == 20
assert provider._initialized
def test_initialize_missing_credentials(self):
"""Test initialization with missing credentials."""
provider = AliyunCodeInterpreterProvider()
# Missing access_key_id
result = provider.initialize({"access_key_secret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"})
assert result is False
# Missing access_key_secret
result = provider.initialize({"access_key_id": "LTAI5tXXXXXXXXXX"})
assert result is False
# Missing account_id
provider2 = AliyunCodeInterpreterProvider()
result = provider2.initialize({"access_key_id": "LTAI5tXXXXXXXXXX", "access_key_secret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"})
assert result is False
@patch("agent.sandbox.providers.aliyun_codeinterpreter.Template")
def test_initialize_default_config(self, mock_template):
"""Test initialization with default config."""
mock_template.list.return_value = []
provider = AliyunCodeInterpreterProvider()
result = provider.initialize({"access_key_id": "LTAI5tXXXXXXXXXX", "access_key_secret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "account_id": "1234567890123456"})
assert result is True
assert provider.region == "cn-hangzhou"
assert provider.template_name == ""
@patch("agent.sandbox.providers.aliyun_codeinterpreter.CodeInterpreterSandbox")
def test_create_instance_python(self, mock_sandbox_class):
"""Test creating a Python instance."""
# Mock successful instance creation
mock_sandbox = MagicMock()
mock_sandbox.sandbox_id = "01JCED8Z9Y6XQVK8M2NRST5WXY"
mock_sandbox_class.return_value = mock_sandbox
provider = AliyunCodeInterpreterProvider()
provider._initialized = True
provider._config = MagicMock()
instance = provider.create_instance("python")
assert instance.provider == "aliyun_codeinterpreter"
assert instance.status == "READY"
assert instance.metadata["language"] == "python"
@patch("agent.sandbox.providers.aliyun_codeinterpreter.CodeInterpreterSandbox")
def test_create_instance_javascript(self, mock_sandbox_class):
"""Test creating a JavaScript instance."""
mock_sandbox = MagicMock()
mock_sandbox.sandbox_id = "01JCED8Z9Y6XQVK8M2NRST5WXY"
mock_sandbox_class.return_value = mock_sandbox
provider = AliyunCodeInterpreterProvider()
provider._initialized = True
provider._config = MagicMock()
instance = provider.create_instance("javascript")
assert instance.metadata["language"] == "javascript"
def test_create_instance_not_initialized(self):
"""Test creating instance when provider not initialized."""
provider = AliyunCodeInterpreterProvider()
with pytest.raises(RuntimeError, match="Provider not initialized"):
provider.create_instance("python")
@patch("agent.sandbox.providers.aliyun_codeinterpreter.CodeInterpreterSandbox")
def test_execute_code_success(self, mock_sandbox_class):
"""Test successful code execution."""
# Mock sandbox instance
mock_sandbox = MagicMock()
mock_sandbox.context.execute.return_value = {
"results": [{"type": "stdout", "text": "Hello, World!"}, {"type": "result", "text": "None"}, {"type": "endOfExecution", "status": "ok"}],
"contextId": "kernel-12345-67890",
}
        mock_sandbox_class.connect.return_value = mock_sandbox
provider = AliyunCodeInterpreterProvider()
provider._initialized = True
provider._config = MagicMock()
result = provider.execute_code(instance_id="01JCED8Z9Y6XQVK8M2NRST5WXY", code="print('Hello, World!')", language="python", timeout=10)
assert result.stdout == "Hello, World!"
assert result.stderr == ""
assert result.exit_code == 0
assert result.execution_time > 0
@patch("agent.sandbox.providers.aliyun_codeinterpreter.CodeInterpreterSandbox")
def test_execute_code_timeout(self, mock_sandbox_class):
"""Test code execution timeout."""
from agentrun.utils.exception import ServerError
mock_sandbox = MagicMock()
mock_sandbox.context.execute.side_effect = ServerError(408, "Request timeout")
        mock_sandbox_class.connect.return_value = mock_sandbox
provider = AliyunCodeInterpreterProvider()
provider._initialized = True
provider._config = MagicMock()
with pytest.raises(TimeoutError, match="Execution timed out"):
provider.execute_code(instance_id="01JCED8Z9Y6XQVK8M2NRST5WXY", code="while True: pass", language="python", timeout=5)
@patch("agent.sandbox.providers.aliyun_codeinterpreter.CodeInterpreterSandbox")
def test_execute_code_with_error(self, mock_sandbox_class):
"""Test code execution with error."""
mock_sandbox = MagicMock()
mock_sandbox.context.execute.return_value = {
"results": [{"type": "stderr", "text": "Traceback..."}, {"type": "error", "text": "NameError: name 'x' is not defined"}, {"type": "endOfExecution", "status": "error"}]
}
        mock_sandbox_class.connect.return_value = mock_sandbox
provider = AliyunCodeInterpreterProvider()
provider._initialized = True
provider._config = MagicMock()
result = provider.execute_code(instance_id="01JCED8Z9Y6XQVK8M2NRST5WXY", code="print(x)", language="python")
assert result.exit_code != 0
assert len(result.stderr) > 0
def test_get_supported_languages(self):
"""Test getting supported languages."""
provider = AliyunCodeInterpreterProvider()
languages = provider.get_supported_languages()
assert "python" in languages
assert "javascript" in languages
def test_get_config_schema(self):
"""Test getting configuration schema."""
schema = AliyunCodeInterpreterProvider.get_config_schema()
assert "access_key_id" in schema
assert schema["access_key_id"]["required"] is True
assert "access_key_secret" in schema
assert schema["access_key_secret"]["required"] is True
assert "account_id" in schema
assert schema["account_id"]["required"] is True
assert "region" in schema
assert "template_name" in schema
assert "timeout" in schema
def test_validate_config_success(self):
"""Test successful configuration validation."""
provider = AliyunCodeInterpreterProvider()
is_valid, error_msg = provider.validate_config({"access_key_id": "LTAI5tXXXXXXXXXX", "account_id": "1234567890123456", "region": "cn-hangzhou"})
assert is_valid is True
assert error_msg is None
def test_validate_config_invalid_access_key(self):
"""Test validation with invalid access key format."""
provider = AliyunCodeInterpreterProvider()
is_valid, error_msg = provider.validate_config({"access_key_id": "INVALID_KEY"})
assert is_valid is False
assert "AccessKey ID format" in error_msg
def test_validate_config_missing_account_id(self):
"""Test validation with missing account ID."""
provider = AliyunCodeInterpreterProvider()
is_valid, error_msg = provider.validate_config({})
assert is_valid is False
assert "Account ID" in error_msg
def test_validate_config_invalid_region(self):
"""Test validation with invalid region."""
provider = AliyunCodeInterpreterProvider()
is_valid, error_msg = provider.validate_config(
{
"access_key_id": "LTAI5tXXXXXXXXXX",
"account_id": "1234567890123456", # Provide required field
"region": "us-west-1",
}
)
assert is_valid is False
assert "Invalid region" in error_msg
def test_validate_config_invalid_timeout(self):
"""Test validation with invalid timeout (> 30 seconds)."""
provider = AliyunCodeInterpreterProvider()
is_valid, error_msg = provider.validate_config(
{
"access_key_id": "LTAI5tXXXXXXXXXX",
"account_id": "1234567890123456", # Provide required field
"timeout": 60,
}
)
assert is_valid is False
assert "Timeout must be between 1 and 30 seconds" in error_msg
def test_normalize_language_python(self):
"""Test normalizing Python language identifier."""
provider = AliyunCodeInterpreterProvider()
assert provider._normalize_language("python") == "python"
assert provider._normalize_language("python3") == "python"
assert provider._normalize_language("PYTHON") == "python"
def test_normalize_language_javascript(self):
"""Test normalizing JavaScript language identifier."""
provider = AliyunCodeInterpreterProvider()
assert provider._normalize_language("javascript") == "javascript"
assert provider._normalize_language("nodejs") == "javascript"
assert provider._normalize_language("JavaScript") == "javascript"
class TestAliyunCodeInterpreterInterface:
"""Test that Aliyun provider correctly implements the interface."""
def test_aliyun_provider_is_abstract(self):
"""Test that AliyunCodeInterpreterProvider is a SandboxProvider."""
provider = AliyunCodeInterpreterProvider()
assert isinstance(provider, SandboxProvider)
def test_aliyun_provider_has_abstract_methods(self):
"""Test that AliyunCodeInterpreterProvider implements all abstract methods."""
provider = AliyunCodeInterpreterProvider()
assert hasattr(provider, "initialize")
assert callable(provider.initialize)
assert hasattr(provider, "create_instance")
assert callable(provider.create_instance)
assert hasattr(provider, "execute_code")
assert callable(provider.execute_code)
assert hasattr(provider, "destroy_instance")
assert callable(provider.destroy_instance)
assert hasattr(provider, "health_check")
assert callable(provider.health_check)
assert hasattr(provider, "get_supported_languages")
assert callable(provider.get_supported_languages)
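These unit tests never hit the network: the SDK dependency is patched at the point where the provider imports it ("agent.sandbox.providers.aliyun_codeinterpreter.CodeInterpreterSandbox"), so no Aliyun credentials are needed. A minimal sketch of that pattern, for reference; the test name and the mocked payload below are illustrative, not part of the suite:

from unittest.mock import MagicMock, patch

from agent.sandbox.providers.aliyun_codeinterpreter import AliyunCodeInterpreterProvider


@patch("agent.sandbox.providers.aliyun_codeinterpreter.CodeInterpreterSandbox")
def test_execute_code_smoke(mock_sandbox_class):
    # Fake the result shape the provider expects from the SDK.
    mock_sandbox = MagicMock()
    mock_sandbox.context.execute.return_value = {
        "results": [
            {"type": "stdout", "text": "42"},
            {"type": "endOfExecution", "status": "ok"},
        ],
        "contextId": "kernel-test",  # illustrative context id
    }
    mock_sandbox_class.return_value = mock_sandbox

    provider = AliyunCodeInterpreterProvider()
    provider._initialized = True  # bypass real credential checks, as the tests above do
    provider._config = MagicMock()

    result = provider.execute_code(instance_id="dummy", code="print(42)", language="python", timeout=10)
    assert result.stdout == "42"
    assert result.exit_code == 0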

View File

@@ -1,353 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Integration tests for Aliyun Code Interpreter provider.

These tests require real Aliyun credentials and will make actual API calls.

To run these tests, set the following environment variables:

    export AGENTRUN_ACCESS_KEY_ID="LTAI5t..."
    export AGENTRUN_ACCESS_KEY_SECRET="..."
    export AGENTRUN_ACCOUNT_ID="1234567890..."  # Aliyun primary account ID (主账号ID)
    export AGENTRUN_REGION="cn-hangzhou"        # Note: AGENTRUN_REGION (SDK will read this)

Then run:

    pytest agent/sandbox/tests/test_aliyun_codeinterpreter_integration.py -v

Official Documentation: https://help.aliyun.com/zh/functioncompute/fc/sandbox-sandbox-code-interepreter
"""
import os

import pytest

from agent.sandbox.providers.aliyun_codeinterpreter import AliyunCodeInterpreterProvider

# Skip all tests if credentials are not provided
pytestmark = pytest.mark.skipif(
    not all(
        [
            os.getenv("AGENTRUN_ACCESS_KEY_ID"),
            os.getenv("AGENTRUN_ACCESS_KEY_SECRET"),
            os.getenv("AGENTRUN_ACCOUNT_ID"),
        ]
    ),
    reason="Aliyun credentials not set. Set AGENTRUN_ACCESS_KEY_ID, AGENTRUN_ACCESS_KEY_SECRET, and AGENTRUN_ACCOUNT_ID.",
)


@pytest.fixture
def aliyun_config():
    """Get Aliyun configuration from environment variables."""
    return {
        "access_key_id": os.getenv("AGENTRUN_ACCESS_KEY_ID"),
        "access_key_secret": os.getenv("AGENTRUN_ACCESS_KEY_SECRET"),
        "account_id": os.getenv("AGENTRUN_ACCOUNT_ID"),
        "region": os.getenv("AGENTRUN_REGION", "cn-hangzhou"),
        "template_name": os.getenv("AGENTRUN_TEMPLATE_NAME", ""),
        "timeout": 30,
    }


@pytest.fixture
def provider(aliyun_config):
    """Create an initialized Aliyun provider."""
    provider = AliyunCodeInterpreterProvider()
    initialized = provider.initialize(aliyun_config)
    if not initialized:
        pytest.skip("Failed to initialize Aliyun provider. Check credentials, account ID, and network.")
    return provider


@pytest.mark.integration
class TestAliyunCodeInterpreterIntegration:
    """Integration tests for Aliyun Code Interpreter provider."""

    def test_initialize_provider(self, aliyun_config):
        """Test provider initialization with real credentials."""
        provider = AliyunCodeInterpreterProvider()
        result = provider.initialize(aliyun_config)
        assert result is True
        assert provider._initialized is True

    def test_health_check(self, provider):
        """Test health check with real API."""
        result = provider.health_check()
        assert result is True

    def test_get_supported_languages(self, provider):
        """Test getting supported languages."""
        languages = provider.get_supported_languages()
        assert "python" in languages
        assert "javascript" in languages
        assert isinstance(languages, list)

    def test_create_python_instance(self, provider):
        """Test creating a Python sandbox instance."""
        try:
            instance = provider.create_instance("python")
            assert instance.provider == "aliyun_codeinterpreter"
            assert instance.status in ["READY", "CREATING"]
            assert instance.metadata["language"] == "python"
            assert len(instance.instance_id) > 0
            # Clean up
            provider.destroy_instance(instance.instance_id)
        except Exception as e:
            pytest.skip(f"Instance creation failed: {str(e)}. API might not be available yet.")

    def test_execute_python_code(self, provider):
        """Test executing Python code in the sandbox."""
        try:
            # Create instance
            instance = provider.create_instance("python")
            # Execute simple code
            result = provider.execute_code(
                instance_id=instance.instance_id,
                code="print('Hello from Aliyun Code Interpreter!')\nprint(42)",
                language="python",
                timeout=30,  # Max 30 seconds
            )
            assert result.exit_code == 0
            assert "Hello from Aliyun Code Interpreter!" in result.stdout
            assert "42" in result.stdout
            assert result.execution_time > 0
            # Clean up
            provider.destroy_instance(instance.instance_id)
        except Exception as e:
            pytest.skip(f"Code execution test failed: {str(e)}. API might not be available yet.")

    def test_execute_python_code_with_arguments(self, provider):
        """Test executing Python code with arguments parameter."""
        try:
            # Create instance
            instance = provider.create_instance("python")
            # Execute code with arguments
            result = provider.execute_code(
                instance_id=instance.instance_id,
                code="""def main(name: str, count: int) -> dict:
    return {"message": f"Hello {name}!" * count}
""",
                language="python",
                timeout=30,
                arguments={"name": "World", "count": 2},
            )
            assert result.exit_code == 0
            assert "Hello World!Hello World!" in result.stdout
            # Clean up
            provider.destroy_instance(instance.instance_id)
        except Exception as e:
            pytest.skip(f"Arguments test failed: {str(e)}. API might not be available yet.")

    def test_execute_python_code_with_error(self, provider):
        """Test executing Python code that produces an error."""
        try:
            # Create instance
            instance = provider.create_instance("python")
            # Execute code with error
            result = provider.execute_code(instance_id=instance.instance_id, code="raise ValueError('Test error')", language="python", timeout=30)
            assert result.exit_code != 0
            assert len(result.stderr) > 0 or "ValueError" in result.stdout
            # Clean up
            provider.destroy_instance(instance.instance_id)
        except Exception as e:
            pytest.skip(f"Error handling test failed: {str(e)}. API might not be available yet.")

    def test_execute_javascript_code(self, provider):
        """Test executing JavaScript code in the sandbox."""
        try:
            # Create instance
            instance = provider.create_instance("javascript")
            # Execute simple code
            result = provider.execute_code(instance_id=instance.instance_id, code="console.log('Hello from JavaScript!');", language="javascript", timeout=30)
            assert result.exit_code == 0
            assert "Hello from JavaScript!" in result.stdout
            # Clean up
            provider.destroy_instance(instance.instance_id)
        except Exception as e:
            pytest.skip(f"JavaScript execution test failed: {str(e)}. API might not be available yet.")

    def test_execute_javascript_code_with_arguments(self, provider):
        """Test executing JavaScript code with arguments parameter."""
        try:
            # Create instance
            instance = provider.create_instance("javascript")
            # Execute code with arguments
            result = provider.execute_code(
                instance_id=instance.instance_id,
                code="""function main(args) {
    const { name, count } = args;
    return `Hello ${name}!`.repeat(count);
}""",
                language="javascript",
                timeout=30,
                arguments={"name": "World", "count": 2},
            )
            assert result.exit_code == 0
            assert "Hello World!Hello World!" in result.stdout
            # Clean up
            provider.destroy_instance(instance.instance_id)
        except Exception as e:
            pytest.skip(f"JavaScript arguments test failed: {str(e)}. API might not be available yet.")

    def test_destroy_instance(self, provider):
        """Test destroying a sandbox instance."""
        try:
            # Create instance
            instance = provider.create_instance("python")
            # Destroy instance
            result = provider.destroy_instance(instance.instance_id)
            # Note: The API might return True immediately or async
            assert result is True or result is False
        except Exception as e:
            pytest.skip(f"Destroy instance test failed: {str(e)}. API might not be available yet.")

    def test_config_validation(self, provider):
        """Test configuration validation."""
        # Valid config
        is_valid, error = provider.validate_config({"access_key_id": "LTAI5tXXXXXXXXXX", "account_id": "1234567890123456", "region": "cn-hangzhou", "timeout": 30})
        assert is_valid is True
        assert error is None
        # Invalid access key
        is_valid, error = provider.validate_config({"access_key_id": "INVALID_KEY"})
        assert is_valid is False
        # Missing account ID
        is_valid, error = provider.validate_config({})
        assert is_valid is False
        assert "Account ID" in error

    def test_timeout_limit(self, provider):
        """Test that timeout is limited to 30 seconds."""
        # Timeout > 30 should be clamped to 30
        provider2 = AliyunCodeInterpreterProvider()
        provider2.initialize(
            {
                "access_key_id": os.getenv("AGENTRUN_ACCESS_KEY_ID"),
                "access_key_secret": os.getenv("AGENTRUN_ACCESS_KEY_SECRET"),
                "account_id": os.getenv("AGENTRUN_ACCOUNT_ID"),
                "timeout": 60,  # Request 60 seconds
            }
        )
        # Should be clamped to 30
        assert provider2.timeout == 30


@pytest.mark.integration
class TestAliyunCodeInterpreterScenarios:
    """Test real-world usage scenarios."""

    def test_data_processing_workflow(self, provider):
        """Test a simple data processing workflow."""
        try:
            instance = provider.create_instance("python")
            # Execute data processing code
            code = """
import json
data = [{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]
result = json.dumps(data, indent=2)
print(result)
"""
            result = provider.execute_code(instance_id=instance.instance_id, code=code, language="python", timeout=30)
            assert result.exit_code == 0
            assert "Alice" in result.stdout
            assert "Bob" in result.stdout
            provider.destroy_instance(instance.instance_id)
        except Exception as e:
            pytest.skip(f"Data processing test failed: {str(e)}")

    def test_string_manipulation(self, provider):
        """Test string manipulation operations."""
        try:
            instance = provider.create_instance("python")
            code = """
text = "Hello, World!"
print(text.upper())
print(text.lower())
print(text.replace("World", "Aliyun"))
"""
            result = provider.execute_code(instance_id=instance.instance_id, code=code, language="python", timeout=30)
            assert result.exit_code == 0
            assert "HELLO, WORLD!" in result.stdout
            assert "hello, world!" in result.stdout
            assert "Hello, Aliyun!" in result.stdout
            provider.destroy_instance(instance.instance_id)
        except Exception as e:
            pytest.skip(f"String manipulation test failed: {str(e)}")

    def test_context_persistence(self, provider):
        """Test code execution with context persistence."""
        try:
            instance = provider.create_instance("python")
            # First execution - define variable
            result1 = provider.execute_code(instance_id=instance.instance_id, code="x = 42\nprint(x)", language="python", timeout=30)
            assert result1.exit_code == 0
            # Second execution - use variable
            # Note: Context persistence depends on whether the contextId is reused
            result2 = provider.execute_code(instance_id=instance.instance_id, code="print(f'x is {x}')", language="python", timeout=30)
            # Context might or might not persist depending on API implementation
            assert result2.exit_code == 0
            provider.destroy_instance(instance.instance_id)
        except Exception as e:
            pytest.skip(f"Context persistence test failed: {str(e)}")


def test_without_credentials():
    """Test that tests are skipped without credentials."""
    # This test should always run (not skipped)
    if all(
        [
            os.getenv("AGENTRUN_ACCESS_KEY_ID"),
            os.getenv("AGENTRUN_ACCESS_KEY_SECRET"),
            os.getenv("AGENTRUN_ACCOUNT_ID"),
        ]
    ):
        assert True  # Credentials are set
    else:
        assert True  # Credentials not set, test still passes
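For quick manual checks outside pytest, the same create/execute/destroy flow these integration tests exercise can be driven directly. A minimal sketch, assuming the AGENTRUN_* variables documented above are exported; this script is illustrative and not part of the repository:

import os

from agent.sandbox.providers.aliyun_codeinterpreter import AliyunCodeInterpreterProvider

provider = AliyunCodeInterpreterProvider()
ok = provider.initialize(
    {
        "access_key_id": os.environ["AGENTRUN_ACCESS_KEY_ID"],
        "access_key_secret": os.environ["AGENTRUN_ACCESS_KEY_SECRET"],
        "account_id": os.environ["AGENTRUN_ACCOUNT_ID"],
        "region": os.getenv("AGENTRUN_REGION", "cn-hangzhou"),
        "timeout": 30,  # stay within the provider's 30-second ceiling
    }
)
assert ok, "initialization failed; check credentials, account ID, and network"

instance = provider.create_instance("python")
try:
    result = provider.execute_code(
        instance_id=instance.instance_id,
        code="print('hello')",
        language="python",
        timeout=30,
    )
    print(result.stdout, result.exit_code)
finally:
    # Always release the sandbox, mirroring the cleanup in the tests above.
    provider.destroy_instance(instance.instance_id)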

View File

@@ -1,423 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Unit tests for sandbox provider abstraction layer.
"""
import pytest
from unittest.mock import Mock, patch

import requests

from agent.sandbox.providers.base import SandboxProvider, SandboxInstance, ExecutionResult
from agent.sandbox.providers.manager import ProviderManager
from agent.sandbox.providers.self_managed import SelfManagedProvider


class TestSandboxDataclasses:
    """Test sandbox dataclasses."""

    def test_sandbox_instance_creation(self):
        """Test SandboxInstance dataclass creation."""
        instance = SandboxInstance(
            instance_id="test-123",
            provider="self_managed",
            status="running",
            metadata={"language": "python"}
        )
        assert instance.instance_id == "test-123"
        assert instance.provider == "self_managed"
        assert instance.status == "running"
        assert instance.metadata == {"language": "python"}

    def test_sandbox_instance_default_metadata(self):
        """Test SandboxInstance with None metadata."""
        instance = SandboxInstance(
            instance_id="test-123",
            provider="self_managed",
            status="running",
            metadata=None
        )
        assert instance.metadata == {}

    def test_execution_result_creation(self):
        """Test ExecutionResult dataclass creation."""
        result = ExecutionResult(
            stdout="Hello, World!",
            stderr="",
            exit_code=0,
            execution_time=1.5,
            metadata={"status": "success"}
        )
        assert result.stdout == "Hello, World!"
        assert result.stderr == ""
        assert result.exit_code == 0
        assert result.execution_time == 1.5
        assert result.metadata == {"status": "success"}

    def test_execution_result_default_metadata(self):
        """Test ExecutionResult with None metadata."""
        result = ExecutionResult(
            stdout="output",
            stderr="error",
            exit_code=1,
            execution_time=0.5,
            metadata=None
        )
        assert result.metadata == {}


class TestProviderManager:
    """Test ProviderManager functionality."""

    def test_manager_initialization(self):
        """Test ProviderManager initialization."""
        manager = ProviderManager()
        assert manager.current_provider is None
        assert manager.current_provider_name is None
        assert not manager.is_configured()

    def test_set_provider(self):
        """Test setting a provider."""
        manager = ProviderManager()
        mock_provider = Mock(spec=SandboxProvider)
        manager.set_provider("self_managed", mock_provider)
        assert manager.current_provider == mock_provider
        assert manager.current_provider_name == "self_managed"
        assert manager.is_configured()

    def test_get_provider(self):
        """Test getting the current provider."""
        manager = ProviderManager()
        mock_provider = Mock(spec=SandboxProvider)
        manager.set_provider("self_managed", mock_provider)
        assert manager.get_provider() == mock_provider

    def test_get_provider_name(self):
        """Test getting the current provider name."""
        manager = ProviderManager()
        mock_provider = Mock(spec=SandboxProvider)
        manager.set_provider("self_managed", mock_provider)
        assert manager.get_provider_name() == "self_managed"

    def test_get_provider_when_not_set(self):
        """Test getting provider when none is set."""
        manager = ProviderManager()
        assert manager.get_provider() is None
        assert manager.get_provider_name() is None


class TestSelfManagedProvider:
    """Test SelfManagedProvider implementation."""

    def test_provider_initialization(self):
        """Test provider initialization."""
        provider = SelfManagedProvider()
        assert provider.endpoint == "http://localhost:9385"
        assert provider.timeout == 30
        assert provider.max_retries == 3
        assert provider.pool_size == 10
        assert not provider._initialized

    @patch('requests.get')
    def test_initialize_success(self, mock_get):
        """Test successful initialization."""
        mock_response = Mock()
        mock_response.status_code = 200
        mock_get.return_value = mock_response
        provider = SelfManagedProvider()
        result = provider.initialize({
            "endpoint": "http://test-endpoint:9385",
            "timeout": 60,
            "max_retries": 5,
            "pool_size": 20
        })
        assert result is True
        assert provider.endpoint == "http://test-endpoint:9385"
        assert provider.timeout == 60
        assert provider.max_retries == 5
        assert provider.pool_size == 20
        assert provider._initialized
        mock_get.assert_called_once_with("http://test-endpoint:9385/healthz", timeout=5)

    @patch('requests.get')
    def test_initialize_failure(self, mock_get):
        """Test initialization failure."""
        mock_get.side_effect = Exception("Connection error")
        provider = SelfManagedProvider()
        result = provider.initialize({"endpoint": "http://invalid:9385"})
        assert result is False
        assert not provider._initialized

    def test_initialize_default_config(self):
        """Test initialization with default config."""
        with patch('requests.get') as mock_get:
            mock_response = Mock()
            mock_response.status_code = 200
            mock_get.return_value = mock_response
            provider = SelfManagedProvider()
            result = provider.initialize({})
            assert result is True
            assert provider.endpoint == "http://localhost:9385"
            assert provider.timeout == 30

    def test_create_instance_python(self):
        """Test creating a Python instance."""
        provider = SelfManagedProvider()
        provider._initialized = True
        instance = provider.create_instance("python")
        assert instance.provider == "self_managed"
        assert instance.status == "running"
        assert instance.metadata["language"] == "python"
        assert instance.metadata["endpoint"] == "http://localhost:9385"
        assert len(instance.instance_id) > 0  # Verify instance_id exists

    def test_create_instance_nodejs(self):
        """Test creating a Node.js instance."""
        provider = SelfManagedProvider()
        provider._initialized = True
        instance = provider.create_instance("nodejs")
        assert instance.metadata["language"] == "nodejs"

    def test_create_instance_not_initialized(self):
        """Test creating instance when provider not initialized."""
        provider = SelfManagedProvider()
        with pytest.raises(RuntimeError, match="Provider not initialized"):
            provider.create_instance("python")

    @patch('requests.post')
    def test_execute_code_success(self, mock_post):
        """Test successful code execution."""
        mock_response = Mock()
        mock_response.status_code = 200
        mock_response.json.return_value = {
            "status": "success",
            "stdout": '{"result": 42}',
            "stderr": "",
            "exit_code": 0,
            "time_used_ms": 100.0,
            "memory_used_kb": 1024.0
        }
        mock_post.return_value = mock_response
        provider = SelfManagedProvider()
        provider._initialized = True
        result = provider.execute_code(
            instance_id="test-123",
            code="def main(): return {'result': 42}",
            language="python",
            timeout=10
        )
        assert result.stdout == '{"result": 42}'
        assert result.stderr == ""
        assert result.exit_code == 0
        assert result.execution_time > 0
        assert result.metadata["status"] == "success"
        assert result.metadata["instance_id"] == "test-123"

    @patch('requests.post')
    def test_execute_code_timeout(self, mock_post):
        """Test code execution timeout."""
        mock_post.side_effect = requests.Timeout()
        provider = SelfManagedProvider()
        provider._initialized = True
        with pytest.raises(TimeoutError, match="Execution timed out"):
            provider.execute_code(
                instance_id="test-123",
                code="while True: pass",
                language="python",
                timeout=5
            )

    @patch('requests.post')
    def test_execute_code_http_error(self, mock_post):
        """Test code execution with HTTP error."""
        mock_response = Mock()
        mock_response.status_code = 500
        mock_response.text = "Internal Server Error"
        mock_post.return_value = mock_response
        provider = SelfManagedProvider()
        provider._initialized = True
        with pytest.raises(RuntimeError, match="HTTP 500"):
            provider.execute_code(
                instance_id="test-123",
                code="invalid code",
                language="python"
            )

    def test_execute_code_not_initialized(self):
        """Test executing code when provider not initialized."""
        provider = SelfManagedProvider()
        with pytest.raises(RuntimeError, match="Provider not initialized"):
            provider.execute_code(
                instance_id="test-123",
                code="print('hello')",
                language="python"
            )

    def test_destroy_instance(self):
        """Test destroying an instance (no-op for self-managed)."""
        provider = SelfManagedProvider()
        provider._initialized = True
        # For self-managed, destroy_instance is a no-op
        result = provider.destroy_instance("test-123")
        assert result is True

    @patch('requests.get')
    def test_health_check_success(self, mock_get):
        """Test successful health check."""
        mock_response = Mock()
        mock_response.status_code = 200
        mock_get.return_value = mock_response
        provider = SelfManagedProvider()
        result = provider.health_check()
        assert result is True
        mock_get.assert_called_once_with("http://localhost:9385/healthz", timeout=5)

    @patch('requests.get')
    def test_health_check_failure(self, mock_get):
        """Test health check failure."""
        mock_get.side_effect = Exception("Connection error")
        provider = SelfManagedProvider()
        result = provider.health_check()
        assert result is False

    def test_get_supported_languages(self):
        """Test getting supported languages."""
        provider = SelfManagedProvider()
        languages = provider.get_supported_languages()
        assert "python" in languages
        assert "nodejs" in languages
        assert "javascript" in languages

    def test_get_config_schema(self):
        """Test getting configuration schema."""
        schema = SelfManagedProvider.get_config_schema()
        assert "endpoint" in schema
        assert schema["endpoint"]["type"] == "string"
        assert schema["endpoint"]["required"] is True
        assert schema["endpoint"]["default"] == "http://localhost:9385"
        assert "timeout" in schema
        assert schema["timeout"]["type"] == "integer"
        assert schema["timeout"]["default"] == 30
        assert "max_retries" in schema
        assert schema["max_retries"]["type"] == "integer"
        assert "pool_size" in schema
        assert schema["pool_size"]["type"] == "integer"

    def test_normalize_language_python(self):
        """Test normalizing Python language identifier."""
        provider = SelfManagedProvider()
        assert provider._normalize_language("python") == "python"
        assert provider._normalize_language("python3") == "python"
        assert provider._normalize_language("PYTHON") == "python"
        assert provider._normalize_language("Python3") == "python"

    def test_normalize_language_javascript(self):
        """Test normalizing JavaScript language identifier."""
        provider = SelfManagedProvider()
        assert provider._normalize_language("javascript") == "nodejs"
        assert provider._normalize_language("nodejs") == "nodejs"
        assert provider._normalize_language("JavaScript") == "nodejs"
        assert provider._normalize_language("NodeJS") == "nodejs"

    def test_normalize_language_default(self):
        """Test language normalization with empty/unknown input."""
        provider = SelfManagedProvider()
        assert provider._normalize_language("") == "python"
        assert provider._normalize_language(None) == "python"
        assert provider._normalize_language("unknown") == "unknown"


class TestProviderInterface:
    """Test that providers correctly implement the interface."""

    def test_self_managed_provider_is_abstract(self):
        """Test that SelfManagedProvider is a SandboxProvider."""
        provider = SelfManagedProvider()
        assert isinstance(provider, SandboxProvider)

    def test_self_managed_provider_has_abstract_methods(self):
        """Test that SelfManagedProvider implements all abstract methods."""
        provider = SelfManagedProvider()
        # Check all abstract methods are implemented
        assert hasattr(provider, 'initialize')
        assert callable(provider.initialize)
        assert hasattr(provider, 'create_instance')
        assert callable(provider.create_instance)
        assert hasattr(provider, 'execute_code')
        assert callable(provider.execute_code)
        assert hasattr(provider, 'destroy_instance')
        assert callable(provider.destroy_instance)
        assert hasattr(provider, 'health_check')
        assert callable(provider.health_check)
        assert hasattr(provider, 'get_supported_languages')
        assert callable(provider.get_supported_languages)
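Taken together, these tests imply the following wiring between the manager and a provider. A minimal usage sketch under the same assumptions the tests make (a sandbox executor answering /healthz on localhost:9385), not a definitive integration:

from agent.sandbox.providers.manager import ProviderManager
from agent.sandbox.providers.self_managed import SelfManagedProvider

provider = SelfManagedProvider()
# initialize() probes the executor's /healthz endpoint and returns False on failure.
if not provider.initialize({"endpoint": "http://localhost:9385", "timeout": 30}):
    raise RuntimeError("sandbox executor is not reachable")

manager = ProviderManager()
manager.set_provider("self_managed", provider)
assert manager.is_configured()

active = manager.get_provider()
instance = active.create_instance("python")
result = active.execute_code(
    instance_id=instance.instance_id,
    code="def main(): return {'result': 42}",
    language="python",
)
print(result.stdout)  # per test_execute_code_success, stdout carries the serialized result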

View File

@@ -1,78 +0,0 @@
#!/usr/bin/env python3
"""
Quick verification script for Aliyun Code Interpreter provider using official SDK.
"""
import importlib.util
import sys

sys.path.insert(0, ".")

print("=" * 60)
print("Aliyun Code Interpreter Provider - SDK Verification")
print("=" * 60)

# Test 1: Import provider
print("\n[1/5] Testing provider import...")
try:
    from agent.sandbox.providers.aliyun_codeinterpreter import AliyunCodeInterpreterProvider

    print("✓ Provider imported successfully")
except ImportError as e:
    print(f"✗ Import failed: {e}")
    sys.exit(1)

# Test 2: Check provider class
print("\n[2/5] Testing provider class...")
provider = AliyunCodeInterpreterProvider()
assert hasattr(provider, "initialize")
assert hasattr(provider, "create_instance")
assert hasattr(provider, "execute_code")
assert hasattr(provider, "destroy_instance")
assert hasattr(provider, "health_check")
print("✓ Provider has all required methods")

# Test 3: Check SDK imports
print("\n[3/5] Testing SDK imports...")
try:
    # Check if agentrun SDK is available using importlib
    if (
        importlib.util.find_spec("agentrun.sandbox") is None
        or importlib.util.find_spec("agentrun.utils.config") is None
        or importlib.util.find_spec("agentrun.utils.exception") is None
    ):
        raise ImportError("agentrun SDK not found")
    # Verify imports work (assign to _ to indicate they're intentionally unused)
    from agentrun.sandbox import CodeInterpreterSandbox, TemplateType, CodeLanguage
    from agentrun.utils.config import Config
    from agentrun.utils.exception import ServerError

    _ = (CodeInterpreterSandbox, TemplateType, CodeLanguage, Config, ServerError)
    print("✓ SDK modules imported successfully")
except ImportError as e:
    print(f"✗ SDK import failed: {e}")
    sys.exit(1)

# Test 4: Check config schema
print("\n[4/5] Testing configuration schema...")
schema = AliyunCodeInterpreterProvider.get_config_schema()
required_fields = ["access_key_id", "access_key_secret", "account_id"]
for field in required_fields:
    assert field in schema
    assert schema[field]["required"] is True
print(f"✓ All required fields present: {', '.join(required_fields)}")

# Test 5: Check supported languages
print("\n[5/5] Testing supported languages...")
languages = provider.get_supported_languages()
assert "python" in languages
assert "javascript" in languages
print(f"✓ Supported languages: {', '.join(languages)}")

print("\n" + "=" * 60)
print("All verification tests passed! ✓")
print("=" * 60)
print("\nNote: This provider now uses the official agentrun-sdk.")
print("SDK Documentation: https://github.com/Serverless-Devs/agentrun-sdk-python")
print("API Documentation: https://help.aliyun.com/zh/functioncompute/fc/sandbox-sandbox-code-interepreter")

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -2,11 +2,9 @@
 "id": 8,
 "title": {
 "en": "Generate SEO Blog",
-"de": "SEO Blog generieren",
 "zh": "生成SEO博客"},
 "description": {
 "en": "This is a multi-agent version of the SEO blog generation workflow. It simulates a small team of AI “writers”, where each agent plays a specialized role — just like a real editorial team.",
-"de": "Dies ist eine Multi-Agenten-Version des Workflows zur Erstellung von SEO-Blogs. Sie simuliert ein kleines Team von KI-„Autoren“, in dem jeder Agent eine spezielle Rolle übernimmt genau wie in einem echten Redaktionsteam.",
 "zh": "多智能体架构可根据简单的用户输入自动生成完整的SEO博客文章。模拟小型“作家”团队其中每个智能体扮演一个专业角色——就像真正的编辑团队。"},
 "canvas_type": "Agent",
 "dsl": {

File diff suppressed because one or more lines are too long

View File

@@ -2,11 +2,9 @@
 "id": 20,
 "title": {
 "en": "Report Agent Using Knowledge Base",
-"de": "Berichtsagent mit Wissensdatenbank",
 "zh": "知识库检索智能体"},
 "description": {
 "en": "A report generation assistant using local knowledge base, with advanced capabilities in task planning, reasoning, and reflective analysis. Recommended for academic research paper Q&A",
-"de": "Ein Berichtsgenerierungsassistent, der eine lokale Wissensdatenbank nutzt, mit erweiterten Fähigkeiten in Aufgabenplanung, Schlussfolgerung und reflektierender Analyse. Empfohlen für akademische Forschungspapier-Fragen und -Antworten.",
 "zh": "一个使用本地知识库的报告生成助手,具备高级能力,包括任务规划、推理和反思性分析。推荐用于学术研究论文问答。"},
 "canvas_type": "Agent",
 "dsl": {

View File

@@ -1,12 +1,10 @@
 {
 "id": 21,
 "title": {
 "en": "Report Agent Using Knowledge Base",
-"de": "Berichtsagent mit Wissensdatenbank",
 "zh": "知识库检索智能体"},
 "description": {
 "en": "A report generation assistant using local knowledge base, with advanced capabilities in task planning, reasoning, and reflective analysis. Recommended for academic research paper Q&A",
-"de": "Ein Berichtsgenerierungsassistent, der eine lokale Wissensdatenbank nutzt, mit erweiterten Fähigkeiten in Aufgabenplanung, Schlussfolgerung und reflektierender Analyse. Empfohlen für akademische Forschungspapier-Fragen und -Antworten.",
 "zh": "一个使用本地知识库的报告生成助手,具备高级能力,包括任务规划、推理和反思性分析。推荐用于学术研究论文问答。"},
 "canvas_type": "Recommended",
 "dsl": {

View File

@@ -2,11 +2,9 @@
 "id": 12,
 "title": {
 "en": "Generate SEO Blog",
-"de": "SEO Blog generieren",
 "zh": "生成SEO博客"},
 "description": {
-"en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You don't need any writing experience. Just provide a topic or short request — the system will handle the rest.",
-"de": "Dieser Workflow generiert automatisch einen vollständigen SEO-optimierten Blogartikel basierend auf einer einfachen Benutzereingabe. Sie benötigen keine Schreiberfahrung. Geben Sie einfach ein Thema oder eine kurze Anfrage ein das System übernimmt den Rest.",
+"en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You dont need any writing experience. Just provide a topic or short request — the system will handle the rest.",
 "zh": "此工作流根据简单的用户输入自动生成完整的SEO博客文章。你无需任何写作经验只需提供一个主题或简短请求系统将处理其余部分。"},
 "canvas_type": "Marketing",
 "dsl": {
@@ -918,4 +916,4 @@
 "retrieval": []
 },
 "avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."  [multi-kilobyte base64 JPEG payload, identical on both sides of the diff, omitted]
 }

View File

@@ -2,11 +2,9 @@
 "id": 4,
 "title": {
 "en": "Generate SEO Blog",
-"de": "SEO Blog generieren",
 "zh": "生成SEO博客"},
 "description": {
-"en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You don't need any writing experience. Just provide a topic or short request — the system will handle the rest.",
-"de": "Dieser Workflow generiert automatisch einen vollständigen SEO-optimierten Blogartikel basierend auf einer einfachen Benutzereingabe. Sie benötigen keine Schreiberfahrung. Geben Sie einfach ein Thema oder eine kurze Anfrage ein das System übernimmt den Rest.",
+"en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You dont need any writing experience. Just provide a topic or short request — the system will handle the rest.",
 "zh": "此工作流根据简单的用户输入自动生成完整的SEO博客文章。你无需任何写作经验只需提供一个主题或简短请求系统将处理其余部分。"},
 "canvas_type": "Recommended",
 "dsl": {
@@ -918,4 +916,4 @@
 "retrieval": []
 },
 "avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."  [multi-kilobyte base64 JPEG payload, identical on both sides of the diff, omitted]
 }

View File

@@ -2,12 +2,10 @@
 "id": 17,
 "title": {
 "en": "SQL Assistant",
-"de": "SQL Assistent",
 "zh": "SQL助理"},
 "description": {
-"en": "SQL Assistant is an AI-powered tool that lets business users turn plain-English questions into fully formed SQL queries. Simply type your question (e.g., 'Show me last quarter's top 10 products by revenue') and SQL Assistant generates the exact SQL, runs it against your database, and returns the results in seconds. ",
-"de": "SQL-Assistent ist ein KI-gestütztes Tool, mit dem Geschäftsanwender einfache englische Fragen in vollständige SQL-Abfragen umwandeln können. Geben Sie einfach Ihre Frage ein (z.B. 'Zeige mir die Top 10 Produkte des letzten Quartals nach Umsatz') und der SQL-Assistent generiert das exakte SQL, führt es gegen Ihre Datenbank aus und liefert die Ergebnisse in Sekunden.",
-"zh": "用户能够将简单文本问题转化为完整的SQL查询并输出结果。只需输入您的问题例如展示上个季度前十名按收入排序的产品SQL助理就会生成精确的SQL语句对其运行您的数据库并几秒钟内返回结果。"},
+"en": "SQL Assistant is an AI-powered tool that lets business users turn plain-English questions into fully formed SQL queries. Simply type your question (e.g., Show me last quarters top 10 products by revenue) and SQL Assistant generates the exact SQL, runs it against your database, and returns the results in seconds. ",
+"zh": "用户能够将简单文本问题转化为完整的SQL查询并输出结果。只需输入您的问题例如“展示上个季度前十名按收入排序的产品”SQL助理就会生成精确的SQL语句对其运行您的数据库并几秒钟内返回结果。"},
 "canvas_type": "Marketing",
 "dsl": {
 "components": {
@@ -83,10 +81,10 @@
 "value": []
 }
 },
-"password": "",
+"password": "20010812Yy!",
 "port": 3306,
 "sql": "{Agent:WickedGoatsDivide@content}",
-"username": ""
+"username": "13637682833@163.com"
 }
 },
 "upstream": [
@@ -527,10 +525,10 @@
 "value": []
 }
 },
-"password": "",
+"password": "20010812Yy!",
 "port": 3306,
 "sql": "{Agent:WickedGoatsDivide@content}",
-"username": ""
+"username": "13637682833@163.com"
 },
 "label": "ExeSQL",
 "name": "ExeSQL"
@@ -578,7 +576,7 @@
 {
 "data": {
 "form": {
-"text": "Searches for relevant database creation statements.\n\nIt should label with a dataset to which the schema is dumped in. You could use \" General \" as parsing method, \" 2 \" as chunk size and \" ; \" as delimiter."
+"text": "Searches for relevant database creation statements.\n\nIt should label with a knowledgebase to which the schema is dumped in. You could use \" General \" as parsing method, \" 2 \" as chunk size and \" ; \" as delimiter."
 },
 "label": "Note",
 "name": "Note Schema"
@@ -715,4 +713,4 @@
 "retrieval": []
 },
-"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."  [base64 JPEG payload omitted]
+"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."  [apparently identical base64 JPEG payload, omitted; only line wrapping appears to differ]
 }
File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff