Compare commits

496 Commits

Author SHA1 Message Date
e6cb74b27f Fix (next search): Optimize the search problem interface and related functions #3221 (#9569)
### What problem does this PR solve?

Fix (next search): Optimize the search problem interface and related
functions #3221

- Add search_id to the retrieval_test interface
- Optimize handleSearchStrChange and handleSearch callbacks to determine whether to enable AI search based on the search configuration

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-19 19:22:07 +08:00
00f54c207e Fix: Reset all data except the first one on the chat page shared with others #3221 (#9567)
### What problem does this PR solve?

Fix: Reset all data except the first one on the chat page shared with
others #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-19 19:04:40 +08:00
d0dc56166c Fix: no effect on retrieval_test in terms of metadata filter. (#9566)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-19 18:57:35 +08:00
e15e39f183 Fix: Fixed an issue where renaming a chat would create a new chat #3221 (#9563)
### What problem does this PR solve?

Fix: Fixed an issue where renaming a chat would create a new chat #3221
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-19 18:33:55 +08:00
33f3e05b75 Refa: create new name for duplicated dialog name (#9558)
### What problem does this PR solve?

 Create new name for duplicated dialog name.

### Type of change

- [x] Refactoring
2025-08-19 18:14:04 +08:00
b8bfbac2e5 Feat: Switch the root route to the new page #3221 (#9560)
### What problem does this PR solve?

Feat: Switch the root route to the new page #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-19 17:41:03 +08:00
d5729e598f Docs: Updated workarounds for uploading file to an agent (#9561)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2025-08-19 17:40:39 +08:00
f2c5ad170d Fix(search): Search application list supports renaming function #3221 (#9555)
### What problem does this PR solve?

Fix (search): Search application list supports renaming function #3221

- Update the search application list page and add a renaming operation entry
- Modify the search application details interface to support obtaining detailed information
- Optimize search settings page layout and style

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-19 17:35:32 +08:00
0aa3c4cdae Docs: Update version references to v0.20.2 in READMEs and docs (#9559)
### What problem does this PR solve?

- Update version tags in README files (including translations) from
v0.20.1 to v0.20.2
- Modify Docker image references and documentation to reflect new
version
- Update version badges and image descriptions
- Maintain consistency across all language variants of README files

### Type of change

- [x] Documentation Update
2025-08-19 17:26:49 +08:00
f123587538 Feat: add meta filter to search app. (#9554)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-19 17:25:44 +08:00
a41a646909 Fix: Fixed the issue where clicking the SQL tool test button did not request the interface #9541 (#9542)
### What problem does this PR solve?

Fix: Fixed the issue where clicking the SQL tool test button did not
request the interface #9541
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-19 16:41:32 +08:00
787e0c6786 Refa: OpenAI whisper-1 (#9552)
### What problem does this PR solve?

Refactor OpenAI to enable audio parsing.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2025-08-19 16:41:18 +08:00
05ee1be1e9 Docs: Updated v0.20.2 release notes (#9553)
### What problem does this PR solve?

### Type of change


- [x] Documentation Update
2025-08-19 16:03:42 +08:00
a0d630365c Refactor:Improve VoyageRerank not texts handling (#9539)
### What problem does this PR solve?

Improve VoyageRerank's handling of the empty (`not texts`) case.

### Type of change

- [x] Refactoring
2025-08-19 10:31:04 +08:00
b5b8032a56 Feat: Support metadata auto filter for Search. (#9524)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-19 10:27:24 +08:00
ccb9f0b0d7 Feature (agent): Allow Retrieval kb_ids param use kb_id,and allow list kb_name or kb_id (#9531)
### What problem does this PR solve?

Allow the Retrieval `kb_ids` param to accept `kb_id`, and allow a list of `kb_name` or `kb_id` values (see the sketch below).
- Add a check for whether the knowledge base name is a list, and support batch queries
- When the knowledge base name does not exist, try querying by ID instead
- If both query methods fail, throw an exception
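
A rough sketch of the lookup order described above, using hypothetical `find_kb_by_name` / `find_kb_by_id` callables rather than RAGFlow's actual service methods:

```python
from typing import Callable, Iterable, Optional


def resolve_kb_ids(
    kb_refs: str | Iterable[str],
    find_kb_by_name: Callable[[str], Optional[dict]],
    find_kb_by_id: Callable[[str], Optional[dict]],
) -> list[str]:
    """Resolve one kb name/id, or a list of them, to knowledge-base IDs."""
    refs = [kb_refs] if isinstance(kb_refs, str) else list(kb_refs)
    resolved = []
    for ref in refs:
        kb = find_kb_by_name(ref)       # try the name first (batch-friendly)
        if kb is None:
            kb = find_kb_by_id(ref)     # fall back to treating it as an ID
        if kb is None:
            raise ValueError(f"Knowledge base not found: {ref}")
        resolved.append(kb["id"])
    return resolved
```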

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-08-19 09:42:39 +08:00
a0ab619aeb Fix: ensure update_progress loop always waits between iterations (#9528)
Move stop_event.wait(6) into finally block so that even when an
exception occurs, the loop still sleeps before retrying. This prevents
busy looping and excessive error logs when Redis connection fails.
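
A minimal sketch of the pattern, assuming a `threading.Event` stop flag and a placeholder `update_progress_once()` standing in for the real per-iteration work:

```python
import logging
import threading


def update_progress_once() -> None:
    """Placeholder for the real work done each iteration (hypothetical)."""


def update_progress_loop(stop_event: threading.Event) -> None:
    while not stop_event.is_set():
        try:
            update_progress_once()
        except Exception:
            logging.exception("update_progress iteration failed")
        finally:
            # Waiting in `finally` guarantees a pause even when the body
            # raises (e.g. Redis is unreachable), avoiding a busy loop
            # and a flood of error logs.
            stop_event.wait(6)
```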

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-19 09:42:15 +08:00
32349481ef Feat: Allow agent operators to select speech-to-text models #3221 (#9534)
### What problem does this PR solve?

Feat: Allow agent operators to select speech-to-text models #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-19 09:40:01 +08:00
2b9ed935f3 feat(search): Optimized search functionality and user interface #3221 (#9535)
### What problem does this PR solve?

feat(search): Optimized search functionality and user interface #3221

- Added similarity threshold adjustment function
- Optimized mind map display logic
- Adjusted search settings interface layout
- Fixed related search and document viewing functions
- Optimized time display and node selection logic

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-19 09:39:48 +08:00
188c0f614b Refa: refine search app (#9536)
### What problem does this PR solve?

Refine search app.

### Type of change

- [x] Refactoring
2025-08-19 09:33:33 +08:00
dad97869b6 Fix: search service reference (#9533)
### What problem does this PR solve?

- Update search_app.py to use SearchService instead of
KnowledgebaseService for duplicate

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-18 19:02:10 +08:00
57c8a37285 Feat: add dialog chatbots info (#9530)
### What problem does this PR solve?

Add dialog chatbots info.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-18 19:01:45 +08:00
9d0fed601d Feat: Displays the embedded page of the chat module #3221 (#9532)
### What problem does this PR solve?

Feat: Displays the embedded page of the chat module #3221
Feat: Let the agent operator support the selection of a TTS model #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-18 18:02:13 +08:00
fe32952825 Fix: Gemini parameters error (#9520)
### What problem does this PR solve?

Fix Gemini parameters error.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-18 14:51:10 +08:00
5808aef28c Fix (search): Optimize the search page functionality and UI #3221 (#9525)
### What problem does this PR solve?

Fix (search): Optimize the search page functionality and UI #3221 

- Add a search list component
- Implement search settings
- Optimize search result display
- Add related search functionality
- Adjust the search input box style
- Unify internationalized text

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-18 14:50:29 +08:00
ca720bd811 Fix: save team's canvas issue. (#9518)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-18 13:05:29 +08:00
ba11312766 Feat: embedded search (#9501)
### What problem does this PR solve?

Add embedded search functionality.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-18 12:05:11 +08:00
c8bbf7452d Env: Update dependencies for proxy support (#9519)
### What problem does this PR solve?

- Update httpx dependency to include socks support in pyproject.toml
- Update lockfile with new socksio dependency

### Type of change

- [x] Update dependencies for proxy support
2025-08-18 12:04:16 +08:00
b08650bc4c Feat: Fixed the chat model setting echo issue (#9521)
### What problem does this PR solve?

Feat: Fixed the chat model setting echo issue

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-18 12:03:33 +08:00
fb77f9917b Refactor: Use Input Length In DefaultRerank (#9516)
### What problem does this PR solve?

1. Use input length to prepare res
2. Adjust torch_empty_cache code location

### Type of change

- [x] Refactoring
- [x] Performance Improvement
2025-08-18 10:00:27 +08:00
d874683ae4 Fix the bug in enablePrologue under agent task mode (#9487)
### What problem does this PR solve?

There is a problem with the implementation of the Agent begin-form:
although the enablePrologue switch and the prologue input box are hidden
in Task mode, these values are still saved in the form data. If the user
first enables the prologue and sets its content in Conversational mode,
and then switches to Task mode, these values will still be saved and may
be used in some scenarios.
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-15 20:29:02 +08:00
f9e5caa8ed feat(search): Added app embedding functionality and optimized search page #3221 (#9499)
### What problem does this PR solve?
feat(search): Added app embedding functionality and optimized search
page #3221

- Added an Embed App button and related functionality
- Optimized the layout and interaction of the search settings interface
- Adjusted the search result display method
- Refactored some code to support new features
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-15 18:25:00 +08:00
99df0766fe Feat: add SMTP support for user invitation emails (#9479)
### What problem does this PR solve?

Add SMTP support for user invitation emails

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-15 18:12:20 +08:00
3b50688228 Docs: Miscellaneous updates. (#9506)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-08-15 18:10:11 +08:00
ffc095bd50 Feat: conversation completion can specify different model (#9485)
### What problem does this PR solve?

Conversation completion can specify different model

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-15 17:44:58 +08:00
799c57287c Feat: Add metadata configuration for new chats #3221 (#9502)
### What problem does this PR solve?

Feat: Add metadata configuration for new chats #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-15 17:40:16 +08:00
eef43fa25c Fix: unexpected truncated Excel files (#9500)
### What problem does this PR solve?

Handle unexpected truncated Excel files.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-15 17:00:34 +08:00
5a4dfecfbe Refactor:Standardize image conf and add private registry support (#9496)
- Unified configuration format: All services now use the same image
configuration structure for consistency.

- Private registry support: Added imagePullSecrets to enable pulling
images from private registries.

- Per-service flexibility: Each service can override image-related
parameters independently.

### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-08-15 16:05:33 +08:00
7f237fee16 Fix:HTTPs component re.error: bad escape \u (#9480)
### What problem does this PR solve?

When calling HTTP to request data, if the JSON string returned by the
interface contains an unescaped backslash such as '\u', Python's re
module treats it as a Unicode escape. Since no valid 4-digit hexadecimal
number follows, it raises: re.error: bad escape \u at position 26
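
One common way to avoid this error, shown here as an assumption about the fix rather than the exact patch, is to pass the dynamic text to `re.sub` through a function so its backslashes are never parsed as escapes:

```python
import re


def safe_substitute(pattern: str, replacement: str, text: str) -> str:
    """Replace `pattern` in `text` with `replacement` taken literally.

    Using a function keeps re.sub from interpreting backslash sequences
    (such as a stray "\\u") inside the replacement string.
    """
    return re.sub(pattern, lambda _m: replacement, text)


# A replacement containing backslashes no longer raises re.error: bad escape \u
print(safe_substitute(r"\{\{value\}\}", "C:\\users\\report", "path={{value}}"))
```
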
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-15 15:48:10 +08:00
30ae78755b Feat: Delete or filter conversations #3221 (#9491)
### What problem does this PR solve?

Feat: Delete or filter conversations #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-15 12:05:27 +08:00
2114e966d8 Feat: add citation option to agent and enlarge the timeouts. (#9484)
### What problem does this PR solve?

#9422

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-15 10:05:01 +08:00
562349eb02 Feat: Upload files in the chat box #3221 (#9483)
### What problem does this PR solve?
Feat: Upload files in the chat box #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-15 10:04:37 +08:00
618d6bc924 Feat:Can directly generate an agent node by dragging and dropping the connecting line (#9226) (#9357)

### What problem does this PR solve?

Can directly generate an agent node by dragging and dropping the
connecting line (#9226)

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-08-14 17:48:02 +08:00
762aa4b8c4 fix: preserve correct MIME & unify data URL handling for vision inputs (relates #9248) (#9474)
fix: preserve correct MIME & unify data URL handling for vision inputs
(relates #9248)

- Updated image2base64() to return a full data URL
(data:image/<fmt>;base64,...) with accurate MIME
- Removed hardcoded image/jpeg in Base._image_prompt(); pass through
data URLs and default raw base64 to image/png
- Set AnthropicCV._image_prompt() raw base64 media_type default to
image/png
- Ensures MIME type matches actual image content, fixing “cannot process
base64 image” errors on vLLM/OpenAI-compatible backends

### What problem does this PR solve?

This PR fixes a compatibility issue where base64-encoded images sent to
vision models (e.g., vLLM/OpenAI-compatible backends) were rejected due
to mismatched MIME type or incorrect decoding.
Previously, the backend:
- Always converted raw base64 into data:image/jpeg;base64,... even if
the actual content was PNG.
- In some cases, base64 decoding was attempted on the full data URL
string instead of the pure base64 part.
This caused strict validators such as vLLM to raise errors like:
```
cannot process base64 image
failed to decode base64 string: illegal base64 data at input byte 0
```
With this fix, the MIME type in the request now matches the actual image
content, and data URLs are correctly handled or passed through, ensuring
vision models can decode and process images reliably.
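
A minimal sketch of the described behaviour; the names follow the PR description (`image2base64`, a default of `image/png` for raw base64), but the bodies are illustrative assumptions rather than the actual patch:

```python
import base64
import io

from PIL import Image  # Pillow


def image2base64(image_bytes: bytes) -> str:
    """Return a full data URL whose MIME type matches the real image format."""
    fmt = (Image.open(io.BytesIO(image_bytes)).format or "PNG").lower()
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:image/{fmt};base64,{b64}"


def ensure_data_url(value: str) -> str:
    """Pass existing data URLs through; default raw base64 to image/png."""
    if value.startswith("data:image/"):
        return value
    return f"data:image/png;base64,{value}"
```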

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-14 17:00:56 +08:00
9cd09488ca Feat: Send data to compare the performance of different models' answers #3221 (#9477)
### What problem does this PR solve?

Feat: Send data to compare the performance of different models' answers
#3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-14 16:57:35 +08:00
f2806a8332 Update cv_model.py (#9472)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/9452

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-14 13:45:38 +08:00
b6e34e3aa7 Fix: PyPDF's Manipulated FlateDecode streams can exhaust RAM (#9469)
### What problem does this PR solve?

#3951
#8463 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-14 13:45:19 +08:00
3ee9653170 Agent template: report agent using knowledge base (#9427)
### What problem does this PR solve?

Agent template: report agent using knowledge base
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-14 12:17:57 +08:00
6d1078b538 fix 'KeyError: "There is no item named 'word/NULL' in the archive"' (#9455)
### What problem does this PR solve?

Issue referring to:
https://github.com/python-openxml/python-docx/issues/797
Fix referring to:
https://github.com/python-openxml/python-docx/issues/1105#issuecomment-1298075246

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-14 12:14:03 +08:00
6e862553cb Docs: Deprecated 'Create session with agent' (#9464)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-08-14 12:13:11 +08:00
b1baa91ff0 feat(next-search): Implements document preview functionality #3221 (#9465)
### What problem does this PR solve?

feat(next-search): Implements document preview functionality

- Adds a new document preview modal component
- Implements document preview page logic
- Adds document preview-related hooks
- Optimizes document preview rendering logic
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-14 12:11:53 +08:00
b55c3d07dc Test: Update error message assertions for chunk update tests (#9468)
### What problem does this PR solve?

Modify test cases to accept additional error message format when
updating chunks.
fix actions:
https://github.com/infiniflow/ragflow/actions/runs/16942741621/job/48015850297

### Type of change

- [x] Update test cases
2025-08-14 12:11:20 +08:00
2b3318cd3d Fix: KB folder may not be there while creating virtual file (#9431)
### What problem does this PR solve?

The KB folder may not be there while creating a virtual file. #9423

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-14 09:40:30 +08:00
434b55be70 Feat: Display a separate chat multi-model comparison page #3221 (#9461)
### What problem does this PR solve?
Feat: Display a separate chat multi-model comparison page #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-14 09:39:20 +08:00
98b4c67292 Trivial. (#9460)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-14 09:39:00 +08:00
3d645ff31a Docs: Update HTTP API reference with simplified response format and parameters (#9454)
### What problem does this PR solve?

- Make `session_id` optional and add `inputs` parameter
- Remove deprecated `sync_dsl` parameter
- Update request/response examples to match current API behavior

### Type of change

- [x] Documentation Update
2025-08-13 21:02:54 +08:00
5e8cd693a5 Refa: split services about llm. (#9450)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2025-08-13 16:41:01 +08:00
29f297b850 Fix: update broken create agent session due to v0.20.0 changes (#9445)
### What problem does this PR solve?

 Update broken create agent session due to v0.20.0 changes. #9383


**NOTE: A session ID is no longer required to interact with the agent.**

See: #9241, #9309.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-13 16:01:54 +08:00
7235638607 Feat: Show multiple chat boxes #3221 (#9443)
### What problem does this PR solve?

Feat: Show multiple chat boxes #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-13 15:59:51 +08:00
00919fd599 Fix typo in issue template (#9444) 2025-08-13 14:27:15 +08:00
43c0792ffd Add issue template for agent scenario feature request (#9437) 2025-08-13 12:50:06 +08:00
4b1b68c5fc Fix: no doc hits after meta data filter. (#9435)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-13 12:43:31 +08:00
3492f54c7a Docs: Update HTTP API reference with new response fields (#9434)
### What problem does this PR solve?

Add `url`, `doc_type`, and `created_at` fields to the API response
example in the documentation.

### Type of change

- [x] Documentation Update
2025-08-13 12:18:39 +08:00
da5cef0686 Refactor:Improve the float compare for LocalAIRerank (#9428)
### What problem does this PR solve?
Improve the float compare for LocalAIRerank
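
The PR title does not spell out the change, but a tolerance-based comparison is the usual fix for fragile float equality in score normalization; a generic sketch under that assumption:

```python
import math


def normalize_scores(scores: list[float], rel_tol: float = 1e-9) -> list[float]:
    """Min-max normalize rerank scores, guarding the near-equal case.

    math.isclose avoids dividing by an (almost) zero range when every
    score is effectively the same, which a plain `hi == lo` check can miss.
    """
    lo, hi = min(scores), max(scores)
    if math.isclose(hi, lo, rel_tol=rel_tol):
        return [0.5 for _ in scores]  # treat all items as equally relevant
    return [(s - lo) / (hi - lo) for s in scores]
```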

### Type of change

- [x] Refactoring
2025-08-13 10:26:42 +08:00
9098efb8aa Feat: Fixed the issue where some fields in the chat configuration could not be displayed #3221 (#9430)
### What problem does this PR solve?

Feat: Fixed the issue where some fields in the chat configuration could
not be displayed #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-13 10:26:26 +08:00
421657f64b Feat: allows setting multiple types of default models in service config (#9404)
### What problem does this PR solve?

Allows setting multiple types of default models in the service config.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-13 09:46:05 +08:00
7ee5e0d152 Fix KeyError in session listing endpoint when accessing conversation reference (#9419)
- Add type and boundary checks for conv["reference"] access
- Prevent KeyError: 0 when reference list is empty or malformed
- Ensure reference is list type before indexing
- Handle cases where reference items are None or missing chunks
- Maintains backward compatibility with existing data structures

This resolves crashes in /api/v1/agents/<agent_id>/sessions endpoint
when conversation reference data is not properly structured.

### What problem does this PR solve?

This PR fixes a critical `KeyError: 0` that occurs in the
`/api/v1/agents/<agent_id>/sessions` endpoint when the system attempts
to access conversation reference data that is not properly structured.

**Background Context:**
The `list_agent_session` method in `api/apps/sdk/session.py` assumes
that `conv["reference"]` is always a properly indexed list with valid
dictionary structures. However, in real-world scenarios, this data can
be:
- Not a list type (could be None, string, or other types)
- An empty list when `chunk_num` tries to access index 0
- Contains None values or malformed dictionary structures
- Missing expected "chunks" keys in reference items

**Impact Before Fix:**
When malformed reference data is encountered, the API crashes with:
```json
{
    "code": 100,
    "data": null,
    "message": "KeyError(0)"
}
```
**Solution:**
Added comprehensive safety checks including type validation, boundary
checking, null safety, and structure validation to ensure the API
gracefully handles all reference data formats while maintaining backward
compatibility.
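
A sketch of the defensive access pattern described above, operating on a plain `conv` dict rather than the actual `list_agent_session` code:

```python
def safe_reference_chunks(conv: dict, index: int) -> list:
    """Return the chunks for one reference entry, or [] if the reference
    data is missing, of the wrong type, too short, or malformed."""
    reference = conv.get("reference")
    if not isinstance(reference, list) or index >= len(reference):
        return []
    item = reference[index]
    if not isinstance(item, dict):
        return []
    chunks = item.get("chunks")
    return chunks if isinstance(chunks, list) else []
```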

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-08-13 09:23:52 +08:00
22915223d4 Fix: citation issue. (#9424)
### What problem does this PR solve?

#8474

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-12 18:53:34 +08:00
d7b4e84cda Refa: Update LLM stream response type to Generator (#9420)
### What problem does this PR solve?

Change return type of _generate_streamly from str to Generator[str,
None, None] to properly type hint streaming responses.
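
The change amounts to tightening a return annotation; a minimal sketch with a stubbed body standing in for the real streaming call:

```python
from typing import Generator


class ChatModelBase:
    def _generate_streamly(self, prompt: str) -> Generator[str, None, None]:
        """Yield the response one fragment at a time (streaming)."""
        # Stub for illustration; the real method streams tokens from the LLM.
        for fragment in ("Hello", ", ", "world"):
            yield fragment
```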

### Type of change

- [x] Refactoring
2025-08-12 18:05:52 +08:00
e845d5f9f8 Fix: ValueError when an optional file has no value (#9414)
### What problem does this PR solve?

When the Begin component has an optional file input with no value, it raises an error.

### Type of change

- [x] Bug Fix

Co-authored-by: Popmio <zhengyihao036@gamil.com>
2025-08-12 17:39:03 +08:00
3d18284dd6 Feat: Added meta data to the chat configuration page #8531 (#9417)
### What problem does this PR solve?

Feat: Added meta data to the chat configuration page #8531

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-12 16:19:23 +08:00
96783aa82c Fix: remove doc error. (#9413)
### What problem does this PR solve?

Close #9407

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-12 15:55:04 +08:00
a0c2da1219 Fix: Patch LiteLLM (#9416)
### What problem does this PR solve?

Patch LiteLLM refactor. #9408

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-12 15:54:30 +08:00
79e2edc835 Fix "File contains no valid workbook part" (#9360)
### What problem does this PR solve?

fix "File contains no valid workbook part"

stacktrace:
```
Traceback (most recent call last):
  File "/ragflow/deepdoc/parser/excel_parser.py", line 54, in _load_excel_to_workbook
    return RAGFlowExcelParser._dataframe_to_workbook(df)
  File "/ragflow/deepdoc/parser/excel_parser.py", line 69, in _dataframe_to_workbook
    ws.cell(row=row_num, column=col_num, value=value)
  File "/ragflow/.venv/lib/python3.10/site-packages/openpyxl/worksheet/worksheet.py", line 246, in cell
    cell.value = value
  File "/ragflow/.venv/lib/python3.10/site-packages/openpyxl/cell/cell.py", line 218, in value
    self._bind_value(value)
  File "/ragflow/.venv/lib/python3.10/site-packages/openpyxl/cell/cell.py", line 197, in _bind_value
    value = self.check_string(value)
  File "/ragflow/.venv/lib/python3.10/site-packages/openpyxl/cell/cell.py", line 165, in check_string
    raise IllegalCharacterError(f"{value} cannot be used in worksheets.")
```

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-08-12 14:58:36 +08:00
57b87fa9d9 Fix:TypeError: OllamaCV.chat() got an unexpected keyword argument 'stop' (#9363)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/9351
Support filtering arguments before invoking.
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-12 14:55:27 +08:00
153e430b00 Feat: add meta data filter. (#9405)
### What problem does this PR solve?

#8531 
#7417 
#6761 
#6573
#6477

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-12 14:12:56 +08:00
3ccaa06031 Fix: Before executing the SQL, remove tags in the format [ID: number] to avoid execution errors. (#9326)
### What problem does this PR solve?

Before executing the SQL, remove tags in the format [ID: number] to
avoid execution errors.
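
A short sketch of the tag stripping; the regex is an assumption based on the `[ID: number]` format named above:

```python
import re

ID_TAG = re.compile(r"\[ID:\s*\d+\]")


def strip_id_tags(sql: str) -> str:
    """Remove [ID: n] citation tags that would break SQL execution."""
    return ID_TAG.sub("", sql).strip()


print(strip_id_tags("SELECT name FROM users WHERE id = 1; [ID: 3]"))
# -> SELECT name FROM users WHERE id = 1;
```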

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: wangyazhou <wangyazhou@sdibd.cn>
2025-08-12 12:42:28 +08:00
569ab011c4 Add fallback to use 'calamine' parse engine in excel_parser.py (#9374)
### What problem does this PR solve?

Add a fallback to the `calamine` engine when a parse error is raised by
the default `openpyxl` / `xlrd` engine. For example, the following error
can be fixed:
```
Traceback (most recent call last):
  File "/ragflow/deepdoc/parser/excel_parser.py", line 53, in _load_excel_to_workbook
    df = pd.read_excel(file_like_object)
  File "/ragflow/.venv/lib/python3.10/site-packages/pandas/io/excel/_base.py", line 495, in read_excel
    io = ExcelFile(
  File "/ragflow/.venv/lib/python3.10/site-packages/pandas/io/excel/_base.py", line 1567, in __init__
    self._reader = self._engines[engine](
  File "/ragflow/.venv/lib/python3.10/site-packages/pandas/io/excel/_xlrd.py", line 46, in __init__
    super().__init__(
  File "/ragflow/.venv/lib/python3.10/site-packages/pandas/io/excel/_base.py", line 573, in __init__
    self.book = self.load_workbook(self.handles.handle, engine_kwargs)
  File "/ragflow/.venv/lib/python3.10/site-packages/pandas/io/excel/_xlrd.py", line 63, in load_workbook
    return open_workbook(file_contents=data, **engine_kwargs)
  File "/ragflow/.venv/lib/python3.10/site-packages/xlrd/__init__.py", line 172, in open_workbook
    bk = open_workbook_xls(
  File "/ragflow/.venv/lib/python3.10/site-packages/xlrd/book.py", line 68, in open_workbook_xls
    bk.biff2_8_load(
  File "/ragflow/.venv/lib/python3.10/site-packages/xlrd/book.py", line 641, in biff2_8_load
    cd.locate_named_stream(UNICODE_LITERAL(qname))
  File "/ragflow/.venv/lib/python3.10/site-packages/xlrd/compdoc.py", line 398, in locate_named_stream
    result = self._locate_stream(
  File "/ragflow/.venv/lib/python3.10/site-packages/xlrd/compdoc.py", line 429, in _locate_stream
    raise CompDocError("%s corruption: seen[%d] == %d" % (qname, s, self.seen[s]))
xlrd.compdoc.CompDocError: Workbook corruption: seen[2] == 4
```
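
A sketch of the fallback, assuming pandas 2.2+ with the `python-calamine` package installed; the real `_load_excel_to_workbook` then converts the DataFrame into an openpyxl workbook:

```python
import pandas as pd


def read_excel_with_fallback(file_like) -> pd.DataFrame:
    """Try pandas' default Excel engine first, then retry with calamine."""
    try:
        return pd.read_excel(file_like)
    except Exception:
        file_like.seek(0)  # rewind the stream before re-parsing
        return pd.read_excel(file_like, engine="calamine")
```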

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-12 12:41:33 +08:00
96b1538b3e Fix:HTTP request component failed to retrieve the corresponding value (#9399)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/9385
Based on my understanding, checking for an empty string is fine.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-12 12:27:22 +08:00
735570486f feat(next-search): Added AI summary functionality #3221 (#9402)
### What problem does this PR solve?

feat(next-search): Added AI summary functionality #3221

- Added the LlmSettingFieldItems component for AI summary settings
- Updated the SearchSetting component to integrate AI summary
functionality
- Added the updateSearch hook and related service methods
- Modified the ISearchAppDetailProps interface to add the llm_setting
field

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-08-12 12:27:00 +08:00
da68f541b6 Feat: add full list of supported AWS Bedrock regions (#9395)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-12 11:01:16 +08:00
83771e500c Refa: migrate chat models to LiteLLM (#9394)
### What problem does this PR solve?

All models pass the mock response tests, which means that if a model can
return the correct response, everything should work as expected.
However, not all models have been fully tested in a real environment
with a real API_KEY. I suggest actively monitoring the refactored models
over the coming period to ensure they work correctly and fixing them
step by step, or waiting to merge until most have been tested in a
practical environment.

### Type of change

- [x] Refactoring
2025-08-12 10:59:20 +08:00
a6d2119498 Refa: list canvas (#9341)
### What problem does this PR solve?

Refactor list canvas.

### Type of change

- [x] Refactoring
2025-08-12 10:58:06 +08:00
57b9f8cf52 Fix: Update test assertions and simplify test cases (#9400)
### What problem does this PR solve?

- Fix error message assertion in test_update_chunk.py to match new
ownership validation
- Simplify dataset listing test cases by removing lambda assertions for
sorting
- Fix actions:
https://github.com/infiniflow/ragflow/actions/runs/16885465524/job/47831942553

### Type of change

- [x] Fix test cases
2025-08-12 10:57:30 +08:00
5c3577c4c9 Python SDK: add meta_fields to Document class (#9387)
### What problem does this PR solve?

The Python Document class was missing "meta_fields"; e.g., when querying,
document instances came back without meta_fields.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-12 10:16:12 +08:00
76118000c1 Feat: Allow chat to use meta data #3221 (#9393)
### What problem does this PR solve?

Feat:  Allow chat to use meta data #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-12 10:15:10 +08:00
9433f64fe2 Feat: added functionality to choose all datasets if no id is provided (#9184)
### What problem does this PR solve?

Using the MCP server in n8n sometimes (with smaller models) results in
errors because the LLM misses a character, or adds one, in the list of
dataset_ids provided. It first asks for the list of datasets, and with a
larger list it makes an error when recalling the list completely. Adding
the ability to simply search through all available datasets solves this
and makes data retrieval more stable. The functionality to call specific
datasets by ID is unchanged; the dataset_ids are simply no longer
required (only the "question" is). You can provide, as before, a list of
datasets, an empty list, or no list at all.
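
A minimal sketch of the fallback described above, with a hypothetical `list_all_dataset_ids` helper standing in for the MCP server's actual lookup:

```python
from typing import Callable, Optional


def resolve_dataset_ids(
    dataset_ids: Optional[list[str]],
    list_all_dataset_ids: Callable[[], list[str]],
) -> list[str]:
    """Search every available dataset when none (or an empty list) is given."""
    if not dataset_ids:               # None or []
        return list_all_dataset_ids()
    return dataset_ids
```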

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
<img width="1897" height="880" alt="mcp error dataset id"
src="https://github.com/user-attachments/assets/71076d24-f875-4663-a69a-60839fc7a545"
/>
2025-08-11 17:20:35 +08:00
d7c9611d45 docs(sandbox): update /etc/hosts entry to include required services (#9144)
Fixes an issue where running the sandbox (code component) fails due to
unresolved hostnames. Added missing service names (es01, infinity,
mysql, minio, redis) to 127.0.0.1 in the /etc/hosts example.

Reference: https://github.com/infiniflow/ragflow/issues/8226

## What this PR does

Updates the sandbox quickstart documentation to fix a known issue where
the sandbox fails to resolve required service hostnames.

## Why

Following the original instruction leads to a `Failed to resolve 'none'`
error, as discussed in issue #8226. Adding the missing service names to
`127.0.0.1` resolves the problem.

## Related issue

https://github.com/infiniflow/ragflow/issues/8226

## Note

It might be better to add `127.0.0.1 es01 infinity mysql minio redis` to
docs/quickstart.mdx, but since no issues appeared at the time without
adding this line—and the problem occurred while working with the code
component—I added it here.

### Type of change

- [X] Documentation Update
2025-08-11 17:18:56 +08:00
79399f7f25 Support the case of one cell split by multiple columns. (#9225)
### What problem does this PR solve?
Support the case of one cell split across multiple columns. Besides, the
code remains compatible with the common cell case. This can fix #8606.
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

I provide a case of one cell split by multiple columns:

[test.xlsx](https://github.com/user-attachments/files/21578693/test.xlsx)

The chunk res:
<img width="236" height="57" alt="2025-06-17 16-04-07 的屏幕截图"
src="https://github.com/user-attachments/assets/b0a499ac-349d-4c3d-8c6e-0931c8fc26de"
/>
2025-08-11 17:17:56 +08:00
23522f1ea8 Fix: handle missing dataset_ids when creating chat assistant (#9324)
- Root cause: accessing req.get("dataset_ids") returns None when the key
is absent, causing KeyError.
- Fix: use req.get("dataset_ids", []) to default to empty list.
2025-08-11 17:17:20 +08:00
46dc3f1c48 Fix: Update test assertions and add GraphRAG config in dataset tests (#9386)
### What problem does this PR solve?

- Modify error message assertion in chunk update test to check for
document ownership
- Add GraphRAG configuration with `use_graphrag: False` in dataset
update tests
- Fix actions:
https://github.com/infiniflow/ragflow/actions/runs/16863637898/job/47767511582
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-11 17:15:48 +08:00
c9b156fa6d Fix: Remove default dataset_ids from Chat class initialization (#9381)
### What problem does this PR solve?

- The default dataset_ids "kb1" was removed from the Chat class. 
- The HTTP API response does not include the dataset_ids field.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-11 17:15:30 +08:00
83939b1a63 Feat: add full list of supported AWS Bedrock regions (#9378)
### What problem does this PR solve?

Add full list of supported AWS Bedrock regions.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 17:15:07 +08:00
7f08ba47d7 Fix "no tc element at grid_offset" (#9375)
### What problem does this PR solve?

fix "no `tc` element at grid_offset", just log warning and ignore.
stacktrace:
```
Traceback (most recent call last):
  File "/ragflow/rag/svr/task_executor.py", line 620, in handle_task
    await do_handle_task(task)
  File "/ragflow/rag/svr/task_executor.py", line 553, in do_handle_task
    chunks = await build_chunks(task, progress_callback)
  File "/ragflow/rag/svr/task_executor.py", line 257, in build_chunks
    cks = await trio.to_thread.run_sync(lambda: chunker.chunk(task["name"], binary=binary, from_page=task["from_page"],
  File "/ragflow/.venv/lib/python3.10/site-packages/trio/_threads.py", line 447, in to_thread_run_sync
    return msg_from_thread.unwrap()
  File "/ragflow/.venv/lib/python3.10/site-packages/outcome/_impl.py", line 213, in unwrap
    raise captured_error
  File "/ragflow/.venv/lib/python3.10/site-packages/trio/_threads.py", line 373, in do_release_then_return_result
    return result.unwrap()
  File "/ragflow/.venv/lib/python3.10/site-packages/outcome/_impl.py", line 213, in unwrap
    raise captured_error
  File "/ragflow/.venv/lib/python3.10/site-packages/trio/_threads.py", line 392, in worker_fn
    ret = context.run(sync_fn, *args)
  File "/ragflow/rag/svr/task_executor.py", line 257, in <lambda>
    cks = await trio.to_thread.run_sync(lambda: chunker.chunk(task["name"], binary=binary, from_page=task["from_page"],
  File "/ragflow/rag/app/naive.py", line 384, in chunk
    sections, tables = Docx()(filename, binary)
  File "/ragflow/rag/app/naive.py", line 230, in __call__
    while i < len(r.cells):
  File "/ragflow/.venv/lib/python3.10/site-packages/docx/table.py", line 438, in cells
    return tuple(_iter_row_cells())
  File "/ragflow/.venv/lib/python3.10/site-packages/docx/table.py", line 436, in _iter_row_cells
    yield from iter_tc_cells(tc)
  File "/ragflow/.venv/lib/python3.10/site-packages/docx/table.py", line 424, in iter_tc_cells
    yield from iter_tc_cells(tc._tc_above)  # pyright: ignore[reportPrivateUsage]
  File "/ragflow/.venv/lib/python3.10/site-packages/docx/oxml/table.py", line 741, in _tc_above
    return self._tr_above.tc_at_grid_offset(self.grid_offset)
  File "/ragflow/.venv/lib/python3.10/site-packages/docx/oxml/table.py", line 98, in tc_at_grid_offset
    raise ValueError(f"no `tc` element at grid_offset={grid_offset}")
ValueError: no `tc` element at grid_offset=10
```
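
A sketch of the "log a warning and ignore" handling, wrapping the `row.cells` access that raises in the traceback; the surrounding Docx parsing is simplified to an assumption:

```python
import logging

from docx import Document  # python-docx


def iter_table_rows_safely(path: str):
    """Yield the cell texts of each table row, skipping rows whose cells
    cannot be resolved (ValueError: no `tc` element at grid_offset=...)."""
    doc = Document(path)
    for table in doc.tables:
        for row in table.rows:
            try:
                cells = row.cells
            except ValueError as e:
                logging.warning("Skipping malformed table row: %s", e)
                continue
            yield [cell.text for cell in cells]
```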

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-11 17:13:10 +08:00
ce3dd019c3 Fix broken data stream when writing image file (#9354)
### What problem does this PR solve?

fix "broken data stream when writing image file", just log warning and
ignore

Close #8379 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-11 17:07:49 +08:00
476c56868d Agent plans tasks by referring to its own prompt. (#9315)
### What problem does this PR solve?

Fixes the issue in the analyze_task execution flow where the Lead Agent
was not utilizing its own sys_prompt during task analysis, resulting in
incorrect or incomplete task planning.
https://github.com/infiniflow/ragflow/issues/9294
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 17:05:06 +08:00
b9c4954c2f Fix: Replace StrEnum with strenum in code_exec.py (#9376)
### What problem does this PR solve?

- The enum import was changed from Python's built-in StrEnum to the
strenum package.
- Fix error `Warning: Failed to import module code_exec: cannot import
name 'StrEnum' from 'enum' (/usr/lib/python3.10/enum.py)`
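
The import swap in question; `Language` below is an illustrative enum, not the exact one in code_exec.py:

```python
# enum.StrEnum only exists on Python 3.11+, so on 3.10 the `strenum`
# backport package is used instead:
#   from enum import StrEnum        # Python 3.11+
from strenum import StrEnum          # works on Python 3.10


class Language(StrEnum):
    PYTHON = "python"
    JAVASCRIPT = "javascript"
```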

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-11 15:32:04 +08:00
a060672b31 Feat: Run eslint when the project is running to standardize everyone's code #9377 (#9379)
### What problem does this PR solve?

Feat: Run eslint when the project is running to standardize everyone's
code #9377

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 15:31:38 +08:00
f022504ef9 Support Russian in UI: update config.ts (#9361)
add ru

### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
2025-08-11 15:30:35 +08:00
1a78b8b295 Support Russian in UI (#9362)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 14:06:18 +08:00
017dd85ccf Feat: Modify the agent list return field name #3221 (#9373)
### What problem does this PR solve?

Feat: Modify the agent list return field name #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 14:03:52 +08:00
4c7b2ef46e Feat: New search page components and features (#9344)
### What problem does this PR solve?

Feat: New search page components and features #3221

- Added search homepage, search settings, and ongoing search components
- Implemented features such as search app list, creating search apps,
and deleting search apps
- Optimized the multi-select component, adding disabled state and suffix
display
- Adjusted navigation hooks to support search page navigation

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 10:34:22 +08:00
597d88bf9a Doc: updated supported model name (#9343)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-08-11 10:05:39 +08:00
9b026fc5b6 Refa: code format. (#9342)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2025-08-08 18:44:42 +08:00
90eb5fd31b Fix: canvas sharing bug. (#9339)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 18:31:51 +08:00
b9eeb8e64f Docs: Update version references to v0.20.1 in READMEs and docs (#9335)
### What problem does this PR solve?

- Update version tags in README files (including translations) from
v0.20.0 to v0.20.1
- Modify Docker image references and documentation to reflect new
version
- Update version badges and image descriptions
- Maintain consistency across all language variants of README files

### Type of change

- [x] Documentation Update
2025-08-08 18:17:25 +08:00
4c99988c3e Revert: revert token_required decorator of agent_bot completions and inputs (#9332)
### What problem does this PR solve?

Revert token_required decorator of agent_bot completions and inputs.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2025-08-08 17:45:53 +08:00
4f2e9ef248 feat(agent): Adds prologue functionality (#9336)
### What problem does this PR solve?

feat(agent): Adds prologue functionality #3221

- Add a prologue field to the IInputs type
- Initialize the prologue state in the chat container
- Use useEffect to monitor prologue changes and add prologue responses
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 17:45:37 +08:00
4a3871090d Docs: v0.20.1 release notes (#9331)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2025-08-08 17:35:12 +08:00
7ce64cb265 Update the sql assistant workflow (#9329)
### What problem does this PR solve?
Update the sql assistant workflow
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 17:06:37 +08:00
d102a6bb71 New workflow templates: choose your knowledge base (#9325)
### What problem does this PR solve?

New Agent templates: you can choose your knowledge base; both workflow
and Agent versions are provided.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-08 17:06:16 +08:00
a02ca16260 Fix: add prologue to api. (#9322)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 17:05:55 +08:00
cd3bb0ed7c Feat: Set the description of the agent, which can be null #3221 (#9327)
### What problem does this PR solve?

Feat: Set the description of the agent, which can be null #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-08 16:44:08 +08:00
86fb710e52 Feat: Add xai logo #1853 (#9321)
### What problem does this PR solve?

Feat: Add xai logo #1853

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-08 14:13:19 +08:00
7713e14d6a Update chat_model.py (#9318)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/9317
Based on
https://discuss.ai.google.dev/t/valueerror-invalid-operation-the-response-text-quick-accessor-requires-the-response-to-contain-a-valid-part-but-none-were-returned/42866
this can be handled by a retry.
### Type of change

- [x] Refactoring
2025-08-08 14:13:07 +08:00
392f5f4ce9 fix model type (#9250)
### What problem does this PR solve?
The model type was incorrect.

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 13:43:53 +08:00
79481becea Feat: supports GPT-5 (#9320)
### What problem does this PR solve?

Supports GPT-5.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-08 11:54:40 +08:00
58a64000ea Feat: Render agent setting dialog #3221 (#9312)
### What problem does this PR solve?

Feat: Render agent setting dialog #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-08 11:00:55 +08:00
1bd64dafcb Fix: update broken agent completion due to v0.20.0 changes (#9309)
### What problem does this PR solve?

Update broken agent completion due to v0.20.0 changes. #9199

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 10:00:16 +08:00
07354f4a1a Add files via upload: Contribute a new workflow template: SQL Assistant (#9311)
### What problem does this PR solve?

Contribute a new workflow template: SQL Assistant

### Type of change

- [x] Other (please describe): new workflow template
2025-08-07 18:06:49 +08:00
d628234942 Feat: Restore the button's background color #3221 (#9307)
### What problem does this PR solve?

Feat: Restore the button's background color #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-07 17:37:53 +08:00
5749aa30b0 Fix: model type error. (#9308)
### What problem does this PR solve?

#9240

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-07 16:14:47 +08:00
a2e1f5618d Fix: bytes style image issue. (#9304)
### What problem does this PR solve?

#9302

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-07 15:20:01 +08:00
dc48c3863d Feat: Replace color variables according to design draft #3221 (#9305)
### What problem does this PR solve?

Feat: Replace color variables according to design draft #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-07 15:19:45 +08:00
23062cb27a Feat: Configure colors according to the design draft#3221 (#9301)
### What problem does this PR solve?

Feat: Configure colors according to the design draft#3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-07 13:59:33 +08:00
63c2f5b821 Fix: virtual file cannot be displayed in KB (#9282)
### What problem does this PR solve?

Fix virtual file cannot be displayed in KB. #9265

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-07 11:08:03 +08:00
0a0bfc02a0 Refactor:naive_merge_with_images close useless images (#9296)
### What problem does this PR solve?

naive_merge_with_images closes useless images.

### Type of change

- [x] Refactoring
2025-08-07 11:07:29 +08:00
f0c34d4454 Feat: Render chat page #3221 (#9298)
### What problem does this PR solve?

Feat: Render chat page #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-07 11:07:15 +08:00
7c719f8365 fix: Optimized popups and the search page (#9297)
### What problem does this PR solve?

fix: Optimized popups and the search page #3221 
- Added a new PortalModal component
- Refactored the Modal component, adding show and hide methods to
support popups
- Updated the search page, adding a new query function and optimizing
the search card style
- Localized, added search-related translations

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-07 11:07:04 +08:00
4fc9e42e74 fix: add missing env vars and default values of service_conf.yaml (#9289)
### What problem does this PR solve?

Add missing env var `MYSQL_MAX_PACKET` to service_conf.yaml.template,
and add default values to opendal config to fix npe.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-07 10:41:05 +08:00
35539092d0 Add **kwargs to model base class constructors (#9252)
Updated constructors for base and derived classes in chat, embedding,
rerank, sequence2txt, and tts models to accept **kwargs. This change
improves extensibility and allows passing additional parameters without
breaking existing interfaces.

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: IT: Sop.Son <sop.son@feavn.local>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-07 09:45:37 +08:00
581a54fbbb Feat: Search conversation by name #3221 (#9283)
### What problem does this PR solve?

Feat: Search conversation by name #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-07 09:41:03 +08:00
9ca86d801e Refa: add provider info while adding model. (#9273)
### What problem does this PR solve?
#9248

### Type of change

- [x] Refactoring
2025-08-07 09:40:42 +08:00
fb0426419e Feat: Create a conversation #3221 (#9269)
### What problem does this PR solve?

Feat: Create a conversation #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-06 11:42:40 +08:00
1409bb30df Refactor:Improve the logic so that it does not decode base 64 for the test image each time (#9264)
### What problem does this PR solve?

Improve the logic so that it does not decode base64 for the test image
each time.
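
A minimal sketch of caching the decoded bytes so the base64 string is decoded only once; the constant and its content are placeholders, not the real test image:

```python
import base64
from functools import lru_cache

# Placeholder payload; the real code embeds an actual test image.
TEST_IMAGE_B64 = base64.b64encode(b"\x89PNG placeholder bytes").decode("ascii")


@lru_cache(maxsize=1)
def test_image_bytes() -> bytes:
    """Decode the base64 test image once and reuse the cached result."""
    return base64.b64decode(TEST_IMAGE_B64)
```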

### Type of change

- [x] Refactoring
- [x] Performance Improvement

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-06 11:42:25 +08:00
7efeaf6548 Fix: remove an img close which cannot operate (#9267)
### What problem does this PR solve?


https://github.com/infiniflow/ragflow/issues/9149#issuecomment-3157129587

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-06 10:59:49 +08:00
46a35f44da Feat: add Claude Opus 4.1 (#9268)
### What problem does this PR solve?

Add Claude Opus 4.1.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Refactoring
2025-08-06 10:57:03 +08:00
a7eba61067 FIX: If chunk["content_with_weight"] contains one or more unpaired surrogate characters (such as incomplete emoji or other special characters), then calling .encode("utf-8") directly will raise a UnicodeEncodeError. (#9246)
FIX: If chunk["content_with_weight"] contains one or more unpaired
surrogate characters (such as incomplete emoji or other special
characters), then calling .encode("utf-8") directly will raise a
UnicodeEncodeError.
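
One common way to guard the `.encode("utf-8")` call is to drop unpaired surrogates before encoding; the snippet is an illustrative assumption, not the exact patch:

```python
def safe_utf8(text: str) -> bytes:
    """Encode to UTF-8, discarding unpaired surrogate code points that
    would otherwise raise UnicodeEncodeError."""
    return text.encode("utf-8", errors="ignore")


# "\ud83d" is half of an emoji surrogate pair; strict encoding would fail.
print(safe_utf8("ok \ud83d"))  # b'ok '
```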

### What problem does this PR solve?
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-06 10:36:50 +08:00
465f7e036a Feat: advanced list dialogs (#9256)
### What problem does this PR solve?

Advanced list dialogs

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-06 10:33:52 +08:00
7a27d5e463 Feat: Added history management and paste handling features #3221 (#9266)
### What problem does this PR solve?

feat(agent): Added history management and paste handling features #3221

- Added a PasteHandlerPlugin to handle paste operations, optimizing the
multi-line text pasting experience
- Implemented the AgentHistoryManager class to manage history,
supporting undo and redo functionality
- Integrates history management functionality into the Agent component

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-06 10:29:44 +08:00
6a0d6d2565 Added French language support (#9173)
### What problem does this PR solve?
Implemented French UI translation

### Type of change
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: ramin cedric <>
Co-authored-by: Liu An <asiro@qq.com>
2025-08-06 10:22:32 +08:00
f359f2c44e Docs: fixed errors (#9259)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-08-05 21:29:46 +08:00
9295c23170 Update readme (#9260)
### What problem does this PR solve?

Update readme

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-08-05 20:27:43 +08:00
023b090fa4 Fix: template error. (#9258)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 19:52:59 +08:00
2124329e95 Fix: local variable issue. (#9255)
### What problem does this PR solve?

#9227

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 19:24:34 +08:00
ed9757b0c7 Feat: Limit the appearance of loops in operators in the agent canvas #3221 (#9253)
### What problem does this PR solve?
Feat: Limit the appearance of loops in operators in the agent canvas
#3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-05 19:21:24 +08:00
f235a38225 Fix: resolve the prompt problem of Customer Support Workflow (#9251)
### What problem does this PR solve?


### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 18:19:17 +08:00
550e65bb22 Fix: PlainParser using fix in presentation (#9239)
### What problem does this PR solve?

A tiny fix to the usage of `deepdoc.pdf_parser.PlainParser` in
`rag.app.presentation.chunk`; I referred to other usages of this class.
The fix is so tiny that an issue seemed unnecessary.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 17:48:18 +08:00
a264c629b5 Feat: Render dialog list #3221 (#9249)
### What problem does this PR solve?

Feat: Render dialog list #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-05 17:47:44 +08:00
e6bad45c6d Fix: update broken agent OpenAI-Compatible completion due to v0.20.0 changes (#9241)
### What problem does this PR solve?

Update broken agent OpenAI-Compatible completion due to v0.20.0. #9199 

Usage example:

**Referencing the input is important; otherwise, it will result in empty
output.**

<img width="1273" height="711" alt="Image"
src="https://github.com/user-attachments/assets/30740be8-f4d6-400d-9fda-d2616f89063f"
/>

<img width="622" height="247" alt="Image"
src="https://github.com/user-attachments/assets/0a2ca57a-9600-4cec-9362-0cafd0ab3aee"
/>

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 17:47:25 +08:00
0a303d9ae1 Refactor:Improve the chat stream logic for NvidiaCV (#9242)
### What problem does this PR solve?

Improve the chat stream logic for NvidiaCV

### Type of change


- [x] Refactoring
2025-08-05 17:47:00 +08:00
98a83543e8 Fix: fix mismatch of assitant message and its reference (#9233)
### What problem does this PR solve?

#9232

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

1. When creating a new session, initialize an empty reference in both the app API and the SDK API.
2. Fix the logic for retrieving references for historical messages: the number of dialogue messages and reference entries may differ, but the references should match the number of assistant messages (sketched below).
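
A minimal sketch of that alignment; the message and reference shapes are assumptions for illustration, not RAGFlow's actual data models:

```python
def align_references(messages: list[dict], references: list[dict]) -> list[dict | None]:
    """Pair each assistant message with a reference by position.

    `references` is assumed to hold one entry per assistant message;
    user messages carry no reference.
    """
    aligned = []
    assistant_idx = 0
    for msg in messages:
        if msg.get("role") == "assistant":
            ref = references[assistant_idx] if assistant_idx < len(references) else None
            aligned.append(ref)
            assistant_idx += 1
        else:
            aligned.append(None)
    return aligned
```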

Co-authored-by: Li Ye <liye@unittec.com>
2025-08-05 14:32:39 +08:00
afd3a508e5 Fix: Set the maximum number of rounds for the agent to 1 #3221 (#9238)
### What problem does this PR solve?

Fix: Fixed the issue where numbers could not be displayed in the numeric
input box under white theme #3221
Fix: Set the maximum number of rounds for the agent to 1 #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-05 14:32:06 +08:00
1deb0a2d42 Fix:local variable 'response' referenced before assignment (#9230)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/9227

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-05 11:00:06 +08:00
dd055deee9 Docs: Updated tips for max rounds (#9235)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-08-05 10:59:37 +08:00
a249803961 Refa: ensure Redis stream queue could be created properly (#9223)
### What problem does this PR solve?

Ensure Redis queue could be created properly.

### Type of change

- [x] Refactoring
2025-08-05 09:54:31 +08:00
6ec3f18e22 Fix: self-deployed LLM error (#9217)
### What problem does this PR solve?

Close #9197
Close #9145

### Type of change

- [x] Refactoring
- [x] Bug fixing.
2025-08-05 09:49:47 +08:00
7724acbadb Perf Impr: Decouple reasoning and extraction for faster, more precise logic (#9191)
### What problem does this PR solve?

This commit refactors the core prompts to decouple the high-level
reasoning from the low-level information extraction. By making
REASON_PROMPT a dedicated strategist that only generates search queries
and re-tasking RELEVANT_EXTRACTION_PROMPT to be a specialized tool for
single-fact extraction, we eliminate redundant information
summarization. This clear separation of concerns makes the overall
reasoning process significantly faster and more precise, as each
component now has a single, well-defined responsibility.

### Type of change

- [x] Performance Improvement
2025-08-05 09:36:14 +08:00
a36ba95c1c Fix: Add prompt text to the form in the MCP module (#9222)
### What problem does this PR solve?

Fix: Add prompt text to the form in the MCP module #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 09:26:59 +08:00
30ccc4a66c Fix: correct single base64 image handling in image prompt (#9220)
### What problem does this PR solve?

Correct single base64 image handling in image prompt.


![img_v3_02or_ec4757c2-a9d4-4774-9a76-f7c6be633ebg](https://github.com/user-attachments/assets/872a86bf-e2a8-48d1-9b71-2a0c7a35ba9e)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 09:26:42 +08:00
dda5a0080a Fix: Fixed the issue where the agent's chat box could not automatically scroll to the bottom #3221 (#9219)
### What problem does this PR solve?

Fix: Fixed the issue where the agent's chat box could not automatically
scroll to the bottom #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 09:26:15 +08:00
9db999ccae v0.20.0 release notes (#9218)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-08-04 18:07:53 +08:00
5f5c6a7990 Fix: Fixed the loss of Await Response function on the share page and other style issues #3221 (#9216)
### What problem does this PR solve?

Fix: Fixed the loss of Await Response function on the share page and
other style issues #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 18:06:56 +08:00
53618d13bb Fix: Fixed the issue where the prompt word edit box had no scroll bar #3221 (#9215)
### What problem does this PR solve?
Fix: Fixed the issue where the prompt word edit box had no scroll bar
#3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 18:06:19 +08:00
60d652d2e1 Feat: list documents supports range filtering (#9214)
### What problem does this PR solve?

list_document supports range filtering.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-04 16:35:35 +08:00
448bdda73d Fix: Web Server Accepts Invalid Data That Could Cause Problems in uv.lock (#8966)
**Context and Purpose:**

This PR automatically remediates a security vulnerability:
- **Description:** h11: h11 accepts some malformed Chunked-Encoding
bodies
- **Rule ID:** CVE-2025-43859
- **Severity:** CRITICAL
- **File:** uv.lock
- **Lines Affected:** None - None

This change is necessary to protect the application from potential
security risks associated with this vulnerability.

**Solution Implemented:**

The automated remediation process has applied the necessary changes to
the affected code in `uv.lock` to resolve the identified issue.

Please review the changes to ensure they are correct and integrate as
expected.
2025-08-04 16:09:15 +08:00
26b85a10d1 Feat: New Agent startup parameters add knowledge base parameter #9194 (#9210)
### What problem does this PR solve?

Feat: New Agent startup parameters add knowledge base parameter #9194

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-04 16:08:41 +08:00
cae11201ef fix "out of memory" when slide.get_thumbnail() produces a huge image (#9211)
### What problem does this PR solve?

fix "out of memory" if slide.get_thumbnail() to a huge image

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 16:08:24 +08:00
6ad8b54754 fix "TypeError: '<' not supported between instances of 'Emu' and 'Non… (#9209)
…eType'"

### What problem does this PR solve?

fix "TypeError: '<' not supported between instances of 'Emu' and
'NoneType'"

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 16:07:03 +08:00
83aca2d07b fix #8424 NPE in dify_retrieval.py, add log exception (#9212)
### What problem does this PR solve?

fix #8424 NPE in dify_retrieval.py, add log exception

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 15:36:31 +08:00
34f829e1b1 docs(agent): Correct several spelling errors, such as: Ouline -> Outline (#9188)
### What problem does this PR solve?

Correct several spelling errors, such as: Ouline -> Outline

### Type of change

- [x] Documentation Update
2025-08-04 14:53:32 +08:00
52a349349d Fix: migrate deprecated Langfuse API from v2 to v3 (#9204)
### What problem does this PR solve?

Fix:

```bash
'Langfuse' object has no attribute 'trace'
```

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 14:45:43 +08:00
45bf294117 Refactor: support config strong test (#9198)
### What problem does this PR solve?


https://github.com/infiniflow/ragflow/issues/9189#issuecomment-3148920950

### Type of change
- [x] Refactoring

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-04 13:54:18 +08:00
667c5812d0 Fix:Repeated images when parsing markdown files with images (#9196)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/9149

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 13:35:58 +08:00
30e9212db9 Fix: enlarge the timeout limits. (#9201)
### What problem does this PR solve?

#9189

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 13:34:34 +08:00
e9cbf4611d Fix:Error when parsing files using Gemini: **ERROR**: GENERIC_ERROR - Unknown field for GenerationConfig: max_tokens (#9195)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/9177
The reason appears to be that Gemini internally uses a different
parameter name:

```
        max_output_tokens (int):
            Optional. The maximum number of tokens to include in a
            response candidate.

            Note: The default value varies by model, see the
            ``Model.output_token_limit`` attribute of the ``Model``
            returned from the ``getModel`` function.

            This field is a member of `oneof`_ ``_max_output_tokens``.
```
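
A minimal sketch of the parameter mapping, assuming the `google-generativeai` SDK's `GenerationConfig`; the translation helper itself is hypothetical, not the PR's exact code:

```python
import google.generativeai as genai

def to_gemini_config(gen_conf: dict) -> genai.types.GenerationConfig:
    """Translate an OpenAI-style generation config into Gemini's."""
    return genai.types.GenerationConfig(
        temperature=gen_conf.get("temperature"),
        top_p=gen_conf.get("top_p"),
        # Gemini expects `max_output_tokens`, not `max_tokens`.
        max_output_tokens=gen_conf.get("max_tokens"),
    )
```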
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 10:06:09 +08:00
d4b1d163dd Fix: list tags api by using tenant id instead of user id (#9103)
### What problem does this PR solve?

The index name of the tag chunks is generated by the tenant id of the
knowledge base, so it should use the tenant id instead of the current
user id in the listing tags API.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 09:57:00 +08:00
fca94509e8 Feat: Add the migration script and its doc, added backup as default… (#8245)
### What problem does this PR solve?

This PR adds a data backup and migration solution for RAGFlow Docker
Compose deployments. Currently, users lack a standardized way to backup
and restore RAGFlow data volumes (MySQL, MinIO, Redis, Elasticsearch),
which is essential for data safety and environment migration.

**Solution:**
- **Migration Script** (`docker/migration.sh`) - Automates
backup/restore operations for all RAGFlow data volumes
- **Documentation**
(`docs/guides/migration/migrate_from_docker_compose.md`) - Usage guide
and best practices
- **Safety Features** - Container conflict detection and user
confirmations to prevent data loss

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update

Co-authored-by: treedy <treedy2022@icloud.com>
2025-08-04 09:43:43 +08:00
47ba683728 Fix: Share-log bugs (#9172)
### What problem does this PR solve?

Fix Share-log bugs #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-01 21:55:49 +08:00
a16cd4f110 Refa: add result to callback for agent tool use. (#9137)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2025-08-01 21:49:39 +08:00
c5823a33a3 Docs: Add agentic workflow support to latest updates in READMEs (#9171)
### What problem does this PR solve?

Add agentic workflow support to latest updates in READMEs

### Type of change

- [x] Documentation Update
2025-08-01 20:56:14 +08:00
85e07e06a2 Docs: Update demo GIFs in README files (#9170)
### What problem does this PR solve?

Update demo GIFs in README files

### Type of change

- [x] Documentation Update
2025-08-01 20:42:12 +08:00
95534f5cf2 Docs: Update version references to v0.20.0 in READMEs and docs (#9164)
### What problem does this PR solve?

- Update version tags in README files (including translations) from
v0.19.1 to v0.20.0
- Modify Docker image references and documentation to reflect new
version
- Update version badges and image descriptions
- Maintain consistency across all language variants of README files

### Type of change

- [x] Documentation Update
2025-08-01 20:41:44 +08:00
b89ddd07f0 Docs: Updated docs for 0.20.0 (#9169)
### What problem does this PR solve?

v0.20.0 documents.

### Type of change


- [x] Documentation Update
2025-08-01 20:22:27 +08:00
21ddcd3c39 Feat: Adjust the style of the note node #3221 (#9167)
### What problem does this PR solve?

Feat: Adjust the style of the note node #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-01 20:21:19 +08:00
01bf799a59 Fix: Fixed share-log UI issues and log-template bugs (#9166)
### What problem does this PR solve?

Fix: Fixed share-log UI issues and log-template bugs #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-01 18:32:38 +08:00
8fd12b670e Feat: Modify the style of the agent page bright theme #3221 (#9162)
### What problem does this PR solve?

Feat: Modify the style of the agent page bright theme #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-01 17:55:35 +08:00
6591031bad Feat: Add industry-related search keyword generation function (#9156)
### What problem does this PR solve?
Add industry-related search keyword generation function
- When generating search keywords, support for specific industries has
been added
- If the "industry" parameter is provided, industry-specific
restrictions will be added to the prompt
- This change can help users generate more precise search keywords
within specific industries

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-01 15:50:46 +08:00
b26088ab70 Add a series of qwen3 latest SOTA models (#9140)
### What problem does this PR solve?

Add a series of qwen3 latest SOTA models:
qwen3-coder-480b-a35b-instruct, qwen3-30b-a3b-instruct-2507,
qwen3-30b-a3b-thinking-2507, qwen3-235b-a22b-instruct-2507,
qwen3-235b-a22b-thinking-2507
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-08-01 15:19:51 +08:00
2a79f4fc7f Feat: Remove the exception comment field from the agent form #3221 (#9153)
### What problem does this PR solve?
Feat: Remove the exception comment field from the agent form #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-01 15:19:33 +08:00
4b98119c52 Fix: kimi-latest is not authorized (#9151)
### What problem does this PR solve?

Fix kimi-latest is not authorized.

Add kimi-thinking-preview.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-08-01 12:40:58 +08:00
ac53ef6216 Fix: Fix share-chat bugs (#9150)
### What problem does this PR solve?

Fix: Fix share-chat bugs #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-01 12:38:29 +08:00
5ccdb95008 Refactor:Introduce Image Close For GeminiCV (#9147)
### What problem does this PR solve?

Introduce Image Close For GeminiCV

### Type of change

- [x] Refactoring
- [x] Performance Improvement
2025-08-01 12:38:13 +08:00
cdac51f145 Fix: Redis stream lag can be nil (#9139)
### What problem does this PR solve?

```bash
Traceback (most recent call last):
  File "/home/infiniflow/workspace/ragflow/api/db/services/document_service.py", line 635, in update_progress
    info["progress_msg"] = "%d tasks are ahead in the queue..."%get_queue_length(priority)
  File "/home/infiniflow/workspace/ragflow/api/db/services/document_service.py", line 686, in get_queue_length
    return int(group_info.get("lag", 0))
TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'
```
This issue happens very rarely. When a `stream` is first created, its
`lag` value may be nil, which can cause this error. However, once any
message is synced, `lag` becomes `0`.

```bash
> XINFO GROUPS rag_flow_svr_queue
1)  1) "name"
    2) "rag_flow_svr_task_broker"
    3) "consumers"
    4) (integer) 0
    5) "pending"
    6) (integer) 0
    7) "last-delivered-id"
    8) "1753952489937-0"
    9) "entries-read"
   10) (nil)
   11) "lag"
   12) (nil)
```
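
A minimal sketch of the defensive read; the helper below is simplified from the traceback above, not the exact RAGFlow code:

```python
def queue_lag(group_info: dict) -> int:
    """XINFO GROUPS may report `lag` as nil right after a stream is created."""
    lag = group_info.get("lag")
    return int(lag) if lag is not None else 0
```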

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-01 09:39:41 +08:00
0aa160a990 Feat: Delete the operator node and hide the corresponding sheet #3221 (#9142)
### What problem does this PR solve?

Feat: Delete the operator node and hide the corresponding sheet #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-01 09:38:48 +08:00
89c2067a16 Feat: Display operator icons on the agent form #3221 (#9138)
### What problem does this PR solve?

Feat: Display operator icons on the agent form #3221
Fix: Fixed the issue where the form corresponding to the tool operator
icon could not appear after clicking it #3211

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-31 18:53:03 +08:00
26042343d8 Fix: Improve Agent templates functionality and fix some UI style issues (#9129)
### What problem does this PR solve?

Fix: Improve Agent templates functionality and fix some UI style issues
#3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-31 16:09:45 +08:00
3f6177b5e5 Feat: Add thought info to every component. (#9134)
### What problem does this PR solve?

#9082 #6365

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-31 15:13:45 +08:00
0d7a83f05f Feat: Displays the tool operator icon #3221 (#9133)
### What problem does this PR solve?

Feat: Displays the tool operator icon #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-31 15:05:12 +08:00
aeaeb169e4 Feat/support 302ai provider (#8742)
### What problem does this PR solve?

Support 302.AI provider.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-31 14:48:30 +08:00
46ded9d329 add Kimi-K2-Instruct from Tongyi-Qianwen API (#9125)
### What problem does this PR solve?

add Kimi-K2-Instruct from Tongyi-Qianwen API

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-31 14:42:32 +08:00
20b4d88098 Refactor: Improve the try catch logic for XinferenceEmbed (#9128)
### What problem does this PR solve?

Improve the try catch logic for XinferenceEmbed

### Type of change


- [x] Refactoring
2025-07-31 12:14:50 +08:00
0327fd848e Feat: Replace the link of the old version of the agent module #3221 (#9130)
### What problem does this PR solve?
Feat: Automatically save agent canvas content
Feat: Replace the link of the old version of the agent module #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-31 12:14:00 +08:00
e9c5c7bc7c Refa: Update LLMService type hints (#9131)
### What problem does this PR solve?

- Add Generator return type annotation for tts method
- Import typing.Generator for type hints

### Type of change

- [x] Refactoring
2025-07-31 12:13:49 +08:00
2bf4ed6512 Fix: Disable Auto-scroll when user looks back through historical chat-Bug 9062 (#9107)
### What problem does this PR solve?

This change keeps the chat auto-scrolling to the bottom as new messages
arrive, but disables auto-scroll when the user scrolls up away from the
generated response.
Close #9062

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Charles Copley <ccopley@ancera.com>
2025-07-31 12:13:15 +08:00
6a170b2f6e Feat: Make the agent dialog window exposed to the outside world fill in the begin form #3221 (#9124)
### What problem does this PR solve?
Feat: Make the agent dialog window exposed to the outside world fill in
the begin form #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-31 09:34:45 +08:00
d9fe279dde Feat: Redesign and refactor agent module (#9113)
### What problem does this PR solve?

#9082 #6365

<u> **WARNING: it's not compatible with the older version of `Agent`
module, which means that `Agent` from older versions can not work
anymore.**</u>

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-30 19:41:09 +08:00
07e37560fc Feat: Handling abnormal anchor points of agent operators #3221 (#9121)
### What problem does this PR solve?
Feat: Handling abnormal anchor points of agent operators #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-30 19:14:33 +08:00
db6d4307f2 Feat: Add log-detail page,Improve the style of chat boxes (#9119)
### What problem does this PR solve?

Feat: Add log-detail page,Improve the style of chat boxes #3221

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-30 17:38:31 +08:00
840abd5239 Feat: Translate operator names and allow mailboxes to reference operator names #3221 (#9118)
### What problem does this PR solve?

Feat: Translate operator names and allow mailboxes to reference operator
names #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-30 16:16:47 +08:00
ffff5c2e8c Refa: Update base64 test image with new sample data (#9115)
### What problem does this PR solve?

Replace the placeholder test image in base64_image.py with a new sample
image data string.

### Type of change

- [x] Refactoring
2025-07-30 14:34:26 +08:00
391c5586dd Feat: Add wencai operator #3221 (#9116)
### What problem does this PR solve?

Feat: Add wencai operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-30 14:34:06 +08:00
b638d3f773 Image validation of the image2text model without using local paths (#9052)
### What problem does this PR solve?

#9050

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-30 12:57:24 +08:00
30356b0b79 Fix : API /document/thumbnails using wrong method to get ''doc_ids' (#9097)
### What problem does this PR solve?

`doc_ids` is a list, so it should be read with `request.args.getlist("doc_ids")`
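
A minimal Flask sketch of the difference; the route and handler below are illustrative, not the actual endpoint code:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/document/thumbnails")
def thumbnails():
    # request.args.get("doc_ids") returns only the first value;
    # getlist collects every repeated ?doc_ids=... query parameter.
    doc_ids = request.args.getlist("doc_ids")
    return jsonify(doc_ids)
```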

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-30 12:56:59 +08:00
523e61ae18 Feat: Add invoke and github operators #3221 (#9112)
### What problem does this PR solve?

Feat: Add invoke and github operators #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-30 12:48:55 +08:00
021e8b57ae Fix: fix error 429 api rate limit when building knowledge graph for all chat model and Mistral embedding model (#9106)
### What problem does this PR solve?

fix error 429 api rate limit when building knowledge graph for all chat
model and Mistral embedding model.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-30 11:37:49 +08:00
e26f37351d Add Redis data retention policy config options (#9080)
### What problem does this PR solve?

Adds configuration options to the RAGFlow Helm chart to set the Redis
data retention policies. By default this feature is disabled to maintain
support with older Kubernetes versions.

https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2025-07-30 10:35:59 +08:00
5c761174c2 docs: Complete tool calling bash script in MCP client example (#9073)
### What problem does this PR solve?

- Fix incomplete curl command in section 5 'Tool calling', add missing
closing braces and parentheses to complete the JSON payload

This resolves the incomplete bash script that was missing proper JSON
structure closure.

### Type of change

- [x] Documentation Update
2025-07-30 09:50:21 +08:00
4f8e7ef763 Feat: Add agent-log-list page (#9076)
### What problem does this PR solve?

Fix: Add agent-log-list page, and fix RAPTOR: saving directly after
enabling led to an incomplete form submission #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-30 09:48:51 +08:00
39ef2ffba9 Feat: parsing supports jsonl or ldjson format (#9087)
### What problem does this PR solve?

Supports jsonl or ldjson format. Feature request from
[discussion](https://github.com/orgs/infiniflow/discussions/8774).
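
A minimal sketch of reading JSON Lines (jsonl/ldjson) records, one object per line; this shows the general technique, not the PR's parser code:

```python
import json

def iter_jsonl(path: str):
    """Yield one parsed JSON object per non-empty line."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)
```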

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-30 09:48:20 +08:00
ba563f8095 Update embedding_model.py (#9083)
### What problem does this PR solve?

Reduce the logic scope for DefaultEmbedding

### Type of change

- [x] Refactoring
2025-07-30 09:44:30 +08:00
cfc339e4f3 Feat: Enable MCP streamable-http model via docker compose (#9092)
### What problem does this PR solve?

Enable MCP streamable-http model via docker compose

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-30 09:43:29 +08:00
98c78073c7 Feat: Add Arxiv GoogleScholar operator #3221 (#9102)
### What problem does this PR solve?

Feat: Add Arxiv GoogleScholar operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-30 09:43:12 +08:00
b9d3846bb4 Feat: Add Email and DuckDuckGo and Wikipedia Operator #3221 (#9090)
### What problem does this PR solve?

Feat: Add Email and DuckDuckGo and Wikipedia Operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-29 17:36:36 +08:00
ec51508f3e Docs: add MCP streamable-http transport (#9093)
### What problem does this PR solve?

Add documentation for MCP streamable-http transport.

### Type of change

- [x] Documentation Update

---------

Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2025-07-29 17:09:57 +08:00
b6745e50c6 Feat: Add Yahoo Finance Operator #3221 (#9088)
### What problem does this PR solve?

Feat: Add Yahoo Finance Operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-29 13:15:37 +08:00
f7164f686b Feat: Add Google operator #3221 (#9086)
### What problem does this PR solve?

Feat: Add Google operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-29 11:36:05 +08:00
7e7619bdc0 Feat: Render the uploaded agent message file #3221 (#9081)
### What problem does this PR solve?

Feat: Render the uploaded agent message file #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-29 10:56:30 +08:00
342a04ec8a Added infinity rank_feature support (#9044)
### What problem does this PR solve?

Added infinity rank_feature support

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-29 09:14:23 +08:00
28f7b33a74 Feat: The operator is displayed only when the number of conditions is greater than 1 #3221 (#9077)
### What problem does this PR solve?

Feat: The operator is displayed only when the number of conditions is
greater than 1 #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-28 19:25:19 +08:00
5e7aaf2c41 Fix:When deleting a knowledge base that is currently performing a parsing task, the parsing queue will not be deleted! (#9018)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8995

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-07-28 17:32:12 +08:00
35b1a5b7e0 Feat: Click the edit tool button of the agent form to open the corresponding form #3221 (#9071)
### What problem does this PR solve?

Feat: Click the edit tool button of the agent form to open the
corresponding form #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-28 16:48:59 +08:00
381f9df941 Feat: Add agent log-sheet in canvas and log-sheet in share page (#9072)
### What problem does this PR solve?

Feat: Add agent log-sheet in canvas and log-sheet in the share page #3221

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-28 16:48:46 +08:00
cc0227cf6e Fix: Fixed the issue that the condition of deleting the classification operator cannot be connected anymore #3221 (#9068)
### What problem does this PR solve?
Fix: Fixed the issue that the condition of deleting the classification
operator cannot be connected anymore #3221
### Type of change



- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-28 14:16:20 +08:00
905dab22a6 adding platform: linux/amd64 to the mac build (#9059)
### What problem does this PR solve?

Mac OS build fails on M4. Docker compose requires platform to be
specified to build correctly

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Charles Copley <ccopley@ancera.com>
2025-07-28 10:17:37 +08:00
86b4da0844 Refactor: Remove Useless split for BedrockEmbed (#9067)
### What problem does this PR solve?

Remove Useless split for BedrockEmbed

### Type of change

- [x] Refactoring
2025-07-28 10:16:38 +08:00
0fccd1fef3 Fix: operating on files in the knowledge base results in an error (#8962)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8941
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-25 19:26:31 +08:00
ad77f504f9 Feat: Filter the agent form's large model list by type #3221 (#9049)
### What problem does this PR solve?

Feat: Filter the agent form's large model list by type #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-25 19:25:19 +08:00
c63d12b936 Feat: Keep the workflow page link unchanged #3221 (#9045)
### What problem does this PR solve?
Feat: Keep the workflow page link unchanged #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-25 14:10:42 +08:00
5cc570f5e0 Refa: suppress DB migration error logs (#9043)
### What problem does this PR solve?

Suppress DB migration error logs.

### Type of change

- [x] Refactoring
2025-07-25 12:38:07 +08:00
b5ffca332a Refa: validation utils to use Pydantic v2 style models (#9037)
### What problem does this PR solve?

- Update BaseModel to use model_config instead of Config class
- Replace StrEnum with Literal types for method fields
- Convert Field declarations to Annotated style (see the sketch after this list)
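
A minimal Pydantic v2 sketch of the three changes above; the model and field names are illustrative, not RAGFlow's actual validation schema:

```python
from typing import Annotated, Literal
from pydantic import BaseModel, ConfigDict, Field

class RetrievalRequest(BaseModel):
    model_config = ConfigDict(extra="forbid")                  # replaces the old `class Config`
    method: Literal["keyword", "vector", "hybrid"] = "hybrid"  # Literal instead of StrEnum
    top_k: Annotated[int, Field(ge=1, le=100)] = 10            # Annotated-style Field declaration
```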

### Type of change

- [x] Refactoring
2025-07-25 12:16:45 +08:00
53b0b0e583 get keep alive from env (#9039)
### What problem does this PR solve?

get keepalive from env

### Type of change

- [x] Refactoring
2025-07-25 12:16:33 +08:00
6aaad85cc6 Fix: Add parsing animations to the agent log and optimize some page styles (#9040)
### What problem does this PR solve?

Add parsing animations to the agent log and optimize some page styles
#3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-25 12:16:17 +08:00
03daf4618c Refactor parser code (#9042)
### What problem does this PR solve?

Refactor code

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-07-25 12:04:07 +08:00
bcaac061ac Feature: Support set chats kbs to empty (#9038)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/9034

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-25 10:06:33 +08:00
49f3f26622 Bug fix: OpenSearch chunk update some api error (#9032)
### What problem does this PR solve?

Fix a small, non-blocking bug in the main workflow around chunk updates
when OpenSearch is the doc engine.
The bug occurred when enabling or disabling a chunk on the “Knowledge Base /
Dataset / Chunk” page.
<img width="2388" height="662" alt="image"
src="https://github.com/user-attachments/assets/575987a0-c929-4589-bfa0-ba54e137cfd9"
/>

The reason it occurred is that some API parameters differ between
OpenSearch and ES. After the fix, enabling, disabling, and rewriting
chunks all work correctly. I also verified the result on the chat
web page.

<img width="2394" height="660" alt="image"
src="https://github.com/user-attachments/assets/8b899dc6-d769-4e80-8dd8-ad0fbbca5f78"
/>

I will keep focusing on vector databases, especially OpenSearch.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: 张雨豪 <zhangyh80@chinatelecom.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-07-25 09:57:24 +08:00
6529bb433b Feat: Downstream operators can get the variables defined by the user input operator #3221 (#9036)
### What problem does this PR solve?

Feat: Downstream operators can get the variables defined by the user
input operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-24 19:51:08 +08:00
5c6e586251 Docs: Updated UI tips (#9033)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-07-24 19:18:13 +08:00
88910449e4 Feat: Upload files in the chat box on the agent page #3221 (#9035)
### What problem does this PR solve?

Feat: Upload files in the chat box on the agent page #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-24 19:17:56 +08:00
2ae8f2cf00 Fix: exception layout_type in is_caption (#9028)
### What problem does this PR solve?

Exception layout_type in is_caption.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-24 17:06:56 +08:00
ae856b8faa Feat: Allows users to delete a condition of a conditional operator #3221 (#9022)
### What problem does this PR solve?

Feat: Allows users to delete a condition of a conditional operator #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-24 15:30:28 +08:00
b47dcc9108 Fix issue with keep_alive=-1 for ollama chat model by allowing a user to set an additional configuration option (#9017)
### What problem does this PR solve?

Fix an issue with `keep_alive=-1` for the Ollama chat model by allowing
the user to set an additional configuration option. It is a non-breaking
change because it keeps the previous default value of `keep_alive=-1`.
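
A minimal sketch of reading the option from the environment; the variable name and helper are assumptions for illustration, not RAGFlow's actual configuration keys:

```python
import os

def ollama_keep_alive(default: int = -1) -> int:
    """Read the Ollama keep_alive setting from the environment, falling back to -1."""
    raw = os.getenv("OLLAMA_KEEP_ALIVE", str(default))
    try:
        return int(raw)
    except ValueError:
        return default

# The value would then be passed along with the chat request payload,
# e.g. {"model": ..., "messages": ..., "keep_alive": ollama_keep_alive()}.
```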

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [X] Performance Improvement
- [X] Other (please describe):
- An additional configuration option has been added to control RAGFlow's
behavior when working with the Ollama LLM
2025-07-24 11:20:14 +08:00
3db819f011 Feat: Modify the background color of the agent canvas #3221 (#9020)
### What problem does this PR solve?

Feat: Modify the background color of the agent canvas #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-24 11:20:00 +08:00
34c35cf8ae fix: obfuscate additional server secrets values (#9014)
### What problem does this PR solve?

Obfuscates additional secrets values on ragflow_server startup to
prevent leakage (a minimal masking sketch follows the list):
* `secret` (azure)
* `client_secret` (oauth)
* `http_secret_key` (authentication)
* `sas_token` (azure)
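
A minimal masking sketch using the key names from the list above; the helper itself is hypothetical, not RAGFlow's actual startup code:

```python
SENSITIVE_KEYS = {"secret", "client_secret", "http_secret_key", "sas_token"}

def obfuscate(config: dict) -> dict:
    """Return a copy of the config with sensitive string values masked for logging."""
    masked = {}
    for key, value in config.items():
        if isinstance(value, dict):
            masked[key] = obfuscate(value)
        elif key.lower() in SENSITIVE_KEYS and isinstance(value, str):
            masked[key] = "******"
        else:
            masked[key] = value
    return masked
```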

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Gifford R Nowland <gifford.r.nowland@aero.org>
2025-07-24 10:16:23 +08:00
dc95bd6a7c Ensure Redis volumeClaimTemplate labels are deterministic (#9016)
### What problem does this PR solve?

The previous version created labels that depended on the specific
Helm chart version, such as:
```
volumeClaimTemplates:
- metadata:
    name: redis-data
    labels:
      helm.sh/chart: ragflow-0.2.3-dev.0.opensearch-test.4
      app.kubernetes.io/name: ragflow
      app.kubernetes.io/instance: test-1
      app.kubernetes.io/version: "9a04408"
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: redis
```
which causes `helm upgrade` commands to fail with
```
Upgrade "test-1" failed: cannot patch "test-1-ragflow-redis" with
kind StatefulSet: StatefulSet.apps "test-1-ragflow-redis" is
invalid: spec: Forbidden: updates to statefulset spec for fields
other than 'replicas', 'ordinals', 'template', 'updateStrategy',
'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are
forbidden
```
because the labels changed on upgrade.

This fix uses a reduced set of labels to prevent upgrade failures.

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-07-24 10:15:11 +08:00
93c94fda7b Trivial. (#9015)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-24 10:14:21 +08:00
3eca9d3a5f Fix OpenSearch liveness probe (#9012)
### What problem does this PR solve?

Fix the Kubernetes liveness probe on the OpenSearch container. The previous
HTTP probe received a 401 response from the OpenSearch API, which was
treated as a failure and caused the container to be restarted every 20
minutes.

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-07-24 09:30:31 +08:00
03e39ca9be Fix:Optimize Agent template page, fix bugs in knowledge base (#9009)
### What problem does this PR solve?

Replace Avatar with RAGFlowAvatar component for knowledge base and
agent, optimize Agent template page, and modify bugs in knowledge base
#3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-24 09:30:05 +08:00
ad177951e9 Bump to infinity v0.6.0-dev4 (#9013)
### What problem does this PR solve?

Bump to infinity v0.6.0-dev4.
WARNING: infinity v0.6.0-dev4 uses a very different metadata format from
older versions. If there is existing data, you have to destroy the
infinity data volume and restart the infinity container.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-23 19:27:57 +08:00
a2f73af1a4 Fix: typo Bearer token (#8998)
### What problem does this PR solve?

Typo Bearer token. #8960

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-23 18:10:51 +08:00
7ebc1f0943 Feat: add model provider DeepInfra (#9003)
### What problem does this PR solve?

Add model provider DeepInfra. This model list comes from our community. 

NOTE: most endpoints haven't been tested, but they should work as OpenAI
does.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-23 18:10:35 +08:00
5f0ec005ba Feat: Share agent dialog box externally #3221 (#9005)
### What problem does this PR solve?

Feat: Share agent dialog box externally #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-23 18:10:18 +08:00
8345e92671 Feat: OpenAI-compatible-API supports references (#8997)
### What problem does this PR solve?

OpenAI-compatible-API supports references.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-23 18:10:05 +08:00
03165a1efa Fix: Fixed the issue where the error prompt box on the Agent page would be covered #3221 (#8992)
### What problem does this PR solve?

Fix: Fixed the issue where the error prompt box on the Agent page would
be covered #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-23 15:09:24 +08:00
b4b6d296ea Fix: Increase timeouts for document parsing and model checks (#8996)
### What problem does this PR solve?

- Extended embedding model timeout from 3 to 10 seconds in api_utils.py
- Added more time for large file batches and concurrent parsing
operations to prevent test flakiness
- Import from #8940
- https://github.com/infiniflow/ragflow/actions/runs/16422052652

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-23 15:08:36 +08:00
d16505691c fix: use consistent filenames for chrome & chromedriver (#8991)
### What problem does this PR solve?

PR #8665 updated chrome and chromedriver sources, removing the appended
version number. This PR resolves filename inconsistencies that would
cause `Dockerfile.deps` to fail to build when omitting `--china-mirrors`
when running `uv run download_deps.py`.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-23 11:01:24 +08:00
509a7fa4dc Switch to StatefulSet resources for stateful components (#8985)
### What problem does this PR solve?

Switch to Kubernetes StatefulSet resources for MySQL, Minio and vector
DB since these are stateful application components. This makes
operations such as helm upgrade smoother since the default container
update strategy becomes a sequential rolling update of each pod.

Also fixes a bug in the name template for the Minio stateful set
resource to align it with the naming convention used for other
components.

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-07-23 10:52:27 +08:00
ec21d9a98f Refactor: remove useless convert for FastEmbed (#8984)
### What problem does this PR solve?

Remove useless convert for FastEmbed

### Type of change

- [x] Refactoring
2025-07-23 10:51:48 +08:00
0d7244e4a4 Fix: Adds newest Gemini models to fit google's standard API rate limits (#8970)
### What problem does this PR solve?

Adds configurations for gemini-2.5-flash and Gemini 2.5-pro models,
including tags, maximum token limits, and model types.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-23 10:18:04 +08:00
175f5eaa90 use quote_plus to escape password in opendal's mysql url (#8976)
### What problem does this PR solve?

Use `quote_plus` to escape password in opendal's mysql url to support
special characters like `#`.
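
A minimal sketch of the escaping; the URL shape is illustrative, not RAGFlow's exact OpenDAL configuration:

```python
from urllib.parse import quote_plus

user, password = "ragflow", "p#ss@word"
host, port, db = "mysql", 3306, "rag_flow"

# Without quote_plus, '#' would start a fragment and '@' would split the authority.
mysql_url = f"mysql://{user}:{quote_plus(password)}@{host}:{port}/{db}"
print(mysql_url)  # mysql://ragflow:p%23ss%40word@mysql:3306/rag_flow
```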

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-23 10:17:34 +08:00
935ce872d8 Refa: remove temperature since some LLMs fail to support. (#8981)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2025-07-23 10:17:04 +08:00
0020c50000 Fix: Refactor parser config handling and add GraphRAG defaults (#8778)
### What problem does this PR solve?

- Update `get_parser_config` to merge provided configs with defaults (see the sketch after this list)
- Add GraphRAG configuration defaults for all chunk methods
- Make raptor and graphrag fields non-nullable in ParserConfig schema
- Update related test cases to reflect config changes
- Ensure backward compatibility while adding new GraphRAG support
- #8396
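
A minimal deep-merge sketch of that first bullet; the default values shown are placeholders, not RAGFlow's actual GraphRAG defaults:

```python
def merge_parser_config(provided: dict | None, defaults: dict) -> dict:
    """Overlay user-provided parser settings on top of the defaults."""
    merged = dict(defaults)
    for key, value in (provided or {}).items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_parser_config(value, merged[key])
        else:
            merged[key] = value
    return merged

defaults = {"graphrag": {"use_graphrag": False}, "raptor": {"use_raptor": False}}
print(merge_parser_config({"graphrag": {"use_graphrag": True}}, defaults))
# {'graphrag': {'use_graphrag': True}, 'raptor': {'use_raptor': False}}
```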

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-23 09:29:37 +08:00
c3b8d8b4ba Refa: improve usability of Node.js/JavaScript code executor (#8979)
### What problem does this PR solve?

Improve usability of Node.js/JavaScript code executor.

### Type of change

- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2025-07-23 09:26:09 +08:00
f63ad6b725 Fix: correct cancel logic error (#8973)
### What problem does this PR solve?

Correct cancel logic error

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2025-07-23 09:25:48 +08:00
e992bc5307 Feat: Adjust the page header to breadcrumbs #3221 (#8971)
### What problem does this PR solve?

Feat: Adjust the page header to breadcrumbs #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-22 18:27:01 +08:00
131fc10af5 Feat: Add the option to use the knowledge graph to the retrieval form #3221 (#8968)
### What problem does this PR solve?

Feat: Add the option to use the knowledge graph to the retrieval form
#3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-22 12:10:55 +08:00
af00f2cad8 Fix: Fixed the issue that the key parameter duplication check of the begin operator was incorrect #3221 (#8964)
### What problem does this PR solve?

Fix: Fixed the issue that the key parameter duplication check of the
begin operator was incorrect #3221
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-22 11:27:57 +08:00
95b9208b13 Fix:Improve float operation when rerank (#8963)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8915

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-22 10:04:00 +08:00
45b01e1bcb Fix: Fixed the issue of clicking to run the agent causing an error #3221 (#8956)
### What problem does this PR solve?

Fix: Fixed the issue of clicking to run the agent causing an error #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-22 09:27:21 +08:00
6691532079 Feat: Add model editing functionality with improved UI labels (#8855)
### What problem does this PR solve?

Add edit button for local LLM models
<img width="1531" height="1428" alt="image"
src="https://github.com/user-attachments/assets/19d62255-59a6-4a7e-9772-8b8743101f78"
/>

<img width="1531" height="1428" alt="image"
src="https://github.com/user-attachments/assets/c3a0f77e-cc6b-4190-95a6-13835463428b"
/>



### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):

---------

Co-authored-by: Liu An <asiro@qq.com>
2025-07-21 19:16:53 +08:00
dbc267758e Fix: Generate avatar; Add knowledge graph; Modify the style of the MultiSelect component (#8952)
### What problem does this PR solve?

Fix: Generate avatar; Add knowledge graph; Modify the style of the
multi-select component
[#3221](https://github.com/infiniflow/ragflow/issues/3221)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-21 19:11:27 +08:00
b8891fdbeb Fix: Fixed the issue that variables defined in the begin operator cannot be referenced in the switch operator. #3221 (#8950)
### What problem does this PR solve?

Fix: Fixed the issue that variables defined in the begin operator cannot
be referenced in the switch operator. #3221
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-21 19:11:11 +08:00
933e075f8b Feat: Display agent version in pages #3221 (#8947)
### What problem does this PR solve?

Feat: Display agent version in pages #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-21 17:52:15 +08:00
0b487dee43 Fix: support cross language for API. (#8946)
### What problem does this PR solve?

Close #8943

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-21 17:25:28 +08:00
7eb5ea3814 Feat: Display agent history versions #3221 (#8942)
### What problem does this PR solve?

Feat: Display agent history versions #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-21 16:28:06 +08:00
c783d90ba3 Perf: set timeout for building chunks. (#8940)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2025-07-21 15:56:45 +08:00
e101c35c0b Fix:Add a component of calendar (#8923)
### What problem does this PR solve?

fix:Add a component of calendar
[#3221](https://github.com/infiniflow/ragflow/issues/3221)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-21 10:26:41 +08:00
46caf6ae72 Refactor improve codes for ranker (#8936)
### What problem does this PR solve?
Use the normalize method directly

### Type of change

- [x] Refactoring
2025-07-21 10:22:20 +08:00
fca9203f18 Docs: Updated knowledge graph-specific APIs (#8927)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-07-21 09:59:23 +08:00
ab53a73768 Perf: limit embedding in KG. (#8917)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2025-07-18 19:51:14 +08:00
77deaf390b Feat: Adjust the style of the agent canvas connection line #3221 (#8922)
### What problem does this PR solve?

Feat: Adjust the style of the agent canvas connection line #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-18 19:26:24 +08:00
24b719ddba Add OpenSearch support to Helm chart (#8921)
### What problem does this PR solve?

Adds OpenSearch support to the RAGFlow Helm chart based on
https://github.com/infiniflow/ragflow/pull/7140 and the existing
Elasticsearch support in the Helm chart.

### Type of change
- [X] New Feature (non-breaking change which adds functionality)
2025-07-18 19:26:11 +08:00
e2f10fbd3e Fix: Fixed the issue that the content of the Dropdown section cannot be seen when selecting the next operator #3221 (#8918)
…be seen when selecting the next operator #3221

### What problem does this PR solve?
Fix: Fixed the issue that the content of the Dropdown section cannot be
seen when selecting the next operator #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-18 17:54:32 +08:00
92cfbcb382 Fix: when parsing markdown, support extracting images from local paths (#8906)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8902
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-18 17:06:58 +08:00
3722fec921 Feat: Adjust the EmbedDialog style #3221 (#8911)
### What problem does this PR solve?

Feat: Adjust the EmbedDialog style #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-18 17:06:28 +08:00
daa1113513 Feat: Show agent embed dialog #3221 (#8903)
### What problem does this PR solve?
Feat: Show agent embed dialog #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-18 09:47:21 +08:00
412a088008 Feat: Add knowledge graph http api (#8896)
### What problem does this PR solve?

Add knowledge graph http api

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-17 19:20:48 +08:00
9767c26535 Fix: wrong parameters. (#8900)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-17 18:19:13 +08:00
71efd8d765 Feat: Add TavilyExtract operator #3221 (#8899)
### What problem does this PR solve?

Feat: Add TavilyExtract operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-17 17:33:01 +08:00
ecdb1701df Perf: test llm before RAPTOR. (#8897)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2025-07-17 16:48:50 +08:00
606bf20a3f Fix: parameter missing. (#8895)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-17 16:06:22 +08:00
96fe9c0acf Fix: Fixed the issue that the knowledge graph could not be displayed #8890 (#8891)
### What problem does this PR solve?

Fix: Fixed the issue that the knowledge graph could not be displayed
#8890

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-17 15:26:21 +08:00
729e6098f9 Refa: add more logs to KG. (#8889)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2025-07-17 14:43:08 +08:00
a422367a40 fix: Fix the problem that the custom footer of modal component (#8877)
### What problem does this PR solve?

Fix the problem that the custom footer of the modal component does not
take effect, pin the react and react-dom versions, and add the
input-number component
[#3221](https://github.com/infiniflow/ragflow/issues/3221)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-17 12:10:58 +08:00
fa1d6ed683 fix settings global docStoreConn for opensearch (#8885)
### What problem does this PR solve?
fix opensearch OSConnection init.
```
        docStoreConn = rag.utils.opensearch_conn.OSConnection()
```

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-07-17 12:10:42 +08:00
fd97ce3e5a fix s3 init config . (#8886)
### What problem does this PR solve?

When both `if 'signature_version' in self.s3_config:` and
`if 'addressing_style' in self.s3_config:` are true,
the config initialization is wrong: the second assignment overwrites the first.

This PR fixes that case.
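
A minimal sketch of the fix's intent using botocore's `Config`; the key names follow the PR description, and the helper itself is illustrative:

```python
from botocore.config import Config

def build_s3_client_config(s3_config: dict) -> Config:
    """Combine the optional settings instead of letting the second overwrite the first."""
    kwargs = {}
    if "signature_version" in s3_config:
        kwargs["signature_version"] = s3_config["signature_version"]
    if "addressing_style" in s3_config:
        kwargs["s3"] = {"addressing_style": s3_config["addressing_style"]}
    return Config(**kwargs)
```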

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-07-17 12:10:15 +08:00
c0de0f3a60 Feat: Display the thinking process according to the start_to_think flag of the message #3221 (#8888)
### What problem does this PR solve?
Feat: Display the thinking process according to the start_to_think flag
of the message #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-17 12:10:03 +08:00
38b34116dd Refa: Remove useless convert and fix a bug for DefaultRerank (#8887)
### What problem does this PR solve?

1. Bug: when retrying, we need to reset `i`.
2. Remove a useless convert.

### Type of change

- [x] Refactoring
2025-07-17 12:09:50 +08:00
7120b37882 Perf: add limit to embedding druing KG. (#8881)
### What problem does this PR solve?

### Type of change

- [x] Performance Improvement
2025-07-17 09:44:04 +08:00
fbd115773b Perf: set timeout of some steps in KG. (#8873)
### What problem does this PR solve?

### Type of change


- [x] Performance Improvement
2025-07-16 18:06:03 +08:00
b3018a455f Feat: Add bing form #3221 (#8875)
### What problem does this PR solve?

Feat: Add bing form #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-16 17:56:55 +08:00
d2df669135 Feat: Add sql form #3221 (#8874)
### What problem does this PR solve?

Feat: Add sql form #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-16 16:25:50 +08:00
8b7dbb349e fix: update service_conf.yaml.template (#8863)
fix: modify the connection ports of minio and redis in
service_conf.yaml.template

### What problem does this PR solve?

If you modify the external ports of minio and redis in the .env file, it
will also affect the connection ports inside the container in the
service_conf.yaml.template file, which is unreasonable.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-07-16 15:31:57 +08:00
9e45fcfdb3 Fix: fix typo in OpenAI error logging message (#8865)
### What problem does this PR solve?

Correct the logging message from "OpenAI cat_with_tools" to "OpenAI
chat_with_tools" in the `_exceptions` method of the `Base` class to
accurately reflect the method name and improve error traceability.

### Type of change

- [x] Typo
2025-07-16 15:31:57 +08:00
ed7bea060f Feat: add Kimi model series support (#8866)
### What problem does this PR solve?

Add Kimi model series support.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-16 15:31:57 +08:00
9bd2127ba1 Fix: graphknowledge Tree structure not found for treeKey: combo (#7819) (#8862)
### What problem does this PR solve?

Fixed graphknowledge Tree structure not found for treeKey.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-16 11:43:06 +08:00
a9abf9df48 Adds new Voyage embedding models (#8845)
### What problem does this PR solve?
This PR enhances the application's capabilities by adding support for
four new Voyage embedding models (voyage-3-large, voyage-3.5,
voyage-3.5-lite, and voyage-code-3) to the `llm_factories.json`
configuration file. These models expand the available options for text
embedding tasks, enabling improved processing of text data with a
maximum token limit of 32,000. This addition addresses the need for more
diverse and specialized embedding models to support various use cases
without altering existing functionality.

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-07-16 11:41:06 +08:00
6ca502c1d1 Feat: Add agent tool CrawlerForm #3221 (#8859)
### What problem does this PR solve?

Feat: Add agent tool CrawlerForm #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-16 09:34:30 +08:00
30d7f31875 Docs: Updated tag set tips. (#8860)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-07-16 09:34:06 +08:00
f2909ea0c4 Perf: retryable mysql connection. (#8858)
### What problem does this PR solve?

### Type of change

- [x] Performance Improvement
2025-07-15 19:05:48 +08:00
9371d7b19c Feat: Add CrawlerForm component #3221 (#8857)
### What problem does this PR solve?

Feat: Add CrawlerForm component #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-15 18:12:00 +08:00
faebb519f7 Feat: Display file references for agent dialogues #3221 (#8854)
### What problem does this PR solve?

Feat: Display file references for agent dialogues #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-15 17:30:45 +08:00
e9b14142a5 Fix: fixed invalid save() arguments for slide thumbnails (#8851)
### What problem does this PR solve?

Fixed invalid save() arguments for slide thumbnails.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-15 17:19:45 +08:00
aa4a725529 Perf: use redis to check if canceled. (#8853)
### What problem does this PR solve?

### Type of change

- [x] Performance Improvement
2025-07-15 17:19:27 +08:00
ed8d7291ff Fix: Remove antd from dataset-page (#8830)
### What problem does this PR solve?

remove antd from dataset-page
[#3221](https://github.com/infiniflow/ragflow/issues/3221)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-15 16:12:50 +08:00
148fde8b1b Feat: Add authorization token field to the MCP form #3221 (#8850)
### What problem does this PR solve?

Feat: Add authorization token field to the MCP form #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-15 15:50:10 +08:00
24c41d2a61 Perf: make do_cancel quicker. (#8846)
### What problem does this PR solve?

### Type of change

- [x] Performance Improvement
2025-07-15 14:35:00 +08:00
5fa6f2f151 Update embedding_model.py (#8836)
### What problem does this PR solve?

Remove the useless convert for bge encode_queries.

### Type of change

- [x] Performance Improvement
2025-07-15 14:04:58 +08:00
51a8604dcb Fix: fixed context loss caused by separating markdown tables from original text (#8844)
### What problem does this PR solve?

Fix context loss caused by separating markdown tables from original
text. #6871, #8804.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-15 13:03:01 +08:00
c08ed28f09 Adds 'Vietnamese' to the list of available languages in the cross-language item components (#8843)
### What problem does this PR solve?
This change adds 'Vietnamese' to the list of supported languages in two
components related to cross-language functionality. The addition expands
language support by including Vietnamese as a selectable option

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-07-15 13:02:12 +08:00
dbc2a8689a Fix: no chunks parsed out for Law (#8842)
### What problem does this PR solve?

Fixes no chunks parsed out for Law. #5113 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-15 13:01:56 +08:00
451e0a92db Feat: Adjust agent mcp style #3221 (#8841)
### What problem does this PR solve?

Feat: Adjust agent mcp style #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-15 11:28:36 +08:00
f683580310 Feat: Synchronize MCP data to agent #3221 (#8832)
### What problem does this PR solve?

Feat: Synchronize MCP data to agent #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-15 09:37:08 +08:00
c642dbefca Perf: Enhance timeout handling. (#8826)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2025-07-15 09:36:45 +08:00
ce140f1393 Fix:Better Support Table Value Type (#8822)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8782

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-14 17:51:26 +08:00
ab4ad0f373 Feat: Render the mcp list on the agent page #3221 (#8829)
### What problem does this PR solve?
Feat: Render the mcp list on the agent page #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-14 17:03:58 +08:00
237e59532b Feat: refine create and list operations for MCP dashboard (#8823)
### What problem does this PR solve?

Refine MCP dashboard create and list operations.

### Type of change

- [x] Refactoring
2025-07-14 14:36:56 +08:00
5383e254c4 Perf:Remove Useless Convert When BGE Embedding (#8816)
### What problem does this PR solve?

FlagModel internally supports returning numpy arrays.

### Type of change
- [x] Performance Improvement
2025-07-14 14:02:48 +08:00
dc068bbd1e Feat: Filter MCP server list by text. #3221 (#8820)
### What problem does this PR solve?

Feat: Filter MCP server list by text. #3221
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-14 11:46:52 +08:00
504e453ae5 BugFix: AgentCreateBUGFix:required argument are missing: dsl; (#8794)
### AgentCreateBUGFix
Because useFetchFlowTemplates is called both in the hooks and the
AgentTemplateModal, and the ID of the empty template is generated via
uuid, there may be cases where the IDs do not match.

The reported bug is as follows:
Prompt: 101
Required argument is missing: dsl;

<img width="472" height="121" alt="52d79682-4e50-4863-8486-f1e154003043"
src="https://github.com/user-attachments/assets/c5d217c9-b6cc-4ef2-866b-694c8b9ab3ae"
/>

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: 海贼宅 <stu_xyx@163.com>
2025-07-11 19:41:26 +08:00
f6ae570417 fix redundant package import. (#8754)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [x] Performance Improvement
- [ ] Other (please describe):

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-07-11 19:11:52 +08:00
3f4f203215 Feat: Add text and markdown file view (#8767)
### What problem does this PR solve?

Add document viewers for text and markdown files

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-11 18:52:44 +08:00
72c19b44c3 Refa: better MIME content type (#8801)
### What problem does this PR solve?

Use a more uniform MIME content type.

### Type of change

- [x] Refactoring
2025-07-11 18:47:19 +08:00
f569401398 Fix: better_handle_different_types (#8775)
### What problem does this PR solve?


https://github.com/infiniflow/ragflow/issues/8719#issuecomment-3055883271

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-11 18:21:39 +08:00
bc0cc8559a Docs: Updated tips (#8809)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-07-11 18:21:11 +08:00
d05b405394 Feat: Import and export MCP Server #3221 (#8806)
### What problem does this PR solve?

Feat: Import and export MCP Server #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-11 18:18:31 +08:00
2b7adbd2d1 Fix: Improve Memory Usage For Presentation (#8792)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/8791


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-11 11:35:25 +08:00
52dce4329d fix: fix dataset-page's bugs (#8786)
### What problem does this PR solve?

Fix dataset-page bugs: the Input component now supports icons, a Radio
component was added, and antd was removed from the chunk-result-bar page.
[#3221](https://github.com/infiniflow/ragflow/issues/3221)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-11 11:34:36 +08:00
07208e519b Fix: Wrong_Input_type_for_Gemini (#8783)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8763#issuecomment-3055317110

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-11 11:34:04 +08:00
e8aee8d720 Feat: change document status in bulk (#8777)
### What problem does this PR solve?
 
Change document status in bulk.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-11 10:38:59 +08:00
1895667573 Feat: add xAI provider (#8781)
### What problem does this PR solve?

Add xAI provider (experimental feature, requires user feedback).

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-11 10:35:23 +08:00
fc0c81acc6 Feat: Edit MCP server #3221 (#8784)
### What problem does this PR solve?

Feat: Edit MCP server #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-11 10:34:57 +08:00
98829f5dbe Feat: Modify the agent tool name #3221 (#8780)
### What problem does this PR solve?

Feat: Modify the agent tool name #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-10 18:36:34 +08:00
512772c45a Fix: Resolve typo in /list route function (#8769)
### What problem does this PR solve?

Fixes a function name typo for the `/list` route in
`api/apps/conversation_app.py`.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-10 14:32:28 +08:00
8281ceb406 Refa: refine retry gap. (#8773)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
- [x] Performance Improvement
2025-07-10 14:28:57 +08:00
9f94d88acd Feat: Delete MCP server #3221 (#8772)
### What problem does this PR solve?

Feat: Delete MCP server #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-10 14:28:27 +08:00
2e0905d06a Feat: Avoid the form sheet covering the chat sheet #3221 (#8768)
### What problem does this PR solve?

Feat: Avoid the form sheet covering the chat sheet #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-10 13:48:12 +08:00
cedcd13204 fix: use tenant_id of kb to get index name in rm chunk func (#8760)
### What problem does this PR solve?

The rm function in chunk_app.py derives the index name differently from other
functions, so there are situations where users can create and update a chunk
but not delete it.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-10 10:30:56 +08:00
8d027813f5 Refactor: Improve How To Handle QWenEmbed (#8765)
### What problem does this PR solve?

Based on https://github.com/infiniflow/ragflow/issues/8740
1. Better handling of the "'NoneType' object is not subscriptable" error
2. Add some logs to capture the internal message

### Type of change

- [x] Refactoring
2025-07-10 10:30:18 +08:00
2a11b2c331 Docs: Update default chunk_token_num to 512 in API references (#8766)
### What problem does this PR solve?

Changed the default value of `chunk_token_num` from 128 to 512 in both
HTTP and Python API reference documentation to reflect the updated
configuration.

#8753

### Type of change

- [x] Documentation Update
2025-07-10 09:53:20 +08:00
f8524462b0 Fix: Increase default chunk_token_num from 128 to 512 in parser config (#8753)
### What problem does this PR solve?

Updated the default `chunk_token_num` value in `api_utils.py` and
`validation_utils.py` to 512 to accommodate larger text chunks. Adjusted
corresponding test cases in HTTP and SDK API tests to reflect this
change.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-10 09:53:20 +08:00
aae9fbb9de Feat: Test MCP server #3221 (#8757)
### What problem does this PR solve?

Feat: Test MCP server #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-10 09:33:29 +08:00
cf0a1366af Docs: Updated upgrading guide (#8746)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-07-09 17:13:04 +08:00
779932dcb0 Fix: graphrag, raptor can be null for api created kb issue (#8743)
### What problem does this PR solve?

When a knowledge base/dataset is created via the API, graphrag and raptor can
be null, which triggers a NoneType error when this code is reached and prevents
the chunking task from finishing.

![image](https://github.com/user-attachments/assets/998a63e9-611b-4301-8808-24839a05be8a)

The proposed change evaluates to None and passes the condition check
without error.

![image](https://github.com/user-attachments/assets/184374fb-e06a-46e6-b8ac-d66a3fd93b59)
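
A minimal sketch of the defensive access, assuming the settings live in the dataset's `parser_config` dict (key and variable names are illustrative, not the exact patched code):

```python
parser_config = {"graphrag": None, "raptor": None}   # as stored for an API-created dataset

graphrag_conf = parser_config.get("graphrag") or {}  # a null value collapses to an empty dict
raptor_conf = parser_config.get("raptor") or {}

use_graphrag = graphrag_conf.get("use_graphrag", False)  # no NoneType error even when null
use_raptor = raptor_conf.get("use_raptor", False)
```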


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-09 17:12:42 +08:00
19419281c3 Fix: Change Ollama Embedding Keep Alive (#8734)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/8733

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-09 12:17:26 +08:00
2f79a2a04d Feat: Display MCP multiple selection bar #3221 (#8737)
### What problem does this PR solve?

Feat: Display MCP multiple selection bar #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-09 12:17:01 +08:00
c1f6e6f00e Feat: add advanced document filter (#8723)
### What problem does this PR solve?

Add advanced document filter

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-09 09:33:11 +08:00
f7af0fc71e Feat: List MCP servers #3221 (#8730)
### What problem does this PR solve?

Feat: List MCP servers #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-09 09:32:38 +08:00
00c954755e Fix:use the same logic to handle pos in tokenize_chunks_with_images (#8732)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8719

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-09 09:31:40 +08:00
d42e6fb955 Docs: miscellaneous editorial updates (#8731)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-07-09 09:28:56 +08:00
addda5ccbe Fix: Add validation for dialog name (#8722)
### What problem does this PR solve?

- Validate dialog name in `dialog_app.py` to ensure it is a non-empty
string and does not exceed 255 bytes in UTF-8 encoding.
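
A minimal sketch of the described validation (the function name is illustrative; the real check lives in `dialog_app.py`):

```python
def validate_dialog_name(name) -> str:
    # Must be a non-empty string...
    if not isinstance(name, str) or not name.strip():
        raise ValueError("Dialog name must be a non-empty string.")
    # ...and at most 255 bytes when encoded as UTF-8.
    if len(name.encode("utf-8")) > 255:
        raise ValueError("Dialog name must not exceed 255 bytes in UTF-8.")
    return name.strip()
```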

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-08 19:20:29 +08:00
edb32b1304 fix: Change the data in the dataset page to be obtained using the interface (#8726)
### What problem does this PR solve?

Change the dataset page to obtain its data via the interface, and change the
polling so that, while an existing file is being parsed, the current page's
data is fetched every 5 seconds instead of all data every 15 seconds.
[#3221](https://github.com/infiniflow/ragflow/issues/3221)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-08 19:19:07 +08:00
3fe143d84a Feat: Add note node #3221 (#8728)
### What problem does this PR solve?

Feat: Add note node #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-08 19:18:55 +08:00
2a03d49a84 Test: Add dialog app test suite and update common.py with dialog endpoints (#8729)
### What problem does this PR solve?

This commit introduces a comprehensive test suite for the dialog app,
including tests for creating, updating, retrieving, listing, and
deleting dialogs. Additionally, the common.py file has been updated to
include necessary API endpoints and helper functions for dialog
operations.

### Type of change

- [x] Add test cases
2025-07-08 19:18:44 +08:00
8af0d04ad0 Refactor:Improve the logic in search.py (#8716)
### What problem does this PR solve?

1. Remove the useless pop logic, since the condition is already checked in the
preceding if statement.
2. Merge the logging logic.

### Type of change

- [x] Refactoring
2025-07-08 12:32:01 +08:00
01f81b24f6 Fix:Added support for preview of txt, md, excel, csv, ppt, image, doc and other files (#8712)
### What problem does this PR solve?

Added support for preview of txt, md, excel, csv, ppt, image, doc and
other files [#3221](https://github.com/infiniflow/ragflow/issues/3221)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-08 10:39:18 +08:00
3e1e908422 Feat: Get the running log of each message through the trace interface #3221 (#8711)
### What problem does this PR solve?
Feat: Get the running log of each message through the trace interface
#3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-08 09:27:56 +08:00
30065e2f43 Fix: EsLint Problem of index.tsx (#8710)
'handleOk' was used before it was defined
(eslint: @typescript-eslint/no-use-before-define).

### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-08 09:27:34 +08:00
5b52b7561a Fix: Fix text errors #3221 (#8708)
### What problem does this PR solve?

Fix: Fix text errors #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-07 17:28:46 +08:00
441fb92aa7 Fix: suppress docker-compose warning (#8698)
### What problem does this PR solve?

Suppress docker-compose warning like:

```bash
The "HF_ENDPOINT" variable is not set. Defaulting to a blank string.
The "MACOS" variable is not set. Defaulting to a blank string.
The "SANDBOX_EXECUTOR_MANAGER_IMAGE variable is not set. Defaulting to a blank string.
The "SANDBOX_EXECUTOR_MANAGER_PORT variable is not set. Defaulting to a blank string.
```

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2025-07-07 14:50:23 +08:00
e60ec0a31b Fix:disallowed special token while embedding (#8692)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8567

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-07 14:13:37 +08:00
2259bb2586 fix:Use use-chunk-request.ts to replace chunk-hooks.ts; implement chunk selectAll, enable, disable and other functions (#8695)
### What problem does this PR solve?

Use use-chunk-request.ts to replace chunk-hooks.ts; implement chunk
selectAll, enable, disable and other functions
[#3221](https://github.com/infiniflow/ragflow/issues/3221)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-07 14:13:13 +08:00
4d7bfd2ba3 Fix: typo process_duration (#8696)
### What problem does this PR solve?

Fix typo process_duration.

### Type of change

- [x] Documentation Update
- [x] Refactoring
2025-07-07 14:11:47 +08:00
789ae87727 Fix: Prevent Duplicate Retrieval Requests on Knowledge Testing (#8683)
### What problem does this PR solve?

Previously, when testing knowledge retrieval and clicking the test
button, the component would trigger two API requests instead of one.
This led to redundant network calls and inconsistent results being
displayed.

Before:


![image](https://github.com/user-attachments/assets/530d9a97-04f7-4db4-8489-0a7b67c78194)

After:


![image](https://github.com/user-attachments/assets/d17caf18-a6b1-46bc-b077-d81de0a73818)


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-07 13:07:34 +08:00
07eee8329c Refa: Update Minio image to specific release version in docker-compose-base.yml (#8693)
### What problem does this PR solve?

- Ensure consistent Minio deployment by pinning the image to a specific
release version (RELEASE.2025-06-13T11-33-47Z) for stability and
reproducibility.
- #8672

### Type of change

- [x] Refactoring
2025-07-07 13:06:32 +08:00
4a9708889e Feat: Support uploading files when running agent #3221 (#8697)
### What problem does this PR solve?

Feat: Support uploading files when running agent #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-07 12:18:18 +08:00
9580e99650 fix: retry embedding with Qwen family models when limits temporarily reached. (#8690)
fix: retry embedding with Qwen family models when limits are temporarily
reached.

### What problem does this PR solve?

Qwen family model APIs are rate-limited. When a limit is hit, the "output"
attribute of the response is None, which raises a TypeError when retrieving
"embeddings" and stops the whole parsing procedure while creating a knowledge
base. Since these limits are temporary, a simple retry mechanism avoids the
failure; if retry_max is reached, the error is raised early instead of being
hidden behind the TypeError.
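
A minimal sketch of the retry idea (not the actual `embedding_model.py` code; `call_qwen_embedding`, `RETRY_MAX`, and the backoff are illustrative):

```python
import time

RETRY_MAX = 5

def encode_with_retry(texts):
    for attempt in range(RETRY_MAX):
        resp = call_qwen_embedding(texts)   # hypothetical wrapper around the Qwen API call
        if resp and resp.get("output"):     # rate-limited responses come back with output=None
            return [d["embedding"] for d in resp["output"]["embeddings"]]
        time.sleep(2 ** attempt)            # back off before retrying
    raise RuntimeError("Qwen embedding failed: retry limit reached")
```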

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-07-07 12:15:52 +08:00
ae3683c346 fix task_service.py (#8687)
Fix the case where the `pages` variable might be None.

### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-07 09:48:51 +08:00
1e6bda735a Fix: add ES re-connect once request timeout. (#8678)
### What problem does this PR solve?

#8669

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-07 09:22:25 +08:00
ebf827a956 fix(docker-compose):The old base image lost the curl command, and the image has been updated to fix this issue. Add Health Check (#8672)
### What problem does this PR solve?
1. The old base image lost the curl command; an updated image is used to fix
this issue (the service has been tested with the new version).
2. Add a health check.


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-04 20:03:03 +08:00
8a3b5d1d76 Fix a small typo in count of used fragments (#8673)
### What problem does this PR solve?

Fix a small typo in count of used fragments.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-07-04 19:46:31 +08:00
1ac61c0f0f Fix: secure canvas (#8670)
### What problem does this PR solve?

Secure canvas access.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-04 19:40:39 +08:00
39799469d1 Fix: Wrong Citation Display #8594 #8474 (#8671)
### What problem does this PR solve?

Fix: Wrong Citation Display #8594 #8474

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-04 19:12:13 +08:00
7f707ef5ed Fix: optimize the chunk result page (#8676)
### What problem does this PR solve?
fix: Create a new message component to replace the antd message component,
create a new Spin component to replace the antd Spin component, optimize the
original paging component style, and optimize the chunk result page.
[#3221](https://github.com/infiniflow/ragflow/issues/3221)

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-07-04 19:00:30 +08:00
a306a6f158 Refa: refactor prompts into markdown-style structure using Jinja2 (#8667)
### What problem does this PR solve?

Refactor prompts into markdown-style structure using Jinja2.

### Type of change

- [x] Refactoring
2025-07-04 15:59:41 +08:00
1cf24be04b fix:Optimized the style of the dataset configuration page and added the l… (#8655)
### What problem does this PR solve?

Optimized the style of the dataset configuration page and added logic for
cancelling submission.
[#3221](https://github.com/infiniflow/ragflow/issues/3221)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-04 15:11:30 +08:00
b382b63f9a Fix(docker-compose)Update docker-compose-base.yml (#8650)
### What problem does this PR solve?
1. Optimize the Redis health check.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-04 14:06:20 +08:00
9fbb36ca40 feat: use official sources for chromedriver-linux in download_deps.py (#8665)
### What problem does this PR solve?

Resolves ambiguity and potential MITM attacks by using the official channel
for chromedriver-linux in download_deps.py.

### Type of change

- [x] Performance Improvement
2025-07-04 14:05:56 +08:00
d5f6335f99 Fix: The data set created by API call failed to parse after uploading the file. (#8657)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8656

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-04 12:41:28 +08:00
194e088d01 Fix: Fixed the issue where the debug form Switch component had no default value #3221 (#8662)
### What problem does this PR solve?

Fix: Fixed the issue where the debug form Switch component had no
default value #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-04 12:21:00 +08:00
f8a6987f1e Refa: automatic LLMs registration (#8651)
### What problem does this PR solve?

Support automatic LLMs registration.

### Type of change

- [x] Refactoring
2025-07-03 19:05:31 +08:00
3234a15aae Fix: Fixed the issue of retrieval operator text overlapping #3221 (#8652)
### What problem does this PR solve?

Fix: Fixed the issue of retrieval operator text overlapping #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-03 19:04:06 +08:00
9771b521cd Update svg of SiliconFlow with new LOGO (#8647)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-07-03 17:29:16 +08:00
a4d97dcf12 Feat: Edit the output data of the code operator #3221 (#8649)
### What problem does this PR solve?

Feat: Edit the output data of the code operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-03 17:29:02 +08:00
612abd6d89 Feat: Display the iteration operator toolbar #3221 (#8645)
### What problem does this PR solve?

Feat: Display the iteration operator toolbar #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-03 13:31:39 +08:00
1dd18f95e9 Optimize the style and logic of the profile (#8639)
### What problem does this PR solve?

Optimize the style and logic of the profile.
[#3221](https://github.com/infiniflow/ragflow/issues/3221)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-03 13:31:22 +08:00
747da87a1e Feat: Combine the output logs of the same operator together #3221 (#8638)
### What problem does this PR solve?

Feat: Combine the output logs of the same operator together #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-02 19:21:40 +08:00
4243330d5c Feat: add MCP server test endpoint (#8632)
### What problem does this PR solve?

Add MCP server test endpoint.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-02 18:52:24 +08:00
140d4f0d30 Minor: fixed broken links. (#8636)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2025-07-02 18:39:11 +08:00
83c8af1b59 Fix: page_size can be None error (#8603)
### What problem does this PR solve?

Issue #8602

`parser_config.task_page_size` can default to `None` when the dataset is
created via the API. This was not handled by the `task_executor.py` code, so
`page_size` could sometimes be `None`, which causes an issue at line 351.
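
A minimal sketch of the None handling (the fallback value is illustrative; see `task_executor.py` for the actual default):

```python
parser_config = {"task_page_size": None}                # as created via the API
page_size = parser_config.get("task_page_size") or 12   # falls back instead of staying None
```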

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-02 18:38:48 +08:00
62b63acbb5 Refa: more robust mcp tool call (#8631)
### What problem does this PR solve?

Make the MCP tool call connection more robust.

### Type of change

- [x] Refactoring
2025-07-02 18:37:54 +08:00
fffb7c0bba Fix: anthropic llm issue. (#8633)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-02 18:37:34 +08:00
898da23caa make dirs with 'exist_ok=True' (#8629)
### What problem does this PR solve?

The following error occurred during local testing; it is fixed by passing
'exist_ok=True'.

```log
set_progress(7461edc2535c11f0a2aa0242c0a82009), progress: -1, progress_msg: 21:41:41 Page(1~100000001): [ERROR][Errno 17] File exists: '/ragflow/tmp'
```
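
A minimal illustration of the fix, using the path from the log above:

```python
import os

# exist_ok=True makes the call idempotent, so "[Errno 17] File exists" is no
# longer raised when the directory was already created (e.g. by another worker).
os.makedirs("/ragflow/tmp", exist_ok=True)
```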

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-02 18:35:16 +08:00
56e6f37ffa Update Chrome download URL in use_china_mirrors configuration (#8628)
### What problem does this PR solve?

Update Chrome download URL in use_china_mirrors configuration


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: lqh <liqunhuan@foreveross.com>
2025-07-02 18:34:38 +08:00
040e4ad8a5 Feat: Convert the arguments parameter of the code operator to a dictionary #3221 (#8623)
### What problem does this PR solve?

Feat: Convert the arguments parameter of the code operator to a
dictionary #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-02 18:34:21 +08:00
695bfe34a2 fix opendal config 'oss_table' and 'max_allowed_packet' (#8611)
### What problem does this PR solve?

Fix the config option name for the OpenDAL table and the 'max_allowed_packet'
setting.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: He Wang <wanghechn@qq.com>
2025-07-02 16:45:01 +08:00
d343cb4deb Add Google Cloud Vision API Integration (Image2Text) (#8608)
### What problem does this PR solve?

This PR introduces Google Cloud Vision API integration to enhance image
understanding capabilities in the application. It addresses the need for
advanced image description and chat functionalities by implementing a
new `GoogleCV` class to handle API interactions and updating relevant
configurations. This enables users to leverage Google Cloud Vision for
image-to-text tasks, improving the application's ability to process and
interpret visual data.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-02 10:02:01 +08:00
9dd3dfaab0 Add service_conf and llm_factories options to Helm chart (#8607)
### What problem does this PR solve?

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2025-07-02 09:58:17 +08:00
212d5ce7ff Feat: Construct the to field of the classification operator when saving data #3221 (#8610)
### What problem does this PR solve?

Feat: Construct the to field of the classification operator when saving
data #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-02 09:49:42 +08:00
0b40eb3e90 Test: Add tests for chunk API endpoints (#8616)
### What problem does this PR solve?

- Add comprehensive test suite for chunk operations including:
  - Test files for create, list, retrieve, update, and delete chunks
  - Authorization tests
  - Batch operations tests
- Update test configurations and common utilities
- Validate `important_kwd` and `question_kwd` fields are lists in
chunk_app.py
- Reorganize imports and clean up duplicate code

### Type of change

- [x] Add test cases
2025-07-02 09:49:08 +08:00
f586dd0a96 Fix: docx parse error. (#8600)
### What problem does this PR solve?

Some docx files parsed with the naive parser raise an error: `block.style.name`
in the `__get_nearest_title` function can be None in some cases.

![image](https://github.com/user-attachments/assets/efbe6d1b-10c8-415e-b693-a86f73e1ffa6)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: wenxuan.zhang <wenxuan.zhang@chinacreator.com>
2025-07-01 17:38:11 +08:00
93a8f4a4c8 Fix: Fixed the issue that the global variables of the code operator cannot be selected #3221 (#8605)
### What problem does this PR solve?

Fix: Fixed the issue that the global variables of the code operator
cannot be selected #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-01 17:31:56 +08:00
6b04b07eb4 Fixed the issue where variables were not displayed in the switch operator #3221 (#8601)
### What problem does this PR solve?

Feat: Fixed the issue where variables were not displayed in the switch
operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-01 15:52:14 +08:00
1c77b4ed9b fix: Correctly format message parts in GoogleChat (#8596)
### What problem does this PR solve?

This PR addresses an incompatibility issue with the Google Chat API by
correcting the message content format in the `GoogleChat` class.
Previously, the content was directly assigned to the "parts" field,
which did not align with the API's expected format. This change ensures
that messages are properly formatted with a "text" key within a
dictionary, as required by the API.
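
A minimal sketch of the corrected formatting (the dict shape follows the PR description; the real change is inside the `GoogleChat` class):

```python
def to_parts(content: str) -> list:
    # Before: the raw string was assigned to "parts", which the API rejects.
    # After: each part is a dict carrying a "text" key.
    return [{"text": content}]

message = {"role": "user", "parts": to_parts("Hello")}
```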

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-01 14:06:07 +08:00
e3edcc3064 Trivals. (#8597)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-01 14:05:18 +08:00
103027580e Feat: Add agent advanced settings form #3221 (#8592)
### What problem does this PR solve?

Feat: Add agent advanced settings form #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-07-01 10:52:48 +08:00
32f8b3ad77 Fix: the output log is incorrect (#8577)
### What problem does this PR solve?

Fix: the output log is incorrect

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liang <xiaofeng.liang@landstech.com.cn>
2025-07-01 10:49:43 +08:00
d4da6dce6e Feat: Add file management HTTP_API (#8395)
### What problem does this PR solve?

Add file management HTTP_API for operating files

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-01 09:51:53 +08:00
7f19f604a9 Pass Form Instance to GoogleModal Form Component (#8586)
### What problem does this PR solve?

This PR enables the `Form` component within the `GoogleModal` to
directly access and manipulate the form state by passing the form
instance from the parent component. This enhances form control and data
manipulation capabilities within the modal, improving the component's
functionality and integration with the parent form.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-01 09:48:36 +08:00
4a1680a799 doc: change to chunk_token num (#8590)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/8556

### Type of change

- [x] Documentation Update
2025-07-01 09:47:23 +08:00
8801de2772 Refa: change mcp_client module to rag/utils/conn (#8578)
### What problem does this PR solve?

Change mcp_client module to rag/utils/conn.

### Type of change

- [x] Refactoring
2025-07-01 09:29:19 +08:00
d620432e3b Feat: In a dialog message, users can enter different types of data #3221 (#8583)
### What problem does this PR solve?

Feat: In a dialog message, users can enter different types of data #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-30 19:32:40 +08:00
cf8c063a69 Adding semaphore usage on the '/run' endpoint (#8526)
### What problem does this PR solve?

Switch threading.Lock() to asyncio.Lock(), since threading.Lock() blocks the
event loop.
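
A minimal sketch of the switch (`handle_run` is a hypothetical stand-in for the real '/run' handler):

```python
import asyncio

run_lock = asyncio.Lock()           # threading.Lock would block the event loop thread

async def run_endpoint():
    async with run_lock:            # waiting here yields control to other coroutines
        return await handle_run()   # hypothetical handler for the '/run' endpoint
```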

### Type of change

- [x] Performance Improvement
2025-06-30 15:40:23 +08:00
40b1684c1e Feat: Fixed the issue that the top toolbar disappears when opening the agent operator form #3221 (#8579)
### What problem does this PR solve?

Feat: Fixed the issue that the top toolbar disappears when opening the
agent operator form #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-30 15:39:38 +08:00
d46c24045f Feat: add GiteeAI as a llm provider. (#8572)
### What problem does this PR solve?

#1853

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-06-30 11:22:11 +08:00
10f12fa149 Feat: Support GiteeAI model #1853 (#8573)
### What problem does this PR solve?

Feat: Support GiteeAI model  #1853

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-30 11:21:51 +08:00
356d1f3485 Feat: Allow users to enter text in the middle of a chat #3221 (#8569)
### What problem does this PR solve?

Feat: Allow users to enter text in the middle of a chat #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-30 10:36:52 +08:00
aafeffa292 Feat: add gitee as LLM provider. (#8545)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-06-30 09:22:31 +08:00
e441c17c2c Refa: limit embedding concurrency and fix chat_with_tool (#8543)
### What problem does this PR solve?

#8538

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2025-06-27 19:28:41 +08:00
8e1f8a0c48 Feat: Fixed the issue where the begin operator parameters could not be submitted during debugging #3221 (#8539)
### What problem does this PR solve?

Feat: Fixed the issue where the begin operator parameters could not be
submitted during debugging #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-27 18:53:13 +08:00
0f7c955634 Feat: Display sub-agents in agent form #3221 (#8536)
### What problem does this PR solve?
Feat: Display sub-agents in agent form #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-27 15:45:53 +08:00
5a2099a1c7 Feat: Fixed the issue where the prompt menu content was hidden #3221 (#8530)
### What problem does this PR solve?

Feat: Fixed the issue where the prompt menu content was hidden #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-27 12:11:29 +08:00
a10f05f4d7 Fix: chat with tools bug. (#8528)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-27 12:10:53 +08:00
0478f36e36 Feat: allow users to choose which MCP tools are enabled (#8519)
### What problem does this PR solve?

Allow users to choose which MCP tools are enabled.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-06-27 10:23:34 +08:00
303c6dd1a8 Fix memory leaks in PIL image and BytesIO handling during chunk processing (#8522)
### What problem does this PR solve?
This PR addresses critical memory leaks in the task executor's image
processing pipeline. The current implementation fails to properly
dispose of PIL Image objects and BytesIO buffers during chunk
processing, leading to progressive memory accumulation that can cause
the task executor to consume excessive memory over time.

### Background context
- The `upload_to_minio` function processes images from document chunks
and converts them to JPEG format for storage.
- PIL Image objects hold significant memory resources that must be
explicitly closed to prevent memory leaks.
- BytesIO objects also consume memory and should be properly disposed of
after use.
- In high-throughput scenarios with many image-containing documents,
these memory leaks can lead to out-of-memory errors and degraded
performance.

### Specific issues fixed
- PIL Image objects were not being explicitly closed after processing.
- BytesIO buffers lacked proper cleanup in all code paths.
- Converted images (RGBA/P to RGB) were not disposing of the original
image object.
- Memory references to large image data were not being cleared promptly.

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Performance Improvement


### Changes made
- Added explicit `d["image"].close()` calls after image processing
operations.
- Implemented proper cleanup of converted images when changing formats
from RGBA/P to RGB.
- Enhanced BytesIO cleanup with `try/finally` blocks to ensure disposal
in all code paths.
- Added explicit `del d["image"]` to clear memory references after
processing.

This fix ensures stable memory usage during long-running document
processing tasks and prevents potential out-of-memory conditions in
production environments.
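
A minimal sketch of the cleanup pattern described above (not the exact `upload_to_minio` code):

```python
from io import BytesIO

def image_to_jpeg_bytes(d: dict) -> bytes:
    img = d["image"]                 # a PIL Image attached to the chunk
    buf = BytesIO()
    try:
        if img.mode in ("RGBA", "P"):
            converted = img.convert("RGB")
            img.close()              # release the original once converted
            img = converted
        img.save(buf, format="JPEG")
        return buf.getvalue()
    finally:
        buf.close()                  # dispose of the buffer in every code path
        img.close()                  # dispose of the (possibly converted) image
        del d["image"]               # clear the reference so memory can be reclaimed
```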
2025-06-27 10:23:21 +08:00
7dbe06f7d8 Refactor: remove useless initialize logic in list_doc (#8523)
### What problem does this PR solve?

Remove useless logic in a loop for list_doc

### Type of change

- [x] Refactoring
- [x] Performance Improvement
2025-06-27 10:23:08 +08:00
be712714af Refactor:improve the logic to check cancel (#8524)
### What problem does this PR solve?

improve the logic to check cancel

### Type of change

- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-06-27 10:22:53 +08:00
938d8dd878 Fix: user_default_llm configuration doesn't work for OpenAI API compatible LLM factory (#8502)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/8467
When adding an LLM, the stored llm_name looks like "llm1___OpenAI-API"
f09ca8e795/api/apps/llm_app.py (L173)
so the query should not use just "llm1".


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-27 09:41:12 +08:00
daf6c82066 fix: list index out of range (#8518)
### What problem does this PR solve?

stack:

```
2025-06-26 17:22:24,739 ERROR    1609 list index out of range
Traceback (most recent call last):
  File "/ragflow/.venv/lib/python3.10/site-packages/flask/app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
  File "/ragflow/.venv/lib/python3.10/site-packages/flask/app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/ragflow/api/utils/api_utils.py", line 298, in decorated_function
    return func(*args, **kwargs)
  File "/ragflow/api/apps/sdk/session.py", line 472, in list_session
    print(conv["reference"][message_num])
IndexError: list index out of range

```


![image](https://github.com/user-attachments/assets/93fe90a8-0434-4842-ba9f-bb5a995b498a)


### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-06-27 09:38:33 +08:00
f7b6c4ca99 Feat: Add StringTransform operator #3221 (#8520)
### What problem does this PR solve?

Feat: Add StringTransform operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-27 09:27:28 +08:00
2990779d59 fix(prompt-editor): resolve initial cursor position and auto-newline … (#8511)
### What problem does this PR solve?

In the web folder's prompt-editor component, when entering content for the
first time, the cursor position is abnormal and the content wraps
automatically.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: leonlai <owllai123456>
2025-06-26 19:28:46 +08:00
d768130204 Fix: chunk number error after re-parsing (#8513)
### What problem does this PR solve?

Fix chunk number error after re-parsing. #8503.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-26 17:46:53 +08:00
05bf01b058 Feat: Displays the output variable type selected by the loop operator #3221 (#8515)
### What problem does this PR solve?

Feat: Displays the output variable type selected by the loop operator
#3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-26 17:46:37 +08:00
d11cfd4e45 Fix: Add input validation to chunk creation endpoint (#8516)
### What problem does this PR solve?

- Include optional `tag_feas` field if present in request
- Add input validation for `important_kwd` and `question_kwd` to ensure
they are lists
- #8462

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
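
A minimal sketch of the list validation described above (field names follow the PR; the error handling is illustrative):

```python
def validate_chunk_payload(req: dict) -> None:
    for field in ("important_kwd", "question_kwd"):
        if field in req and not isinstance(req[field], list):
            raise ValueError(f"`{field}` must be a list")
```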
2025-06-26 17:46:00 +08:00
32a7ad3cba Feat: Customize the output variable name of the loop operator #3221 (#8514)
### What problem does this PR solve?

Feat: Customize the output variable name of the loop operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-26 16:43:06 +08:00
42a570a64d Feat: Add UserFillUpForm component #3221 (#8508)
### What problem does this PR solve?

Feat: Add UserFillUpForm component #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-26 14:55:51 +08:00
6d256ff0f5 Perf: ignore concatenation between rows. (#8507)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2025-06-26 14:55:37 +08:00
0eb90e73a5 Feat: add MCP dashboard functionalities list_tools and test_tool (#8505)
### What problem does this PR solve?

Add MCP dashboard functionalities list_tools and test_tool.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-06-26 13:52:01 +08:00
6b1221d2f6 Fix parser_config access for layout_recognize in presentation.py (#8492)
### What problem does this PR solve?
This PR addresses an issue in the presentation parser where the
`layout_recognize` configuration was incorrectly retrieved from
`kwargs.get("layout_recognize", "DeepDOC")`. Instead, it should be
sourced from the `parser_config` parameter, specifically
`parser_config.get("layout_recognize", "DeepDOC")`.

This mismatch could cause the parser to default to the "DeepDOC" layout
recognizer, ignoring any alternative recognition method specified in the
parser configuration. As a result, PDF document parsing might use an
incorrect recognition engine.

The fix ensures the presentation parser consistently uses the
`layout_recognize` setting from `parser_config`, aligning with the
configuration access patterns used elsewhere in the codebase.
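
A minimal sketch of the corrected lookup (the configuration value is illustrative):

```python
parser_config = {"layout_recognize": "Plain Text"}   # illustrative configuration

# Before (buggy): layout_recognize = kwargs.get("layout_recognize", "DeepDOC")
layout_recognize = parser_config.get("layout_recognize", "DeepDOC")  # respects parser_config
```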

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-26 11:54:43 +08:00
f09ca8e795 Feat: Allow operators inside the loop operator to reference the output parameters of external operators #3221 (#8498)
### What problem does this PR solve?

Feat: Allow operators inside the loop operator to reference the output
parameters of external operators #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-26 09:34:38 +08:00
c4bfd9fa2c Feat: Add retrieval tool #3221 (#8491)
### What problem does this PR solve?

Feat: Add retrieval tool #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-25 18:32:56 +08:00
7353070f49 Adds retrieval result fields to Chunk (#8478)
### What problem does this PR solve?

This PR adds fields to the `Chunk` class to store retrieval results like
similarity scores, term similarity, vector similarity, positions, and
document type. This allows the chunk object to hold all the information
needed when returning search results from the vector database.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-06-25 16:53:15 +08:00
dac5bcdf17 Fix: Enforce default embedding model in create_dataset / update_dataset (#8486)
### What problem does this PR solve?

Previous:
- Defaulted to hardcoded model 'BAAI/bge-large-zh-v1.5@BAAI'
- Did not respect user-configured default embedding_model

Now:
- Correctly prioritizes user-configured default embedding_model

Other:
- Make embedding_model optional in CreateDatasetReq with proper None
handling
- Add default embedding model fallback in dataset update when empty
- Enhance validation utils to handle None values and string
normalization
- Update SDK default embedding model to None to match API changes
- Adjust related test cases to reflect new validation rules

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-25 16:41:32 +08:00
340354b79c fix the error 'Unknown field for GenerationConfig: max_tokens' when u… (#8473)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/8324

docker image version: v0.19.1

The `_clean_conf` function was not implemented in the `_chat` and
`chat_streamly` methods of the `GeminiChat` class, causing the error
"Unknown field for GenerationConfig: max_tokens" when the default LLM
config includes the "max_tokens" parameter.

**Buggy Code(ragflow/rag/llm/chat_model.py)**
```python
class GeminiChat(Base):
    def __init__(self, key, model_name, base_url=None, **kwargs):
        super().__init__(key, model_name, base_url=base_url, **kwargs)

        from google.generativeai import GenerativeModel, client

        client.configure(api_key=key)
        _client = client.get_default_generative_client()
        self.model_name = "models/" + model_name
        self.model = GenerativeModel(model_name=self.model_name)
        self.model._client = _client

    def _clean_conf(self, gen_conf):
        for k in list(gen_conf.keys()):
            if k not in ["temperature", "top_p"]:
                del gen_conf[k]
        return gen_conf

    def _chat(self, history, gen_conf):
        from google.generativeai.types import content_types

        system = history[0]["content"] if history and history[0]["role"] == "system" else ""
        hist = []
        for item in history:
            if item["role"] == "system":
                continue
            hist.append(deepcopy(item))
            item = hist[-1]
            if "role" in item and item["role"] == "assistant":
                item["role"] = "model"
            if "role" in item and item["role"] == "system":
                item["role"] = "user"
            if "content" in item:
                item["parts"] = item.pop("content")

        if system:
            self.model._system_instruction = content_types.to_content(system)
        response = self.model.generate_content(hist, generation_config=gen_conf)
        ans = response.text
        return ans, response.usage_metadata.total_token_count

    def chat_streamly(self, system, history, gen_conf):
        from google.generativeai.types import content_types

        if system:
            self.model._system_instruction = content_types.to_content(system)
        #_clean_conf was not implemented 
        for k in list(gen_conf.keys()):
            if k not in ["temperature", "top_p", "max_tokens"]:
                del gen_conf[k]
        for item in history:
            if "role" in item and item["role"] == "assistant":
                item["role"] = "model"
            if "content" in item:
                item["parts"] = item.pop("content")
        ans = ""
        try:
            response = self.model.generate_content(history, generation_config=gen_conf, stream=True)
            for resp in response:
                ans = resp.text
                yield ans

            yield response._chunks[-1].usage_metadata.total_token_count
        except Exception as e:
            yield ans + "\n**ERROR**: " + str(e)

        yield 0
```
**Implement the _clean_conf function**
```python
class GeminiChat(Base):
    def __init__(self, key, model_name, base_url=None, **kwargs):
        super().__init__(key, model_name, base_url=base_url, **kwargs)

        from google.generativeai import GenerativeModel, client

        client.configure(api_key=key)
        _client = client.get_default_generative_client()
        self.model_name = "models/" + model_name
        self.model = GenerativeModel(model_name=self.model_name)
        self.model._client = _client

    def _clean_conf(self, gen_conf):
        for k in list(gen_conf.keys()):
            if k not in ["temperature", "top_p"]:
                del gen_conf[k]
        return gen_conf

    def _chat(self, history, gen_conf):
        from google.generativeai.types import content_types
        # implement _clean_conf to remove the wrong parameters
        gen_conf = self._clean_conf(gen_conf)

        system = history[0]["content"] if history and history[0]["role"] == "system" else ""
        hist = []
        for item in history:
            if item["role"] == "system":
                continue
            hist.append(deepcopy(item))
            item = hist[-1]
            if "role" in item and item["role"] == "assistant":
                item["role"] = "model"
            if "role" in item and item["role"] == "system":
                item["role"] = "user"
            if "content" in item:
                item["parts"] = item.pop("content")

        if system:
            self.model._system_instruction = content_types.to_content(system)
        response = self.model.generate_content(hist, generation_config=gen_conf)
        ans = response.text
        return ans, response.usage_metadata.total_token_count

    def chat_streamly(self, system, history, gen_conf):
        from google.generativeai.types import content_types
        # implement _clean_conf to remove the wrong parameters
        gen_conf = self._clean_conf(gen_conf)

        if system:
            self.model._system_instruction = content_types.to_content(system)
        #Removed duplicate parameter filtering logic "for k in list(gen_conf.keys()):"
        for item in history:
            if "role" in item and item["role"] == "assistant":
                item["role"] = "model"
            if "content" in item:
                item["parts"] = item.pop("content")
        ans = ""
        try:
            response = self.model.generate_content(history, generation_config=gen_conf, stream=True)
            for resp in response:
                ans = resp.text
                yield ans

            yield response._chunks[-1].usage_metadata.total_token_count
        except Exception as e:
            yield ans + "\n**ERROR**: " + str(e)

        yield 0
```

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-06-25 16:23:35 +08:00
c4b58ed195 Feat: Filter the query variable drop-down box options by type #3221 (#8485)
### What problem does this PR solve?

Feat: Filter the query variable drop-down box options by type #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-25 16:23:20 +08:00
b705ff08fe Refa: improve GraphRAG similarity sensitivity to numeric differences (#8479)
### What problem does this PR solve?

Improve GraphRAG similarity sensitivity to numeric differences. #8444.

### Type of change

- [x] Refactoring
2025-06-25 16:20:59 +08:00
d632046032 Fixes typo in variable name (#8476)
### What problem does this PR solve?

This PR fixes a typo in the variable name `succesfulFilenames`,
correcting it to `successfulFilenames`. This ensures consistency and
avoids potential errors due to the misspelled variable.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-25 15:36:54 +08:00
de8ba7298c RAGFlow service_conf using .env variable (#8454)
### What problem does this PR solve?
Fix: when using external components, it is impossible to specify the port,
because the variables in `docker/.env` were not referenced by
`docker/service_conf.yaml.template`.

382d2d0373/docker/.env (L85)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-25 15:24:37 +08:00
ece27c66e9 Feat: Insert the node data of the bottom subagent into the tool array of the head agent #3221 (#8471)
### What problem does this PR solve?

Feat: Insert the node data of the bottom subagent into the tool array of
the head agent #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-25 15:24:22 +08:00
5256980ffb Fix: Solve the OOM issue when passing large PDF files while using QA chunking method. (#8464)
### What problem does this PR solve?

Using the QA chunking method with a large PDF (e.g., 300+ pages) may
lead to OOM in the ragflow-worker module.


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-25 10:25:45 +08:00
f21827bc28 Feat: add MCP streamable-http transport (#8449)
### What problem does this PR solve?

Add MCP streamable-http transport.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-06-25 10:01:54 +08:00
8d9d2cc0a9 Fix: some cases Task return but not set progress (#8469)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/8466
Going through the code, the current logic is: when do_handle_task raises an
exception, handle_task sets the progress, but in some cases do_handle_task
simply returns without setting the right progress. In those cases the Redis
stream entry is acked even though the task still appears to be running.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-06-25 09:58:55 +08:00
af6850c8d8 Feat: add MCP dashboard operations (#8460)
### What problem does this PR solve?

Add MCP server dashboard operations.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-06-25 09:26:04 +08:00
18fd7983f1 Docs: exporting created knowledge graphs is not supported (#8465)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-06-25 09:21:54 +08:00
d6a941ebf5 Fix the bug of long type value overflow (#8313)
### What problem does this PR solve?

This PR fixes #8271 by extending the int type to float when any value in
a column falls outside the long type range.
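
A minimal sketch of the idea, assuming a helper that picks the numeric column type for the document store (names are illustrative):

```python
LONG_MIN, LONG_MAX = -(2 ** 63), 2 ** 63 - 1

def infer_numeric_column_type(values) -> str:
    # Fall back to a float column when any integer would overflow a signed
    # 64-bit long, so oversized values no longer break the insert.
    for v in values:
        if isinstance(v, int) and not (LONG_MIN <= v <= LONG_MAX):
            return "float"
    return "int"
```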
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-24 18:18:30 +08:00
1c68c9ebd6 Feat: Add IterationNode component #3221 (#8461)
### What problem does this PR solve?

Feat: Add IterationNode component #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-24 18:01:30 +08:00
bc1b837616 FIX: Saving an RGBA image directly as JPEG will cause an error. If the… (#8399)
Saving an RGBA image directly as JPEG will cause an error. If the image
is in RGBA mode, convert it to RGB mode before saving it in JPG format.

### What problem does this PR solve?

During document parsing in the knowledge base, we occasionally encounter
the error 'cannot write mode RGBA as JPEG.' This occurs because images
in RGBA mode cannot be directly saved as JPEG. They must be converted
first before saving.
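
The conversion itself is a one-liner with Pillow; a minimal sketch (the helper name is illustrative, not the actual RAGFlow function):

```python
from io import BytesIO

from PIL import Image

def to_jpeg_bytes(img: Image.Image) -> bytes:
    # JPEG has no alpha channel, so RGBA (and palette) images must be
    # converted to RGB first, otherwise Pillow raises
    # "cannot write mode RGBA as JPEG".
    if img.mode in ("RGBA", "P"):
        img = img.convert("RGB")
    buf = BytesIO()
    img.save(buf, format="JPEG")
    return buf.getvalue()
```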

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-24 18:01:13 +08:00
9f9acf0c49 Test: Add document app tests (#8456)
### What problem does this PR solve?

- Add new test suite for document app with
create/list/parse/upload/remove tests
- Update API URLs to use version variable from config in HTTP and web
API tests

### Type of change

- [x] Add test cases
2025-06-24 17:26:16 +08:00
382d2d0373 Refactor: Improve insert file logic (#8445)
### What problem does this PR solve?

Before the refactor:
1. Create the file record.
2. Add the blob.

If an exception occurs at step 2, the database ends up with a file record
that has no related blob, which can introduce bugs.

After the refactor:
1. Add the blob.
2. Create the file record.

If step 1 succeeds but step 2 fails, only a dirty blob is left in the blob
store, and the user never notices it.
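
A minimal sketch of the reordered flow; `storage` and `db` are placeholders, not RAGFlow's actual service objects:

```python
def add_file(storage, db, name: str, blob: bytes) -> dict:
    # Step 1: write the blob first. If this raises, no DB record exists,
    # so there is never a record that points to missing data.
    location = storage.put(name, blob)

    # Step 2: create the file record. If this raises, the worst case is an
    # orphaned blob in object storage, which the user never sees.
    return db.create_file_record(name=name, location=location, size=len(blob))
```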



### Type of change


- [x] Refactoring
2025-06-24 13:17:22 +08:00
07545fbfd3 Feat: Delete the agent and tool nodes downstream of the agent node #3221 (#8450)
### What problem does this PR solve?

Feat: Delete the agent and tool nodes downstream of the agent node #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-24 11:33:01 +08:00
49d67cbcb7 fix a bug when using huggingface embedding api (#8432)
### What problem does this PR solve?

image_version: v0.19.1
This PR fixes a bug in the `HuggingFaceEmbed` embedding method that was
causing `AssertionError: assert len(vects) == len(docs)` during the
document embedding process.

#### Problem
The `HuggingFaceEmbed.encode()` method had an early return statement
inside the for loop, causing it to return after processing only the
first text input instead of processing all texts in the input list.

**Error Message**
```python
AssertionError: assert len(vects) == len(docs) # input chunks  != embedded  vectors from embedding api
File "/ragflow/rag/svr/task_executor.py", line 442, in embedding
```



**Buggy code (`/ragflow/rag/llm/embedding_model.py`)**
```python
class HuggingFaceEmbed(Base):
    def __init__(self, key, model_name, base_url=None):
        if not model_name:
            raise ValueError("Model name cannot be None")
        self.key = key
        self.model_name = model_name.split("___")[0]
        self.base_url = base_url or "http://127.0.0.1:8080"

    def encode(self, texts: list):
        embeddings = []
        for text in texts:
            response = requests.post(...)
            if response.status_code == 200:
                try:
                    embedding = response.json()
                    embeddings.append(embedding[0])
                    # Early return: exits after embedding only the first text
                    return np.array(embeddings), sum([num_tokens_from_string(text) for text in texts])
                except Exception as _e:
                    log_exception(_e, response)
            else:
                raise Exception(...)
```
**Fixed code (rolled this function back to the v0.19.0 version)**
```python
class HuggingFaceEmbed(Base):
    def __init__(self, key, model_name, base_url=None):
        if not model_name:
            raise ValueError("Model name cannot be None")
        self.key = key
        self.model_name = model_name.split("___")[0]
        self.base_url = base_url or "http://127.0.0.1:8080"

    def encode(self, texts: list):
        embeddings = []
        for text in texts:
            response = requests.post(...)
            if response.status_code == 200:
                embedding = response.json()
                embeddings.append(embedding[0])  # Only append, no early return
            else:
                raise Exception(...)
        return np.array(embeddings), sum([num_tokens_from_string(text) for text in texts])  # Return after processing all texts
```
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-24 09:35:02 +08:00
96b63cc81f Feat: Use the message_id returned by the interface as the id of the reply message #3221 (#8434)
### What problem does this PR solve?
Feat: Use the message_id returned by the interface as the id of the
reply message #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-24 09:34:33 +08:00
fd7ac17605 Feat: Scratch MCP tool calling support. (#8263)
### What problem does this PR solve?

This is a cherry-pick from #7781 as requested.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-06-23 17:45:35 +08:00
e9c6891e24 Docs: Miscellaneous editorial updates (#8430)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-06-23 17:45:20 +08:00
03656da4dd Refa: upgrade MCP SDK to v1.9.4 (#8421)
### What problem does this PR solve?

Upgrade MCP SDK to v1.9.4 (latest).

### Type of change

- [x] Refactoring
2025-06-23 16:53:59 +08:00
0427eebe94 Update .env to default to the v0.19.1-slim edition (#8412)
### What problem does this PR solve?

Update .env so that it defaults to the v0.19.1-slim edition.

### Type of change

- [x] Other (please describe): Update .env so that it defaults to the
v0.19.1-slim edition
2025-06-23 16:00:14 +08:00
244d8a47b9 Fix: AzureChat model code (#8426)
### What problem does this PR solve?

- Simplify AzureChat constructor by passing base_url directly
- Clean up spacing and formatting in chat_model.py
- Remove redundant parentheses and improve code consistency
- #8423

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-23 15:59:25 +08:00
4760e317d5 Feat: Add HTTPS setup instructions and configuration for Nginx (#8401)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change: Documentation Update/Refactoring

#### Summary
Adds HTTPS/SSL configuration guide/example to enable secure RAGFlow
deployments with proper certificate management.

#### Changes
- New HTTPS Setup Section: Step-by-step guide for SSL certificate
configuration
- Let's Encrypt Integration: Complete Certbot setup instructions
- Docker Configuration: Volume mapping examples for certificates

#### Key Features
- Prerequisites checklist
- Docker Compose configuration examples
- Support for both Let's Encrypt and existing certificates

#### Files Modified
- `README.md`
- `ragflow.https.conf` (new file)
2025-06-23 15:36:15 +08:00
71afebb2c0 Feat: The delete button is displayed only when the cursor is hovered over the connection line #3221 (#8422)
### What problem does this PR solve?

Feat: The delete button is displayed only when the cursor is hovered
over the connection line #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-23 15:27:34 +08:00
f0e0783618 Fix: Database Query Vulnerable to Injection Attacks in rag/utils/opendal_conn.py (#8408)
**Context and Purpose:**

This PR automatically remediates a security vulnerability:
- **Description:** Detected possible formatted SQL query. Use
parameterized queries instead.
- **Rule ID:**
python.lang.security.audit.formatted-sql-query.formatted-sql-query
- **Severity:** HIGH
- **File:** rag/utils/opendal_conn.py
- **Lines Affected:** 98 - 98

This change is necessary to protect the application from potential
security risks associated with this vulnerability.

**Solution Implemented:**

The automated remediation process has applied the necessary changes to
the affected code in `rag/utils/opendal_conn.py` to resolve the
identified issue.

Please review the changes to ensure they are correct and integrate as
expected.
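
A generic before/after sketch of this class of fix, not the exact code in `rag/utils/opendal_conn.py` (`kv_store`, `k`, and `v` are placeholder identifiers):

```python
# Vulnerable: the value is interpolated directly into the SQL string,
# so a crafted input can change the query's meaning.
cursor.execute(f"SELECT v FROM kv_store WHERE k = '{key}'")

# Remediated: the value is passed as a bound parameter and escaped by the
# driver; only trusted identifiers may ever be formatted into the string.
cursor.execute("SELECT v FROM kv_store WHERE k = %s", (key,))
```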
2025-06-23 14:54:25 +08:00
d4e6e2bd21 Fix: doc_aggs issue. (#8418)
### What problem does this PR solve?

#8406

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-23 14:54:01 +08:00
81a4c0698c Feat: Solved the conflict between the Handle click and drag events of the canvas node #3221 (#8413)
### What problem does this PR solve?

Feat: Solved the conflict between the Handle click and drag events of
the canvas node #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-23 14:36:01 +08:00
83e23f1e8a Fix: rank feature score should be greater than 0. (#8416)
### What problem does this PR solve?

#8414

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-06-23 14:10:13 +08:00
794a4102c2 Fix: document parsing via the API has a lot of problems (#8407)
### What problem does this PR solve?
#8391
#8404

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-06-23 13:08:11 +08:00
3a50908946 Docs: Added v0.19.1 release notes (#8398)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-06-23 09:51:28 +08:00
db9e91152d Feat: Add Tavily operator #3221 (#8400)
### What problem does this PR solve?

Feat: Add Tavily operator #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-06-23 09:51:09 +08:00
958 changed files with 69984 additions and 35437 deletions

View File

@ -0,0 +1,46 @@
name: "❤️‍🔥ᴬᴳᴱᴺᵀ Agent scenario request"
description: Propose an agent scenario request for RAGFlow.
title: "[Agent Scenario Request]: "
labels: ["❤️‍🔥ᴬᴳᴱᴺᵀ agent scenario"]
body:
- type: checkboxes
attributes:
label: Self Checks
description: "Please check the following in order to be responded in time :)"
options:
- label: I have searched for existing issues [search for existing issues](https://github.com/infiniflow/ragflow/issues), including closed ones.
required: true
- label: I confirm that I am using English to submit this report ([Language Policy](https://github.com/infiniflow/ragflow/issues/5910)).
required: true
- label: Non-English title submissions will be closed directly ( 非英文标题的提交将会被直接关闭 ) ([Language Policy](https://github.com/infiniflow/ragflow/issues/5910)).
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: textarea
attributes:
label: Is your feature request related to a scenario?
description: |
A clear and concise description of what the scenario is. Ex. I'm always frustrated when [...]
render: Markdown
validations:
required: false
- type: textarea
attributes:
label: Describe the feature you'd like
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
attributes:
label: Documentation, adoption, use case
description: If you can, explain some scenarios of how users might use this and situations in which it would be helpful. Any API designs, mockups, or diagrams are also helpful.
render: Markdown
validations:
required: false
- type: textarea
attributes:
label: Additional information
description: |
Add any other context or screenshots about the feature request here.
validations:
required: false

2
.gitignore vendored
View File

@ -193,3 +193,5 @@ dist
# SvelteKit build / generate output # SvelteKit build / generate output
.svelte-kit .svelte-kit
# Default backup dir
backup

15
.trivyignore Normal file
View File

@ -0,0 +1,15 @@
**/*.md
**/*.min.js
**/*.min.css
**/*.svg
**/*.png
**/*.jpg
**/*.jpeg
**/*.gif
**/*.woff
**/*.woff2
**/*.map
**/*.webp
**/*.ico
**/*.ttf
**/*.eot

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.19.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.2">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -81,12 +81,15 @@ data.
Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io). Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/chunking.gif" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/agentic-dark.gif" width="1200"/>
</div> </div>
## 🔥 Latest Updates ## 🔥 Latest Updates
- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
- 2025-08-04 Supports new models, including Kimi K2 and Grok 4.
- 2025-08-01 Supports agentic workflow and MCP.
- 2025-05-23 Adds a Python/JavaScript code executor component to Agent. - 2025-05-23 Adds a Python/JavaScript code executor component to Agent.
- 2025-05-05 Supports cross-language query. - 2025-05-05 Supports cross-language query.
- 2025-03-19 Supports using a multi-modal model to make sense of images within PDF or DOCX files. - 2025-03-19 Supports using a multi-modal model to make sense of images within PDF or DOCX files.
@ -187,7 +190,7 @@ releases! 🌟
> All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64. > All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
> If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system. > If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
> The command below downloads the `v0.19.1-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.19.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.19.1` for the full edition `v0.19.1`. > The command below downloads the `v0.20.2-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.2-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.2` for the full edition `v0.20.2`.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
@ -200,8 +203,8 @@ releases! 🌟
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
|-------------------|-----------------|-----------------------|--------------------------| |-------------------|-----------------|-----------------------|--------------------------|
| v0.19.1 | &approx;9 | :heavy_check_mark: | Stable release | | v0.20.2 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.19.1-slim | &approx;2 | ❌ | Stable release | | v0.20.2-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build | | nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.19.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.2">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru">
@ -74,12 +74,15 @@
Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io). Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/chunking.gif" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/agentic-dark.gif" width="1200"/>
</div> </div>
## 🔥 Pembaruan Terbaru ## 🔥 Pembaruan Terbaru
- 2025-08-08 Mendukung model seri GPT-5 terbaru dari OpenAI.
- 2025-08-04 Mendukung model baru, termasuk Kimi K2 dan Grok 4.
- 2025-08-01 Mendukung alur kerja agen dan MCP.
- 2025-05-23 Menambahkan komponen pelaksana kode Python/JS ke Agen. - 2025-05-23 Menambahkan komponen pelaksana kode Python/JS ke Agen.
- 2025-05-05 Mendukung kueri lintas bahasa. - 2025-05-05 Mendukung kueri lintas bahasa.
- 2025-03-19 Mendukung penggunaan model multi-modal untuk memahami gambar di dalam file PDF atau DOCX. - 2025-03-19 Mendukung penggunaan model multi-modal untuk memahami gambar di dalam file PDF atau DOCX.
@ -178,7 +181,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
> Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64. > Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64.
> Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image). > Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image).
> Perintah di bawah ini mengunduh edisi v0.19.1-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.19.1-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.19.1 untuk edisi lengkap v0.19.1. > Perintah di bawah ini mengunduh edisi v0.20.2-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.20.2-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.2 untuk edisi lengkap v0.20.2.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
@ -191,8 +194,8 @@ $ docker compose -f docker-compose.yml up -d
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ | | ----------------- | --------------- | --------------------- | ------------------------ |
| v0.19.1 | &approx;9 | :heavy_check_mark: | Stable release | | v0.20.2 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.19.1-slim | &approx;2 | ❌ | Stable release | | v0.20.2-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build | | nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.19.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.2">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -54,12 +54,15 @@
デモをお試しください:[https://demo.ragflow.io](https://demo.ragflow.io)。 デモをお試しください:[https://demo.ragflow.io](https://demo.ragflow.io)。
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/chunking.gif" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/agentic-dark.gif" width="1200"/>
</div> </div>
## 🔥 最新情報 ## 🔥 最新情報
- 2025-08-08 OpenAI の最新 GPT-5 シリーズモデルをサポートします。
- 2025-08-04 新モデル、キミK2およびGrok 4をサポート。
- 2025-08-01 エージェントワークフローとMCPをサポート。
- 2025-05-23 エージェントに Python/JS コードエグゼキュータコンポーネントを追加しました。 - 2025-05-23 エージェントに Python/JS コードエグゼキュータコンポーネントを追加しました。
- 2025-05-05 言語間クエリをサポートしました。 - 2025-05-05 言語間クエリをサポートしました。
- 2025-03-19 PDFまたはDOCXファイル内の画像を理解するために、多モーダルモデルを使用することをサポートします。 - 2025-03-19 PDFまたはDOCXファイル内の画像を理解するために、多モーダルモデルを使用することをサポートします。
@ -157,7 +160,7 @@
> 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。 > 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。
> ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。 > ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。
> 以下のコマンドは、RAGFlow Docker イメージの v0.19.1-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.19.1-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.19.1 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.19.1 と設定します。 > 以下のコマンドは、RAGFlow Docker イメージの v0.20.2-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.20.2-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.20.2 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.2 と設定します。
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
@ -170,8 +173,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ | | ----------------- | --------------- | --------------------- | ------------------------ |
| v0.19.1 | &approx;9 | :heavy_check_mark: | Stable release | | v0.20.2 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.19.1-slim | &approx;2 | ❌ | Stable release | | v0.20.2-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build | | nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.19.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.2">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -54,12 +54,15 @@
데모를 [https://demo.ragflow.io](https://demo.ragflow.io)에서 실행해 보세요. 데모를 [https://demo.ragflow.io](https://demo.ragflow.io)에서 실행해 보세요.
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/chunking.gif" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/agentic-dark.gif" width="1200"/>
</div> </div>
## 🔥 업데이트 ## 🔥 업데이트
- 2025-08-08 OpenAI의 최신 GPT-5 시리즈 모델을 지원합니다.
- 2025-08-04 새로운 모델인 Kimi K2와 Grok 4를 포함하여 지원합니다.
- 2025-08-01 에이전트 워크플로우와 MCP를 지원합니다.
- 2025-05-23 Agent에 Python/JS 코드 실행기 구성 요소를 추가합니다. - 2025-05-23 Agent에 Python/JS 코드 실행기 구성 요소를 추가합니다.
- 2025-05-05 언어 간 쿼리를 지원합니다. - 2025-05-05 언어 간 쿼리를 지원합니다.
- 2025-03-19 PDF 또는 DOCX 파일 내의 이미지를 이해하기 위해 다중 모드 모델을 사용하는 것을 지원합니다. - 2025-03-19 PDF 또는 DOCX 파일 내의 이미지를 이해하기 위해 다중 모드 모델을 사용하는 것을 지원합니다.
@ -157,7 +160,7 @@
> 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다. > 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다.
> ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image). > ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image).
> 아래 명령어는 RAGFlow Docker 이미지의 v0.19.1-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.19.1-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.19.1을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.19.1로 설정합니다. > 아래 명령어는 RAGFlow Docker 이미지의 v0.20.2-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.20.2-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.20.2을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.2로 설정합니다.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
@ -170,8 +173,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ | | ----------------- | --------------- | --------------------- | ------------------------ |
| v0.19.1 | &approx;9 | :heavy_check_mark: | Stable release | | v0.20.2 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.19.1-slim | &approx;2 | ❌ | Stable release | | v0.20.2-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build | | nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.19.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.2">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão">
@ -74,12 +74,15 @@
Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io). Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/chunking.gif" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/agentic-dark.gif" width="1200"/>
</div> </div>
## 🔥 Últimas Atualizações ## 🔥 Últimas Atualizações
- 08-08-2025 Suporta a mais recente série GPT-5 da OpenAI.
- 04-08-2025 Suporta novos modelos, incluindo Kimi K2 e Grok 4.
- 01-08-2025 Suporta fluxo de trabalho agente e MCP.
- 23-05-2025 Adicione o componente executor de código Python/JS ao Agente. - 23-05-2025 Adicione o componente executor de código Python/JS ao Agente.
- 05-05-2025 Suporte a consultas entre idiomas. - 05-05-2025 Suporte a consultas entre idiomas.
- 19-03-2025 Suporta o uso de um modelo multi-modal para entender imagens dentro de arquivos PDF ou DOCX. - 19-03-2025 Suporta o uso de um modelo multi-modal para entender imagens dentro de arquivos PDF ou DOCX.
@ -177,7 +180,7 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
> Todas as imagens Docker são construídas para plataformas x86. Atualmente, não oferecemos imagens Docker para ARM64. > Todas as imagens Docker são construídas para plataformas x86. Atualmente, não oferecemos imagens Docker para ARM64.
> Se você estiver usando uma plataforma ARM64, por favor, utilize [este guia](https://ragflow.io/docs/dev/build_docker_image) para construir uma imagem Docker compatível com o seu sistema. > Se você estiver usando uma plataforma ARM64, por favor, utilize [este guia](https://ragflow.io/docs/dev/build_docker_image) para construir uma imagem Docker compatível com o seu sistema.
> O comando abaixo baixa a edição `v0.19.1-slim` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.19.1-slim`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. Por exemplo: defina `RAGFLOW_IMAGE=infiniflow/ragflow:v0.19.1` para a edição completa `v0.19.1`. > O comando abaixo baixa a edição `v0.20.2-slim` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.20.2-slim`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. Por exemplo: defina `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.2` para a edição completa `v0.20.2`.
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
@ -190,8 +193,8 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
| Tag da imagem RAGFlow | Tamanho da imagem (GB) | Possui modelos de incorporação? | Estável? | | Tag da imagem RAGFlow | Tamanho da imagem (GB) | Possui modelos de incorporação? | Estável? |
| --------------------- | ---------------------- | ------------------------------- | ------------------------ | | --------------------- | ---------------------- | ------------------------------- | ------------------------ |
| v0.19.1 | ~9 | :heavy_check_mark: | Lançamento estável | | v0.20.2 | ~9 | :heavy_check_mark: | Lançamento estável |
| v0.19.1-slim | ~2 | ❌ | Lançamento estável | | v0.20.2-slim | ~2 | ❌ | Lançamento estável |
| nightly | ~9 | :heavy_check_mark: | _Instável_ build noturno | | nightly | ~9 | :heavy_check_mark: | _Instável_ build noturno |
| nightly-slim | ~2 | ❌ | _Instável_ build noturno | | nightly-slim | ~2 | ❌ | _Instável_ build noturno |

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.19.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.2">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -77,12 +77,15 @@
請登入網址 [https://demo.ragflow.io](https://demo.ragflow.io) 試用 demo。 請登入網址 [https://demo.ragflow.io](https://demo.ragflow.io) 試用 demo。
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/chunking.gif" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/agentic-dark.gif" width="1200"/>
</div> </div>
## 🔥 近期更新 ## 🔥 近期更新
- 2025-08-08 支援 OpenAI 最新的 GPT-5 系列模型。
- 2025-08-04 支援 Kimi K2 和 Grok 4 等模型.
- 2025-08-01 支援 agentic workflow 和 MCP
- 2025-05-23 為 Agent 新增 Python/JS 程式碼執行器元件。 - 2025-05-23 為 Agent 新增 Python/JS 程式碼執行器元件。
- 2025-05-05 支援跨語言查詢。 - 2025-05-05 支援跨語言查詢。
- 2025-03-19 PDF和DOCX中的圖支持用多模態大模型去解析得到描述. - 2025-03-19 PDF和DOCX中的圖支持用多模態大模型去解析得到描述.
@ -180,7 +183,7 @@
> 所有 Docker 映像檔都是為 x86 平台建置的。目前,我們不提供 ARM64 平台的 Docker 映像檔。 > 所有 Docker 映像檔都是為 x86 平台建置的。目前,我們不提供 ARM64 平台的 Docker 映像檔。
> 如果您使用的是 ARM64 平台,請使用 [這份指南](https://ragflow.io/docs/dev/build_docker_image) 來建置適合您系統的 Docker 映像檔。 > 如果您使用的是 ARM64 平台,請使用 [這份指南](https://ragflow.io/docs/dev/build_docker_image) 來建置適合您系統的 Docker 映像檔。
> 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.19.1-slim`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.19.1-slim` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。例如,你可以透過設定 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.19.1` 來下載 RAGFlow 鏡像的 `v0.19.1` 完整發行版。 > 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.20.2-slim`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.20.2-slim` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。例如,你可以透過設定 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.2` 來下載 RAGFlow 鏡像的 `v0.20.2` 完整發行版。
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
@ -193,8 +196,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ | | ----------------- | --------------- | --------------------- | ------------------------ |
| v0.19.1 | &approx;9 | :heavy_check_mark: | Stable release | | v0.20.2 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.19.1-slim | &approx;2 | ❌ | Stable release | | v0.20.2-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build | | nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"> <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a> </a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank"> <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.19.1"> <img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.2">
</a> </a>
<a href="https://github.com/infiniflow/ragflow/releases/latest"> <a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release"> <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -77,12 +77,15 @@
请登录网址 [https://demo.ragflow.io](https://demo.ragflow.io) 试用 demo。 请登录网址 [https://demo.ragflow.io](https://demo.ragflow.io) 试用 demo。
<div align="center" style="margin-top:20px;margin-bottom:20px;"> <div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/chunking.gif" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/> <img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/agentic-dark.gif" width="1200"/>
</div> </div>
## 🔥 近期更新 ## 🔥 近期更新
- 2025-08-08 支持 OpenAI 最新的 GPT-5 系列模型.
- 2025-08-04 新增对 Kimi K2 和 Grok 4 等模型的支持.
- 2025-08-01 支持 agentic workflow 和 MCP。
- 2025-05-23 Agent 新增 Python/JS 代码执行器组件。 - 2025-05-23 Agent 新增 Python/JS 代码执行器组件。
- 2025-05-05 支持跨语言查询。 - 2025-05-05 支持跨语言查询。
- 2025-03-19 PDF 和 DOCX 中的图支持用多模态大模型去解析得到描述. - 2025-03-19 PDF 和 DOCX 中的图支持用多模态大模型去解析得到描述.
@ -180,7 +183,7 @@
> 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。 > 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。
> 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。 > 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。
> 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.19.1-slim`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.19.1-slim` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。比如,你可以通过设置 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.19.1` 来下载 RAGFlow 镜像的 `v0.19.1` 完整发行版。 > 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.20.2-slim`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.20.2-slim` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。比如,你可以通过设置 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.2` 来下载 RAGFlow 镜像的 `v0.20.2` 完整发行版。
```bash ```bash
$ cd ragflow/docker $ cd ragflow/docker
@ -193,8 +196,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? | | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ | | ----------------- | --------------- | --------------------- | ------------------------ |
| v0.19.1 | &approx;9 | :heavy_check_mark: | Stable release | | v0.20.2 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.19.1-slim | &approx;2 | ❌ | Stable release | | v0.20.2-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build | | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build | | nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -1,45 +0,0 @@
English | [简体中文](./README_zh.md)
# *Graph*
## Introduction
*Graph* is a mathematical concept composed of nodes and edges.
It is used to compose a complex workflow or agent.
This graph goes beyond a DAG: cycles can be used to describe an agent or workflow.
Under this folder, we provide a test tool, ./test/client.py, which can test DSLs such as the JSON files in the folder ./test/dsl_examples.
Please run this client from the same folder in which you start RAGFlow. If RAGFlow is run by Docker, please go into the container before running the client.
Otherwise, correct configuration in service_conf.yaml is essential.
```bash
PYTHONPATH=path/to/ragflow python graph/test/client.py -h
usage: client.py [-h] -s DSL -t TENANT_ID -m
options:
-h, --help show this help message and exit
-s DSL, --dsl DSL input dsl
-t TENANT_ID, --tenant_id TENANT_ID
Tenant ID
-m, --stream Stream output
```
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/79179c5e-d4d6-464a-b6c4-5721cb329899" width="1000"/>
</div>
## How to gain a TENANT_ID in command line?
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/419d8588-87b1-4ab8-ac49-2d1f047a4b97" width="600"/>
</div>
💡 We plan to display it here in the near future.
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/c97915de-0091-46a5-afd9-e278946e5fe3" width="600"/>
</div>
## How to set 'kb_ids' for component 'Retrieval' in DSL?
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/0a731534-cac8-49fd-8a92-ca247eeef66d" width="600"/>
</div>

View File

@ -1,46 +0,0 @@
[English](./README.md) | 简体中文
# *Graph*
## 简介
"Graph"是一个由节点和边组成的数学概念。
它被用来构建复杂的工作流或代理。
这个图超越了有向无环图DAG我们可以使用循环来描述我们的代理或工作流。
在这个文件夹下,我们提出了一个测试工具 ./test/client.py
它可以测试像文件夹./test/dsl_examples下一样的DSL文件。
请在启动 RAGFlow 的同一文件夹中使用此客户端。如果它是通过 Docker 运行的,请在运行客户端之前进入容器。
否则,正确配置 service_conf.yaml 文件是必不可少的。
```bash
PYTHONPATH=path/to/ragflow python graph/test/client.py -h
usage: client.py [-h] -s DSL -t TENANT_ID -m
options:
-h, --help show this help message and exit
-s DSL, --dsl DSL input dsl
-t TENANT_ID, --tenant_id TENANT_ID
Tenant ID
-m, --stream Stream output
```
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/05924730-c427-495b-8ee4-90b8b2250681" width="1000"/>
</div>
## 命令行中的TENANT_ID如何获得?
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/419d8588-87b1-4ab8-ac49-2d1f047a4b97" width="600"/>
</div>
💡 后面会展示在这里:
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/c97915de-0091-46a5-afd9-e278946e5fe3" width="600"/>
</div>
## DSL里面的Retrieval组件的kb_ids怎么填?
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/0a731534-cac8-49fd-8a92-ca247eeef66d" width="600"/>
</div>

View File

@ -13,14 +13,21 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import logging import base64
import json import json
import logging
import time
from concurrent.futures import ThreadPoolExecutor
from copy import deepcopy from copy import deepcopy
from functools import partial from functools import partial
import pandas as pd from typing import Any, Union, Tuple
from agent.component import component_class from agent.component import component_class
from agent.component.base import ComponentBase from agent.component.base import ComponentBase
from api.db.services.file_service import FileService
from api.utils import get_uuid, hash_str2int
from rag.prompts.prompts import chunks_format
from rag.utils.redis_conn import REDIS_CONN
class Canvas: class Canvas:
@ -35,14 +42,6 @@ class Canvas:
"downstream": ["answer_0"], "downstream": ["answer_0"],
"upstream": [], "upstream": [],
}, },
"answer_0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["retrieval_0"],
"upstream": ["begin", "generate_0"],
},
"retrieval_0": { "retrieval_0": {
"obj": { "obj": {
"component_name": "Retrieval", "component_name": "Retrieval",
@ -61,19 +60,28 @@ class Canvas:
} }
}, },
"history": [], "history": [],
"messages": [], "path": ["begin"],
"reference": [], "retrieval": {"chunks": [], "doc_aggs": []},
"path": [["begin"]], "globals": {
"answer": [] "sys.query": "",
"sys.user_id": tenant_id,
"sys.conversation_turns": 0,
"sys.files": []
}
} }
""" """
def __init__(self, dsl: str, tenant_id=None): def __init__(self, dsl: str, tenant_id=None, task_id=None):
self.path = [] self.path = []
self.history = [] self.history = []
self.messages = []
self.answer = []
self.components = {} self.components = {}
self.error = ""
self.globals = {
"sys.query": "",
"sys.user_id": tenant_id,
"sys.conversation_turns": 0,
"sys.files": []
}
self.dsl = json.loads(dsl) if dsl else { self.dsl = json.loads(dsl) if dsl else {
"components": { "components": {
"begin": { "begin": {
@ -89,13 +97,17 @@ class Canvas:
} }
}, },
"history": [], "history": [],
"messages": [],
"reference": [],
"path": [], "path": [],
"answer": [] "retrieval": [],
"globals": {
"sys.query": "",
"sys.user_id": "",
"sys.conversation_turns": 0,
"sys.files": []
}
} }
self._tenant_id = tenant_id self._tenant_id = tenant_id
self._embed_id = "" self.task_id = task_id if task_id else get_uuid()
self.load() self.load()
def load(self): def load(self):
@ -105,33 +117,31 @@ class Canvas:
cpn_nms.add(cpn["obj"]["component_name"]) cpn_nms.add(cpn["obj"]["component_name"])
assert "Begin" in cpn_nms, "There have to be an 'Begin' component." assert "Begin" in cpn_nms, "There have to be an 'Begin' component."
assert "Answer" in cpn_nms, "There have to be an 'Answer' component."
for k, cpn in self.components.items(): for k, cpn in self.components.items():
cpn_nms.add(cpn["obj"]["component_name"]) cpn_nms.add(cpn["obj"]["component_name"])
param = component_class(cpn["obj"]["component_name"] + "Param")() param = component_class(cpn["obj"]["component_name"] + "Param")()
param.update(cpn["obj"]["params"]) param.update(cpn["obj"]["params"])
param.check() try:
param.check()
except Exception as e:
raise ValueError(self.get_component_name(k) + f": {e}")
cpn["obj"] = component_class(cpn["obj"]["component_name"])(self, k, param) cpn["obj"] = component_class(cpn["obj"]["component_name"])(self, k, param)
if cpn["obj"].component_name == "Categorize":
for _, desc in param.category_description.items():
if desc["to"] not in cpn["downstream"]:
cpn["downstream"].append(desc["to"])
self.path = self.dsl["path"] self.path = self.dsl["path"]
self.history = self.dsl["history"] self.history = self.dsl["history"]
self.messages = self.dsl["messages"] self.globals = self.dsl["globals"]
self.answer = self.dsl["answer"] self.retrieval = self.dsl["retrieval"]
self.reference = self.dsl["reference"] self.memory = self.dsl.get("memory", [])
self._embed_id = self.dsl.get("embed_id", "")
def __str__(self): def __str__(self):
self.dsl["path"] = self.path self.dsl["path"] = self.path
self.dsl["history"] = self.history self.dsl["history"] = self.history
self.dsl["messages"] = self.messages self.dsl["globals"] = self.globals
self.dsl["answer"] = self.answer self.dsl["task_id"] = self.task_id
self.dsl["reference"] = self.reference self.dsl["retrieval"] = self.retrieval
self.dsl["embed_id"] = self._embed_id self.dsl["memory"] = self.memory
dsl = { dsl = {
"components": {} "components": {}
} }
@ -150,161 +160,255 @@ class Canvas:
dsl["components"][k][c] = deepcopy(cpn[c]) dsl["components"][k][c] = deepcopy(cpn[c])
return json.dumps(dsl, ensure_ascii=False) return json.dumps(dsl, ensure_ascii=False)
def reset(self): def reset(self, mem=False):
self.path = [] self.path = []
self.history = [] if not mem:
self.messages = [] self.history = []
self.answer = [] self.retrieval = []
self.reference = [] self.memory = []
for k, cpn in self.components.items(): for k, cpn in self.components.items():
self.components[k]["obj"].reset() self.components[k]["obj"].reset()
self._embed_id = ""
for k in self.globals.keys():
if isinstance(self.globals[k], str):
self.globals[k] = ""
elif isinstance(self.globals[k], int):
self.globals[k] = 0
elif isinstance(self.globals[k], float):
self.globals[k] = 0
elif isinstance(self.globals[k], list):
self.globals[k] = []
elif isinstance(self.globals[k], dict):
self.globals[k] = {}
else:
self.globals[k] = None
try:
REDIS_CONN.delete(f"{self.task_id}-logs")
except Exception as e:
logging.exception(e)
def get_component_name(self, cid): def get_component_name(self, cid):
for n in self.dsl["graph"]["nodes"]: for n in self.dsl.get("graph", {}).get("nodes", []):
if cid == n["id"]: if cid == n["id"]:
return n["data"]["name"] return n["data"]["name"]
return "" return ""
def run(self, running_hint_text = "is running...🕞", **kwargs): def run(self, **kwargs):
if not running_hint_text or not isinstance(running_hint_text, str): st = time.perf_counter()
running_hint_text = "is running...🕞" self.message_id = get_uuid()
bypass_begin = bool(kwargs.get("bypass_begin", False)) created_at = int(time.time())
self.add_user_input(kwargs.get("query"))
if self.answer: for k in kwargs.keys():
cpn_id = self.answer[0] if k in ["query", "user_id", "files"] and kwargs[k]:
self.answer.pop(0) if k == "files":
try: self.globals[f"sys.{k}"] = self.get_files(kwargs[k])
ans = self.components[cpn_id]["obj"].run(self.history, **kwargs)
except Exception as e:
ans = ComponentBase.be_output(str(e))
self.path[-1].append(cpn_id)
if kwargs.get("stream"):
for an in ans():
yield an
else:
yield ans
return
if not self.path:
self.components["begin"]["obj"].run(self.history, **kwargs)
self.path.append(["begin"])
if bypass_begin:
cpn = self.get_component("begin")
downstream = cpn["downstream"]
self.path.append(downstream)
self.path.append([])
ran = -1
waiting = []
without_dependent_checking = []
def prepare2run(cpns):
nonlocal ran, ans
for c in cpns:
if self.path[-1] and c == self.path[-1][-1]:
continue
cpn = self.components[c]["obj"]
if cpn.component_name == "Answer":
self.answer.append(c)
else: else:
logging.debug(f"Canvas.prepare2run: {c}") self.globals[f"sys.{k}"] = kwargs[k]
if c not in without_dependent_checking: if not self.globals["sys.conversation_turns"] :
cpids = cpn.get_dependent_components() self.globals["sys.conversation_turns"] = 0
if any([cc not in self.path[-1] for cc in cpids]): self.globals["sys.conversation_turns"] += 1
if c not in waiting:
waiting.append(c)
continue
yield "*'{}'* {}".format(self.get_component_name(c), running_hint_text)
if cpn.component_name.lower() == "iteration": def decorate(event, dt):
st_cpn = cpn.get_start() nonlocal created_at
assert st_cpn, "Start component not found for Iteration." return {
if not st_cpn["obj"].end(): "event": event,
cpn = st_cpn["obj"] #"conversation_id": "f3cc152b-24b0-4258-a1a1-7d5e9fc8a115",
c = cpn._id "message_id": self.message_id,
"created_at": created_at,
"task_id": self.task_id,
"data": dt
}
try: if not self.path or self.path[-1].lower().find("userfillup") < 0:
ans = cpn.run(self.history, **kwargs) self.path.append("begin")
except Exception as e: self.retrieval.append({"chunks": [], "doc_aggs": []})
logging.exception(f"Canvas.run got exception: {e}")
self.path[-1].append(c)
ran += 1
raise e
self.path[-1].append(c)
ran += 1 yield decorate("workflow_started", {"inputs": kwargs.get("inputs")})
self.retrieval.append({"chunks": {}, "doc_aggs": {}})
downstream = self.components[self.path[-2][-1]]["downstream"] def _run_batch(f, t):
if not downstream and self.components[self.path[-2][-1]].get("parent_id"): with ThreadPoolExecutor(max_workers=5) as executor:
cid = self.path[-2][-1] thr = []
pid = self.components[cid]["parent_id"] for i in range(f, t):
o, _ = self.components[cid]["obj"].output(allow_partial=False) cpn = self.get_component_obj(self.path[i])
oo, _ = self.components[pid]["obj"].output(allow_partial=False) if cpn.component_name.lower() in ["begin", "userfillup"]:
self.components[pid]["obj"].set_output(pd.concat([oo, o], ignore_index=True).dropna()) thr.append(executor.submit(cpn.invoke, inputs=kwargs.get("inputs", {})))
downstream = [pid] else:
thr.append(executor.submit(cpn.invoke, **cpn.get_input()))
for t in thr:
t.result()
for m in prepare2run(downstream): def _node_finished(cpn_obj):
yield {"content": m, "running_status": True} return decorate("node_finished",{
"inputs": cpn_obj.get_input_values(),
"outputs": cpn_obj.output(),
"component_id": cpn_obj._id,
"component_name": self.get_component_name(cpn_obj._id),
"component_type": self.get_component_type(cpn_obj._id),
"error": cpn_obj.error(),
"elapsed_time": time.perf_counter() - cpn_obj.output("_created_time"),
"created_at": cpn_obj.output("_created_time"),
})
while 0 <= ran < len(self.path[-1]): self.error = ""
logging.debug(f"Canvas.run: {ran} {self.path}") idx = len(self.path) - 1
cpn_id = self.path[-1][ran] partials = []
cpn = self.get_component(cpn_id) while idx < len(self.path):
if not any([cpn["downstream"], cpn.get("parent_id"), waiting]): to = len(self.path)
for i in range(idx, to):
yield decorate("node_started", {
"inputs": None, "created_at": int(time.time()),
"component_id": self.path[i],
"component_name": self.get_component_name(self.path[i]),
"component_type": self.get_component_type(self.path[i]),
"thoughts": self.get_component_thoughts(self.path[i])
})
_run_batch(idx, to)
# post processing of components invocation
for i in range(idx, to):
cpn = self.get_component(self.path[i])
cpn_obj = self.get_component_obj(self.path[i])
if cpn_obj.component_name.lower() == "message":
if isinstance(cpn_obj.output("content"), partial):
_m = ""
for m in cpn_obj.output("content")():
if not m:
continue
if m == "<think>":
yield decorate("message", {"content": "", "start_to_think": True})
elif m == "</think>":
yield decorate("message", {"content": "", "end_to_think": True})
else:
yield decorate("message", {"content": m})
_m += m
cpn_obj.set_output("content", _m)
else:
yield decorate("message", {"content": cpn_obj.output("content")})
yield decorate("message_end", {"reference": self.get_reference()})
while partials:
_cpn_obj = self.get_component_obj(partials[0])
if isinstance(_cpn_obj.output("content"), partial):
break
yield _node_finished(_cpn_obj)
partials.pop(0)
other_branch = False
if cpn_obj.error():
ex = cpn_obj.exception_handler()
if ex and ex["goto"]:
self.path.extend(ex["goto"])
other_branch = True
elif ex and ex["default_value"]:
yield decorate("message", {"content": ex["default_value"]})
yield decorate("message_end", {})
else:
self.error = cpn_obj.error()
if cpn_obj.component_name.lower() != "iteration":
if isinstance(cpn_obj.output("content"), partial):
if self.error:
cpn_obj.set_output("content", None)
yield _node_finished(cpn_obj)
else:
partials.append(self.path[i])
else:
yield _node_finished(cpn_obj)
def _append_path(cpn_id):
nonlocal other_branch
if other_branch:
return
if self.path[-1] == cpn_id:
return
self.path.append(cpn_id)
def _extend_path(cpn_ids):
nonlocal other_branch
if other_branch:
return
for cpn_id in cpn_ids:
_append_path(cpn_id)
if cpn_obj.component_name.lower() == "iterationitem" and cpn_obj.end():
iter = cpn_obj.get_parent()
yield _node_finished(iter)
_extend_path(self.get_component(cpn["parent_id"])["downstream"])
elif cpn_obj.component_name.lower() in ["categorize", "switch"]:
_extend_path(cpn_obj.output("_next"))
elif cpn_obj.component_name.lower() == "iteration":
_append_path(cpn_obj.get_start())
elif not cpn["downstream"] and cpn_obj.get_parent():
_append_path(cpn_obj.get_parent().get_start())
else:
_extend_path(cpn["downstream"])
if self.error:
logging.error(f"Runtime Error: {self.error}")
break break
idx = to
loop = self._find_loop() if any([self.get_component_obj(c).component_name.lower() == "userfillup" for c in self.path[idx:]]):
if loop: path = [c for c in self.path[idx:] if self.get_component(c)["obj"].component_name.lower() == "userfillup"]
raise OverflowError(f"Too much loops: {loop}") path.extend([c for c in self.path[idx:] if self.get_component(c)["obj"].component_name.lower() != "userfillup"])
another_inputs = {}
tips = ""
for c in path:
o = self.get_component_obj(c)
if o.component_name.lower() == "userfillup":
another_inputs.update(o.get_input_elements())
if o.get_param("enable_tips"):
tips = o.get_param("tips")
self.path = path
yield decorate("user_inputs", {"inputs": another_inputs, "tips": tips})
return
downstream = [] self.path = self.path[:idx]
if cpn["obj"].component_name.lower() in ["switch", "categorize", "relevant"]: if not self.error:
switch_out = cpn["obj"].output()[1].iloc[0, 0] yield decorate("workflow_finished",
assert switch_out in self.components, \ {
"{}'s output: {} not valid.".format(cpn_id, switch_out) "inputs": kwargs.get("inputs"),
downstream = [switch_out] "outputs": self.get_component_obj(self.path[-1]).output(),
else: "elapsed_time": time.perf_counter() - st,
downstream = cpn["downstream"] "created_at": st,
})
self.history.append(("assistant", self.get_component_obj(self.path[-1]).output()))
if not downstream and cpn.get("parent_id"): def get_component(self, cpn_id) -> Union[None, dict[str, Any]]:
pid = cpn["parent_id"] return self.components.get(cpn_id)
_, o = cpn["obj"].output(allow_partial=False)
_, oo = self.components[pid]["obj"].output(allow_partial=False)
self.components[pid]["obj"].set_output(pd.concat([oo.dropna(axis=1), o.dropna(axis=1)], ignore_index=True).dropna())
downstream = [pid]
for m in prepare2run(downstream): def get_component_obj(self, cpn_id) -> ComponentBase:
yield {"content": m, "running_status": True} return self.components.get(cpn_id)["obj"]
if ran >= len(self.path[-1]) and waiting: def get_component_type(self, cpn_id) -> str:
without_dependent_checking = waiting return self.components.get(cpn_id)["obj"].component_name
waiting = []
for m in prepare2run(without_dependent_checking):
yield {"content": m, "running_status": True}
without_dependent_checking = []
ran -= 1
if self.answer: def get_component_input_form(self, cpn_id) -> dict:
cpn_id = self.answer[0] return self.components.get(cpn_id)["obj"].get_input_form()
self.answer.pop(0)
ans = self.components[cpn_id]["obj"].run(self.history, **kwargs)
self.path[-1].append(cpn_id)
if kwargs.get("stream"):
assert isinstance(ans, partial)
for an in ans():
yield an
else:
yield ans
else: def is_reff(self, exp: str) -> bool:
raise Exception("The dialog flow has no way to interact with you. Please add an 'Interact' component to the end of the flow.") exp = exp.strip("{").strip("}")
if exp.find("@") < 0:
return exp in self.globals
arr = exp.split("@")
if len(arr) != 2:
return False
if self.get_component(arr[0]) is None:
return False
return True
def get_component(self, cpn_id): def get_variable_value(self, exp: str) -> Any:
return self.components[cpn_id] exp = exp.strip("{").strip("}").strip(" ").strip("{").strip("}")
if exp.find("@") < 0:
return self.globals[exp]
cpn_id, var_nm = exp.split("@")
cpn = self.get_component(cpn_id)
if not cpn:
raise Exception(f"Can't find variable: '{cpn_id}@{var_nm}'")
return cpn["obj"].output(var_nm)
def get_tenant_id(self): def get_tenant_id(self):
return self._tenant_id return self._tenant_id
@ -314,8 +418,8 @@ class Canvas:
if window_size <= 0: if window_size <= 0:
return convs return convs
for role, obj in self.history[window_size * -1:]: for role, obj in self.history[window_size * -1:]:
if isinstance(obj, list) and obj and all([isinstance(o, dict) for o in obj]): if isinstance(obj, dict):
convs.append({"role": role, "content": '\n'.join([str(s.get("content", "")) for s in obj])}) convs.append({"role": role, "content": obj.get("content", "")})
else: else:
convs.append({"role": role, "content": str(obj)}) convs.append({"role": role, "content": str(obj)})
return convs return convs
@ -323,12 +427,6 @@ class Canvas:
def add_user_input(self, question): def add_user_input(self, question):
self.history.append(("user", question)) self.history.append(("user", question))
def set_embedding_model(self, embed_id):
self._embed_id = embed_id
def get_embedding_model(self):
return self._embed_id
def _find_loop(self, max_loops=6): def _find_loop(self, max_loops=6):
path = self.path[-1][::-1] path = self.path[-1][::-1]
if len(path) < 2: if len(path) < 2:
@ -363,17 +461,78 @@ class Canvas:
return self.components["begin"]["obj"]._param.prologue return self.components["begin"]["obj"]._param.prologue
def set_global_param(self, **kwargs): def set_global_param(self, **kwargs):
for k, v in kwargs.items(): self.globals.update(kwargs)
for q in self.components["begin"]["obj"]._param.query:
if k != q["key"]:
continue
q["value"] = v
def get_preset_param(self): def get_preset_param(self):
return self.components["begin"]["obj"]._param.query return self.components["begin"]["obj"]._param.inputs
def get_component_input_elements(self, cpnnm): def get_component_input_elements(self, cpnnm):
return self.components[cpnnm]["obj"].get_input_elements() return self.components[cpnnm]["obj"].get_input_elements()
def set_component_infor(self, cpn_id, infor): def get_files(self, files: Union[None, list[dict]]) -> list[str]:
self.components[cpn_id]["obj"].set_infor(infor) if not files:
return []
def image_to_base64(file):
return "data:{};base64,{}".format(file["mime_type"],
base64.b64encode(FileService.get_blob(file["created_by"], file["id"])).decode("utf-8"))
exe = ThreadPoolExecutor(max_workers=5)
threads = []
for file in files:
if file["mime_type"].find("image") >=0:
threads.append(exe.submit(image_to_base64, file))
continue
threads.append(exe.submit(FileService.parse, file["name"], FileService.get_blob(file["created_by"], file["id"]), True, file["created_by"]))
return [th.result() for th in threads]
def tool_use_callback(self, agent_id: str, func_name: str, params: dict, result: Any, elapsed_time=None):
agent_ids = agent_id.split("-->")
agent_name = self.get_component_name(agent_ids[0])
path = agent_name if len(agent_ids) < 2 else agent_name+"-->"+"-->".join(agent_ids[1:])
try:
bin = REDIS_CONN.get(f"{self.task_id}-{self.message_id}-logs")
if bin:
obj = json.loads(bin.encode("utf-8"))
if obj[-1]["component_id"] == agent_ids[0]:
obj[-1]["trace"].append({"path": path, "tool_name": func_name, "arguments": params, "result": result, "elapsed_time": elapsed_time})
else:
obj.append({
"component_id": agent_ids[0],
"trace": [{"path": path, "tool_name": func_name, "arguments": params, "result": result, "elapsed_time": elapsed_time}]
})
else:
obj = [{
"component_id": agent_ids[0],
"trace": [{"path": path, "tool_name": func_name, "arguments": params, "result": result, "elapsed_time": elapsed_time}]
}]
REDIS_CONN.set_obj(f"{self.task_id}-{self.message_id}-logs", obj, 60*10)
except Exception as e:
logging.exception(e)
def add_refernce(self, chunks: list[object], doc_infos: list[object]):
if not self.retrieval:
self.retrieval = [{"chunks": {}, "doc_aggs": {}}]
r = self.retrieval[-1]
for ck in chunks_format({"chunks": chunks}):
cid = hash_str2int(ck["id"], 100)
if cid not in r:
r["chunks"][cid] = ck
for doc in doc_infos:
if doc["doc_name"] not in r:
r["doc_aggs"][doc["doc_name"]] = doc
def get_reference(self):
if not self.retrieval:
return {"chunks": {}, "doc_aggs": {}}
return self.retrieval[-1]
def add_memory(self, user:str, assist:str, summ: str):
self.memory.append((user, assist, summ))
def get_memory(self) -> list[Tuple]:
return self.memory
def get_component_thoughts(self, cpn_id) -> str:
return self.components.get(cpn_id)["obj"].thoughts()
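
For orientation (not part of the diff), here is a minimal sketch of how a caller might consume the event stream that the reworked `Canvas.run()` yields through `decorate(...)`. The event names and payload keys are taken from the code above; how the `canvas` object is constructed and driven is an assumption.

```python
# Hypothetical consumer sketch: iterate the events yielded by Canvas.run().
# Event names come from the decorate(...) calls above; building `canvas` is assumed.
def consume(canvas, query: str):
    answer = ""
    for event in canvas.run(query=query, inputs={}):
        kind, data = event["event"], event["data"]
        if kind == "message":
            answer += data.get("content", "")          # streamed tokens from Message components
        elif kind == "node_finished" and data.get("error"):
            print(f"{data['component_name']} failed: {data['error']}")
        elif kind == "user_inputs":
            return data                                 # a UserFillUp node is asking the caller for values
        elif kind == "workflow_finished":
            print(f"done in {data['elapsed_time']:.2f}s")
    return answer
```

Every payload is wrapped with `message_id`, `task_id`, and `created_at`, so a UI can correlate partial `message` chunks with the surrounding `node_started`/`node_finished` lifecycle events.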

View File

@ -14,123 +14,44 @@
# limitations under the License.
#
import os
import importlib
import inspect
from types import ModuleType
from typing import Dict, Type

_package_path = os.path.dirname(__file__)
__all_classes: Dict[str, Type] = {}

def _import_submodules() -> None:
    for filename in os.listdir(_package_path):  # noqa: F821
        if filename.startswith("__") or not filename.endswith(".py") or filename.startswith("base"):
            continue
        module_name = filename[:-3]

        try:
            module = importlib.import_module(f".{module_name}", package=__name__)
            _extract_classes_from_module(module)  # noqa: F821
        except ImportError as e:
            print(f"Warning: Failed to import module {module_name}: {str(e)}")

def _extract_classes_from_module(module: ModuleType) -> None:
    for name, obj in inspect.getmembers(module):
        if (inspect.isclass(obj) and
                obj.__module__ == module.__name__ and not name.startswith("_")):
            __all_classes[name] = obj
            globals()[name] = obj

_import_submodules()

__all__ = list(__all_classes.keys()) + ["__all_classes"]

del _package_path, _import_submodules, _extract_classes_from_module

def component_class(class_name):
    m = importlib.import_module("agent.component")
    try:
        return getattr(m, class_name)
    except Exception:
        return getattr(importlib.import_module("agent.tools"), class_name)
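
A minimal usage sketch (not from the diff) of the rewritten registry: classes are discovered at import time by `_import_submodules()` and looked up by name through `component_class`, which falls back to `agent.tools` for tool components. The class name used below is an assumption; any discovered class can be requested the same way.

```python
# Hypothetical lookup through the dynamic registry defined above.
# "Retrieval" is an assumed example name; unknown names fall back to agent.tools.
from agent.component import component_class, __all_classes

Retrieval = component_class("Retrieval")
RetrievalParam = component_class("RetrievalParam")
print(sorted(__all_classes.keys())[:5])   # everything picked up by _import_submodules()
# cpn = Retrieval(canvas, "Retrieval:demo", RetrievalParam())   # needs a live Canvas instance
```

This is the same lookup that `Agent._load_tool_obj()` relies on later in this changeset when it instantiates sub-components from `cpn["component_name"]`.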

View File

@ -0,0 +1,349 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import os
import re
from concurrent.futures import ThreadPoolExecutor
from copy import deepcopy
from functools import partial
from typing import Any
import json_repair
from timeit import default_timer as timer
from agent.tools.base import LLMToolPluginCallSession, ToolParamBase, ToolBase, ToolMeta
from api.db.services.llm_service import LLMBundle
from api.db.services.tenant_llm_service import TenantLLMService
from api.db.services.mcp_server_service import MCPServerService
from api.utils.api_utils import timeout
from rag.prompts import message_fit_in
from rag.prompts.prompts import next_step, COMPLETE_TASK, analyze_task, \
citation_prompt, reflect, rank_memories, kb_prompt, citation_plus, full_question
from rag.utils.mcp_tool_call_conn import MCPToolCallSession, mcp_tool_metadata_to_openai_tool
from agent.component.llm import LLMParam, LLM
class AgentParam(LLMParam, ToolParamBase):
"""
Define the Agent component parameters.
"""
def __init__(self):
self.meta:ToolMeta = {
"name": "agent",
"description": "This is an agent for a specific task.",
"parameters": {
"user_prompt": {
"type": "string",
"description": "This is the order you need to send to the agent.",
"default": "",
"required": True
},
"reasoning": {
"type": "string",
"description": (
"Supervisor's reasoning for choosing the this agent. "
"Explain why this agent is being invoked and what is expected of it."
),
"required": True
},
"context": {
"type": "string",
"description": (
"All relevant background information, prior facts, decisions, "
"and state needed by the agent to solve the current query. "
"Should be as detailed and self-contained as possible."
),
"required": True
},
}
}
super().__init__()
self.function_name = "agent"
self.tools = []
self.mcp = []
self.max_rounds = 5
self.description = ""
class Agent(LLM, ToolBase):
component_name = "Agent"
def __init__(self, canvas, id, param: LLMParam):
LLM.__init__(self, canvas, id, param)
self.tools = {}
for cpn in self._param.tools:
cpn = self._load_tool_obj(cpn)
self.tools[cpn.get_meta()["function"]["name"]] = cpn
self.chat_mdl = LLMBundle(self._canvas.get_tenant_id(), TenantLLMService.llm_id2llm_type(self._param.llm_id), self._param.llm_id,
max_retries=self._param.max_retries,
retry_interval=self._param.delay_after_error,
max_rounds=self._param.max_rounds,
verbose_tool_use=True
)
self.tool_meta = [v.get_meta() for _,v in self.tools.items()]
for mcp in self._param.mcp:
_, mcp_server = MCPServerService.get_by_id(mcp["mcp_id"])
tool_call_session = MCPToolCallSession(mcp_server, mcp_server.variables)
for tnm, meta in mcp["tools"].items():
self.tool_meta.append(mcp_tool_metadata_to_openai_tool(meta))
self.tools[tnm] = tool_call_session
self.callback = partial(self._canvas.tool_use_callback, id)
self.toolcall_session = LLMToolPluginCallSession(self.tools, self.callback)
#self.chat_mdl.bind_tools(self.toolcall_session, self.tool_metas)
def _load_tool_obj(self, cpn: dict) -> object:
from agent.component import component_class
param = component_class(cpn["component_name"] + "Param")()
param.update(cpn["params"])
try:
param.check()
except Exception as e:
self.set_output("_ERROR", cpn["component_name"] + f" configuration error: {e}")
raise
cpn_id = f"{self._id}-->" + cpn.get("name", "").replace(" ", "_")
return component_class(cpn["component_name"])(self._canvas, cpn_id, param)
def get_meta(self) -> dict[str, Any]:
self._param.function_name= self._id.split("-->")[-1]
m = super().get_meta()
if hasattr(self._param, "user_prompt") and self._param.user_prompt:
m["function"]["parameters"]["properties"]["user_prompt"] = self._param.user_prompt
return m
def get_input_form(self) -> dict[str, dict]:
res = {}
for k, v in self.get_input_elements().items():
res[k] = {
"type": "line",
"name": v["name"]
}
for cpn in self._param.tools:
if not isinstance(cpn, LLM):
continue
res.update(cpn.get_input_form())
return res
@timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 20*60))
def _invoke(self, **kwargs):
if kwargs.get("user_prompt"):
usr_pmt = ""
if kwargs.get("reasoning"):
usr_pmt += "\nREASONING:\n{}\n".format(kwargs["reasoning"])
if kwargs.get("context"):
usr_pmt += "\nCONTEXT:\n{}\n".format(kwargs["context"])
if usr_pmt:
usr_pmt += "\nQUERY:\n{}\n".format(str(kwargs["user_prompt"]))
else:
usr_pmt = str(kwargs["user_prompt"])
self._param.prompts = [{"role": "user", "content": usr_pmt}]
if not self.tools:
return LLM._invoke(self, **kwargs)
prompt, msg = self._prepare_prompt_variables()
downstreams = self._canvas.get_component(self._id)["downstream"] if self._canvas.get_component(self._id) else []
ex = self.exception_handler()
if any([self._canvas.get_component_obj(cid).component_name.lower()=="message" for cid in downstreams]) and not self._param.output_structure and not (ex and ex["goto"]):
self.set_output("content", partial(self.stream_output_with_tools, prompt, msg))
return
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
use_tools = []
ans = ""
for delta_ans, tk in self._react_with_tools_streamly(prompt, msg, use_tools):
ans += delta_ans
if ans.find("**ERROR**") >= 0:
logging.error(f"Agent._chat got error. response: {ans}")
if self.get_exception_default_value():
self.set_output("content", self.get_exception_default_value())
else:
self.set_output("_ERROR", ans)
return
self.set_output("content", ans)
if use_tools:
self.set_output("use_tools", use_tools)
return ans
def stream_output_with_tools(self, prompt, msg):
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
answer_without_toolcall = ""
use_tools = []
for delta_ans,_ in self._react_with_tools_streamly(prompt, msg, use_tools):
if delta_ans.find("**ERROR**") >= 0:
if self.get_exception_default_value():
self.set_output("content", self.get_exception_default_value())
yield self.get_exception_default_value()
else:
self.set_output("_ERROR", delta_ans)
answer_without_toolcall += delta_ans
yield delta_ans
self.set_output("content", answer_without_toolcall)
if use_tools:
self.set_output("use_tools", use_tools)
def _gen_citations(self, text):
retrievals = self._canvas.get_reference()
retrievals = {"chunks": list(retrievals["chunks"].values()), "doc_aggs": list(retrievals["doc_aggs"].values())}
formated_refer = kb_prompt(retrievals, self.chat_mdl.max_length, True)
for delta_ans in self._generate_streamly([{"role": "system", "content": citation_plus("\n\n".join(formated_refer))},
{"role": "user", "content": text}
]):
yield delta_ans
def _react_with_tools_streamly(self, prompt, history: list[dict], use_tools):
token_count = 0
tool_metas = self.tool_meta
hist = deepcopy(history)
last_calling = ""
if len(hist) > 3:
st = timer()
user_request = full_question(messages=history, chat_mdl=self.chat_mdl)
self.callback("Multi-turn conversation optimization", {}, user_request, elapsed_time=timer()-st)
else:
user_request = history[-1]["content"]
def use_tool(name, args):
nonlocal hist, use_tools, token_count,last_calling,user_request
logging.info(f"{last_calling=} == {name=}")
# Summarize of function calling
#if all([
# isinstance(self.toolcall_session.get_tool_obj(name), Agent),
# last_calling,
# last_calling != name
#]):
# self.toolcall_session.get_tool_obj(name).add2system_prompt(f"The chat history with other agents are as following: \n" + self.get_useful_memory(user_request, str(args["user_prompt"])))
last_calling = name
tool_response = self.toolcall_session.tool_call(name, args)
use_tools.append({
"name": name,
"arguments": args,
"results": tool_response
})
# self.callback("add_memory", {}, "...")
#self.add_memory(hist[-2]["content"], hist[-1]["content"], name, args, str(tool_response))
return name, tool_response
def complete():
nonlocal hist
need2cite = self._param.cite and self._canvas.get_reference()["chunks"] and self._id.find("-->") < 0
cited = False
if hist[0]["role"] == "system" and need2cite:
if len(hist) < 7:
hist[0]["content"] += citation_prompt()
cited = True
yield "", token_count
_hist = hist
if len(hist) > 12:
_hist = [hist[0], hist[1], *hist[-10:]]
entire_txt = ""
for delta_ans in self._generate_streamly(_hist):
if not need2cite or cited:
yield delta_ans, 0
entire_txt += delta_ans
if not need2cite or cited:
return
st = timer()
txt = ""
for delta_ans in self._gen_citations(entire_txt):
yield delta_ans, 0
txt += delta_ans
self.callback("gen_citations", {}, txt, elapsed_time=timer()-st)
def append_user_content(hist, content):
if hist[-1]["role"] == "user":
hist[-1]["content"] += content
else:
hist.append({"role": "user", "content": content})
st = timer()
task_desc = analyze_task(self.chat_mdl, prompt, user_request, tool_metas)
self.callback("analyze_task", {}, task_desc, elapsed_time=timer()-st)
for _ in range(self._param.max_rounds + 1):
response, tk = next_step(self.chat_mdl, hist, tool_metas, task_desc)
# self.callback("next_step", {}, str(response)[:256]+"...")
token_count += tk
hist.append({"role": "assistant", "content": response})
try:
functions = json_repair.loads(re.sub(r"```.*", "", response))
if not isinstance(functions, list):
raise TypeError(f"List should be returned, but `{functions}`")
for f in functions:
if not isinstance(f, dict):
raise TypeError(f"An object type should be returned, but `{f}`")
with ThreadPoolExecutor(max_workers=5) as executor:
thr = []
for func in functions:
name = func["name"]
args = func["arguments"]
if name == COMPLETE_TASK:
append_user_content(hist, f"Respond with a formal answer. FORGET(DO NOT mention) about `{COMPLETE_TASK}`. The language for the response MUST be as the same as the first user request.\n")
for txt, tkcnt in complete():
yield txt, tkcnt
return
thr.append(executor.submit(use_tool, name, args))
st = timer()
reflection = reflect(self.chat_mdl, hist, [th.result() for th in thr])
append_user_content(hist, reflection)
self.callback("reflection", {}, str(reflection), elapsed_time=timer()-st)
except Exception as e:
logging.exception(msg=f"Wrong JSON argument format in LLM ReAct response: {e}")
e = f"\nTool call error, please correct the input parameter of response format and call it again.\n *** Exception ***\n{e}"
append_user_content(hist, str(e))
logging.warning( f"Exceed max rounds: {self._param.max_rounds}")
final_instruction = f"""
{user_request}
IMPORTANT: You have reached the conversation limit. Based on ALL the information and research you have gathered so far, please provide a DIRECT and COMPREHENSIVE final answer to the original request.
Instructions:
1. SYNTHESIZE all information collected during this conversation
2. Provide a COMPLETE response using existing data - do not suggest additional research
3. Structure your response as a FINAL DELIVERABLE, not a plan
4. If information is incomplete, state what you found and provide the best analysis possible with available data
5. DO NOT mention conversation limits or suggest further steps
6. Focus on delivering VALUE with the information already gathered
Respond immediately with your final comprehensive answer.
"""
append_user_content(hist, final_instruction)
for txt, tkcnt in complete():
yield txt, tkcnt
def get_useful_memory(self, goal: str, sub_goal:str, topn=3) -> str:
# self.callback("get_useful_memory", {"topn": 3}, "...")
mems = self._canvas.get_memory()
rank = rank_memories(self.chat_mdl, goal, sub_goal, [summ for (user, assist, summ) in mems])
try:
rank = json_repair.loads(re.sub(r"```.*", "", rank))[:topn]
mems = [mems[r] for r in rank]
return "\n\n".join([f"User: {u}\nAgent: {a}" for u, a,_ in mems])
except Exception as e:
logging.exception(e)
return "Error occurred."

View File

@ -1,92 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import random
from abc import ABC
from functools import partial
from typing import Tuple, Union
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
class AnswerParam(ComponentParamBase):
"""
Define the Answer component parameters.
"""
def __init__(self):
super().__init__()
self.post_answers = []
def check(self):
return True
class Answer(ComponentBase, ABC):
component_name = "Answer"
def _run(self, history, **kwargs):
if kwargs.get("stream"):
return partial(self.stream_output)
ans = self.get_input()
if self._param.post_answers:
ans = pd.concat([ans, pd.DataFrame([{"content": random.choice(self._param.post_answers)}])], ignore_index=False)
return ans
def stream_output(self):
res = None
if hasattr(self, "exception") and self.exception:
res = {"content": str(self.exception)}
self.exception = None
yield res
self.set_output(res)
return
stream = self.get_stream_input()
if isinstance(stream, pd.DataFrame):
res = stream
answer = ""
for ii, row in stream.iterrows():
answer += row.to_dict()["content"]
yield {"content": answer}
elif stream is not None:
for st in stream():
res = st
yield st
if self._param.post_answers and res:
res["content"] += random.choice(self._param.post_answers)
yield res
if res is None:
res = {"content": ""}
self.set_output(res)
def set_exception(self, e):
self.exception = e
def output(self, allow_partial=True) -> Tuple[str, Union[pd.DataFrame, partial]]:
if allow_partial:
return super.output()
for r, c in self._canvas.history[::-1]:
if r == "user":
return self._param.output_var_name, pd.DataFrame([{"content": c}])
self._param.output_var_name, pd.DataFrame([])

View File

@ -1,68 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import arxiv
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
class ArXivParam(ComponentParamBase):
"""
Define the ArXiv component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 6
self.sort_by = 'submittedDate'
def check(self):
self.check_positive_integer(self.top_n, "Top N")
self.check_valid_value(self.sort_by, "ArXiv Search Sort_by",
['submittedDate', 'lastUpdatedDate', 'relevance'])
class ArXiv(ComponentBase, ABC):
component_name = "ArXiv"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return ArXiv.be_output("")
try:
sort_choices = {"relevance": arxiv.SortCriterion.Relevance,
"lastUpdatedDate": arxiv.SortCriterion.LastUpdatedDate,
'submittedDate': arxiv.SortCriterion.SubmittedDate}
arxiv_client = arxiv.Client()
search = arxiv.Search(
query=ans,
max_results=self._param.top_n,
sort_by=sort_choices[self._param.sort_by]
)
arxiv_res = [
{"content": 'Title: ' + i.title + '\nPdf_Url: <a href="' + i.pdf_url + '"></a> \nSummary: ' + i.summary} for
i in list(arxiv_client.results(search))]
except Exception as e:
return ArXiv.be_output("**ERROR**: " + str(e))
if not arxiv_res:
return ArXiv.be_output("")
df = pd.DataFrame(arxiv_res)
logging.debug(f"df: {str(df)}")
return df

View File

@ -1,79 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import pandas as pd
import requests
from bs4 import BeautifulSoup
import re
from agent.component.base import ComponentBase, ComponentParamBase
class BaiduParam(ComponentParamBase):
"""
Define the Baidu component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 10
def check(self):
self.check_positive_integer(self.top_n, "Top N")
class Baidu(ComponentBase, ABC):
component_name = "Baidu"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return Baidu.be_output("")
try:
url = 'https://www.baidu.com/s?wd=' + ans + '&rn=' + str(self._param.top_n)
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
'Connection': 'keep-alive',
}
response = requests.get(url=url, headers=headers)
# check if request success
if response.status_code == 200:
soup = BeautifulSoup(response.text, 'html.parser')
url_res = []
title_res = []
body_res = []
for item in soup.select('.result.c-container'):
# extract title
title_res.append(item.select_one('h3 a').get_text(strip=True))
url_res.append(item.select_one('h3 a')['href'])
body_res.append(item.select_one('.c-abstract').get_text(strip=True) if item.select_one('.c-abstract') else '')
baidu_res = [{"content": re.sub('<em>|</em>', '', '<a href="' + url + '">' + title + '</a> ' + body)} for
url, title, body in zip(url_res, title_res, body_res)]
del body_res, url_res, title_res
except Exception as e:
return Baidu.be_output("**ERROR**: " + str(e))
if not baidu_res:
return Baidu.be_output("")
df = pd.DataFrame(baidu_res)
logging.debug(f"df: {str(df)}")
return df

View File

@ -1,96 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import random
from abc import ABC
import requests
from agent.component.base import ComponentBase, ComponentParamBase
from hashlib import md5
class BaiduFanyiParam(ComponentParamBase):
"""
Define the BaiduFanyi component parameters.
"""
def __init__(self):
super().__init__()
self.appid = "xxx"
self.secret_key = "xxx"
self.trans_type = 'translate'
self.parameters = []
self.source_lang = 'auto'
self.target_lang = 'auto'
self.domain = 'finance'
def check(self):
self.check_empty(self.appid, "BaiduFanyi APPID")
self.check_empty(self.secret_key, "BaiduFanyi Secret Key")
self.check_valid_value(self.trans_type, "Translate type", ['translate', 'fieldtranslate'])
self.check_valid_value(self.source_lang, "Source language",
['auto', 'zh', 'en', 'yue', 'wyw', 'jp', 'kor', 'fra', 'spa', 'th', 'ara', 'ru', 'pt',
'de', 'it', 'el', 'nl', 'pl', 'bul', 'est', 'dan', 'fin', 'cs', 'rom', 'slo', 'swe',
'hu', 'cht', 'vie'])
self.check_valid_value(self.target_lang, "Target language",
['auto', 'zh', 'en', 'yue', 'wyw', 'jp', 'kor', 'fra', 'spa', 'th', 'ara', 'ru', 'pt',
'de', 'it', 'el', 'nl', 'pl', 'bul', 'est', 'dan', 'fin', 'cs', 'rom', 'slo', 'swe',
'hu', 'cht', 'vie'])
self.check_valid_value(self.domain, "Translate field",
['it', 'finance', 'machinery', 'senimed', 'novel', 'academic', 'aerospace', 'wiki',
'news', 'law', 'contract'])
class BaiduFanyi(ComponentBase, ABC):
component_name = "BaiduFanyi"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return BaiduFanyi.be_output("")
try:
source_lang = self._param.source_lang
target_lang = self._param.target_lang
appid = self._param.appid
salt = random.randint(32768, 65536)
secret_key = self._param.secret_key
if self._param.trans_type == 'translate':
sign = md5((appid + ans + salt + secret_key).encode('utf-8')).hexdigest()
url = 'http://api.fanyi.baidu.com/api/trans/vip/translate?' + 'q=' + ans + '&from=' + source_lang + '&to=' + target_lang + '&appid=' + appid + '&salt=' + salt + '&sign=' + sign
headers = {"Content-Type": "application/x-www-form-urlencoded"}
response = requests.post(url=url, headers=headers).json()
if response.get('error_code'):
BaiduFanyi.be_output("**Error**:" + response['error_msg'])
return BaiduFanyi.be_output(response['trans_result'][0]['dst'])
elif self._param.trans_type == 'fieldtranslate':
domain = self._param.domain
sign = md5((appid + ans + salt + domain + secret_key).encode('utf-8')).hexdigest()
url = 'http://api.fanyi.baidu.com/api/trans/vip/fieldtranslate?' + 'q=' + ans + '&from=' + source_lang + '&to=' + target_lang + '&appid=' + appid + '&salt=' + salt + '&domain=' + domain + '&sign=' + sign
headers = {"Content-Type": "application/x-www-form-urlencoded"}
response = requests.post(url=url, headers=headers).json()
if response.get('error_code'):
BaiduFanyi.be_output("**Error**:" + response['error_msg'])
return BaiduFanyi.be_output(response['trans_result'][0]['dst'])
except Exception as e:
BaiduFanyi.be_output("**Error**:" + str(e))

View File

@ -13,17 +13,20 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import re
import time
from abc import ABC, abstractmethod
import builtins import builtins
import json import json
import logging
import os import os
from abc import ABC import logging
from functools import partial from typing import Any, List, Union
from typing import Any, Tuple, Union
import pandas as pd import pandas as pd
import trio
from agent import settings from agent import settings
from api.utils.api_utils import timeout
_FEEDED_DEPRECATED_PARAMS = "_feeded_deprecated_params" _FEEDED_DEPRECATED_PARAMS = "_feeded_deprecated_params"
_DEPRECATED_PARAMS = "_deprecated_params" _DEPRECATED_PARAMS = "_deprecated_params"
@ -33,12 +36,16 @@ _IS_RAW_CONF = "_is_raw_conf"
class ComponentParamBase(ABC): class ComponentParamBase(ABC):
def __init__(self): def __init__(self):
self.output_var_name = "output"
self.infor_var_name = "infor"
self.message_history_window_size = 22 self.message_history_window_size = 22
self.query = [] self.inputs = {}
self.inputs = [] self.outputs = {}
self.debug_inputs = [] self.description = ""
self.max_retries = 0
self.delay_after_error = 2.0
self.exception_method = None
self.exception_default_value = None
self.exception_goto = None
self.debug_inputs = {}
def set_name(self, name: str): def set_name(self, name: str):
self._name = name self._name = name
@ -89,6 +96,14 @@ class ComponentParamBase(ABC):
def as_dict(self): def as_dict(self):
def _recursive_convert_obj_to_dict(obj): def _recursive_convert_obj_to_dict(obj):
ret_dict = {} ret_dict = {}
if isinstance(obj, dict):
for k,v in obj.items():
if isinstance(v, dict) or (v and type(v).__name__ not in dir(builtins)):
ret_dict[k] = _recursive_convert_obj_to_dict(v)
else:
ret_dict[k] = v
return ret_dict
for attr_name in list(obj.__dict__): for attr_name in list(obj.__dict__):
if attr_name in [_FEEDED_DEPRECATED_PARAMS, _DEPRECATED_PARAMS, _USER_FEEDED_PARAMS, _IS_RAW_CONF]: if attr_name in [_FEEDED_DEPRECATED_PARAMS, _DEPRECATED_PARAMS, _USER_FEEDED_PARAMS, _IS_RAW_CONF]:
continue continue
@ -97,7 +112,7 @@ class ComponentParamBase(ABC):
if isinstance(attr, pd.DataFrame): if isinstance(attr, pd.DataFrame):
ret_dict[attr_name] = attr.to_dict() ret_dict[attr_name] = attr.to_dict()
continue continue
if attr and type(attr).__name__ not in dir(builtins): if isinstance(attr, dict) or (attr and type(attr).__name__ not in dir(builtins)):
ret_dict[attr_name] = _recursive_convert_obj_to_dict(attr) ret_dict[attr_name] = _recursive_convert_obj_to_dict(attr)
else: else:
ret_dict[attr_name] = attr ret_dict[attr_name] = attr
@ -110,11 +125,15 @@ class ComponentParamBase(ABC):
update_from_raw_conf = conf.get(_IS_RAW_CONF, True) update_from_raw_conf = conf.get(_IS_RAW_CONF, True)
if update_from_raw_conf: if update_from_raw_conf:
deprecated_params_set = self._get_or_init_deprecated_params_set() deprecated_params_set = self._get_or_init_deprecated_params_set()
feeded_deprecated_params_set = self._get_or_init_feeded_deprecated_params_set() feeded_deprecated_params_set = (
self._get_or_init_feeded_deprecated_params_set()
)
user_feeded_params_set = self._get_or_init_user_feeded_params_set() user_feeded_params_set = self._get_or_init_user_feeded_params_set()
setattr(self, _IS_RAW_CONF, False) setattr(self, _IS_RAW_CONF, False)
else: else:
feeded_deprecated_params_set = self._get_or_init_feeded_deprecated_params_set(conf) feeded_deprecated_params_set = (
self._get_or_init_feeded_deprecated_params_set(conf)
)
user_feeded_params_set = self._get_or_init_user_feeded_params_set(conf) user_feeded_params_set = self._get_or_init_user_feeded_params_set(conf)
def _recursive_update_param(param, config, depth, prefix): def _recursive_update_param(param, config, depth, prefix):
@ -150,11 +169,15 @@ class ComponentParamBase(ABC):
else: else:
# recursive set obj attr # recursive set obj attr
sub_params = _recursive_update_param(attr, config_value, depth + 1, prefix=f"{prefix}{config_key}.") sub_params = _recursive_update_param(
attr, config_value, depth + 1, prefix=f"{prefix}{config_key}."
)
setattr(param, config_key, sub_params) setattr(param, config_key, sub_params)
if not allow_redundant and redundant_attrs: if not allow_redundant and redundant_attrs:
raise ValueError(f"cpn `{getattr(self, '_name', type(self))}` has redundant parameters: `{[redundant_attrs]}`") raise ValueError(
f"cpn `{getattr(self, '_name', type(self))}` has redundant parameters: `{[redundant_attrs]}`"
)
return param return param
@ -185,7 +208,9 @@ class ComponentParamBase(ABC):
param_validation_path_prefix = home_dir + "/param_validation/" param_validation_path_prefix = home_dir + "/param_validation/"
param_name = type(self).__name__ param_name = type(self).__name__
param_validation_path = "/".join([param_validation_path_prefix, param_name + ".json"]) param_validation_path = "/".join(
[param_validation_path_prefix, param_name + ".json"]
)
validation_json = None validation_json = None
@ -218,7 +243,11 @@ class ComponentParamBase(ABC):
break break
if not value_legal: if not value_legal:
raise ValueError("Plase check runtime conf, {} = {} does not match user-parameter restriction".format(variable, value)) raise ValueError(
"Plase check runtime conf, {} = {} does not match user-parameter restriction".format(
variable, value
)
)
elif variable in validation_json: elif variable in validation_json:
self._validate_param(attr, validation_json) self._validate_param(attr, validation_json)
@ -226,63 +255,94 @@ class ComponentParamBase(ABC):
@staticmethod @staticmethod
def check_string(param, descr): def check_string(param, descr):
if type(param).__name__ not in ["str"]: if type(param).__name__ not in ["str"]:
raise ValueError(descr + " {} not supported, should be string type".format(param)) raise ValueError(
descr + " {} not supported, should be string type".format(param)
)
@staticmethod @staticmethod
def check_empty(param, descr): def check_empty(param, descr):
if not param: if not param:
raise ValueError(descr + " does not support empty value.") raise ValueError(
descr + " does not support empty value."
)
@staticmethod @staticmethod
def check_positive_integer(param, descr): def check_positive_integer(param, descr):
if type(param).__name__ not in ["int", "long"] or param <= 0: if type(param).__name__ not in ["int", "long"] or param <= 0:
raise ValueError(descr + " {} not supported, should be positive integer".format(param)) raise ValueError(
descr + " {} not supported, should be positive integer".format(param)
)
@staticmethod @staticmethod
def check_positive_number(param, descr): def check_positive_number(param, descr):
if type(param).__name__ not in ["float", "int", "long"] or param <= 0: if type(param).__name__ not in ["float", "int", "long"] or param <= 0:
raise ValueError(descr + " {} not supported, should be positive numeric".format(param)) raise ValueError(
descr + " {} not supported, should be positive numeric".format(param)
)
@staticmethod @staticmethod
def check_nonnegative_number(param, descr): def check_nonnegative_number(param, descr):
if type(param).__name__ not in ["float", "int", "long"] or param < 0: if type(param).__name__ not in ["float", "int", "long"] or param < 0:
raise ValueError(descr + " {} not supported, should be non-negative numeric".format(param)) raise ValueError(
descr
+ " {} not supported, should be non-negative numeric".format(param)
)
@staticmethod @staticmethod
def check_decimal_float(param, descr): def check_decimal_float(param, descr):
if type(param).__name__ not in ["float", "int"] or param < 0 or param > 1: if type(param).__name__ not in ["float", "int"] or param < 0 or param > 1:
raise ValueError(descr + " {} not supported, should be a float number in range [0, 1]".format(param)) raise ValueError(
descr
+ " {} not supported, should be a float number in range [0, 1]".format(
param
)
)
@staticmethod @staticmethod
def check_boolean(param, descr): def check_boolean(param, descr):
if type(param).__name__ != "bool": if type(param).__name__ != "bool":
raise ValueError(descr + " {} not supported, should be bool type".format(param)) raise ValueError(
descr + " {} not supported, should be bool type".format(param)
)
@staticmethod @staticmethod
def check_open_unit_interval(param, descr): def check_open_unit_interval(param, descr):
if type(param).__name__ not in ["float"] or param <= 0 or param >= 1: if type(param).__name__ not in ["float"] or param <= 0 or param >= 1:
raise ValueError(descr + " should be a numeric number between 0 and 1 exclusively") raise ValueError(
descr + " should be a numeric number between 0 and 1 exclusively"
)
@staticmethod @staticmethod
def check_valid_value(param, descr, valid_values): def check_valid_value(param, descr, valid_values):
if param not in valid_values: if param not in valid_values:
raise ValueError(descr + " {} is not supported, it should be in {}".format(param, valid_values)) raise ValueError(
descr
+ " {} is not supported, it should be in {}".format(param, valid_values)
)
@staticmethod @staticmethod
def check_defined_type(param, descr, types): def check_defined_type(param, descr, types):
if type(param).__name__ not in types: if type(param).__name__ not in types:
raise ValueError(descr + " {} not supported, should be one of {}".format(param, types)) raise ValueError(
descr + " {} not supported, should be one of {}".format(param, types)
)
@staticmethod @staticmethod
def check_and_change_lower(param, valid_list, descr=""): def check_and_change_lower(param, valid_list, descr=""):
if type(param).__name__ != "str": if type(param).__name__ != "str":
raise ValueError(descr + " {} not supported, should be one of {}".format(param, valid_list)) raise ValueError(
descr
+ " {} not supported, should be one of {}".format(param, valid_list)
)
lower_param = param.lower() lower_param = param.lower()
if lower_param in valid_list: if lower_param in valid_list:
return lower_param return lower_param
else: else:
raise ValueError(descr + " {} not supported, should be one of {}".format(param, valid_list)) raise ValueError(
descr
+ " {} not supported, should be one of {}".format(param, valid_list)
)
@staticmethod @staticmethod
def _greater_equal_than(value, limit): def _greater_equal_than(value, limit):
@ -296,7 +356,11 @@ class ComponentParamBase(ABC):
def _range(value, ranges): def _range(value, ranges):
in_range = False in_range = False
for left_limit, right_limit in ranges: for left_limit, right_limit in ranges:
if left_limit - settings.FLOAT_ZERO <= value <= right_limit + settings.FLOAT_ZERO: if (
left_limit - settings.FLOAT_ZERO
<= value
<= right_limit + settings.FLOAT_ZERO
):
in_range = True in_range = True
break break
@ -312,17 +376,24 @@ class ComponentParamBase(ABC):
def _warn_deprecated_param(self, param_name, descr): def _warn_deprecated_param(self, param_name, descr):
if self._deprecated_params_set.get(param_name): if self._deprecated_params_set.get(param_name):
logging.warning(f"{descr} {param_name} is deprecated and ignored in this version.") logging.warning(
f"{descr} {param_name} is deprecated and ignored in this version."
)
def _warn_to_deprecate_param(self, param_name, descr, new_param): def _warn_to_deprecate_param(self, param_name, descr, new_param):
if self._deprecated_params_set.get(param_name): if self._deprecated_params_set.get(param_name):
logging.warning(f"{descr} {param_name} will be deprecated in future release; please use {new_param} instead.") logging.warning(
f"{descr} {param_name} will be deprecated in future release; "
f"please use {new_param} instead."
)
return True return True
return False return False
class ComponentBase(ABC): class ComponentBase(ABC):
component_name: str component_name: str
thread_limiter = trio.CapacityLimiter(int(os.environ.get('MAX_CONCURRENT_CHATS', 10)))
variable_ref_patt = r"\{* *\{([a-zA-Z:0-9]+@[A-Za-z:0-9_.-]+|sys\.[a-z_]+)\} *\}*"
def __str__(self): def __str__(self):
""" """
@ -331,232 +402,158 @@ class ComponentBase(ABC):
"params": {} "params": {}
} }
""" """
out = getattr(self._param, self._param.output_var_name)
if isinstance(out, pd.DataFrame) and "chunks" in out:
del out["chunks"]
setattr(self._param, self._param.output_var_name, out)
return """{{ return """{{
"component_name": "{}", "component_name": "{}",
"params": {}, "params": {}
"output": {}, }}""".format(self.component_name,
"inputs": {} self._param
}}""".format(
self.component_name,
self._param,
json.dumps(json.loads(str(self._param)).get("output", {}), ensure_ascii=False),
json.dumps(json.loads(str(self._param)).get("inputs", []), ensure_ascii=False),
) )
def __init__(self, canvas, id, param: ComponentParamBase): def __init__(self, canvas, id, param: ComponentParamBase):
from agent.canvas import Canvas # Local import to avoid cyclic dependency from agent.canvas import Canvas # Local import to avoid cyclic dependency
assert isinstance(canvas, Canvas), "canvas must be an instance of Canvas" assert isinstance(canvas, Canvas), "canvas must be an instance of Canvas"
self._canvas = canvas self._canvas = canvas
self._id = id self._id = id
self._param = param self._param = param
self._param.check() self._param.check()
def get_dependent_components(self): def invoke(self, **kwargs) -> dict[str, Any]:
cpnts = set( self.set_output("_created_time", time.perf_counter())
[
para["component_id"].split("@")[0]
for para in self._param.query
if para.get("component_id") and para["component_id"].lower().find("answer") < 0 and para["component_id"].lower().find("begin") < 0
]
)
return list(cpnts)
def run(self, history, **kwargs):
logging.debug("{}, history: {}, kwargs: {}".format(self, json.dumps(history, ensure_ascii=False), json.dumps(kwargs, ensure_ascii=False)))
self._param.debug_inputs = []
try: try:
res = self._run(history, **kwargs) self._invoke(**kwargs)
self.set_output(res)
except Exception as e: except Exception as e:
self.set_output(pd.DataFrame([{"content": str(e)}])) if self.get_exception_default_value():
raise e self.set_exception_default_value()
else:
self.set_output("_ERROR", str(e))
logging.exception(e)
self._param.debug_inputs = {}
self.set_output("_elapsed_time", time.perf_counter() - self.output("_created_time"))
return self.output()
return res @timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))
def _invoke(self, **kwargs):
def _run(self, history, **kwargs):
raise NotImplementedError() raise NotImplementedError()
def output(self, allow_partial=True) -> Tuple[str, Union[pd.DataFrame, partial]]: def output(self, var_nm: str=None) -> Union[dict[str, Any], Any]:
o = getattr(self._param, self._param.output_var_name) if var_nm:
if not isinstance(o, partial): return self._param.outputs.get(var_nm, {}).get("value", "")
if not isinstance(o, pd.DataFrame): return {k: o.get("value") for k,o in self._param.outputs.items()}
if isinstance(o, list):
return self._param.output_var_name, pd.DataFrame(o).dropna()
if o is None:
return self._param.output_var_name, pd.DataFrame()
return self._param.output_var_name, pd.DataFrame([{"content": str(o)}])
return self._param.output_var_name, o
if allow_partial or not isinstance(o, partial): def set_output(self, key: str, value: Any):
if not isinstance(o, partial) and not isinstance(o, pd.DataFrame): if key not in self._param.outputs:
return pd.DataFrame(o if isinstance(o, list) else [o]).dropna() self._param.outputs[key] = {"value": None, "type": str(type(value))}
return self._param.output_var_name, o self._param.outputs[key]["value"] = value
outs = None def error(self):
for oo in o(): return self._param.outputs.get("_ERROR", {}).get("value")
if not isinstance(oo, pd.DataFrame):
outs = pd.DataFrame(oo if isinstance(oo, list) else [oo]).dropna()
else:
outs = oo.dropna()
return self._param.output_var_name, outs
# ---- before ----
    def reset(self):
        setattr(self._param, self._param.output_var_name, None)
        self._param.inputs = []

    def set_output(self, v):
        setattr(self._param, self._param.output_var_name, v)

    def set_infor(self, v):
        setattr(self._param, self._param.infor_var_name, v)

    def _fetch_outputs_from(self, sources: list[dict[str, Any]]) -> list[pd.DataFrame]:
        outs = []
        for q in sources:
            if q.get("component_id"):
                if "@" in q["component_id"] and q["component_id"].split("@")[0].lower().find("begin") >= 0:
                    cpn_id, key = q["component_id"].split("@")
                    for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
                        if p["key"] == key:
                            outs.append(pd.DataFrame([{"content": p.get("value", "")}]))
                            break
                    else:
                        assert False, f"Can't find parameter '{key}' for {cpn_id}"
                    continue
                if q["component_id"].lower().find("answer") == 0:
                    txt = []
                    for r, c in self._canvas.history[::-1][: self._param.message_history_window_size][::-1]:
                        txt.append(f"{r.upper()}:{c}")
                    txt = "\n".join(txt)
                    outs.append(pd.DataFrame([{"content": txt}]))
                    continue
                outs.append(self._canvas.get_component(q["component_id"])["obj"].output(allow_partial=False)[1])
            elif q.get("value"):
                outs.append(pd.DataFrame([{"content": q["value"]}]))
        return outs

    def get_input(self):
        if self._param.debug_inputs:
            return pd.DataFrame([{"content": v["value"]} for v in self._param.debug_inputs if v.get("value")])

        reversed_cpnts = []
        if len(self._canvas.path) > 1:
            reversed_cpnts.extend(self._canvas.path[-2])
        reversed_cpnts.extend(self._canvas.path[-1])
        up_cpns = self.get_upstream()
        reversed_up_cpnts = [cpn for cpn in reversed_cpnts if cpn in up_cpns]

        if self._param.query:
            self._param.inputs = []
            outs = self._fetch_outputs_from(self._param.query)

            for out in outs:
                records = out.to_dict("records")
                content: str
                if len(records) > 1:
                    content = "\n".join([str(d["content"]) for d in records])
                else:
                    content = records[0]["content"]
                self._param.inputs.append({"component_id": records[0].get("component_id"), "content": content})

            if outs:
                df = pd.concat(outs, ignore_index=True)
                if "content" in df:
                    df = df.drop_duplicates(subset=["content"]).reset_index(drop=True)
                return df

        upstream_outs = []
        for u in reversed_up_cpnts[::-1]:
            if self.get_component_name(u) in ["switch", "concentrator"]:
                continue
            if self.component_name.lower() == "generate" and self.get_component_name(u) == "retrieval":
                o = self._canvas.get_component(u)["obj"].output(allow_partial=False)[1]
                if o is not None:
                    o["component_id"] = u
                    upstream_outs.append(o)
                continue
            # if self.component_name.lower()!="answer" and u not in self._canvas.get_component(self._id)["upstream"]: continue
            if self.component_name.lower().find("switch") < 0 and self.get_component_name(u) in ["relevant", "categorize"]:
                continue
            if u.lower().find("answer") >= 0:
                for r, c in self._canvas.history[::-1]:
                    if r == "user":
                        upstream_outs.append(pd.DataFrame([{"content": c, "component_id": u}]))
                        break
                break
            if self.component_name.lower().find("answer") >= 0 and self.get_component_name(u) in ["relevant"]:
                continue
            o = self._canvas.get_component(u)["obj"].output(allow_partial=False)[1]
            if o is not None:
                o["component_id"] = u
                upstream_outs.append(o)
            break

        assert upstream_outs, "Can't inference the where the component input is. Please identify whose output is this component's input."

        df = pd.concat(upstream_outs, ignore_index=True)
        if "content" in df:
            df = df.drop_duplicates(subset=["content"]).reset_index(drop=True)

        self._param.inputs = []
        for _, r in df.iterrows():
            self._param.inputs.append({"component_id": r["component_id"], "content": r["content"]})
        return df

    def get_input_elements(self):
        assert self._param.query, "Please verify the input parameters first."
        eles = []
        for q in self._param.query:
            if q.get("component_id"):
                cpn_id = q["component_id"]
                if cpn_id.split("@")[0].lower().find("begin") >= 0:
                    cpn_id, key = cpn_id.split("@")
                    eles.extend(self._canvas.get_component(cpn_id)["obj"]._param.query)
                    continue
                eles.append({"name": self._canvas.get_component_name(cpn_id), "key": cpn_id})
            else:
                eles.append({"key": q["value"], "name": q["value"], "value": q["value"]})
        return eles

    def get_stream_input(self):
        reversed_cpnts = []
        if len(self._canvas.path) > 1:
            reversed_cpnts.extend(self._canvas.path[-2])
        reversed_cpnts.extend(self._canvas.path[-1])
        up_cpns = self.get_upstream()
        reversed_up_cpnts = [cpn for cpn in reversed_cpnts if cpn in up_cpns]

        for u in reversed_up_cpnts[::-1]:
            if self.get_component_name(u) in ["switch", "answer"]:
                continue
            return self._canvas.get_component(u)["obj"].output()[1]

    @staticmethod
    def be_output(v):
        return pd.DataFrame([{"content": v}])

    def get_component_name(self, cpn_id):
        return self._canvas.get_component(cpn_id)["obj"].component_name.lower()

    def debug(self, **kwargs):
        return self._run([], **kwargs)

    def get_parent(self):
        pid = self._canvas.get_component(self._id)["parent_id"]
        if not pid:
            return
        return self._canvas.get_component(pid)["obj"]

    def get_upstream(self):
        cpn_nms = self._canvas.get_component(self._id)["upstream"]
        return cpn_nms

# ---- after ----
    def reset(self):
        for k in self._param.outputs.keys():
            self._param.outputs[k]["value"] = None
        for k in self._param.inputs.keys():
            self._param.inputs[k]["value"] = None
        self._param.debug_inputs = {}

    def get_input(self, key: str=None) -> Union[Any, dict[str, Any]]:
        if key:
            return self._param.inputs.get(key, {}).get("value")

        res = {}
        for var, o in self.get_input_elements().items():
            v = self.get_param(var)
            if v is None:
                continue
            if isinstance(v, str) and self._canvas.is_reff(v):
                self.set_input_value(var, self._canvas.get_variable_value(v))
            else:
                self.set_input_value(var, v)
            res[var] = self.get_input_value(var)
        return res

    def get_input_values(self) -> Union[Any, dict[str, Any]]:
        if self._param.debug_inputs:
            return self._param.debug_inputs
        return {var: self.get_input_value(var) for var, o in self.get_input_elements().items()}

    def get_input_elements_from_text(self, txt: str) -> dict[str, dict[str, str]]:
        res = {}
        for r in re.finditer(self.variable_ref_patt, txt, flags=re.IGNORECASE | re.DOTALL):
            exp = r.group(1)
            cpn_id, var_nm = exp.split("@") if exp.find("@") > 0 else ("", exp)
            res[exp] = {
                "name": (self._canvas.get_component_name(cpn_id) + f"@{var_nm}") if cpn_id else exp,
                "value": self._canvas.get_variable_value(exp),
                "_retrival": self._canvas.get_variable_value(f"{cpn_id}@_references") if cpn_id else None,
                "_cpn_id": cpn_id
            }
        return res

    def get_input_elements(self) -> dict[str, Any]:
        return self._param.inputs

    def get_input_form(self) -> dict[str, dict]:
        return self._param.get_input_form()

    def set_input_value(self, key: str, value: Any) -> None:
        if key not in self._param.inputs:
            self._param.inputs[key] = {"value": None}
        self._param.inputs[key]["value"] = value

    def get_input_value(self, key: str) -> Any:
        if key not in self._param.inputs:
            return None
        return self._param.inputs[key].get("value")

    def get_component_name(self, cpn_id) -> str:
        return self._canvas.get_component(cpn_id)["obj"].component_name.lower()

    def get_param(self, name):
        if hasattr(self._param, name):
            return getattr(self._param, name)

    def debug(self, **kwargs):
        return self._invoke(**kwargs)

    def get_parent(self) -> Union[object, None]:
        pid = self._canvas.get_component(self._id).get("parent_id")
        if not pid:
            return
        return self._canvas.get_component(pid)["obj"]

    def get_upstream(self) -> List[str]:
        cpn_nms = self._canvas.get_component(self._id)['upstream']
        return cpn_nms
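As a rough illustration of how get_input_elements_from_text is meant to pick up variable references embedded in prompt text, here is a small standalone sketch (not taken from this PR). The exact variable_ref_patt regex is not shown in this diff, so the pattern and the component ids below are assumptions for demonstration only.

import re

# Assumed placeholder syntax: {component_id@variable} or {sys.query}; the real
# pattern lives on ComponentBase.variable_ref_patt and may differ.
VARIABLE_REF_PATT = r"\{([a-zA-Z0-9_.:@-]+)\}"

def refs_in_text(txt: str) -> dict[str, dict]:
    res = {}
    for r in re.finditer(VARIABLE_REF_PATT, txt, flags=re.IGNORECASE | re.DOTALL):
        exp = r.group(1)
        cpn_id, var_nm = exp.split("@") if exp.find("@") > 0 else ("", exp)
        res[exp] = {"component_id": cpn_id, "variable": var_nm}
    return res

print(refs_in_text("Answer {sys.query} using {Retrieval:0@formalized_content}."))
# -> {'sys.query': {...}, 'Retrieval:0@formalized_content': {...}}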
@staticmethod
def string_format(content: str, kv: dict[str, str]) -> str:
for n, v in kv.items():
def repl(_match, val=v):
return str(val) if val is not None else ""
content = re.sub(
r"\{%s\}" % re.escape(n),
repl,
content
)
return content
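A quick usage sketch for string_format (illustrative, not from the diff): placeholders written as {name} are substituted with the values from kv, and None values become empty strings. It assumes a RAGFlow checkout on the Python path so that agent.component.base is importable.

from agent.component.base import ComponentBase  # module path as used elsewhere in this PR

kv = {"user": "Alice", "topic": None}
print(ComponentBase.string_format("Hello {user}, let's talk about {topic}.", kv))
# -> "Hello Alice, let's talk about ."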
def exception_handler(self):
if not self._param.exception_method:
return
return {
"goto": self._param.exception_goto,
"default_value": self._param.exception_default_value
}
def get_exception_default_value(self):
if self._param.exception_method != "comment":
return ""
return self._param.exception_default_value
def set_exception_default_value(self):
self.set_output("result", self.get_exception_default_value())
@abstractmethod
def thoughts(self) -> str:
...

View File

@ -13,37 +13,40 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ---- before ----
from functools import partial

import pandas as pd

from agent.component.base import ComponentBase, ComponentParamBase


class BeginParam(ComponentParamBase):
    """
    Define the Begin component parameters.
    """
    def __init__(self):
        super().__init__()
        self.prologue = "Hi! I'm your smart assistant. What can I do for you?"
        self.query = []

    def check(self):
        return True


class Begin(ComponentBase):
    component_name = "Begin"

    def _run(self, history, **kwargs):
        if kwargs.get("stream"):
            return partial(self.stream_output)
        return pd.DataFrame([{"content": self._param.prologue}])

    def stream_output(self):
        res = {"content": self._param.prologue}
        yield res
        self.set_output(self.be_output(res))

# ---- after ----
from agent.component.fillup import UserFillUpParam, UserFillUp


class BeginParam(UserFillUpParam):
    """
    Define the Begin component parameters.
    """
    def __init__(self):
        super().__init__()
        self.mode = "conversational"
        self.prologue = "Hi! I'm your smart assistant. What can I do for you?"

    def check(self):
        self.check_valid_value(self.mode, "The 'mode' should be either `conversational` or `task`", ["conversational", "task"])

    def get_input_form(self) -> dict[str, dict]:
        return getattr(self, "inputs")


class Begin(UserFillUp):
    component_name = "Begin"

    def _invoke(self, **kwargs):
        for k, v in kwargs.get("inputs", {}).items():
            if isinstance(v, dict) and v.get("type", "").lower().find("file") >= 0:
                if v.get("optional") and v.get("value", None) is None:
                    v = None
                else:
                    v = self._canvas.get_files([v["value"]])
            else:
                v = v.get("value")
            self.set_output(k, v)
            self.set_input_value(k, v)

    def thoughts(self) -> str:
        return ""

View File

@ -1,84 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import requests
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
class BingParam(ComponentParamBase):
"""
Define the Bing component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 10
self.channel = "Webpages"
self.api_key = "YOUR_ACCESS_KEY"
self.country = "CN"
self.language = "en"
def check(self):
self.check_positive_integer(self.top_n, "Top N")
self.check_valid_value(self.channel, "Bing Web Search or Bing News", ["Webpages", "News"])
self.check_empty(self.api_key, "Bing subscription key")
self.check_valid_value(self.country, "Bing Country",
['AR', 'AU', 'AT', 'BE', 'BR', 'CA', 'CL', 'DK', 'FI', 'FR', 'DE', 'HK', 'IN', 'ID',
'IT', 'JP', 'KR', 'MY', 'MX', 'NL', 'NZ', 'NO', 'CN', 'PL', 'PT', 'PH', 'RU', 'SA',
'ZA', 'ES', 'SE', 'CH', 'TW', 'TR', 'GB', 'US'])
self.check_valid_value(self.language, "Bing Languages",
['ar', 'eu', 'bn', 'bg', 'ca', 'ns', 'nt', 'hr', 'cs', 'da', 'nl', 'en', 'gb', 'et',
'fi', 'fr', 'gl', 'de', 'gu', 'he', 'hi', 'hu', 'is', 'it', 'jp', 'kn', 'ko', 'lv',
'lt', 'ms', 'ml', 'mr', 'nb', 'pl', 'br', 'pt', 'pa', 'ro', 'ru', 'sr', 'sk', 'sl',
'es', 'sv', 'ta', 'te', 'th', 'tr', 'uk', 'vi'])
class Bing(ComponentBase, ABC):
component_name = "Bing"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return Bing.be_output("")
try:
headers = {"Ocp-Apim-Subscription-Key": self._param.api_key, 'Accept-Language': self._param.language}
params = {"q": ans, "textDecorations": True, "textFormat": "HTML", "cc": self._param.country,
"answerCount": 1, "promote": self._param.channel}
if self._param.channel == "Webpages":
response = requests.get("https://api.bing.microsoft.com/v7.0/search", headers=headers, params=params)
response.raise_for_status()
search_results = response.json()
bing_res = [{"content": '<a href="' + i["url"] + '">' + i["name"] + '</a> ' + i["snippet"]} for i in
search_results["webPages"]["value"]]
elif self._param.channel == "News":
response = requests.get("https://api.bing.microsoft.com/v7.0/news/search", headers=headers,
params=params)
response.raise_for_status()
search_results = response.json()
bing_res = [{"content": '<a href="' + i["url"] + '">' + i["name"] + '</a> ' + i["description"]} for i
in search_results['news']['value']]
except Exception as e:
return Bing.be_output("**ERROR**: " + str(e))
if not bing_res:
return Bing.be_output("")
df = pd.DataFrame(bing_res)
logging.debug(f"df: {str(df)}")
return df

View File

@ -14,13 +14,18 @@
# limitations under the License.
#
# ---- before ----
import logging
from abc import ABC

from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from agent.component import GenerateParam, Generate


class CategorizeParam(GenerateParam):
    """
    Define the Categorize component parameters.
# ---- after ----
import logging
import os
import re
from abc import ABC

from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from agent.component.llm import LLMParam, LLM
from api.utils.api_utils import timeout
from rag.llm.chat_model import ERROR_PREFIX


class CategorizeParam(LLMParam):
    """
    Define the Categorize component parameters.
@ -28,10 +33,12 @@ class CategorizeParam(GenerateParam):
# ---- before ----
    def __init__(self):
        super().__init__()
        self.category_description = {}
        self.prompt = ""

    def check(self):
        super().check()
        self.check_empty(self.category_description, "[Categorize] Category examples")
        for k, v in self.category_description.items():
            if not k:
# ---- after ----
    def __init__(self):
        super().__init__()
        self.category_description = {}
        self.query = "sys.query"
        self.message_history_window_size = 1
        self.update_prompt()

    def check(self):
        self.check_positive_integer(self.message_history_window_size, "[Categorize] Message window size > 0")
        self.check_empty(self.category_description, "[Categorize] Category examples")
        for k, v in self.category_description.items():
            if not k:
@ -39,76 +46,92 @@ class CategorizeParam(GenerateParam):
# ---- before ----
            if not v.get("to"):
                raise ValueError(f"[Categorize] 'To' of category {k} can not be empty!")

    def get_prompt(self, chat_hist):
        cate_lines = []
        for c, desc in self.category_description.items():
            for line in desc.get("examples", "").split("\n"):
                if not line:
                    continue
                cate_lines.append("USER: {}\nCategory: {}".format(line, c))
        descriptions = []
        for c, desc in self.category_description.items():
            if desc.get("description"):
                descriptions.append(
                    "\nCategory: {}\nDescription: {}".format(c, desc["description"]))

        self.prompt = """
Role: You're a text classifier.
Task: You need to categorize the users questions into {} categories, namely: {}

Here's description of each category:
{}

You could learn from the following examples:
{}
You could learn from the above examples.

Requirements:
- Just mention the category names, no need for any additional words.

---- Real Data ----
USER: {}\n
        """.format(
            len(self.category_description.keys()),
            "/".join(list(self.category_description.keys())),
            "\n".join(descriptions),
            "\n\n- ".join(cate_lines),
            chat_hist
        )
        return self.prompt


class Categorize(Generate, ABC):
    component_name = "Categorize"

    def _run(self, history, **kwargs):
        input = self.get_input()
        input = " - ".join(input["content"]) if "content" in input else ""
        chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
        self._canvas.set_component_infor(self._id, {"prompt": self._param.get_prompt(input), "messages": [{"role": "user", "content": "\nCategory: "}], "conf": self._param.gen_conf()})
        ans = chat_mdl.chat(self._param.get_prompt(input), [{"role": "user", "content": "\nCategory: "}],
                            self._param.gen_conf())
        logging.debug(f"input: {input}, answer: {str(ans)}")

        # Count the number of times each category appears in the answer.
        category_counts = {}
        for c in self._param.category_description.keys():
            count = ans.lower().count(c.lower())
            category_counts[c] = count

        # If a category is found, return the category with the highest count.
        max_category = list(self._param.category_description.keys())[0]
        if any(category_counts.values()):
            max_category = max(category_counts.items(), key=lambda x: x[1])
            res = Categorize.be_output(self._param.category_description[max_category[0]]["to"])
            self.set_output(res)
            return res

        res = Categorize.be_output(list(self._param.category_description.items())[-1][1]["to"])
        self.set_output(res)
        return res

    def debug(self, **kwargs):
        df = self._run([], **kwargs)
        cpn_id = df.iloc[0, 0]
        return Categorize.be_output(self._canvas.get_component_name(cpn_id))

# ---- after ----
            if not v.get("to"):
                raise ValueError(f"[Categorize] 'To' of category {k} can not be empty!")

    def get_input_form(self) -> dict[str, dict]:
        return {
            "query": {
                "type": "line",
                "name": "Query"
            }
        }

    def update_prompt(self):
        cate_lines = []
        for c, desc in self.category_description.items():
            for line in desc.get("examples", []):
                if not line:
                    continue
                cate_lines.append("USER: \"" + re.sub(r"\n", " ", line, flags=re.DOTALL) + "\"" + c)
        descriptions = []
        for c, desc in self.category_description.items():
            if desc.get("description"):
                descriptions.append(
                    "\n------\nCategory: {}\nDescription: {}".format(c, desc["description"]))

        self.sys_prompt = """
You are an advanced classification system that categorizes user questions into specific types. Analyze the input question and classify it into ONE of the following categories:
{}

Here's description of each category:
 - {}

---- Instructions ----
 - Consider both explicit mentions and implied context
 - Prioritize the most specific applicable category
 - Return only the category name without explanations
 - Use "Other" only when no other category fits
        """.format(
            "\n - ".join(list(self.category_description.keys())),
            "\n".join(descriptions)
        )

        if cate_lines:
            self.sys_prompt += """
---- Examples ----
{}
            """.format("\n".join(cate_lines))


class Categorize(LLM, ABC):
    component_name = "Categorize"

    @timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))
    def _invoke(self, **kwargs):
        msg = self._canvas.get_history(self._param.message_history_window_size)
        if not msg:
            msg = [{"role": "user", "content": ""}]
        if kwargs.get("sys.query"):
            msg[-1]["content"] = kwargs["sys.query"]
            self.set_input_value("sys.query", kwargs["sys.query"])
        else:
            msg[-1]["content"] = self._canvas.get_variable_value(self._param.query)
            self.set_input_value(self._param.query, msg[-1]["content"])
        self._param.update_prompt()
        chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)

        user_prompt = """
---- Real Data ----
{}
        """.format(" | ".join(["{}: \"{}\"".format(c["role"].upper(), re.sub(r"\n", "", c["content"], flags=re.DOTALL)) for c in msg]))
        ans = chat_mdl.chat(self._param.sys_prompt, [{"role": "user", "content": user_prompt}], self._param.gen_conf())
        logging.info(f"input: {user_prompt}, answer: {str(ans)}")
        if ERROR_PREFIX in ans:
            raise Exception(ans)

        # Count the number of times each category appears in the answer.
        category_counts = {}
        for c in self._param.category_description.keys():
            count = ans.lower().count(c.lower())
            category_counts[c] = count

        cpn_ids = list(self._param.category_description.items())[-1][1]["to"]
        if any(category_counts.values()):
            max_category = max(category_counts.items(), key=lambda x: x[1])[0]
            cpn_ids = self._param.category_description[max_category]["to"]

        self.set_output("category_name", max_category)
        self.set_output("_next", cpn_ids)

    def thoughts(self) -> str:
        return "Which should it falls into {}? ...".format(",".join([f"`{c}`" for c, _ in self._param.category_description.items()]))

View File

@ -1,152 +0,0 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import base64
from abc import ABC
from enum import Enum
from typing import Optional
from pydantic import BaseModel, Field, field_validator
from agent.component.base import ComponentBase, ComponentParamBase
from api import settings
class Language(str, Enum):
PYTHON = "python"
NODEJS = "nodejs"
class CodeExecutionRequest(BaseModel):
code_b64: str = Field(..., description="Base64 encoded code string")
language: Language = Field(default=Language.PYTHON, description="Programming language")
arguments: Optional[dict] = Field(default={}, description="Arguments")
@field_validator("code_b64")
@classmethod
def validate_base64(cls, v: str) -> str:
try:
base64.b64decode(v, validate=True)
return v
except Exception as e:
raise ValueError(f"Invalid base64 encoding: {str(e)}")
@field_validator("language", mode="before")
@classmethod
def normalize_language(cls, v) -> str:
if isinstance(v, str):
low = v.lower()
if low in ("python", "python3"):
return "python"
elif low in ("javascript", "nodejs"):
return "nodejs"
raise ValueError(f"Unsupported language: {v}")
class CodeParam(ComponentParamBase):
"""
Define the code sandbox component parameters.
"""
def __init__(self):
super().__init__()
self.lang = "python"
self.script = ""
self.arguments = []
self.address = f"http://{settings.SANDBOX_HOST}:9385/run"
self.enable_network = True
def check(self):
self.check_valid_value(self.lang, "Support languages", ["python", "python3", "nodejs", "javascript"])
self.check_defined_type(self.enable_network, "Enable network", ["bool"])
class Code(ComponentBase, ABC):
component_name = "Code"
def _run(self, history, **kwargs):
arguments = {}
for input in self._param.arguments:
if "@" in input["component_id"]:
component_id = input["component_id"].split("@")[0]
referred_component_key = input["component_id"].split("@")[1]
referred_component = self._canvas.get_component(component_id)["obj"]
for param in referred_component._param.query:
if param["key"] == referred_component_key:
if "value" in param:
arguments[input["name"]] = param["value"]
else:
referred_component = self._canvas.get_component(input["component_id"])["obj"]
referred_component_name = referred_component.component_name
referred_component_id = referred_component._id
debug_inputs = self._param.debug_inputs
if debug_inputs:
for param in debug_inputs:
if param["key"] == referred_component_id:
if "value" in param and param["name"] == input["name"]:
arguments[input["name"]] = param["value"]
else:
if referred_component_name.lower() == "answer":
arguments[input["name"]] = self._canvas.get_history(1)[0]["content"]
continue
_, out = referred_component.output(allow_partial=False)
if not out.empty:
arguments[input["name"]] = "\n".join(out["content"])
return self._execute_code(
language=self._param.lang,
code=self._param.script,
arguments=arguments,
address=self._param.address,
enable_network=self._param.enable_network,
)
def _execute_code(self, language: str, code: str, arguments: dict, address: str, enable_network: bool):
import requests
try:
code_b64 = self._encode_code(code)
code_req = CodeExecutionRequest(code_b64=code_b64, language=language, arguments=arguments).model_dump()
except Exception as e:
return Code.be_output("**Error**: construct code request error: " + str(e))
try:
resp = requests.post(url=address, json=code_req, timeout=10)
body = resp.json()
if body:
stdout = body.get("stdout")
stderr = body.get("stderr")
return Code.be_output(stdout or stderr)
else:
return Code.be_output("**Error**: There is no response from sanbox")
except Exception as e:
return Code.be_output("**Error**: Internal error in sanbox: " + str(e))
def _encode_code(self, code: str) -> str:
return base64.b64encode(code.encode("utf-8")).decode("utf-8")
def get_input_elements(self):
elements = []
for input in self._param.arguments:
cpn_id = input["component_id"]
elements.append({"key": cpn_id, "name": input["name"]})
return elements
def debug(self, **kwargs):
return self._run([], **kwargs)
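For reference, building a request for this (now removed) sandbox endpoint boils down to base64-encoding the script, as CodeExecutionRequest validates. A hedged sketch, with a placeholder host and toy script:

import base64, json

script = "def main(x):\n    return x * 2\n"
payload = {
    "code_b64": base64.b64encode(script.encode("utf-8")).decode("utf-8"),
    "language": "python",
    "arguments": {"x": 21},
}
# This mirrors CodeExecutionRequest(...).model_dump(); sending it would be roughly
# requests.post("http://<SANDBOX_HOST>:9385/run", json=payload, timeout=10)
print(json.dumps(payload)[:80], "...")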

View File

@ -1,66 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
from duckduckgo_search import DDGS
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
class DuckDuckGoParam(ComponentParamBase):
"""
Define the DuckDuckGo component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 10
self.channel = "text"
def check(self):
self.check_positive_integer(self.top_n, "Top N")
self.check_valid_value(self.channel, "Web Search or News", ["text", "news"])
class DuckDuckGo(ComponentBase, ABC):
component_name = "DuckDuckGo"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return DuckDuckGo.be_output("")
try:
if self._param.channel == "text":
with DDGS() as ddgs:
# {'title': '', 'href': '', 'body': ''}
duck_res = [{"content": '<a href="' + i["href"] + '">' + i["title"] + '</a> ' + i["body"]} for i
in ddgs.text(ans, max_results=self._param.top_n)]
elif self._param.channel == "news":
with DDGS() as ddgs:
# {'date': '', 'title': '', 'body': '', 'url': '', 'image': '', 'source': ''}
duck_res = [{"content": '<a href="' + i["url"] + '">' + i["title"] + '</a> ' + i["body"]} for i
in ddgs.news(ans, max_results=self._param.top_n)]
except Exception as e:
return DuckDuckGo.be_output("**ERROR**: " + str(e))
if not duck_res:
return DuckDuckGo.be_output("")
df = pd.DataFrame(duck_res)
logging.debug(f"df: {df}")
return df

View File

@ -1,141 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import json
import smtplib
import logging
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.header import Header
from email.utils import formataddr
from agent.component.base import ComponentBase, ComponentParamBase
class EmailParam(ComponentParamBase):
"""
Define the Email component parameters.
"""
def __init__(self):
super().__init__()
# Fixed configuration parameters
self.smtp_server = "" # SMTP server address
self.smtp_port = 465 # SMTP port
self.email = "" # Sender email
self.password = "" # Email authorization code
self.sender_name = "" # Sender name
def check(self):
# Check required parameters
self.check_empty(self.smtp_server, "SMTP Server")
self.check_empty(self.email, "Email")
self.check_empty(self.password, "Password")
self.check_empty(self.sender_name, "Sender Name")
class Email(ComponentBase, ABC):
component_name = "Email"
def _run(self, history, **kwargs):
# Get upstream component output and parse JSON
ans = self.get_input()
content = "".join(ans["content"]) if "content" in ans else ""
if not content:
return Email.be_output("No content to send")
success = False
try:
# Parse JSON string passed from upstream
email_data = json.loads(content)
# Validate required fields
if "to_email" not in email_data:
return Email.be_output("Missing required field: to_email")
# Create email object
msg = MIMEMultipart('alternative')
# Properly handle sender name encoding
msg['From'] = formataddr((str(Header(self._param.sender_name,'utf-8')), self._param.email))
msg['To'] = email_data["to_email"]
if "cc_email" in email_data and email_data["cc_email"]:
msg['Cc'] = email_data["cc_email"]
msg['Subject'] = Header(email_data.get("subject", "No Subject"), 'utf-8').encode()
# Use content from email_data or default content
email_content = email_data.get("content", "No content provided")
# msg.attach(MIMEText(email_content, 'plain', 'utf-8'))
msg.attach(MIMEText(email_content, 'html', 'utf-8'))
# Connect to SMTP server and send
logging.info(f"Connecting to SMTP server {self._param.smtp_server}:{self._param.smtp_port}")
context = smtplib.ssl.create_default_context()
with smtplib.SMTP(self._param.smtp_server, self._param.smtp_port) as server:
server.ehlo()
server.starttls(context=context)
server.ehlo()
# Login
logging.info(f"Attempting to login with email: {self._param.email}")
server.login(self._param.email, self._param.password)
# Get all recipient list
recipients = [email_data["to_email"]]
if "cc_email" in email_data and email_data["cc_email"]:
recipients.extend(email_data["cc_email"].split(','))
# Send email
logging.info(f"Sending email to recipients: {recipients}")
try:
server.send_message(msg, self._param.email, recipients)
success = True
except Exception as e:
logging.error(f"Error during send_message: {str(e)}")
# Try alternative method
server.sendmail(self._param.email, recipients, msg.as_string())
success = True
try:
server.quit()
except Exception as e:
# Ignore errors when closing connection
logging.warning(f"Non-fatal error during connection close: {str(e)}")
if success:
return Email.be_output("Email sent successfully")
except json.JSONDecodeError:
error_msg = "Invalid JSON format in input"
logging.error(error_msg)
return Email.be_output(error_msg)
except smtplib.SMTPAuthenticationError:
error_msg = "SMTP Authentication failed. Please check your email and authorization code."
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")
except smtplib.SMTPConnectError:
error_msg = f"Failed to connect to SMTP server {self._param.smtp_server}:{self._param.smtp_port}"
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")
except smtplib.SMTPException as e:
error_msg = f"SMTP error occurred: {str(e)}"
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")
except Exception as e:
error_msg = f"Unexpected error: {str(e)}"
logging.error(error_msg)
return Email.be_output(f"Failed to send email: {error_msg}")

View File

@ -1,155 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import re
from copy import deepcopy
import pandas as pd
import pymysql
import psycopg2
from agent.component import GenerateParam, Generate
import pyodbc
import logging
class ExeSQLParam(GenerateParam):
"""
Define the ExeSQL component parameters.
"""
def __init__(self):
super().__init__()
self.db_type = "mysql"
self.database = ""
self.username = ""
self.host = ""
self.port = 3306
self.password = ""
self.loop = 3
self.top_n = 30
def check(self):
super().check()
self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgresql', 'mariadb', 'mssql'])
self.check_empty(self.database, "Database name")
self.check_empty(self.username, "database username")
self.check_empty(self.host, "IP Address")
self.check_positive_integer(self.port, "IP Port")
self.check_empty(self.password, "Database password")
self.check_positive_integer(self.top_n, "Number of records")
if self.database == "rag_flow":
if self.host == "ragflow-mysql":
raise ValueError("For the security reason, it dose not support database named rag_flow.")
if self.password == "infini_rag_flow":
raise ValueError("For the security reason, it dose not support database named rag_flow.")
class ExeSQL(Generate, ABC):
component_name = "ExeSQL"
def _refactor(self, ans):
ans = re.sub(r"^.*</think>", "", ans, flags=re.DOTALL)
match = re.search(r"```sql\s*(.*?)\s*```", ans, re.DOTALL)
if match:
ans = match.group(1) # Query content
return ans
else:
print("no markdown")
ans = re.sub(r'^.*?SELECT ', 'SELECT ', (ans), flags=re.IGNORECASE)
ans = re.sub(r';.*?SELECT ', '; SELECT ', ans, flags=re.IGNORECASE)
ans = re.sub(r';[^;]*$', r';', ans)
if not ans:
raise Exception("SQL statement not found!")
return ans
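The SQL-extraction regex in _refactor can be exercised on its own. A small sketch with an illustrative LLM answer (not taken from the PR):

import re

ans = "Sure, here you go:\n```sql\nSELECT id, name FROM users WHERE age > 30;\n```"
ans = re.sub(r"^.*</think>", "", ans, flags=re.DOTALL)      # strip any reasoning trace
match = re.search(r"```sql\s*(.*?)\s*```", ans, re.DOTALL)   # prefer a fenced ```sql block
sql = match.group(1) if match else ans
print(sql)  # SELECT id, name FROM users WHERE age > 30;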
def _run(self, history, **kwargs):
ans = self.get_input()
ans = "".join([str(a) for a in ans["content"]]) if "content" in ans else ""
ans = self._refactor(ans)
if self._param.db_type in ["mysql", "mariadb"]:
db = pymysql.connect(db=self._param.database, user=self._param.username, host=self._param.host,
port=self._param.port, password=self._param.password)
elif self._param.db_type == 'postgresql':
db = psycopg2.connect(dbname=self._param.database, user=self._param.username, host=self._param.host,
port=self._param.port, password=self._param.password)
elif self._param.db_type == 'mssql':
conn_str = (
r'DRIVER={ODBC Driver 17 for SQL Server};'
r'SERVER=' + self._param.host + ',' + str(self._param.port) + ';'
r'DATABASE=' + self._param.database + ';'
r'UID=' + self._param.username + ';'
r'PWD=' + self._param.password
)
db = pyodbc.connect(conn_str)
try:
cursor = db.cursor()
except Exception as e:
raise Exception("Database Connection Failed! \n" + str(e))
if not hasattr(self, "_loop"):
setattr(self, "_loop", 0)
self._loop += 1
input_list = re.split(r';', ans.replace(r"\n", " "))
sql_res = []
for i in range(len(input_list)):
single_sql = input_list[i]
single_sql = single_sql.replace('```','')
while self._loop <= self._param.loop:
self._loop += 1
if not single_sql:
break
try:
cursor.execute(single_sql)
if cursor.rowcount == 0:
sql_res.append({"content": "No record in the database!"})
break
if self._param.db_type == 'mssql':
single_res = pd.DataFrame.from_records(cursor.fetchmany(self._param.top_n),
columns=[desc[0] for desc in cursor.description])
else:
single_res = pd.DataFrame([i for i in cursor.fetchmany(self._param.top_n)])
single_res.columns = [i[0] for i in cursor.description]
sql_res.append({"content": single_res.to_markdown(index=False, floatfmt=".6f")})
break
except Exception as e:
single_sql = self._regenerate_sql(single_sql, str(e), **kwargs)
single_sql = self._refactor(single_sql)
if self._loop > self._param.loop:
sql_res.append({"content": "Can't query the correct data via SQL statement."})
db.close()
if not sql_res:
return ExeSQL.be_output("")
return pd.DataFrame(sql_res)
def _regenerate_sql(self, failed_sql, error_message, **kwargs):
prompt = f'''
## You are the Repair SQL Statement Helper, please modify the original SQL statement based on the SQL query error report.
## The original SQL statement is as follows:{failed_sql}.
## The contents of the SQL query error report are as follows: {error_message}.
## Answer only the modified SQL statement. Please do not give any explanation, just answer the code.
'''
self._param.prompt = prompt
kwargs_ = deepcopy(kwargs)
kwargs_["stream"] = False
response = Generate._run(self, [], **kwargs_)
try:
regenerated_sql = response.loc[0, "content"]
return regenerated_sql
except Exception as e:
logging.error(f"Failed to regenerate SQL: {e}")
return None
def debug(self, **kwargs):
return self._run([], **kwargs)

View File

@ -13,24 +13,28 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ---- before ----
from abc import ABC

from agent.component.base import ComponentBase, ComponentParamBase


class ConcentratorParam(ComponentParamBase):
    """
    Define the Concentrator component parameters.
    """
    def __init__(self):
        super().__init__()

    def check(self):
        return True


class Concentrator(ComponentBase, ABC):
    component_name = "Concentrator"

    def _run(self, history, **kwargs):
        return Concentrator.be_output("")

# ---- after ----
from agent.component.base import ComponentBase, ComponentParamBase


class UserFillUpParam(ComponentParamBase):

    def __init__(self):
        super().__init__()
        self.enable_tips = True
        self.tips = "Please fill up the form"

    def check(self) -> bool:
        return True


class UserFillUp(ComponentBase):
    component_name = "UserFillUp"

    def _invoke(self, **kwargs):
        for k, v in kwargs.get("inputs", {}).items():
            self.set_output(k, v)

    def thoughts(self) -> str:
        return "Waiting for your input..."

View File

@ -1,276 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import re
from functools import partial
from typing import Any
import pandas as pd
from api.db import LLMType
from api.db.services.conversation_service import structure_answer
from api.db.services.llm_service import LLMBundle
from api import settings
from agent.component.base import ComponentBase, ComponentParamBase
from plugin import GlobalPluginManager
from plugin.llm_tool_plugin import llm_tool_metadata_to_openai_tool
from rag.llm.chat_model import ToolCallSession
from rag.prompts import message_fit_in
class LLMToolPluginCallSession(ToolCallSession):
def tool_call(self, name: str, arguments: dict[str, Any]) -> str:
tool = GlobalPluginManager.get_llm_tool_by_name(name)
if tool is None:
raise ValueError(f"LLM tool {name} does not exist")
return tool().invoke(**arguments)
class GenerateParam(ComponentParamBase):
"""
Define the Generate component parameters.
"""
def __init__(self):
super().__init__()
self.llm_id = ""
self.prompt = ""
self.max_tokens = 0
self.temperature = 0
self.top_p = 0
self.presence_penalty = 0
self.frequency_penalty = 0
self.cite = True
self.parameters = []
self.llm_enabled_tools = []
def check(self):
self.check_decimal_float(self.temperature, "[Generate] Temperature")
self.check_decimal_float(self.presence_penalty, "[Generate] Presence penalty")
self.check_decimal_float(self.frequency_penalty, "[Generate] Frequency penalty")
self.check_nonnegative_number(self.max_tokens, "[Generate] Max tokens")
self.check_decimal_float(self.top_p, "[Generate] Top P")
self.check_empty(self.llm_id, "[Generate] LLM")
# self.check_defined_type(self.parameters, "Parameters", ["list"])
def gen_conf(self):
conf = {}
if self.max_tokens > 0:
conf["max_tokens"] = self.max_tokens
if self.temperature > 0:
conf["temperature"] = self.temperature
if self.top_p > 0:
conf["top_p"] = self.top_p
if self.presence_penalty > 0:
conf["presence_penalty"] = self.presence_penalty
if self.frequency_penalty > 0:
conf["frequency_penalty"] = self.frequency_penalty
return conf
class Generate(ComponentBase):
component_name = "Generate"
def get_dependent_components(self):
inputs = self.get_input_elements()
cpnts = set([i["key"] for i in inputs[1:] if i["key"].lower().find("answer") < 0 and i["key"].lower().find("begin") < 0])
return list(cpnts)
def set_cite(self, retrieval_res, answer):
if "empty_response" in retrieval_res.columns:
retrieval_res["empty_response"].fillna("", inplace=True)
chunks = json.loads(retrieval_res["chunks"][0])
answer, idx = settings.retrievaler.insert_citations(answer,
[ck["content_ltks"] for ck in chunks],
[ck["vector"] for ck in chunks],
LLMBundle(self._canvas.get_tenant_id(), LLMType.EMBEDDING,
self._canvas.get_embedding_model()), tkweight=0.7,
vtweight=0.3)
doc_ids = set([])
recall_docs = []
for i in idx:
did = chunks[int(i)]["doc_id"]
if did in doc_ids:
continue
doc_ids.add(did)
recall_docs.append({"doc_id": did, "doc_name": chunks[int(i)]["docnm_kwd"]})
for c in chunks:
del c["vector"]
del c["content_ltks"]
reference = {
"chunks": chunks,
"doc_aggs": recall_docs
}
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
answer += " Please set LLM API-Key in 'User Setting -> Model providers -> API-Key'"
res = {"content": answer, "reference": reference}
res = structure_answer(None, res, "", "")
return res
def get_input_elements(self):
key_set = set([])
res = [{"key": "user", "name": "Input your question here:"}]
for r in re.finditer(r"\{([a-z]+[:@][a-z0-9_-]+)\}", self._param.prompt, flags=re.IGNORECASE):
cpn_id = r.group(1)
if cpn_id in key_set:
continue
if cpn_id.lower().find("begin@") == 0:
cpn_id, key = cpn_id.split("@")
for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
if p["key"] != key:
continue
res.append({"key": r.group(1), "name": p["name"]})
key_set.add(r.group(1))
continue
cpn_nm = self._canvas.get_component_name(cpn_id)
if not cpn_nm:
continue
res.append({"key": cpn_id, "name": cpn_nm})
key_set.add(cpn_id)
return res
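The placeholder scan above keys off references shaped like {component:id} or {begin@param}. A standalone sketch of the same regex against a toy prompt (no canvas involved; the component ids are made up):

import re

prompt = "Context: {Retrieval:one}\nUser name: {begin@name}\nQuestion: {Answer:two}"
for r in re.finditer(r"\{([a-z]+[:@][a-z0-9_-]+)\}", prompt, flags=re.IGNORECASE):
    print(r.group(1))
# Retrieval:one
# begin@name
# Answer:two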
def _run(self, history, **kwargs):
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
if len(self._param.llm_enabled_tools) > 0:
tools = GlobalPluginManager.get_llm_tools_by_names(self._param.llm_enabled_tools)
chat_mdl.bind_tools(
LLMToolPluginCallSession(),
[llm_tool_metadata_to_openai_tool(t.get_metadata()) for t in tools]
)
prompt = self._param.prompt
retrieval_res = []
self._param.inputs = []
for para in self.get_input_elements()[1:]:
if para["key"].lower().find("begin@") == 0:
cpn_id, key = para["key"].split("@")
for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
if p["key"] == key:
kwargs[para["key"]] = p.get("value", "")
self._param.inputs.append(
{"component_id": para["key"], "content": kwargs[para["key"]]})
break
else:
assert False, f"Can't find parameter '{key}' for {cpn_id}"
continue
component_id = para["key"]
cpn = self._canvas.get_component(component_id)["obj"]
if cpn.component_name.lower() == "answer":
hist = self._canvas.get_history(1)
if hist:
hist = hist[0]["content"]
else:
hist = ""
kwargs[para["key"]] = hist
continue
_, out = cpn.output(allow_partial=False)
if "content" not in out.columns:
kwargs[para["key"]] = ""
else:
if cpn.component_name.lower() == "retrieval":
retrieval_res.append(out)
kwargs[para["key"]] = " - " + "\n - ".join([o if isinstance(o, str) else str(o) for o in out["content"]])
self._param.inputs.append({"component_id": para["key"], "content": kwargs[para["key"]]})
if retrieval_res:
retrieval_res = pd.concat(retrieval_res, ignore_index=True)
else:
retrieval_res = pd.DataFrame([])
for n, v in kwargs.items():
prompt = re.sub(r"\{%s\}" % re.escape(n), str(v).replace("\\", " "), prompt)
if not self._param.inputs and prompt.find("{input}") >= 0:
retrieval_res = self.get_input()
input = (" - " + "\n - ".join(
[c for c in retrieval_res["content"] if isinstance(c, str)])) if "content" in retrieval_res else ""
prompt = re.sub(r"\{input\}", re.escape(input), prompt)
downstreams = self._canvas.get_component(self._id)["downstream"]
if kwargs.get("stream") and len(downstreams) == 1 and self._canvas.get_component(downstreams[0])[
"obj"].component_name.lower() == "answer":
return partial(self.stream_output, chat_mdl, prompt, retrieval_res)
if "empty_response" in retrieval_res.columns and not "".join(retrieval_res["content"]):
empty_res = "\n- ".join([str(t) for t in retrieval_res["empty_response"] if str(t)])
res = {"content": empty_res if empty_res else "Nothing found in knowledgebase!", "reference": []}
return pd.DataFrame([res])
msg = self._canvas.get_history(self._param.message_history_window_size)
if len(msg) < 1:
msg.append({"role": "user", "content": "Output: "})
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(chat_mdl.max_length * 0.97))
if len(msg) < 2:
msg.append({"role": "user", "content": "Output: "})
ans = chat_mdl.chat(msg[0]["content"], msg[1:], self._param.gen_conf())
ans = re.sub(r"^.*</think>", "", ans, flags=re.DOTALL)
self._canvas.set_component_infor(self._id, {"prompt":msg[0]["content"],"messages": msg[1:],"conf": self._param.gen_conf()})
if self._param.cite and "chunks" in retrieval_res.columns:
res = self.set_cite(retrieval_res, ans)
return pd.DataFrame([res])
return Generate.be_output(ans)
def stream_output(self, chat_mdl, prompt, retrieval_res):
res = None
if "empty_response" in retrieval_res.columns and not "".join(retrieval_res["content"]):
empty_res = "\n- ".join([str(t) for t in retrieval_res["empty_response"] if str(t)])
res = {"content": empty_res if empty_res else "Nothing found in knowledgebase!", "reference": []}
yield res
self.set_output(res)
return
msg = self._canvas.get_history(self._param.message_history_window_size)
if msg and msg[0]['role'] == 'assistant':
msg.pop(0)
if len(msg) < 1:
msg.append({"role": "user", "content": "Output: "})
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(chat_mdl.max_length * 0.97))
if len(msg) < 2:
msg.append({"role": "user", "content": "Output: "})
answer = ""
for ans in chat_mdl.chat_streamly(msg[0]["content"], msg[1:], self._param.gen_conf()):
res = {"content": ans, "reference": []}
answer = ans
yield res
if self._param.cite and "chunks" in retrieval_res.columns:
res = self.set_cite(retrieval_res, answer)
yield res
self._canvas.set_component_infor(self._id, {"prompt":msg[0]["content"],"messages": msg[1:],"conf": self._param.gen_conf()})
self.set_output(Generate.be_output(res))
def debug(self, **kwargs):
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
prompt = self._param.prompt
for para in self._param.debug_inputs:
kwargs[para["key"]] = para.get("value", "")
for n, v in kwargs.items():
prompt = re.sub(r"\{%s\}" % re.escape(n), str(v).replace("\\", " "), prompt)
u = kwargs.get("user")
ans = chat_mdl.chat(prompt, [{"role": "user", "content": u if u else "Output: "}], self._param.gen_conf())
return pd.DataFrame([ans])

View File

@ -1,61 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import pandas as pd
import requests
from agent.component.base import ComponentBase, ComponentParamBase
class GitHubParam(ComponentParamBase):
"""
Define the GitHub component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 10
def check(self):
self.check_positive_integer(self.top_n, "Top N")
class GitHub(ComponentBase, ABC):
component_name = "GitHub"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return GitHub.be_output("")
try:
url = 'https://api.github.com/search/repositories?q=' + ans + '&sort=stars&order=desc&per_page=' + str(
self._param.top_n)
headers = {"Content-Type": "application/vnd.github+json", "X-GitHub-Api-Version": '2022-11-28'}
response = requests.get(url=url, headers=headers).json()
github_res = [{"content": '<a href="' + i["html_url"] + '">' + i["name"] + '</a>' + str(
i["description"]) + '\n stars:' + str(i['watchers'])} for i in response['items']]
except Exception as e:
return GitHub.be_output("**ERROR**: " + str(e))
if not github_res:
return GitHub.be_output("")
df = pd.DataFrame(github_res)
logging.debug(f"df: {df}")
return df

View File

@ -1,70 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
from scholarly import scholarly
class GoogleScholarParam(ComponentParamBase):
"""
Define the GoogleScholar component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 6
self.sort_by = 'relevance'
self.year_low = None
self.year_high = None
self.patents = True
def check(self):
self.check_positive_integer(self.top_n, "Top N")
self.check_valid_value(self.sort_by, "GoogleScholar Sort_by", ['date', 'relevance'])
self.check_boolean(self.patents, "Whether or not to include patents, defaults to True")
class GoogleScholar(ComponentBase, ABC):
component_name = "GoogleScholar"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return GoogleScholar.be_output("")
scholar_client = scholarly.search_pubs(ans, patents=self._param.patents, year_low=self._param.year_low,
year_high=self._param.year_high, sort_by=self._param.sort_by)
scholar_res = []
for i in range(self._param.top_n):
try:
pub = next(scholar_client)
scholar_res.append({"content": 'Title: ' + pub['bib']['title'] + '\n_Url: <a href="' + pub[
'pub_url'] + '"></a> ' + "\n author: " + ",".join(pub['bib']['author']) + '\n Abstract: ' + pub[
'bib'].get('abstract', 'no abstract')})
except (StopIteration, Exception):
logging.exception("GoogleScholar")
break
if not scholar_res:
return GoogleScholar.be_output("")
df = pd.DataFrame(scholar_res)
logging.debug(f"df: {df}")
return df

View File

@ -14,9 +14,14 @@
# limitations under the License.
#
import json
import logging
import os
import re
import time
from abc import ABC

import requests

from api.utils.api_utils import timeout
from deepdoc.parser import HtmlParser
from agent.component.base import ComponentBase, ComponentParamBase
@ -48,28 +53,14 @@ class InvokeParam(ComponentParamBase):
class Invoke(ComponentBase, ABC):
    component_name = "Invoke"

# ---- before ----
    def _run(self, history, **kwargs):
        args = {}
        for para in self._param.variables:
            if para.get("component_id"):
                if '@' in para["component_id"]:
                    component = para["component_id"].split('@')[0]
                    field = para["component_id"].split('@')[1]
                    cpn = self._canvas.get_component(component)["obj"]
                    for param in cpn._param.query:
                        if param["key"] == field:
                            if "value" in param:
                                args[para["key"]] = param["value"]
                else:
                    cpn = self._canvas.get_component(para["component_id"])["obj"]
                    if cpn.component_name.lower() == "answer":
                        args[para["key"]] = self._canvas.get_history(1)[0]["content"]
                        continue
                    _, out = cpn.output(allow_partial=False)
                    if not out.empty:
                        args[para["key"]] = "\n".join(out["content"])
            else:
                args[para["key"]] = para["value"]

        url = self._param.url.strip()
        if url.find("http") != 0:

# ---- after ----
    @timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 3))
    def _invoke(self, **kwargs):
        args = {}
        for para in self._param.variables:
            if para.get("value"):
                args[para["key"]] = para["value"]
            else:
                args[para["key"]] = self._canvas.get_variable_value(para["ref"])

        url = self._param.url.strip()
        if url.find("http") != 0:
@ -83,50 +74,69 @@ class Invoke(ComponentBase, ABC):
if re.sub(r"https?:?/?/?", "", self._param.proxy): if re.sub(r"https?:?/?/?", "", self._param.proxy):
proxies = {"http": self._param.proxy, "https": self._param.proxy} proxies = {"http": self._param.proxy, "https": self._param.proxy}
if method == 'get': last_e = ""
response = requests.get(url=url, for _ in range(self._param.max_retries+1):
params=args, try:
headers=headers, if method == 'get':
proxies=proxies, response = requests.get(url=url,
timeout=self._param.timeout) params=args,
if self._param.clean_html: headers=headers,
sections = HtmlParser()(None, response.content) proxies=proxies,
return Invoke.be_output("\n".join(sections)) timeout=self._param.timeout)
if self._param.clean_html:
sections = HtmlParser()(None, response.content)
self.set_output("result", "\n".join(sections))
else:
self.set_output("result", response.text)
return Invoke.be_output(response.text) if method == 'put':
if self._param.datatype.lower() == 'json':
response = requests.put(url=url,
json=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
else:
response = requests.put(url=url,
data=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html:
sections = HtmlParser()(None, response.content)
self.set_output("result", "\n".join(sections))
else:
self.set_output("result", response.text)
if method == 'put': if method == 'post':
if self._param.datatype.lower() == 'json': if self._param.datatype.lower() == 'json':
response = requests.put(url=url, response = requests.post(url=url,
json=args, json=args,
headers=headers, headers=headers,
proxies=proxies, proxies=proxies,
timeout=self._param.timeout) timeout=self._param.timeout)
else: else:
response = requests.put(url=url, response = requests.post(url=url,
data=args, data=args,
headers=headers, headers=headers,
proxies=proxies, proxies=proxies,
timeout=self._param.timeout) timeout=self._param.timeout)
if self._param.clean_html: if self._param.clean_html:
sections = HtmlParser()(None, response.content) self.set_output("result", "\n".join(sections))
return Invoke.be_output("\n".join(sections)) else:
return Invoke.be_output(response.text) self.set_output("result", response.text)
if method == 'post': return self.output("result")
if self._param.datatype.lower() == 'json': except Exception as e:
response = requests.post(url=url, last_e = e
json=args, logging.exception(f"Http request error: {e}")
headers=headers, time.sleep(self._param.delay_after_error)
proxies=proxies,
timeout=self._param.timeout) if last_e:
else: self.set_output("_ERROR", str(last_e))
response = requests.post(url=url, return f"Http request error: {last_e}"
data=args,
headers=headers, assert False, self.output()
proxies=proxies,
timeout=self._param.timeout) def thoughts(self) -> str:
if self._param.clean_html: return "Waiting for the server respond..."
sections = HtmlParser()(None, response.content)
return Invoke.be_output("\n".join(sections))
return Invoke.be_output(response.text)
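The retry-with-delay shape introduced here is worth seeing in isolation. A minimal sketch (not from the PR) with a hypothetical fetch_with_retries helper and a fixed delay standing in for delay_after_error:

import logging
import time

import requests

def fetch_with_retries(url: str, max_retries: int = 2, delay_after_error: float = 1.0) -> str:
    last_e = ""
    for _ in range(max_retries + 1):
        try:
            resp = requests.get(url, timeout=5)
            resp.raise_for_status()
            return resp.text
        except Exception as e:
            last_e = e
            logging.exception(f"Http request error: {e}")
            time.sleep(delay_after_error)
    # All attempts failed; surface the last error instead of raising.
    return f"Http request error: {last_e}"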

View File

@ -24,10 +24,18 @@ class IterationParam(ComponentParamBase):
# ---- before ----
    def __init__(self):
        super().__init__()
        self.delimiter = ","

    def check(self):
        self.check_empty(self.delimiter, "Delimiter")


class Iteration(ComponentBase, ABC):
# ---- after ----
    def __init__(self):
        super().__init__()
        self.items_ref = ""

    def get_input_form(self) -> dict[str, dict]:
        return {
            "items": {
                "type": "json",
                "name": "Items"
            }
        }

    def check(self):
        return True


class Iteration(ComponentBase, ABC):
@ -38,8 +46,15 @@ class Iteration(ComponentBase, ABC):
            if self._canvas.get_component(cid)["obj"].component_name.lower() != "iterationitem":
                continue
            if self._canvas.get_component(cid)["parent_id"] == self._id:
# ---- before ----
                return self._canvas.get_component(cid)

    def _run(self, history, **kwargs):
        return self.output(allow_partial=False)[1]
# ---- after ----
                return cid

    def _invoke(self, **kwargs):
        arr = self._canvas.get_variable_value(self._param.items_ref)
        if not isinstance(arr, list):
            self.set_output("_ERROR", self._param.items_ref + " must be an array, but its type is " + str(type(arr)))

    def thoughts(self) -> str:
        return "Need to process {} items.".format(len(self._canvas.get_variable_value(self._param.items_ref)))

View File

@ -14,7 +14,6 @@
# limitations under the License.
#
from abc import ABC
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
@ -33,21 +32,52 @@ class IterationItem(ComponentBase, ABC):
        super().__init__(canvas, id, param)
        self._idx = 0

# ---- before ----
    def _run(self, history, **kwargs):
        parent = self.get_parent()
        ans = parent.get_input()
        ans = parent._param.delimiter.join(ans["content"]) if "content" in ans else ""
        ans = [a.strip() for a in ans.split(parent._param.delimiter)]
        if not ans:
            self._idx = -1
            return pd.DataFrame()

        df = pd.DataFrame([{"content": ans[self._idx]}])
        self._idx += 1
        if self._idx >= len(ans):
            self._idx = -1
        return df

    def end(self):
        return self._idx == -1

# ---- after ----
    def _invoke(self, **kwargs):
        parent = self.get_parent()
        arr = self._canvas.get_variable_value(parent._param.items_ref)
        if not isinstance(arr, list):
            self._idx = -1
            raise Exception(parent._param.items_ref + " must be an array, but its type is " + str(type(arr)))

        if self._idx > 0:
            self.output_collation()

        if self._idx >= len(arr):
            self._idx = -1
            return

        self.set_output("item", arr[self._idx])
        self.set_output("index", self._idx)
        self._idx += 1

    def output_collation(self):
        pid = self.get_parent()._id
        for cid in self._canvas.components.keys():
            obj = self._canvas.get_component_obj(cid)
            p = obj.get_parent()
            if not p:
                continue
            if p._id != pid:
                continue
            if p.component_name.lower() in ["categorize", "message", "switch", "userfillup", "interationitem"]:
                continue
            for k, o in p._param.outputs.items():
                if "ref" not in o:
                    continue
                _cid, var = o["ref"].split("@")
                if _cid != cid:
                    continue
                res = p.output(k)
                if not res:
                    res = []
                res.append(obj.output(var))
                p.set_output(k, res)

    def end(self):
        return self._idx == -1

    def thoughts(self) -> str:
        return "Next turn..."

View File

@ -1,72 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import re
from abc import ABC
from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from agent.component import GenerateParam, Generate
class KeywordExtractParam(GenerateParam):
"""
Define the KeywordExtract component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 1
def check(self):
super().check()
self.check_positive_integer(self.top_n, "Top N")
def get_prompt(self):
self.prompt = """
- Role: You're a question analyzer.
- Requirements:
- Summarize user's question, and give top %s important keyword/phrase.
- Use comma as a delimiter to separate keywords/phrases.
- Answer format: (in language of user's question)
- keyword:
""" % self.top_n
return self.prompt
class KeywordExtract(Generate, ABC):
component_name = "KeywordExtract"
def _run(self, history, **kwargs):
query = self.get_input()
if hasattr(query, "to_dict") and "content" in query:
query = ", ".join(map(str, query["content"].dropna()))
else:
query = str(query)
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
self._canvas.set_component_infor(self._id, {"prompt":self._param.get_prompt(),"messages": [{"role": "user", "content": query}],"conf": self._param.gen_conf()})
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": query}],
self._param.gen_conf())
ans = re.sub(r"^.*</think>", "", ans, flags=re.DOTALL)
ans = re.sub(r".*keyword:", "", ans).strip()
logging.debug(f"ans: {ans}")
return KeywordExtract.be_output(ans)
def debug(self, **kwargs):
return self._run([], **kwargs)

270
agent/component/llm.py Normal file
View File

@ -0,0 +1,270 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import logging
import os
import re
from typing import Any, Generator
import json_repair
from copy import deepcopy
from functools import partial
from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from api.db.services.tenant_llm_service import TenantLLMService
from agent.component.base import ComponentBase, ComponentParamBase
from api.utils.api_utils import timeout
from rag.prompts import message_fit_in, citation_prompt
from rag.prompts.prompts import tool_call_summary
class LLMParam(ComponentParamBase):
"""
Define the LLM component parameters.
"""
def __init__(self):
super().__init__()
self.llm_id = ""
self.sys_prompt = ""
self.prompts = [{"role": "user", "content": "{sys.query}"}]
self.max_tokens = 0
self.temperature = 0
self.top_p = 0
self.presence_penalty = 0
self.frequency_penalty = 0
self.output_structure = None
self.cite = True
self.visual_files_var = None
def check(self):
self.check_decimal_float(float(self.temperature), "[Agent] Temperature")
self.check_decimal_float(float(self.presence_penalty), "[Agent] Presence penalty")
self.check_decimal_float(float(self.frequency_penalty), "[Agent] Frequency penalty")
self.check_nonnegative_number(int(self.max_tokens), "[Agent] Max tokens")
self.check_decimal_float(float(self.top_p), "[Agent] Top P")
self.check_empty(self.llm_id, "[Agent] LLM")
self.check_empty(self.sys_prompt, "[Agent] System prompt")
self.check_empty(self.prompts, "[Agent] User prompt")
def gen_conf(self):
conf = {}
def get_attr(nm):
try:
return getattr(self, nm)
except Exception:
pass
if int(self.max_tokens) > 0 and get_attr("maxTokensEnabled"):
conf["max_tokens"] = int(self.max_tokens)
if float(self.temperature) > 0 and get_attr("temperatureEnabled"):
conf["temperature"] = float(self.temperature)
if float(self.top_p) > 0 and get_attr("topPEnabled"):
conf["top_p"] = float(self.top_p)
if float(self.presence_penalty) > 0 and get_attr("presencePenaltyEnabled"):
conf["presence_penalty"] = float(self.presence_penalty)
if float(self.frequency_penalty) > 0 and get_attr("frequencyPenaltyEnabled"):
conf["frequency_penalty"] = float(self.frequency_penalty)
return conf
class LLM(ComponentBase):
component_name = "LLM"
def __init__(self, canvas, id, param: ComponentParamBase):
super().__init__(canvas, id, param)
self.chat_mdl = LLMBundle(self._canvas.get_tenant_id(), TenantLLMService.llm_id2llm_type(self._param.llm_id),
self._param.llm_id, max_retries=self._param.max_retries,
retry_interval=self._param.delay_after_error
)
self.imgs = []
def get_input_form(self) -> dict[str, dict]:
res = {}
for k, v in self.get_input_elements().items():
res[k] = {
"type": "line",
"name": v["name"]
}
return res
def get_input_elements(self) -> dict[str, Any]:
res = self.get_input_elements_from_text(self._param.sys_prompt)
for prompt in self._param.prompts:
d = self.get_input_elements_from_text(prompt["content"])
res.update(d)
return res
def set_debug_inputs(self, inputs: dict[str, dict]):
self._param.debug_inputs = inputs
def add2system_prompt(self, txt):
self._param.sys_prompt += txt
def _prepare_prompt_variables(self):
if self._param.visual_files_var:
self.imgs = self._canvas.get_variable_value(self._param.visual_files_var)
if not self.imgs:
self.imgs = []
self.imgs = [img for img in self.imgs if img[:len("data:image/")] == "data:image/"]
if self.imgs and TenantLLMService.llm_id2llm_type(self._param.llm_id) == LLMType.CHAT.value:
self.chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.IMAGE2TEXT.value,
self._param.llm_id, max_retries=self._param.max_retries,
retry_interval=self._param.delay_after_error
)
args = {}
vars = self.get_input_elements() if not self._param.debug_inputs else self._param.debug_inputs
prompt = self._param.sys_prompt
for k, o in vars.items():
args[k] = o["value"]
if not isinstance(args[k], str):
try:
args[k] = json.dumps(args[k], ensure_ascii=False)
except Exception:
args[k] = str(args[k])
self.set_input_value(k, args[k])
msg = self._canvas.get_history(self._param.message_history_window_size)[:-1]
msg.extend(deepcopy(self._param.prompts))
prompt = self.string_format(prompt, args)
for m in msg:
m["content"] = self.string_format(m["content"], args)
if self._param.cite and self._canvas.get_reference()["chunks"]:
prompt += citation_prompt()
return prompt, msg
def _generate(self, msg:list[dict], **kwargs) -> str:
if not self.imgs:
return self.chat_mdl.chat(msg[0]["content"], msg[1:], self._param.gen_conf(), **kwargs)
return self.chat_mdl.chat(msg[0]["content"], msg[1:], self._param.gen_conf(), images=self.imgs, **kwargs)
def _generate_streamly(self, msg:list[dict], **kwargs) -> Generator[str, None, None]:
ans = ""
last_idx = 0
endswith_think = False
def delta(txt):
nonlocal ans, last_idx, endswith_think
delta_ans = txt[last_idx:]
ans = txt
if delta_ans.find("<think>") == 0:
last_idx += len("<think>")
return "<think>"
elif delta_ans.find("<think>") > 0:
delta_ans = txt[last_idx:last_idx+delta_ans.find("<think>")]
last_idx += delta_ans.find("<think>")
return delta_ans
elif delta_ans.endswith("</think>"):
endswith_think = True
elif endswith_think:
endswith_think = False
return "</think>"
last_idx = len(ans)
if ans.endswith("</think>"):
last_idx -= len("</think>")
return re.sub(r"(<think>|</think>)", "", delta_ans)
if not self.imgs:
for txt in self.chat_mdl.chat_streamly(msg[0]["content"], msg[1:], self._param.gen_conf(), **kwargs):
yield delta(txt)
else:
for txt in self.chat_mdl.chat_streamly(msg[0]["content"], msg[1:], self._param.gen_conf(), images=self.imgs, **kwargs):
yield delta(txt)
@timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))
def _invoke(self, **kwargs):
def clean_formated_answer(ans: str) -> str:
ans = re.sub(r"^.*</think>", "", ans, flags=re.DOTALL)
ans = re.sub(r"^.*```json", "", ans, flags=re.DOTALL)
return re.sub(r"```\n*$", "", ans, flags=re.DOTALL)
prompt, msg = self._prepare_prompt_variables()
error = ""
if self._param.output_structure:
prompt += "\nThe output MUST follow this JSON format:\n"+json.dumps(self._param.output_structure, ensure_ascii=False, indent=2)
prompt += "\nRedundant information is FORBIDDEN."
for _ in range(self._param.max_retries+1):
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
error = ""
ans = self._generate(msg)
msg.pop(0)
if ans.find("**ERROR**") >= 0:
logging.error(f"LLM response error: {ans}")
error = ans
continue
try:
self.set_output("structured_content", json_repair.loads(clean_formated_answer(ans)))
return
except Exception:
msg.append({"role": "user", "content": "The answer can't not be parsed as JSON"})
error = "The answer can't not be parsed as JSON"
if error:
self.set_output("_ERROR", error)
return
downstreams = self._canvas.get_component(self._id)["downstream"] if self._canvas.get_component(self._id) else []
ex = self.exception_handler()
if any([self._canvas.get_component_obj(cid).component_name.lower()=="message" for cid in downstreams]) and not self._param.output_structure and not (ex and ex["goto"]):
self.set_output("content", partial(self._stream_output, prompt, msg))
return
for _ in range(self._param.max_retries+1):
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
error = ""
ans = self._generate(msg)
msg.pop(0)
if ans.find("**ERROR**") >= 0:
logging.error(f"LLM response error: {ans}")
error = ans
continue
self.set_output("content", ans)
break
if error:
if self.get_exception_default_value():
self.set_output("content", self.get_exception_default_value())
else:
self.set_output("_ERROR", error)
def _stream_output(self, prompt, msg):
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
answer = ""
for ans in self._generate_streamly(msg):
if ans.find("**ERROR**") >= 0:
if self.get_exception_default_value():
self.set_output("content", self.get_exception_default_value())
yield self.get_exception_default_value()
else:
self.set_output("_ERROR", ans)
return
yield ans
answer += ans
self.set_output("content", answer)
def add_memory(self, user:str, assist:str, func_name: str, params: dict, results: str):
summ = tool_call_summary(self.chat_mdl, func_name, params, results)
logging.info(f"[MEMORY]: {summ}")
self._canvas.add_memory(user, assist, summ)
def thoughts(self) -> str:
_, msg = self._prepare_prompt_variables()
return "⌛Give me a moment—starting from: \n\n" + re.sub(r"(User's query:|[\\]+)", '', msg[-1]['content'], flags=re.DOTALL) + "\n\nIll figure out our best next move."

View File

@ -13,43 +13,138 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import json
import os
import random import random
from abc import ABC import re
from functools import partial from functools import partial
from typing import Any
from agent.component.base import ComponentBase, ComponentParamBase from agent.component.base import ComponentBase, ComponentParamBase
from jinja2 import Template as Jinja2Template
from api.utils.api_utils import timeout
class MessageParam(ComponentParamBase): class MessageParam(ComponentParamBase):
""" """
Define the Message component parameters. Define the Message component parameters.
""" """
def __init__(self): def __init__(self):
super().__init__() super().__init__()
self.messages = [] self.content = []
self.stream = True
self.outputs = {
"content": {
"type": "str"
}
}
def check(self): def check(self):
self.check_empty(self.messages, "[Message]") self.check_empty(self.content, "[Message] Content")
self.check_boolean(self.stream, "[Message] stream")
return True return True
class Message(ComponentBase, ABC): class Message(ComponentBase):
component_name = "Message" component_name = "Message"
def _run(self, history, **kwargs): def get_kwargs(self, script:str, kwargs:dict = {}, delimeter:str=None) -> tuple[str, dict[str, str | list | Any]]:
if kwargs.get("stream"): for k,v in self.get_input_elements_from_text(script).items():
return partial(self.stream_output) if k in kwargs:
continue
v = v["value"]
if not v:
v = ""
ans = ""
if isinstance(v, partial):
for t in v():
ans += t
elif isinstance(v, list) and delimeter:
ans = delimeter.join([str(vv) for vv in v])
elif not isinstance(v, str):
try:
ans = json.dumps(v, ensure_ascii=False)
except Exception:
pass
else:
ans = v
if not ans:
ans = ""
kwargs[k] = ans
self.set_input_value(k, ans)
res = Message.be_output(random.choice(self._param.messages)) _kwargs = {}
self.set_output(res) for n, v in kwargs.items():
return res _n = re.sub("[@:.]", "_", n)
script = re.sub(r"\{%s\}" % re.escape(n), _n, script)
_kwargs[_n] = v
return script, _kwargs
def stream_output(self): def _stream(self, rand_cnt:str):
res = None s = 0
if self._param.messages: all_content = ""
res = {"content": random.choice(self._param.messages)} cache = {}
yield res for r in re.finditer(self.variable_ref_patt, rand_cnt, flags=re.DOTALL):
all_content += rand_cnt[s: r.start()]
yield rand_cnt[s: r.start()]
s = r.end()
exp = r.group(1)
if exp in cache:
yield cache[exp]
all_content += cache[exp]
continue
self.set_output(res) v = self._canvas.get_variable_value(exp)
if not v:
v = ""
if isinstance(v, partial):
cnt = ""
for t in v():
all_content += t
cnt += t
yield t
continue
elif not isinstance(v, str):
try:
v = json.dumps(v, ensure_ascii=False, indent=2)
except Exception:
v = str(v)
yield v
all_content += v
cache[exp] = v
if s < len(rand_cnt):
all_content += rand_cnt[s: ]
yield rand_cnt[s: ]
self.set_output("content", all_content)
def _is_jinjia2(self, content:str) -> bool:
patt = [
r"\{%.*%\}", "{{", "}}"
]
return any([re.search(p, content) for p in patt])
@timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))
def _invoke(self, **kwargs):
rand_cnt = random.choice(self._param.content)
if self._param.stream and not self._is_jinjia2(rand_cnt):
self.set_output("content", partial(self._stream, rand_cnt))
return
rand_cnt, kwargs = self.get_kwargs(rand_cnt, kwargs)
template = Jinja2Template(rand_cnt)
content = rand_cnt
try:
content = template.render(kwargs)
except Exception:
pass
for n, v in kwargs.items():
content = re.sub(n, v, content)
self.set_output("content", content)
def thoughts(self) -> str:
return ""

View File

@ -1,69 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
from Bio import Entrez
import re
import pandas as pd
import xml.etree.ElementTree as ET
from agent.component.base import ComponentBase, ComponentParamBase
class PubMedParam(ComponentParamBase):
"""
Define the PubMed component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 5
self.email = "A.N.Other@example.com"
def check(self):
self.check_positive_integer(self.top_n, "Top N")
class PubMed(ComponentBase, ABC):
component_name = "PubMed"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return PubMed.be_output("")
try:
Entrez.email = self._param.email
pubmedids = Entrez.read(Entrez.esearch(db='pubmed', retmax=self._param.top_n, term=ans))['IdList']
pubmedcnt = ET.fromstring(re.sub(r'<(/?)b>|<(/?)i>', '', Entrez.efetch(db='pubmed', id=",".join(pubmedids),
retmode="xml").read().decode(
"utf-8")))
pubmed_res = [{"content": 'Title:' + child.find("MedlineCitation").find("Article").find(
"ArticleTitle").text + '\nUrl:<a href=" https://pubmed.ncbi.nlm.nih.gov/' + child.find(
"MedlineCitation").find("PMID").text + '">' + '</a>\n' + 'Abstract:' + (
child.find("MedlineCitation").find("Article").find("Abstract").find(
"AbstractText").text if child.find("MedlineCitation").find(
"Article").find("Abstract") else "No abstract available")} for child in
pubmedcnt.findall("PubmedArticle")]
except Exception as e:
return PubMed.be_output("**ERROR**: " + str(e))
if not pubmed_res:
return PubMed.be_output("")
df = pd.DataFrame(pubmed_res)
logging.debug(f"df: {df}")
return df

View File

@ -1,83 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from agent.component import GenerateParam, Generate
from rag.utils import num_tokens_from_string, encoder
class RelevantParam(GenerateParam):
"""
Define the Relevant component parameters.
"""
def __init__(self):
super().__init__()
self.prompt = ""
self.yes = ""
self.no = ""
def check(self):
super().check()
self.check_empty(self.yes, "[Relevant] 'Yes'")
self.check_empty(self.no, "[Relevant] 'No'")
def get_prompt(self):
self.prompt = """
You are a grader assessing relevance of a retrieved document to a user question.
It does not need to be a stringent test. The goal is to filter out erroneous retrievals.
If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant.
Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.
No other words needed except 'yes' or 'no'.
"""
return self.prompt
class Relevant(Generate, ABC):
component_name = "Relevant"
def _run(self, history, **kwargs):
q = ""
for r, c in self._canvas.history[::-1]:
if r == "user":
q = c
break
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return Relevant.be_output(self._param.no)
ans = "Documents: \n" + ans
ans = f"Question: {q}\n" + ans
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
if num_tokens_from_string(ans) >= chat_mdl.max_length - 4:
ans = encoder.decode(encoder.encode(ans)[:chat_mdl.max_length - 4])
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": ans}],
self._param.gen_conf())
logging.debug(ans)
if ans.lower().find("yes") >= 0:
return Relevant.be_output(self._param.yes)
if ans.lower().find("no") >= 0:
return Relevant.be_output(self._param.no)
assert False, f"Relevant component got: {ans}"
def debug(self, **kwargs):
return self._run([], **kwargs)

View File

@ -1,135 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import logging
import re
from abc import ABC
import pandas as pd
from api.db import LLMType
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api import settings
from agent.component.base import ComponentBase, ComponentParamBase
from rag.app.tag import label_question
from rag.prompts import kb_prompt
from rag.utils.tavily_conn import Tavily
class RetrievalParam(ComponentParamBase):
"""
Define the Retrieval component parameters.
"""
def __init__(self):
super().__init__()
self.similarity_threshold = 0.2
self.keywords_similarity_weight = 0.5
self.top_n = 8
self.top_k = 1024
self.kb_ids = []
self.kb_vars = []
self.rerank_id = ""
self.empty_response = ""
self.tavily_api_key = ""
self.use_kg = False
def check(self):
self.check_decimal_float(self.similarity_threshold, "[Retrieval] Similarity threshold")
self.check_decimal_float(self.keywords_similarity_weight, "[Retrieval] Keyword similarity weight")
self.check_positive_number(self.top_n, "[Retrieval] Top N")
class Retrieval(ComponentBase, ABC):
component_name = "Retrieval"
def _run(self, history, **kwargs):
query = self.get_input()
query = str(query["content"][0]) if "content" in query else ""
query = re.split(r"(USER:|ASSISTANT:)", query)[-1]
kb_ids: list[str] = self._param.kb_ids or []
kb_vars = self._fetch_outputs_from(self._param.kb_vars)
if len(kb_vars) > 0:
for kb_var in kb_vars:
if len(kb_var) == 1:
kb_var_value = str(kb_var["content"][0])
for v in kb_var_value.split(","):
kb_ids.append(v)
else:
for v in kb_var.to_dict("records"):
kb_ids.append(v["content"])
filtered_kb_ids: list[str] = [kb_id for kb_id in kb_ids if kb_id]
kbs = KnowledgebaseService.get_by_ids(filtered_kb_ids)
if not kbs:
return Retrieval.be_output("")
embd_nms = list(set([kb.embd_id for kb in kbs]))
assert len(embd_nms) == 1, "Knowledge bases use different embedding models."
embd_mdl = None
if embd_nms:
embd_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.EMBEDDING, embd_nms[0])
self._canvas.set_embedding_model(embd_nms[0])
rerank_mdl = None
if self._param.rerank_id:
rerank_mdl = LLMBundle(kbs[0].tenant_id, LLMType.RERANK, self._param.rerank_id)
if kbs:
query = re.sub(r"^user[:\s]*", "", query, flags=re.IGNORECASE)
kbinfos = settings.retrievaler.retrieval(
query,
embd_mdl,
[kb.tenant_id for kb in kbs],
filtered_kb_ids,
1,
self._param.top_n,
self._param.similarity_threshold,
1 - self._param.keywords_similarity_weight,
aggs=False,
rerank_mdl=rerank_mdl,
rank_feature=label_question(query, kbs),
)
else:
kbinfos = {"chunks": [], "doc_aggs": []}
if self._param.use_kg and kbs:
ck = settings.kg_retrievaler.retrieval(query, [kb.tenant_id for kb in kbs], filtered_kb_ids, embd_mdl, LLMBundle(kbs[0].tenant_id, LLMType.CHAT))
if ck["content_with_weight"]:
kbinfos["chunks"].insert(0, ck)
if self._param.tavily_api_key:
tav = Tavily(self._param.tavily_api_key)
tav_res = tav.retrieve_chunks(query)
kbinfos["chunks"].extend(tav_res["chunks"])
kbinfos["doc_aggs"].extend(tav_res["doc_aggs"])
if not kbinfos["chunks"]:
df = Retrieval.be_output("")
if self._param.empty_response and self._param.empty_response.strip():
df["empty_response"] = self._param.empty_response
return df
df = pd.DataFrame({"content": kb_prompt(kbinfos, 200000), "chunks": json.dumps(kbinfos["chunks"])})
logging.debug("{} {}".format(query, df))
return df.dropna()

View File

@ -1,94 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
from agent.component import GenerateParam, Generate
from rag.prompts import full_question
class RewriteQuestionParam(GenerateParam):
"""
Define the QuestionRewrite component parameters.
"""
def __init__(self):
super().__init__()
self.temperature = 0.9
self.prompt = ""
self.language = ""
def check(self):
super().check()
class RewriteQuestion(Generate, ABC):
component_name = "RewriteQuestion"
def _run(self, history, **kwargs):
hist = self._canvas.get_history(self._param.message_history_window_size)
query = self.get_input()
query = str(query["content"][0]) if "content" in query else ""
messages = [h for h in hist if h["role"]!="system"]
if messages[-1]["role"] != "user":
messages.append({"role": "user", "content": query})
ans = full_question(self._canvas.get_tenant_id(), self._param.llm_id, messages, self.gen_lang(self._param.language))
self._canvas.history.pop()
self._canvas.history.append(("user", ans))
return RewriteQuestion.be_output(ans)
@staticmethod
def gen_lang(language):
# convert code lang to language word for the prompt
language_dict = {'af': 'Afrikaans', 'ak': 'Akan', 'sq': 'Albanian', 'ws': 'Samoan', 'am': 'Amharic',
'ar': 'Arabic', 'hy': 'Armenian', 'az': 'Azerbaijani', 'eu': 'Basque', 'be': 'Belarusian',
'bem': 'Bemba', 'bn': 'Bengali', 'bh': 'Bihari',
'xx-bork': 'Bork', 'bs': 'Bosnian', 'br': 'Breton', 'bg': 'Bulgarian', 'bt': 'Bhutani',
'km': 'Cambodian', 'ca': 'Catalan', 'chr': 'Cherokee', 'ny': 'Chichewa', 'zh-cn': 'Chinese',
'zh-tw': 'Chinese', 'co': 'Corsican',
'hr': 'Croatian', 'cs': 'Czech', 'da': 'Danish', 'nl': 'Dutch', 'xx-elmer': 'Elmer',
'en': 'English', 'eo': 'Esperanto', 'et': 'Estonian', 'ee': 'Ewe', 'fo': 'Faroese',
'tl': 'Filipino', 'fi': 'Finnish', 'fr': 'French',
'fy': 'Frisian', 'gaa': 'Ga', 'gl': 'Galician', 'ka': 'Georgian', 'de': 'German',
'el': 'Greek', 'kl': 'Greenlandic', 'gn': 'Guarani', 'gu': 'Gujarati', 'xx-hacker': 'Hacker',
'ht': 'Haitian Creole', 'ha': 'Hausa', 'haw': 'Hawaiian',
'iw': 'Hebrew', 'hi': 'Hindi', 'hu': 'Hungarian', 'is': 'Icelandic', 'ig': 'Igbo',
'id': 'Indonesian', 'ia': 'Interlingua', 'ga': 'Irish', 'it': 'Italian', 'ja': 'Japanese',
'jw': 'Javanese', 'kn': 'Kannada', 'kk': 'Kazakh', 'rw': 'Kinyarwanda',
'rn': 'Kirundi', 'xx-klingon': 'Klingon', 'kg': 'Kongo', 'ko': 'Korean', 'kri': 'Krio',
'ku': 'Kurdish', 'ckb': 'Kurdish (Sorani)', 'ky': 'Kyrgyz', 'lo': 'Laothian', 'la': 'Latin',
'lv': 'Latvian', 'ln': 'Lingala', 'lt': 'Lithuanian',
'loz': 'Lozi', 'lg': 'Luganda', 'ach': 'Luo', 'mk': 'Macedonian', 'mg': 'Malagasy',
'ms': 'Malay', 'ml': 'Malayalam', 'mt': 'Maltese', 'mv': 'Maldivian', 'mi': 'Maori',
'mr': 'Marathi', 'mfe': 'Mauritian Creole', 'mo': 'Moldavian', 'mn': 'Mongolian',
'sr-me': 'Montenegrin', 'my': 'Burmese', 'ne': 'Nepali', 'pcm': 'Nigerian Pidgin',
'nso': 'Northern Sotho', 'no': 'Norwegian', 'nn': 'Norwegian Nynorsk', 'oc': 'Occitan',
'or': 'Oriya', 'om': 'Oromo', 'ps': 'Pashto', 'fa': 'Persian',
'xx-pirate': 'Pirate', 'pl': 'Polish', 'pt': 'Portuguese', 'pt-br': 'Portuguese (Brazilian)',
'pt-pt': 'Portuguese (Portugal)', 'pa': 'Punjabi', 'qu': 'Quechua', 'ro': 'Romanian',
'rm': 'Romansh', 'nyn': 'Runyankole', 'ru': 'Russian', 'gd': 'Scots Gaelic',
'sr': 'Serbian', 'sh': 'Serbo-Croatian', 'st': 'Sesotho', 'tn': 'Setswana',
'crs': 'Seychellois Creole', 'sn': 'Shona', 'sd': 'Sindhi', 'si': 'Sinhalese', 'sk': 'Slovak',
'sl': 'Slovenian', 'so': 'Somali', 'es': 'Spanish', 'es-419': 'Spanish (Latin America)',
'su': 'Sundanese',
'sw': 'Swahili', 'sv': 'Swedish', 'tg': 'Tajik', 'ta': 'Tamil', 'tt': 'Tatar', 'te': 'Telugu',
'th': 'Thai', 'ti': 'Tigrinya', 'to': 'Tongan', 'lua': 'Tshiluba', 'tum': 'Tumbuka',
'tr': 'Turkish', 'tk': 'Turkmen', 'tw': 'Twi',
'ug': 'Uyghur', 'uk': 'Ukrainian', 'ur': 'Urdu', 'uz': 'Uzbek', 'vu': 'Vanuatu',
'vi': 'Vietnamese', 'cy': 'Welsh', 'wo': 'Wolof', 'xh': 'Xhosa', 'yi': 'Yiddish',
'yo': 'Yoruba', 'zu': 'Zulu'}
if language in language_dict:
return language_dict[language]
else:
return ""

View File

@ -0,0 +1,100 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import re
from abc import ABC
from jinja2 import Template as Jinja2Template
from agent.component.base import ComponentParamBase
from api.utils.api_utils import timeout
from .message import Message
class StringTransformParam(ComponentParamBase):
"""
Define the StringTransform component parameters.
"""
def __init__(self):
super().__init__()
self.method = "split"
self.script = ""
self.split_ref = ""
self.delimiters = [","]
self.outputs = {"result": {"value": "", "type": "string"}}
def check(self):
self.check_valid_value(self.method, "Support method", ["split", "merge"])
self.check_empty(self.delimiters, "delimiters")
class StringTransform(Message, ABC):
component_name = "StringTransform"
def get_input_form(self) -> dict[str, dict]:
if self._param.method == "split":
return {
"line": {
"name": "String",
"type": "line"
}
}
return {k: {
"name": o["name"],
"type": "line"
} for k, o in self.get_input_elements_from_text(self._param.script).items()}
@timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))
def _invoke(self, **kwargs):
if self._param.method == "split":
self._split(kwargs.get("line"))
else:
self._merge(kwargs)
def _split(self, line:str|None = None):
var = self._canvas.get_variable_value(self._param.split_ref) if not line else line
if not var:
var = ""
assert isinstance(var, str), "The input variable is not a string: {}".format(type(var))
self.set_input_value(self._param.split_ref, var)
res = []
for i,s in enumerate(re.split(r"(%s)"%("|".join([re.escape(d) for d in self._param.delimiters])), var, flags=re.DOTALL)):
if i % 2 == 1:
continue
res.append(s)
self.set_output("result", res)
def _merge(self, kwargs:dict[str, str] = {}):
script = self._param.script
script, kwargs = self.get_kwargs(script, kwargs, self._param.delimiters[0])
if self._is_jinjia2(script):
template = Jinja2Template(script)
try:
script = template.render(kwargs)
except Exception:
pass
for k,v in kwargs.items():
if not v:
v = ""
script = re.sub(k, v, script)
self.set_output("result", script)
def thoughts(self) -> str:
return f"It's {self._param.method}ing."

View File

@ -13,8 +13,13 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
# #
import numbers
import os
from abc import ABC from abc import ABC
from typing import Any
from agent.component.base import ComponentBase, ComponentParamBase from agent.component.base import ComponentBase, ComponentParamBase
from api.utils.api_utils import timeout
class SwitchParam(ComponentParamBase): class SwitchParam(ComponentParamBase):
@ -34,7 +39,7 @@ class SwitchParam(ComponentParamBase):
} }
""" """
self.conditions = [] self.conditions = []
self.end_cpn_id = "answer:0" self.end_cpn_ids = []
self.operators = ['contains', 'not contains', 'start with', 'end with', 'empty', 'not empty', '=', '≠', '>', self.operators = ['contains', 'not contains', 'start with', 'end with', 'empty', 'not empty', '=', '≠', '>',
'<', '≥', '≤'] '<', '≥', '≤']
@ -43,54 +48,46 @@ class SwitchParam(ComponentParamBase):
for cond in self.conditions: for cond in self.conditions:
if not cond["to"]: if not cond["to"]:
raise ValueError("[Switch] 'To' can not be empty!") raise ValueError("[Switch] 'To' can not be empty!")
self.check_empty(self.end_cpn_ids, "[Switch] the ELSE/Other destination can not be empty.")
def get_input_form(self) -> dict[str, dict]:
return {
"urls": {
"name": "URLs",
"type": "line"
}
}
class Switch(ComponentBase, ABC): class Switch(ComponentBase, ABC):
component_name = "Switch" component_name = "Switch"
def get_dependent_components(self): @timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 3))
res = [] def _invoke(self, **kwargs):
for cond in self._param.conditions:
for item in cond["items"]:
if not item["cpn_id"]:
continue
if item["cpn_id"].lower().find("begin") >= 0 or item["cpn_id"].lower().find("answer") >= 0:
continue
cid = item["cpn_id"].split("@")[0]
res.append(cid)
return list(set(res))
def _run(self, history, **kwargs):
for cond in self._param.conditions: for cond in self._param.conditions:
res = [] res = []
for item in cond["items"]: for item in cond["items"]:
if not item["cpn_id"]: if not item["cpn_id"]:
continue continue
cid = item["cpn_id"].split("@")[0] cpn_v = self._canvas.get_variable_value(item["cpn_id"])
if item["cpn_id"].find("@") > 0: self.set_input_value(item["cpn_id"], cpn_v)
cpn_id, key = item["cpn_id"].split("@") operatee = item.get("value", "")
for p in self._canvas.get_component(cid)["obj"]._param.query: if isinstance(cpn_v, numbers.Number):
if p["key"] == key: operatee = float(operatee)
res.append(self.process_operator(p.get("value",""), item["operator"], item.get("value", ""))) res.append(self.process_operator(cpn_v, item["operator"], operatee))
break
else:
out = self._canvas.get_component(cid)["obj"].output(allow_partial=False)[1]
cpn_input = "" if "content" not in out.columns else " ".join([str(s) for s in out["content"]])
res.append(self.process_operator(cpn_input, item["operator"], item.get("value", "")))
if cond["logical_operator"] != "and" and any(res): if cond["logical_operator"] != "and" and any(res):
return Switch.be_output(cond["to"]) self.set_output("next", [self._canvas.get_component_name(cpn_id) for cpn_id in cond["to"]])
self.set_output("_next", cond["to"])
return
if all(res): if all(res):
return Switch.be_output(cond["to"]) self.set_output("next", [self._canvas.get_component_name(cpn_id) for cpn_id in cond["to"]])
self.set_output("_next", cond["to"])
return
return Switch.be_output(self._param.end_cpn_id) self.set_output("next", [self._canvas.get_component_name(cpn_id) for cpn_id in self._param.end_cpn_ids])
self.set_output("_next", self._param.end_cpn_ids)
def process_operator(self, input: str, operator: str, value: str) -> bool:
if not isinstance(input, str) or not isinstance(value, str):
raise ValueError('Invalid input or value type: string')
def process_operator(self, input: Any, operator: str, value: Any) -> bool:
if operator == "contains": if operator == "contains":
return True if value.lower() in input.lower() else False return True if value.lower() in input.lower() else False
elif operator == "not contains": elif operator == "not contains":
@ -128,4 +125,7 @@ class Switch(ComponentBase, ABC):
except Exception: except Exception:
return True if input <= value else False return True if input <= value else False
raise ValueError('Not supported operator' + operator) raise ValueError('Not supported operator' + operator)
def thoughts(self) -> str:
return "Im weighing a few options and will pick the next step shortly."

View File

@ -1,147 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import re
from jinja2 import StrictUndefined
from jinja2.sandbox import SandboxedEnvironment
from agent.component.base import ComponentBase, ComponentParamBase
class TemplateParam(ComponentParamBase):
"""
Define the Generate component parameters.
"""
def __init__(self):
super().__init__()
self.content = ""
self.parameters = []
def check(self):
self.check_empty(self.content, "[Template] Content")
return True
class Template(ComponentBase):
component_name = "Template"
def get_dependent_components(self):
inputs = self.get_input_elements()
cpnts = set([i["key"] for i in inputs if i["key"].lower().find("answer") < 0 and i["key"].lower().find("begin") < 0])
return list(cpnts)
def get_input_elements(self):
key_set = set([])
res = []
for r in re.finditer(r"\{([a-z]+[:@][a-z0-9_-]+)\}", self._param.content, flags=re.IGNORECASE):
cpn_id = r.group(1)
if cpn_id in key_set:
continue
if cpn_id.lower().find("begin@") == 0:
cpn_id, key = cpn_id.split("@")
for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
if p["key"] != key:
continue
res.append({"key": r.group(1), "name": p["name"]})
key_set.add(r.group(1))
continue
cpn_nm = self._canvas.get_component_name(cpn_id)
if not cpn_nm:
continue
res.append({"key": cpn_id, "name": cpn_nm})
key_set.add(cpn_id)
return res
def _run(self, history, **kwargs):
content = self._param.content
self._param.inputs = []
for para in self.get_input_elements():
if para["key"].lower().find("begin@") == 0:
cpn_id, key = para["key"].split("@")
for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
if p["key"] == key:
value = p.get("value", "")
self.make_kwargs(para, kwargs, value)
origin_pattern = "{begin@" + key + "}"
new_pattern = "begin_" + key
content = content.replace(origin_pattern, new_pattern)
kwargs[new_pattern] = kwargs.pop(origin_pattern, "")
break
else:
assert False, f"Can't find parameter '{key}' for {cpn_id}"
continue
component_id = para["key"]
cpn = self._canvas.get_component(component_id)["obj"]
if cpn.component_name.lower() == "answer":
hist = self._canvas.get_history(1)
if hist:
hist = hist[0]["content"]
else:
hist = ""
self.make_kwargs(para, kwargs, hist)
if ":" in component_id:
origin_pattern = "{" + component_id + "}"
new_pattern = component_id.replace(":", "_")
content = content.replace(origin_pattern, new_pattern)
kwargs[new_pattern] = kwargs.pop(component_id, "")
continue
_, out = cpn.output(allow_partial=False)
result = ""
if "content" in out.columns:
result = "\n".join([o if isinstance(o, str) else str(o) for o in out["content"]])
self.make_kwargs(para, kwargs, result)
env = SandboxedEnvironment(
autoescape=True,
undefined=StrictUndefined,
)
template = env.from_string(content)
try:
content = template.render(kwargs)
except Exception:
pass
for n, v in kwargs.items():
if not isinstance(v, str):
try:
v = json.dumps(v, ensure_ascii=False)
except Exception:
pass
# Process backslashes in strings, Use Lambda function to avoid escape issues
if isinstance(v, str):
v = v.replace("\\", "\\\\")
content = re.sub(r"\{%s\}" % re.escape(n), lambda match: v, content)
content = re.sub(r"(#+)", r" \1 ", content)
return Template.be_output(content)
def make_kwargs(self, para, kwargs, value):
self._param.inputs.append({"component_id": para["key"], "content": value})
try:
value = json.loads(value)
except Exception:
pass
kwargs[para["key"]] = value

View File

@ -1,80 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import pandas as pd
import pywencai
from agent.component.base import ComponentBase, ComponentParamBase
class WenCaiParam(ComponentParamBase):
"""
Define the WenCai component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 10
self.query_type = "stock"
def check(self):
self.check_positive_integer(self.top_n, "Top N")
self.check_valid_value(self.query_type, "Query type",
['stock', 'zhishu', 'fund', 'hkstock', 'usstock', 'threeboard', 'conbond', 'insurance',
'futures', 'lccp',
'foreign_exchange'])
class WenCai(ComponentBase, ABC):
component_name = "WenCai"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = ",".join(ans["content"]) if "content" in ans else ""
if not ans:
return WenCai.be_output("")
try:
wencai_res = []
res = pywencai.get(query=ans, query_type=self._param.query_type, perpage=self._param.top_n)
if isinstance(res, pd.DataFrame):
wencai_res.append({"content": res.to_markdown()})
if isinstance(res, dict):
for item in res.items():
if isinstance(item[1], list):
wencai_res.append({"content": item[0] + "\n" + pd.DataFrame(item[1]).to_markdown()})
continue
if isinstance(item[1], str):
wencai_res.append({"content": item[0] + "\n" + item[1]})
continue
if isinstance(item[1], dict):
if "meta" in item[1].keys():
continue
wencai_res.append({"content": pd.DataFrame.from_dict(item[1], orient='index').to_markdown()})
continue
if isinstance(item[1], pd.DataFrame):
if "image_url" in item[1].columns:
continue
wencai_res.append({"content": item[1].to_markdown()})
continue
wencai_res.append({"content": item[0] + "\n" + str(item[1])})
except Exception as e:
return WenCai.be_output("**ERROR**: " + str(e))
if not wencai_res:
return WenCai.be_output("")
return pd.DataFrame(wencai_res)

View File

@ -1,67 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import wikipedia
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
class WikipediaParam(ComponentParamBase):
"""
Define the Wikipedia component parameters.
"""
def __init__(self):
super().__init__()
self.top_n = 10
self.language = "en"
def check(self):
self.check_positive_integer(self.top_n, "Top N")
self.check_valid_value(self.language, "Wikipedia languages",
['af', 'pl', 'ar', 'ast', 'az', 'bg', 'nan', 'bn', 'be', 'ca', 'cs', 'cy', 'da', 'de',
'et', 'el', 'en', 'es', 'eo', 'eu', 'fa', 'fr', 'gl', 'ko', 'hy', 'hi', 'hr', 'id',
'it', 'he', 'ka', 'lld', 'la', 'lv', 'lt', 'hu', 'mk', 'arz', 'ms', 'min', 'my', 'nl',
'ja', 'nb', 'nn', 'ce', 'uz', 'pt', 'kk', 'ro', 'ru', 'ceb', 'sk', 'sl', 'sr', 'sh',
'fi', 'sv', 'ta', 'tt', 'th', 'tg', 'azb', 'tr', 'uk', 'ur', 'vi', 'war', 'zh', 'yue'])
class Wikipedia(ComponentBase, ABC):
component_name = "Wikipedia"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return Wikipedia.be_output("")
try:
wiki_res = []
wikipedia.set_lang(self._param.language)
wiki_engine = wikipedia
for wiki_key in wiki_engine.search(ans, results=self._param.top_n):
page = wiki_engine.page(title=wiki_key, auto_suggest=False)
wiki_res.append({"content": '<a href="' + page.url + '">' + page.title + '</a> ' + page.summary})
except Exception as e:
return Wikipedia.be_output("**ERROR**: " + str(e))
if not wiki_res:
return Wikipedia.be_output("")
df = pd.DataFrame(wiki_res)
logging.debug(f"df: {df}")
return df

View File

@ -1,84 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
import yfinance as yf
class YahooFinanceParam(ComponentParamBase):
"""
Define the YahooFinance component parameters.
"""
def __init__(self):
super().__init__()
self.info = True
self.history = False
self.count = False
self.financials = False
self.income_stmt = False
self.balance_sheet = False
self.cash_flow_statement = False
self.news = True
def check(self):
self.check_boolean(self.info, "get all stock info")
self.check_boolean(self.history, "get historical market data")
self.check_boolean(self.count, "show share count")
self.check_boolean(self.financials, "show financials")
self.check_boolean(self.income_stmt, "income statement")
self.check_boolean(self.balance_sheet, "balance sheet")
self.check_boolean(self.cash_flow_statement, "cash flow statement")
self.check_boolean(self.news, "show news")
class YahooFinance(ComponentBase, ABC):
component_name = "YahooFinance"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = "".join(ans["content"]) if "content" in ans else ""
if not ans:
return YahooFinance.be_output("")
yohoo_res = []
try:
msft = yf.Ticker(ans)
if self._param.info:
yohoo_res.append({"content": "info:\n" + pd.Series(msft.info).to_markdown() + "\n"})
if self._param.history:
yohoo_res.append({"content": "history:\n" + msft.history().to_markdown() + "\n"})
if self._param.financials:
yohoo_res.append({"content": "calendar:\n" + pd.DataFrame(msft.calendar).to_markdown() + "\n"})
if self._param.balance_sheet:
yohoo_res.append({"content": "balance sheet:\n" + msft.balance_sheet.to_markdown() + "\n"})
yohoo_res.append(
{"content": "quarterly balance sheet:\n" + msft.quarterly_balance_sheet.to_markdown() + "\n"})
if self._param.cash_flow_statement:
yohoo_res.append({"content": "cash flow statement:\n" + msft.cashflow.to_markdown() + "\n"})
yohoo_res.append(
{"content": "quarterly cash flow statement:\n" + msft.quarterly_cashflow.to_markdown() + "\n"})
if self._param.news:
yohoo_res.append({"content": "news:\n" + pd.DataFrame(msft.news).to_markdown() + "\n"})
except Exception:
logging.exception("YahooFinance got exception")
if not yohoo_res:
return YahooFinance.be_output("")
return pd.DataFrame(yohoo_res)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,902 @@
{
"id": 8,
"title": "Generate SEO Blog",
"description": "This is a multi-agent version of the SEO blog generation workflow. It simulates a small team of AI “writers”, where each agent plays a specialized role — just like a real editorial team.",
"canvas_type": "Agent",
"dsl": {
"components": {
"Agent:LuckyApplesGrab": {
"downstream": [
"Message:ModernSwansThrow"
],
"obj": {
"component_name": "Agent",
"params": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 3,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "The user query is {sys.query}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Lead Agent**, responsible for initiating the multi-agent SEO blog generation process. You will receive the user\u2019s topic and blog goal, interpret the intent, and coordinate the downstream writing agents.\n\n# Goals\n\n1. Parse the user's initial input.\n\n2. Generate a high-level blog intent summary and writing plan.\n\n3. Provide clear instructions to the following Sub_Agents:\n\n - `Outline Agent` \u2192 Create the blog outline.\n\n - `Body Agent` \u2192 Write all sections based on outline.\n\n - `Editor Agent` \u2192 Polish and finalize the blog post.\n\n4. Merge outputs into a complete, readable blog draft in Markdown format.\n\n# Input\n\nYou will receive:\n\n- Blog topic\n\n- Target audience\n\n- Blog goal (e.g., SEO, education, product marketing)\n\n# Output Format\n\n```markdown\n\n## Parsed Writing Plan\n\n- **Topic**: [Extracted from user input]\n\n- **Audience**: [Summarized from user input]\n\n- **Intent**: [Inferred goal and style]\n\n- **Blog Type**: [e.g., Tutorial / Informative Guide / Marketing Content]\n\n- **Long-tail Keywords**: \n\n - keyword 1\n\n - keyword 2\n\n - keyword 3\n\n - ...\n\n## Instructions for Outline Agent\n\nPlease generate a structured outline including H2 and H3 headings. Assign 1\u20132 relevant keywords to each section. Keep it aligned with the user\u2019s intent and audience level.\n\n## Instructions for Body Agent\n\nWrite the full content based on the outline. Each section should be concise (500\u2013600 words), informative, and optimized for SEO. Use `Tavily Search` only when additional examples or context are needed.\n\n## Instructions for Editor Agent\n\nReview and refine the combined content. Improve transitions, ensure keyword integration, and add a meta title + meta description. Maintain Markdown formatting.\n\n\n## Guides\n\n- Do not generate blog content directly.\n\n- Focus on correct intent recognition and instruction generation.\n\n- Keep communication to downstream agents simple, scoped, and accurate.\n\n\n## Input Examples (and how to handle them)\n\nInput: \"I want to write about RAGFlow.\"\n\u2192 Output: Informative Guide, Audience: AI developers, Intent: explain what RAGFlow is and its use cases\n\nInput: \"Need a blog to promote our prompt design tool.\"\n\u2192 Output: Marketing Content, Audience: product managers or tool adopters, Intent: raise awareness and interest in the product\n\nInput: \"How to get more Google traffic using AI\"\n\u2192 Output: How-to, Audience: SEO marketers, Intent: guide readers on applying AI for SEO growth",
"temperature": "0.1",
"temperatureEnabled": true,
"tools": [
{
"component_name": "Agent",
"id": "Agent:SlickSpidersTurn",
"name": "Outline Agent",
"params": {
"delay_after_error": 1,
"description": "Generates a clear and SEO-friendly blog outline using H2/H3 headings based on the topic, audience, and intent provided by the lead agent. Each section includes suggested keywords for optimized downstream writing.\n",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.3,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 2,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Balance",
"presencePenaltyEnabled": false,
"presence_penalty": 0.2,
"prompts": [
{
"content": "{sys.query}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Outline Agent**, a sub-agent in a multi-agent SEO blog writing system. You operate under the instruction of the `Lead Agent`, and your sole responsibility is to create a clear, well-structured, and SEO-optimized blog outline.\n\n# Tool Access:\n\n- You have access to a search tool called `Tavily Search`.\n\n- If you are unsure how to structure a section, you may call this tool to search for related blog outlines or content from Google.\n\n- Do not overuse it. Your job is to extract **structure**, not to write paragraphs.\n\n\n# Goals\n\n1. Create a well-structured outline with appropriate H2 and H3 headings.\n\n2. Ensure logical flow from introduction to conclusion.\n\n3. Assign 1\u20132 suggested long-tail keywords to each major section for SEO alignment.\n\n4. Make the structure suitable for downstream paragraph writing.\n\n\n\n\n#Note\n\n- Use concise, scannable section titles.\n\n- Do not write full paragraphs.\n\n- Prioritize clarity, logical progression, and SEO alignment.\n\n\n\n- If the blog type is \u201cTutorial\u201d or \u201cHow-to\u201d, include step-based sections.\n\n\n# Input\n\nYou will receive:\n\n- Writing Type (e.g., Tutorial, Informative Guide)\n\n- Target Audience\n\n- User Intent Summary\n\n- 3\u20135 long-tail keywords\n\n\nUse this information to design a structure that both informs readers and maximizes search engine visibility.\n\n# Output Format\n\n```markdown\n\n## Blog Title (suggested)\n\n[Give a short, SEO-friendly title suggestion]\n\n## Outline\n\n### Introduction\n\n- Purpose of the article\n\n- Brief context\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 1]\n\n- [Short description of what this section will cover]\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 2]\n\n- [Short description of what this section will cover]\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 3]\n\n- [Optional H3 Subsection Title A]\n\n - [Explanation of sub-point]\n\n- [Optional H3 Subsection Title B]\n\n - [Explanation of sub-point]\n\n- **Suggested keywords**: [keyword1]\n\n### Conclusion\n\n- Recap key takeaways\n\n- Optional CTA (Call to Action)\n\n- **Suggested keywords**: [keyword3]\n\n",
"temperature": 0.5,
"temperatureEnabled": true,
"tools": [
{
"component_name": "TavilySearch",
"name": "TavilySearch",
"params": {
"api_key": "",
"days": 7,
"exclude_domains": [],
"include_answer": false,
"include_domains": [],
"include_image_descriptions": false,
"include_images": false,
"include_raw_content": true,
"max_results": 5,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"query": "sys.query",
"search_depth": "basic",
"topic": "general"
}
}
],
"topPEnabled": false,
"top_p": 0.85,
"user_prompt": "This is the order you need to send to the agent.",
"visual_files_var": ""
}
},
{
"component_name": "Agent",
"id": "Agent:IcyPawsRescue",
"name": "Body Agent",
"params": {
"delay_after_error": 1,
"description": "Writes the full blog content section-by-section following the outline structure. It integrates target keywords naturally and uses Tavily Search only when additional facts or examples are needed.\n",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 3,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "{sys.query}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Body Agent**, a sub-agent in a multi-agent SEO blog writing system. You operate under the instruction of the `Lead Agent`, and your job is to write the full blog content based on the outline created by the `OutlineWriter_Agent`.\n\n\n\n# Tool Access:\n\nYou can use the `Tavily Search` tool to retrieve relevant content, statistics, or examples to support each section you're writing.\n\nUse it **only** when the provided outline lacks enough information, or if the section requires factual grounding.\n\nAlways cite the original link or indicate source where possible.\n\n\n# Goals\n\n1. Write each section (based on H2/H3 structure) as a complete and natural blog paragraph.\n\n2. Integrate the suggested long-tail keywords naturally into each section.\n\n3. When appropriate, use the `Tavily Search` tool to enrich your writing with relevant facts, examples, or quotes.\n\n4. Ensure each section is clear, engaging, and informative, suitable for both human readers and search engines.\n\n\n# Style Guidelines\n\n- Write in a tone appropriate to the audience. Be explanatory, not promotional, unless it's a marketing blog.\n\n- Avoid generic filler content. Prioritize clarity, structure, and value.\n\n- Ensure SEO keywords are embedded seamlessly, not forcefully.\n\n\n\n- Maintain writing rhythm. Vary sentence lengths. Use transitions between ideas.\n\n\n# Input\n\n\nYou will receive:\n\n- Blog title\n\n- Structured outline (including section titles, keywords, and descriptions)\n\n- Target audience\n\n- Blog type and user intent\n\nYou must **follow the outline strictly**. Write content **section-by-section**, based on the structure.\n\n\n# Output Format\n\n```markdown\n\n## H2: [Section Title]\n\n[Your generated content for this section \u2014 500-600 words, using keywords naturally.]\n\n",
"temperature": 0.2,
"temperatureEnabled": true,
"tools": [
{
"component_name": "TavilySearch",
"name": "TavilySearch",
"params": {
"api_key": "",
"days": 7,
"exclude_domains": [],
"include_answer": false,
"include_domains": [],
"include_image_descriptions": false,
"include_images": false,
"include_raw_content": true,
"max_results": 5,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"query": "sys.query",
"search_depth": "basic",
"topic": "general"
}
}
],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "This is the order you need to send to the agent.",
"visual_files_var": ""
}
},
{
"component_name": "Agent",
"id": "Agent:TenderAdsAllow",
"name": "Editor Agent",
"params": {
"delay_after_error": 1,
"description": "Polishes and finalizes the entire blog post. Enhances clarity, checks keyword usage, improves flow, and generates a meta title and description for SEO. Operates after all sections are completed.\n\n",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 2,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "{sys.query}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Editor Agent**, the final agent in a multi-agent SEO blog writing workflow. You are responsible for finalizing the blog post for both human readability and SEO effectiveness.\n\n# Goals\n\n1. Polish the entire blog content for clarity, coherence, and style.\n\n2. Improve transitions between sections, ensure logical flow.\n\n3. Verify that keywords are used appropriately and effectively.\n\n4. Conduct a lightweight SEO audit \u2014 checking keyword density, structure (H1/H2/H3), and overall searchability.\n\n\n\n## Integration Responsibilities\n\n- Maintain alignment with Lead Agent's original intent and audience\n\n- Preserve the structure and keyword strategy from Outline Agent\n\n- Enhance and polish Body Agent's content without altering core information\n\n# Style Guidelines\n\n- Be precise. Avoid bloated or vague language.\n\n- Maintain an informative and engaging tone, suitable to the target audience.\n\n- Do not remove keywords unless absolutely necessary for clarity.\n\n- Ensure paragraph flow and section continuity.\n\n\n\n# Input\n\nYou will receive:\n\n- Full blog content, written section-by-section\n\n- Original outline with suggested keywords\n\n- Target audience and writing type\n\n# Output Format\n\n```markdown\n\n[The revised, fully polished blog post content goes here.]\n",
"temperature": 0.2,
"temperatureEnabled": true,
"tools": [],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "This is the order you need to send to the agent.",
"visual_files_var": ""
}
}
],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "",
"visual_files_var": ""
}
},
"upstream": [
"begin"
]
},
"Message:ModernSwansThrow": {
"downstream": [],
"obj": {
"component_name": "Message",
"params": {
"content": [
"{Agent:LuckyApplesGrab@content}"
]
}
},
"upstream": [
"Agent:LuckyApplesGrab"
]
},
"begin": {
"downstream": [
"Agent:LuckyApplesGrab"
],
"obj": {
"component_name": "Begin",
"params": {
"enablePrologue": true,
"inputs": {},
"mode": "conversational",
"prologue": "Hi! I'm your SEO blog assistant.\n\nTo get started, please tell me:\n1. What topic you want the blog to cover\n2. Who is the target audience\n3. What you hope to achieve with this blog (e.g., SEO traffic, teaching beginners, promoting a product)\n"
}
},
"upstream": []
}
},
"globals": {
"sys.conversation_turns": 0,
"sys.files": [],
"sys.query": "",
"sys.user_id": ""
},
"graph": {
"edges": [
{
"data": {
"isHovered": false
},
"id": "xy-edge__beginstart-Agent:LuckyApplesGrabend",
"source": "begin",
"sourceHandle": "start",
"target": "Agent:LuckyApplesGrab",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:LuckyApplesGrabstart-Message:ModernSwansThrowend",
"source": "Agent:LuckyApplesGrab",
"sourceHandle": "start",
"target": "Message:ModernSwansThrow",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:LuckyApplesGrabagentBottom-Agent:SlickSpidersTurnagentTop",
"source": "Agent:LuckyApplesGrab",
"sourceHandle": "agentBottom",
"target": "Agent:SlickSpidersTurn",
"targetHandle": "agentTop"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:LuckyApplesGrabagentBottom-Agent:IcyPawsRescueagentTop",
"source": "Agent:LuckyApplesGrab",
"sourceHandle": "agentBottom",
"target": "Agent:IcyPawsRescue",
"targetHandle": "agentTop"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:LuckyApplesGrabagentBottom-Agent:TenderAdsAllowagentTop",
"source": "Agent:LuckyApplesGrab",
"sourceHandle": "agentBottom",
"target": "Agent:TenderAdsAllow",
"targetHandle": "agentTop"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:SlickSpidersTurntool-Tool:ThreeWallsRingend",
"source": "Agent:SlickSpidersTurn",
"sourceHandle": "tool",
"target": "Tool:ThreeWallsRing",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:IcyPawsRescuetool-Tool:FloppyJokesItchend",
"source": "Agent:IcyPawsRescue",
"sourceHandle": "tool",
"target": "Tool:FloppyJokesItch",
"targetHandle": "end"
}
],
"nodes": [
{
"data": {
"form": {
"enablePrologue": true,
"inputs": {},
"mode": "conversational",
"prologue": "Hi! I'm your SEO blog assistant.\n\nTo get started, please tell me:\n1. What topic you want the blog to cover\n2. Who is the target audience\n3. What you hope to achieve with this blog (e.g., SEO traffic, teaching beginners, promoting a product)\n"
},
"label": "Begin",
"name": "begin"
},
"dragging": false,
"id": "begin",
"measured": {
"height": 48,
"width": 200
},
"position": {
"x": 38.19445084117184,
"y": 183.9781832844475
},
"selected": false,
"sourcePosition": "left",
"targetPosition": "right",
"type": "beginNode"
},
{
"data": {
"form": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 3,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "The user query is {sys.query}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Lead Agent**, responsible for initiating the multi-agent SEO blog generation process. You will receive the user\u2019s topic and blog goal, interpret the intent, and coordinate the downstream writing agents.\n\n# Goals\n\n1. Parse the user's initial input.\n\n2. Generate a high-level blog intent summary and writing plan.\n\n3. Provide clear instructions to the following Sub_Agents:\n\n - `Outline Agent` \u2192 Create the blog outline.\n\n - `Body Agent` \u2192 Write all sections based on outline.\n\n - `Editor Agent` \u2192 Polish and finalize the blog post.\n\n4. Merge outputs into a complete, readable blog draft in Markdown format.\n\n# Input\n\nYou will receive:\n\n- Blog topic\n\n- Target audience\n\n- Blog goal (e.g., SEO, education, product marketing)\n\n# Output Format\n\n```markdown\n\n## Parsed Writing Plan\n\n- **Topic**: [Extracted from user input]\n\n- **Audience**: [Summarized from user input]\n\n- **Intent**: [Inferred goal and style]\n\n- **Blog Type**: [e.g., Tutorial / Informative Guide / Marketing Content]\n\n- **Long-tail Keywords**: \n\n - keyword 1\n\n - keyword 2\n\n - keyword 3\n\n - ...\n\n## Instructions for Outline Agent\n\nPlease generate a structured outline including H2 and H3 headings. Assign 1\u20132 relevant keywords to each section. Keep it aligned with the user\u2019s intent and audience level.\n\n## Instructions for Body Agent\n\nWrite the full content based on the outline. Each section should be concise (500\u2013600 words), informative, and optimized for SEO. Use `Tavily Search` only when additional examples or context are needed.\n\n## Instructions for Editor Agent\n\nReview and refine the combined content. Improve transitions, ensure keyword integration, and add a meta title + meta description. Maintain Markdown formatting.\n\n\n## Guides\n\n- Do not generate blog content directly.\n\n- Focus on correct intent recognition and instruction generation.\n\n- Keep communication to downstream agents simple, scoped, and accurate.\n\n\n## Input Examples (and how to handle them)\n\nInput: \"I want to write about RAGFlow.\"\n\u2192 Output: Informative Guide, Audience: AI developers, Intent: explain what RAGFlow is and its use cases\n\nInput: \"Need a blog to promote our prompt design tool.\"\n\u2192 Output: Marketing Content, Audience: product managers or tool adopters, Intent: raise awareness and interest in the product\n\nInput: \"How to get more Google traffic using AI\"\n\u2192 Output: How-to, Audience: SEO marketers, Intent: guide readers on applying AI for SEO growth",
"temperature": "0.1",
"temperatureEnabled": true,
"tools": [],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "",
"visual_files_var": ""
},
"label": "Agent",
"name": "Lead Agent"
},
"id": "Agent:LuckyApplesGrab",
"measured": {
"height": 84,
"width": 200
},
"position": {
"x": 350,
"y": 200
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"content": [
"{Agent:LuckyApplesGrab@content}"
]
},
"label": "Message",
"name": "Response"
},
"dragging": false,
"id": "Message:ModernSwansThrow",
"measured": {
"height": 56,
"width": 200
},
"position": {
"x": 669.394830760932,
"y": 190.72421137520644
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "messageNode"
},
{
"data": {
"form": {
"delay_after_error": 1,
"description": "Generates a clear and SEO-friendly blog outline using H2/H3 headings based on the topic, audience, and intent provided by the lead agent. Each section includes suggested keywords for optimized downstream writing.\n",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.3,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 2,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Balance",
"presencePenaltyEnabled": false,
"presence_penalty": 0.2,
"prompts": [
{
"content": "{sys.query}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Outline Agent**, a sub-agent in a multi-agent SEO blog writing system. You operate under the instruction of the `Lead Agent`, and your sole responsibility is to create a clear, well-structured, and SEO-optimized blog outline.\n\n# Tool Access:\n\n- You have access to a search tool called `Tavily Search`.\n\n- If you are unsure how to structure a section, you may call this tool to search for related blog outlines or content from Google.\n\n- Do not overuse it. Your job is to extract **structure**, not to write paragraphs.\n\n\n# Goals\n\n1. Create a well-structured outline with appropriate H2 and H3 headings.\n\n2. Ensure logical flow from introduction to conclusion.\n\n3. Assign 1\u20132 suggested long-tail keywords to each major section for SEO alignment.\n\n4. Make the structure suitable for downstream paragraph writing.\n\n\n\n\n#Note\n\n- Use concise, scannable section titles.\n\n- Do not write full paragraphs.\n\n- Prioritize clarity, logical progression, and SEO alignment.\n\n\n\n- If the blog type is \u201cTutorial\u201d or \u201cHow-to\u201d, include step-based sections.\n\n\n# Input\n\nYou will receive:\n\n- Writing Type (e.g., Tutorial, Informative Guide)\n\n- Target Audience\n\n- User Intent Summary\n\n- 3\u20135 long-tail keywords\n\n\nUse this information to design a structure that both informs readers and maximizes search engine visibility.\n\n# Output Format\n\n```markdown\n\n## Blog Title (suggested)\n\n[Give a short, SEO-friendly title suggestion]\n\n## Outline\n\n### Introduction\n\n- Purpose of the article\n\n- Brief context\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 1]\n\n- [Short description of what this section will cover]\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 2]\n\n- [Short description of what this section will cover]\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 3]\n\n- [Optional H3 Subsection Title A]\n\n - [Explanation of sub-point]\n\n- [Optional H3 Subsection Title B]\n\n - [Explanation of sub-point]\n\n- **Suggested keywords**: [keyword1]\n\n### Conclusion\n\n- Recap key takeaways\n\n- Optional CTA (Call to Action)\n\n- **Suggested keywords**: [keyword3]\n\n",
"temperature": 0.5,
"temperatureEnabled": true,
"tools": [
{
"component_name": "TavilySearch",
"name": "TavilySearch",
"params": {
"api_key": "",
"days": 7,
"exclude_domains": [],
"include_answer": false,
"include_domains": [],
"include_image_descriptions": false,
"include_images": false,
"include_raw_content": true,
"max_results": 5,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"query": "sys.query",
"search_depth": "basic",
"topic": "general"
}
}
],
"topPEnabled": false,
"top_p": 0.85,
"user_prompt": "This is the order you need to send to the agent.",
"visual_files_var": ""
},
"label": "Agent",
"name": "Outline Agent"
},
"dragging": false,
"id": "Agent:SlickSpidersTurn",
"measured": {
"height": 84,
"width": 200
},
"position": {
"x": 100.60137004146719,
"y": 411.67654846431367
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"delay_after_error": 1,
"description": "Writes the full blog content section-by-section following the outline structure. It integrates target keywords naturally and uses Tavily Search only when additional facts or examples are needed.\n",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 3,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "{sys.query}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Body Agent**, a sub-agent in a multi-agent SEO blog writing system. You operate under the instruction of the `Lead Agent`, and your job is to write the full blog content based on the outline created by the `OutlineWriter_Agent`.\n\n\n\n# Tool Access:\n\nYou can use the `Tavily Search` tool to retrieve relevant content, statistics, or examples to support each section you're writing.\n\nUse it **only** when the provided outline lacks enough information, or if the section requires factual grounding.\n\nAlways cite the original link or indicate source where possible.\n\n\n# Goals\n\n1. Write each section (based on H2/H3 structure) as a complete and natural blog paragraph.\n\n2. Integrate the suggested long-tail keywords naturally into each section.\n\n3. When appropriate, use the `Tavily Search` tool to enrich your writing with relevant facts, examples, or quotes.\n\n4. Ensure each section is clear, engaging, and informative, suitable for both human readers and search engines.\n\n\n# Style Guidelines\n\n- Write in a tone appropriate to the audience. Be explanatory, not promotional, unless it's a marketing blog.\n\n- Avoid generic filler content. Prioritize clarity, structure, and value.\n\n- Ensure SEO keywords are embedded seamlessly, not forcefully.\n\n\n\n- Maintain writing rhythm. Vary sentence lengths. Use transitions between ideas.\n\n\n# Input\n\n\nYou will receive:\n\n- Blog title\n\n- Structured outline (including section titles, keywords, and descriptions)\n\n- Target audience\n\n- Blog type and user intent\n\nYou must **follow the outline strictly**. Write content **section-by-section**, based on the structure.\n\n\n# Output Format\n\n```markdown\n\n## H2: [Section Title]\n\n[Your generated content for this section \u2014 500-600 words, using keywords naturally.]\n\n",
"temperature": 0.2,
"temperatureEnabled": true,
"tools": [
{
"component_name": "TavilySearch",
"name": "TavilySearch",
"params": {
"api_key": "",
"days": 7,
"exclude_domains": [],
"include_answer": false,
"include_domains": [],
"include_image_descriptions": false,
"include_images": false,
"include_raw_content": true,
"max_results": 5,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"query": "sys.query",
"search_depth": "basic",
"topic": "general"
}
}
],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "This is the order you need to send to the agent.",
"visual_files_var": ""
},
"label": "Agent",
"name": "Body Agent"
},
"dragging": false,
"id": "Agent:IcyPawsRescue",
"measured": {
"height": 84,
"width": 200
},
"position": {
"x": 439.3374395738501,
"y": 366.1408588516909
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"delay_after_error": 1,
"description": "Polishes and finalizes the entire blog post. Enhances clarity, checks keyword usage, improves flow, and generates a meta title and description for SEO. Operates after all sections are completed.\n\n",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 2,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "{sys.query}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Editor Agent**, the final agent in a multi-agent SEO blog writing workflow. You are responsible for finalizing the blog post for both human readability and SEO effectiveness.\n\n# Goals\n\n1. Polish the entire blog content for clarity, coherence, and style.\n\n2. Improve transitions between sections, ensure logical flow.\n\n3. Verify that keywords are used appropriately and effectively.\n\n4. Conduct a lightweight SEO audit \u2014 checking keyword density, structure (H1/H2/H3), and overall searchability.\n\n\n\n## Integration Responsibilities\n\n- Maintain alignment with Lead Agent's original intent and audience\n\n- Preserve the structure and keyword strategy from Outline Agent\n\n- Enhance and polish Body Agent's content without altering core information\n\n# Style Guidelines\n\n- Be precise. Avoid bloated or vague language.\n\n- Maintain an informative and engaging tone, suitable to the target audience.\n\n- Do not remove keywords unless absolutely necessary for clarity.\n\n- Ensure paragraph flow and section continuity.\n\n\n\n# Input\n\nYou will receive:\n\n- Full blog content, written section-by-section\n\n- Original outline with suggested keywords\n\n- Target audience and writing type\n\n# Output Format\n\n```markdown\n\n[The revised, fully polished blog post content goes here.]\n",
"temperature": 0.2,
"temperatureEnabled": true,
"tools": [],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "This is the order you need to send to the agent.",
"visual_files_var": ""
},
"label": "Agent",
"name": "Editor Agent"
},
"dragging": false,
"id": "Agent:TenderAdsAllow",
"measured": {
"height": 84,
"width": 200
},
"position": {
"x": 730.8513124709204,
"y": 327.351197329827
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"description": "This is an agent for a specific task.",
"user_prompt": "This is the order you need to send to the agent."
},
"label": "Tool",
"name": "flow.tool_0"
},
"dragging": false,
"id": "Tool:ThreeWallsRing",
"measured": {
"height": 48,
"width": 200
},
"position": {
"x": -26.93431957115564,
"y": 531.4384641920368
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "toolNode"
},
{
"data": {
"form": {
"description": "This is an agent for a specific task.",
"user_prompt": "This is the order you need to send to the agent."
},
"label": "Tool",
"name": "flow.tool_1"
},
"dragging": false,
"id": "Tool:FloppyJokesItch",
"measured": {
"height": 48,
"width": 200
},
"position": {
"x": 414.6786783453011,
"y": 499.39483076093194
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "toolNode"
},
{
"data": {
"form": {
"text": "This is a multi-agent version of the SEO blog generation workflow. It simulates a small team of AI \u201cwriters\u201d, where each agent plays a specialized role \u2014 just like a real editorial team.\n\nInstead of one AI doing everything in order, this version uses a **Lead Agent** to assign tasks to different sub-agents, who then write and edit the blog in parallel. The Lead Agent manages everything and produces the final output.\n\n### Why use multi-agent format?\n\n- Better control over each stage of writing \n- Easier to reuse agents across tasks \n- More human-like workflow (planning \u2192 writing \u2192 editing \u2192 publishing) \n- Easier to scale and customize for advanced users\n\n### Flow Summary:\n\n1. `LeadWriter_Agent` takes your input and creates a plan\n2. It sends that plan to:\n - `OutlineWriter_Agent`: build blog structure\n - `BodyWriter_Agent`: write full content\n - `FinalEditor_Agent`: polish and finalize\n3. `LeadWriter_Agent` collects all results and outputs the final blog post\n"
},
"label": "Note",
"name": "Workflow Overall Description"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 208,
"id": "Note:ElevenVansInvent",
"measured": {
"height": 208,
"width": 518
},
"position": {
"x": -336.6586460874556,
"y": 113.43253511344867
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 518
},
{
"data": {
"form": {
"text": "**Purpose**: \nThis is the central agent that controls the entire writing process.\n\n**What it does**:\n- Reads your blog topic and intent\n- Generates a clear writing plan (topic, audience, goal, keywords)\n- Sends instructions to all sub-agents\n- Waits for their responses and checks quality\n- If any section is missing or weak, it can request a rewrite\n- Finally, it assembles all parts into a complete blog and sends it back to you\n"
},
"label": "Note",
"name": "Lead Agent"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 146,
"id": "Note:EmptyClubsGreet",
"measured": {
"height": 146,
"width": 334
},
"position": {
"x": 390.1408623279084,
"y": 2.6521144030202493
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 334
},
{
"data": {
"form": {
"text": "**Purpose**: \nThis agent is responsible for building the blog's structure. It creates an outline that shows what the article will cover and how it's organized.\n\n**What it does**:\n- Suggests a blog title that matches the topic and keywords \n- Breaks the article into sections using H2 and H3 headers \n- Adds a short description of what each section should include \n- Assigns SEO keywords to each section for better search visibility \n- Uses search data (via Tavily Search) to find how similar blogs are structured"
},
"label": "Note",
"name": "Outline Agent"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 157,
"id": "Note:CurlyTigersDouble",
"measured": {
"height": 157,
"width": 394
},
"position": {
"x": -60.03139680691618,
"y": 595.8208080534818
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 394
},
{
"data": {
"form": {
"text": "**Purpose**: \nThis agent is in charge of writing the full blog content, section by section, based on the outline it receives.\n\n**What it does**:\n- Takes each section heading from the outline (H2 / H3)\n- Writes a complete paragraph (150\u2013220 words) under each section\n- Naturally includes the keywords provided for that section\n- Uses the Tavily Search tool to add real-world examples, definitions, or facts if needed\n- Makes sure each section is clear, useful, and easy to read\n"
},
"label": "Note",
"name": "Body Agent"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 164,
"id": "Note:StrongKingsCamp",
"measured": {
"height": 164,
"width": 408
},
"position": {
"x": 446.54943226110845,
"y": 590.9443887062529
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 408
},
{
"data": {
"form": {
"text": "**Purpose**: \nThis agent reviews, polishes, and finalizes the blog post written by the BodyWriter_Agent. It ensures everything is clean, smooth, and SEO-compliant.\n\n**What it does**:\n- Improves grammar, sentence flow, and transitions \n- Makes sure the content reads naturally and professionally \n- Checks whether keywords are present and well integrated (but not overused) \n- Verifies that the structure follows the correct H1/H2/H3 format \n"
},
"label": "Note",
"name": "Editor Agent"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 147,
"id": "Note:OpenOttersShow",
"measured": {
"height": 147,
"width": 357
},
"position": {
"x": 976.6858726228803,
"y": 422.7404806291804
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 357
}
]
},
"history": [],
"messages": [],
"path": [],
"retrieval": []
},
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAwADADASIAAhEBAxEB/8QAGQAAAwEBAQAAAAAAAAAAAAAABgkKBwUI/8QAMBAAAAYCAQIEBQQCAwAAAAAAAQIDBAUGBxEhCAkAEjFBFFFhcaETFiKRFyOx8PH/xAAaAQACAwEBAAAAAAAAAAAAAAACAwABBgQF/8QALBEAAgIBAgUCBAcAAAAAAAAAAQIDBBEFEgATITFRIkEGIzJhFBUWgaGx8P/aAAwDAQACEQMRAD8AfF2hez9089t7pvxgQMa1Gb6qZ6oQE9m/NEvCIStyPfJSOF/M1epzMugo/qtMqbiRc1mJjoJKCLMNIxKcsLJedfO1Ct9cI63x9fx6CA/19t+oh4LFA5HfuAgP/A8eOIsnsTBrkBHXA7+v53+Q+ficTgJft9gIgA+/P9/1r342O/YA8A8k3/if+IbAN7+2/f8AAiI6H19PGoPyESTMZQPKUAHkQEN+3r9dh78/YPGUTk2wb/qAZZIugH1OHH5DjkdfbnWw2DsOxPj+xjrnx2H39unBopJGBn9s+PHv1HXjPJtH+J+B40O9a16h/wB/92j/ALrPa/wR104UyAobHlXhuo2HrEtK4qy3CwjKOuJLRHJLSkXWrFKs/gVrJVrE8TUiH8bPrP20UEu8m4hNpMJJuTOfnbUw/kUqyZgMHGjAO9+mtDsQ53sdcB6eMhnpEjhNQxRKICAgHy5+/roOdjr7c+J6O4x07dx484/n7nzw1gexBGfIPkZ/3t39uGpqc6+fP5/Ht8vGFZCzJjWpWuBxvO2yPjrtclUUK7BqmUI4fuASeyhG5FzFI0Bw4aQ0iZNoDgzvRW4qtyFkI4XmwyEk2YNnDp0sVBu3IUyy5iqH8gqKERSIRNIii67hddRJs1at01Xbx2sgzZoLu10UFJR+4V1A5cxF3FqNcLvjwcno43uuLrOxZYjujaClcb4QQfxEizpFiQyM9olcueRnjC2ZMt9iY06zL0qytrMSqSOVGsfHMaGhZ3l4lSRI2MqE74zJvRTveNFWWIh3RWw+XCAM5icKQLrCH57T17FhErSlRXnWvyZXKQwWJ3eraD14p5YuZCFgacskK2oGkVuKO5GYTHzf7DaD12cBD3DgPOIDrWw9PnrXPgDkpVsUDGMG+DD6E9gHXIjrYjwUPQTCXYgHPhIV974+F6E1hpC14Yzmzj56YaQEeZhXsayD1zLPW7pygxaMf81Nzu1iJsnIuDIKnaJAkPldqrHaoORZ73tMVEbFdSXT9nVgRQgnBq6j8e/HCIEATpAnH5KlmRVkFRFJwks/bqImSXJ5VFyA3N6Ikh3bCW3YHp5cowOmCfTgA+xJCnrjtwHKcLvJj2ZGcTRFj19kEhckdzgEjKnABGSSzdc1Fe5byXXGNjKdvRcw5NxvLidNZFFCxUa62KrzMaChw8hhYScFJtROAgmuLByq1MsgkZYPaVVuDe0wraRaqAdJwgRQo+YR8xTlAQNx6b49w41vXiJpCalLh1jZhyrTqRM4+jstdRmYryNkydLQRWg1LNGcWd5jIFFvCythlIySa0mNu74sKRQtaWsTmupqPItw0lE52ufpyYzrSkx6cw5bLmBEpkTsz+dt8P5QFuCRtAIkBH9MuwKHICIaDQhnojMs9mKaeGcrMxXlQtAYkdVljimRrE5MqI4zL8oSqQ6wxjodBqK05qdK3Vo3aCSVkBW7bjuC1NFJJBPaqyx6fp6pWkliYLXK2XrukkRu2CCVoSWMgsdMyySKwoLFcIGWSTUMg4IBgTcICoBhRcplMcpFkhIqQp1ClMBTmA0Zfe1zpjvHfXff65bZlzXpB3jjGTgiirmPjAfs16PHqHeQ75Wbj3xxZpOEkV3LRJJSPdomUBZISJLncV2k+8D07dxXp7xsYuTapA9UkJUYWIzNhadnWEZeCXGLQQiJi1ViHfhHL2unWh+mlORsrW0JFpEFnGVfm1mU4kq0FY3eD6corJncv6dr5NLSMNXVaTUksjTiMnaq8uFfSVuDyiJ1iZpy0LOJtpa3YfkcQ5fdozyxI2m5qqcrHN61YYmHsh6v3o9ParYmYJEtlhIx6+gUbjgD23M6oqg92YL0JyF6Bps+qDValVA9h9Lj5SZI3SHXdEQlj1wiQtLLIe6pGzjO3BlBkK1hxpblLVH5wdW0BcFKf/JwRtjsot2z8omaSdxbzzk1iEjsE0AM9rrRZNRIrVyo7dGO6E+oh8axLlJ5H5VaJKx7ePRGFbW6vUeFfHQIWPTI9Tm7HHfuhqY7E6C7JFqUzM6iZXIoncNxX7+bIVdJnTT48x3OQU1krIDW3UeixVhyISzYz6cadY5Xph6TseRNTRsTElzzBn9Vlly0TAERsdgnMYyLROjyFbg5R4ZlsGaMT4yNi2Zlq1GwjZB3jq0PsaJfA3t0jL0W0Y9xf1V41lpWckXMLaZiwxuKYPqc6LlHdkeRF+Qxswx5ASDqBVrsL+2A/N6SiCbYymV2BywJiMZj3GRRMTnL+lVyHCll3R7Szv0vqXMtQ74T+HijljIScLaEpkKCB3rqMBIi0jPs5JeOKTZMZEi5VVnouzy0k3jXjWSMlY6UcVGDxlKMVDqx91SILWSi3D2KdgYy3kP8E9X/AE1SnRXBNdNRMlefT6g7aY6giK+cPLGNg0bY68rcnpsNh9PqIBve/EcPQ3WIq2dR9
3xpSgk5SAZ9R6MLAOZFUkpLSUDXp6/KPpGUkmTdswlnKnwbl5ITMdGwcXJi7LKsqzUmT5tWYmkXuF9wjBvb76b7dHheazJ9RElUJOCxViuMlUJC0Gtz6PKyjLBY4qMWUe12r1xZ6lOyT6XPEBKN2CkTDOlZd02TBdTMt7Upx2knrkdCv1UKjDKn1A7XBYH6SCOOrWn5Oi/DtRiu+GleRthDL8rXdVjZlcfWrSIxVlGGGCOnH//Z"
}
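The templates in this diff wire their nodes together through the `upstream`/`downstream` lists under `dsl.components` (mirrored by `graph.edges`). As a rough illustration of how that structure can be read, the sketch below walks the component graph breadth-first from the `begin` node; it is a hypothetical reading of the DSL for orientation only, not RAGFlow's actual scheduler, and the file name is made up.

```python
# Hypothetical sketch: derive a coarse execution order from a canvas DSL
# by following each component's "downstream" links, starting at "begin".
# Illustration only; this is not RAGFlow's real execution engine.
import json
from collections import deque

def execution_order(dsl: dict) -> list:
    components = dsl["components"]
    order, seen = [], set()
    queue = deque(["begin"])
    while queue:
        name = queue.popleft()
        if name in seen or name not in components:
            continue
        seen.add(name)
        order.append(name)
        queue.extend(components[name].get("downstream", []))
    return order

with open("seo_blog_team.json") as f:   # hypothetical file name
    template = json.load(f)
# For the multi-agent SEO template above this prints:
# ['begin', 'Agent:LuckyApplesGrab', 'Message:ModernSwansThrow']
print(execution_order(template["dsl"]))
```

Note that the Outline, Body, and Editor sub-agents would not appear in this order: they are nested inside the lead agent's `params`, so only the top-level `begin` → Lead Agent → Message chain is visible at the component level.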



@@ -0,0 +1,327 @@
{
"id": 20,
"title": "Report Agent Using Knowledge Base",
"description": "A report generation assistant using local knowledge base, with advanced capabilities in task planning, reasoning, and reflective analysis. Recommended for academic research paper Q&A",
"canvas_type": "Agent",
"dsl": {
"components": {
"Agent:NewPumasLick": {
"downstream": [
"Message:OrangeYearsShine"
],
"obj": {
"component_name": "Agent",
"params": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "qwen3-235b-a22b-instruct-2507@Tongyi-Qianwen",
"maxTokensEnabled": true,
"max_retries": 3,
"max_rounds": 3,
"max_tokens": 128000,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "# User Query\n {sys.query}",
"role": "user"
}
],
"sys_prompt": "## Role & Task\nYou are a **\u201cKnowledge Base Retrieval Q\\&A Agent\u201d** whose goal is to break down the user\u2019s question into retrievable subtasks, and then produce a multi-source-verified, structured, and actionable research report using the internal knowledge base.\n## Execution Framework (Detailed Steps & Key Points)\n1. **Assessment & Decomposition**\n * Actions:\n * Automatically extract: main topic, subtopics, entities (people/organizations/products/technologies), time window, geographic/business scope.\n * Output as a list: N facts/data points that must be collected (*N* ranges from 5\u201320 depending on question complexity).\n2. **Query Type Determination (Rule-Based)**\n * Example rules:\n * If the question involves a single issue but requests \u201cmethod comparison/multiple explanations\u201d \u2192 use **depth-first**.\n * If the question can naturally be split into \u22653 independent sub-questions \u2192 use **breadth-first**.\n * If the question can be answered by a single fact/specification/definition \u2192 use **simple query**.\n3. **Research Plan Formulation**\n * Depth-first: define 3\u20135 perspectives (methodology/stakeholders/time dimension/technical route, etc.), assign search keywords, target document types, and output format for each perspective.\n * Breadth-first: list subtasks, prioritize them, and assign search terms.\n * Simple query: directly provide the search sentence and required fields.\n4. **Retrieval Execution**\n * After retrieval: perform coverage check (does it contain the key facts?) and quality check (source diversity, authority, latest update time).\n * If standards are not met, automatically loop: rewrite queries (synonyms/cross-domain terms) and retry \u22643 times, or flag as requiring external search.\n5. **Integration & Reasoning**\n * Build the answer using a **fact\u2013evidence\u2013reasoning** chain. For each conclusion, attach 1\u20132 strongest pieces of evidence.\n---\n## Quality Gate Checklist (Verify at Each Stage)\n* **Stage 1 (Decomposition)**:\n * [ ] Key concepts and expected outputs identified\n * [ ] Required facts/data points listed\n* **Stage 2 (Retrieval)**:\n * [ ] Meets quality standards (see above)\n * [ ] If not met: execute query iteration\n* **Stage 3 (Generation)**:\n * [ ] Each conclusion has at least one direct evidence source\n * [ ] State assumptions/uncertainties\n * [ ] Provide next-step suggestions or experiment/retrieval plans\n * [ ] Final length and depth match user expectations (comply with word count/format if specified)\n---\n## Core Principles\n1. **Strict reliance on the knowledge base**: answers must be **fully bounded** by the content retrieved from the knowledge base.\n2. **No fabrication**: do not generate, infer, or create information that is not explicitly present in the knowledge base.\n3. **Accuracy first**: prefer incompleteness over inaccurate content.\n4. **Output format**:\n * Hierarchically clear modular structure\n * Logical grouping according to the MECE principle\n * Professionally presented formatting\n * Step-by-step cognitive guidance\n * Reasonable use of headings and dividers for clarity\n * *Italicize* key parameters\n * **Bold** critical information\n5. 
**LaTeX formula requirements**:\n * Inline formulas: start and end with `$`\n * Block formulas: start and end with `$$`, each `$$` on its own line\n * Block formula content must comply with LaTeX math syntax\n * Verify formula correctness\n---\n## Additional Notes (Interaction & Failure Strategy)\n* If the knowledge base does not cover critical facts: explicitly inform the user (with sample wording)\n* For time-sensitive issues: enforce time filtering in the search request, and indicate the latest retrieval date in the answer.\n* Language requirement: answer in the user\u2019s preferred language\n",
"temperature": "0.1",
"temperatureEnabled": true,
"tools": [
{
"component_name": "Retrieval",
"name": "Retrieval",
"params": {
"cross_languages": [],
"description": "",
"empty_response": "",
"kb_ids": [],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
}
}
],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "",
"visual_files_var": ""
}
},
"upstream": [
"begin"
]
},
"Message:OrangeYearsShine": {
"downstream": [],
"obj": {
"component_name": "Message",
"params": {
"content": [
"{Agent:NewPumasLick@content}"
]
}
},
"upstream": [
"Agent:NewPumasLick"
]
},
"begin": {
"downstream": [
"Agent:NewPumasLick"
],
"obj": {
"component_name": "Begin",
"params": {
"enablePrologue": true,
"inputs": {},
"mode": "conversational",
"prologue": "\u4f60\u597d\uff01 \u6211\u662f\u4f60\u7684\u52a9\u7406\uff0c\u6709\u4ec0\u4e48\u53ef\u4ee5\u5e2e\u5230\u4f60\u7684\u5417\uff1f"
}
},
"upstream": []
}
},
"globals": {
"sys.conversation_turns": 0,
"sys.files": [],
"sys.query": "",
"sys.user_id": ""
},
"graph": {
"edges": [
{
"data": {
"isHovered": false
},
"id": "xy-edge__beginstart-Agent:NewPumasLickend",
"source": "begin",
"sourceHandle": "start",
"target": "Agent:NewPumasLick",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:NewPumasLickstart-Message:OrangeYearsShineend",
"markerEnd": "logo",
"source": "Agent:NewPumasLick",
"sourceHandle": "start",
"style": {
"stroke": "rgba(91, 93, 106, 1)",
"strokeWidth": 1
},
"target": "Message:OrangeYearsShine",
"targetHandle": "end",
"type": "buttonEdge",
"zIndex": 1001
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:NewPumasLicktool-Tool:AllBirdsNailend",
"selected": false,
"source": "Agent:NewPumasLick",
"sourceHandle": "tool",
"target": "Tool:AllBirdsNail",
"targetHandle": "end"
}
],
"nodes": [
{
"data": {
"form": {
"enablePrologue": true,
"inputs": {},
"mode": "conversational",
"prologue": "\u4f60\u597d\uff01 \u6211\u662f\u4f60\u7684\u52a9\u7406\uff0c\u6709\u4ec0\u4e48\u53ef\u4ee5\u5e2e\u5230\u4f60\u7684\u5417\uff1f"
},
"label": "Begin",
"name": "begin"
},
"dragging": false,
"id": "begin",
"measured": {
"height": 48,
"width": 200
},
"position": {
"x": -9.569875358221438,
"y": 205.84018385864917
},
"selected": false,
"sourcePosition": "left",
"targetPosition": "right",
"type": "beginNode"
},
{
"data": {
"form": {
"content": [
"{Agent:NewPumasLick@content}"
]
},
"label": "Message",
"name": "Response"
},
"dragging": false,
"id": "Message:OrangeYearsShine",
"measured": {
"height": 56,
"width": 200
},
"position": {
"x": 734.4061285881053,
"y": 199.9706031723009
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "messageNode"
},
{
"data": {
"form": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "qwen3-235b-a22b-instruct-2507@Tongyi-Qianwen",
"maxTokensEnabled": true,
"max_retries": 3,
"max_rounds": 3,
"max_tokens": 128000,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "# User Query\n {sys.query}",
"role": "user"
}
],
"sys_prompt": "## Role & Task\nYou are a **\u201cKnowledge Base Retrieval Q\\&A Agent\u201d** whose goal is to break down the user\u2019s question into retrievable subtasks, and then produce a multi-source-verified, structured, and actionable research report using the internal knowledge base.\n## Execution Framework (Detailed Steps & Key Points)\n1. **Assessment & Decomposition**\n * Actions:\n * Automatically extract: main topic, subtopics, entities (people/organizations/products/technologies), time window, geographic/business scope.\n * Output as a list: N facts/data points that must be collected (*N* ranges from 5\u201320 depending on question complexity).\n2. **Query Type Determination (Rule-Based)**\n * Example rules:\n * If the question involves a single issue but requests \u201cmethod comparison/multiple explanations\u201d \u2192 use **depth-first**.\n * If the question can naturally be split into \u22653 independent sub-questions \u2192 use **breadth-first**.\n * If the question can be answered by a single fact/specification/definition \u2192 use **simple query**.\n3. **Research Plan Formulation**\n * Depth-first: define 3\u20135 perspectives (methodology/stakeholders/time dimension/technical route, etc.), assign search keywords, target document types, and output format for each perspective.\n * Breadth-first: list subtasks, prioritize them, and assign search terms.\n * Simple query: directly provide the search sentence and required fields.\n4. **Retrieval Execution**\n * After retrieval: perform coverage check (does it contain the key facts?) and quality check (source diversity, authority, latest update time).\n * If standards are not met, automatically loop: rewrite queries (synonyms/cross-domain terms) and retry \u22643 times, or flag as requiring external search.\n5. **Integration & Reasoning**\n * Build the answer using a **fact\u2013evidence\u2013reasoning** chain. For each conclusion, attach 1\u20132 strongest pieces of evidence.\n---\n## Quality Gate Checklist (Verify at Each Stage)\n* **Stage 1 (Decomposition)**:\n * [ ] Key concepts and expected outputs identified\n * [ ] Required facts/data points listed\n* **Stage 2 (Retrieval)**:\n * [ ] Meets quality standards (see above)\n * [ ] If not met: execute query iteration\n* **Stage 3 (Generation)**:\n * [ ] Each conclusion has at least one direct evidence source\n * [ ] State assumptions/uncertainties\n * [ ] Provide next-step suggestions or experiment/retrieval plans\n * [ ] Final length and depth match user expectations (comply with word count/format if specified)\n---\n## Core Principles\n1. **Strict reliance on the knowledge base**: answers must be **fully bounded** by the content retrieved from the knowledge base.\n2. **No fabrication**: do not generate, infer, or create information that is not explicitly present in the knowledge base.\n3. **Accuracy first**: prefer incompleteness over inaccurate content.\n4. **Output format**:\n * Hierarchically clear modular structure\n * Logical grouping according to the MECE principle\n * Professionally presented formatting\n * Step-by-step cognitive guidance\n * Reasonable use of headings and dividers for clarity\n * *Italicize* key parameters\n * **Bold** critical information\n5. 
**LaTeX formula requirements**:\n * Inline formulas: start and end with `$`\n * Block formulas: start and end with `$$`, each `$$` on its own line\n * Block formula content must comply with LaTeX math syntax\n * Verify formula correctness\n---\n## Additional Notes (Interaction & Failure Strategy)\n* If the knowledge base does not cover critical facts: explicitly inform the user (with sample wording)\n* For time-sensitive issues: enforce time filtering in the search request, and indicate the latest retrieval date in the answer.\n* Language requirement: answer in the user\u2019s preferred language\n",
"temperature": "0.1",
"temperatureEnabled": true,
"tools": [
{
"component_name": "Retrieval",
"name": "Retrieval",
"params": {
"cross_languages": [],
"description": "",
"empty_response": "",
"kb_ids": [],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
}
}
],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "",
"visual_files_var": ""
},
"label": "Agent",
"name": "Knowledge Base Agent"
},
"dragging": false,
"id": "Agent:NewPumasLick",
"measured": {
"height": 84,
"width": 200
},
"position": {
"x": 347.00048227952215,
"y": 186.49109364794631
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"description": "This is an agent for a specific task.",
"user_prompt": "This is the order you need to send to the agent."
},
"label": "Tool",
"name": "flow.tool_10"
},
"dragging": false,
"id": "Tool:AllBirdsNail",
"measured": {
"height": 48,
"width": 200
},
"position": {
"x": 220.24819746977118,
"y": 403.31576836482583
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "toolNode"
}
]
},
"history": [],
"memory": [],
"messages": [],
"path": [],
"retrieval": []
},
"avatar": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADAAAAAwCAYAAABXAvmHAAAH0klEQVR4nO2ZC1BU1wGG/3uRp/IygG+DGK0GOjE1U6cxI4tT03Y0E+kENbaJbKpj60wzgNMwnTjuEtu0miGasY+0krI202kMVEnVxtoOLG00oVa0LajVBDcSEI0REFBgkZv/3GWXfdzdvctuHs7kmzmec9//d+45914XCXc4Xwjk1+59VJGGF7C5QAFSWBvgyWmWLl7IKiny6QNL173B5YjB84bOyrpKA4B1DLySdQpLKAiZGtZ7a/KMVoQJz6UfEZyhTWwaEBmssiLvCueu6BJg8EwFqGTTAC+uvNWC9w82sRWcux/JwaSHstjywcogRt4RG0KExwWG4QsVYCebKSwe3L5lR9OOWjyzfg2WL/0a1/jncO3b2FHxGnKeWYqo+Giu8UEMrWJKWBACPMY/DG+63txhvnKshUu+DF2/hayMDFRsL+VScDb++AVc6OjAuInxXPJl2tfnIikrzUyJMi7qQmLRhOEr2fOFbX/7P6STF7BqoWevfdij4NWGQfx+57OYO2sG1wSnsek8Nm15EU8sikF6ouelXz9ph7JwDqYt+5IIZaGEkauDIrH4wPBmhjexCSEws+VdVG1M4NIoj+2xYzBuJtavWcEl/VS8dggx/ZdQvcGzQwp+cxOXsu5RBQQMVkYJM4LA/Txh+ELFMWFVPARS5kFiabZdx8Olh7l17BzdvhzZmROhdJ3j6D/nIyBgOCMlLAgA9xmF4TMV4BSbrgnrLiBl5rOsRCRRbDUsBzQFiJjY91PCBj9w+yiP1lXWsTLAjc9YQGB9I8+Yx1oTiUWFvW9QgDo2PdASaDp/EQ8/sRnhcPTVcuTMncXwQQVESL9DidscaPW+QEtAICRu9PSxFTpJiePV8AI9AsTvXZBY/Pa+wJ9ApNApIILm8S5Y4QXXQwhYFH6csemDP4G3G5v579i5d04mknknQhDYS4HCrCVr/mC3D305KnbCEpvVIia5Onw6WaWw+KAl0Np+FUXbdiMcyoqfUoeRHoFrJ1uRtnBG1/9Mf/3LtElp+VwF2wcd7woJib1vUPwMH4GWQCQJJtBa/V9cPmFD8uQUpMdNGDhY8bNYrobh8acHu270/l0ImJWRt64Wn6WACN9z5gq2lXwPW8pfweT0icP/fH23vO9QLYq3/QKyLBmFQI3CUcT9NdESEEPItKsSN3r7MBaSJoxHWZERM6ZmMLy2gDP8/pd/og418dTL37hFSUpMUC5f+UiWZcnY9s5+ixCwUiCXx2iiJdDNx6f4pgkH8Q3lbxK7h8+enoHha1cRNdMp8axiHxo6+/5bVdk8DSROYIW1X7QEIom3wHD3gEf4vu1bVYEJZeWQ0zJQvmcfyiv2QZak6raG/QWfK4Ez9mTc5v8xPMJfuojoxXmIX/9DOMe+FCWbcHu4BJJ0YEwCx0824bFNW9HesB+CqYu+jepfPYcHF+aoPXS8sQl/+vU2bgmOU2C+qRc9/YrrPPbGBtzavd0nvCxLxui4pJrBm911PFwak4CYA80cj+JCAiGUzYkmxrSY4N2c3GLi6UEIFL/wRxxqkhmHnTEpDQcrfq6ea+hcE8bNy3GFzyq4H22HW1Kd4WMSkg1jmsSRpKj0Rzhy4gNUv/y8Gjrv8SJK3OWScA+fMn/ysVPPvTmeh6nh1TcxBUJ+jEaKYr7N36x7h+Edj0pB6+WrLokn87+BrTt/p4ZPzZ6MM7/8R2//h33vOcNzdwgBMwVMbGvySQmo4a0NqOZccU7YmGXLEfPQUlUid/XT6B8YdIU/99vjsPcOdEhDsfOd4QVCwKB8yp8SWuG1njbTl83DpMWz1PCKAswuWPDI0e8WebyAJBbxNdrF7cls+hBpAb3h3XtehL/3+4u7D35rQwpP4YFTwMJ91rHpQyQFQgmf9sAMNL9Ur4afv/FBjIuPVj+n4YVTwMD96tj0IVICoYYXv/q1VJ1Sl8UveQyaRwErvOB6B5SwKhqP00gI6A0vhsycJ7/KIzxhyHqGN0ADbnNAAYOicRfCFdAb/p50Gbfuc/wy5w1D5lOghk0fuG0USlgVr7sQjoDe8C8WxKGKPy2KjzlvAQb02/sCbh+FApngX1QUtyeSuwDi0hxFByV7L+LIf3r5kvpp4PBr07Hqvn71Y85bgOG6WS2ggA1+4D6eUKKQApVsqngI6KSkqh9HzsoM/3zg8Oz5VQ9E8wjf30YFDGdkeAsCwH18oYRZGXk7C4HuYxcwe6rjQsFovzaEvoFxqNkTOPzMjGikJso8wsF77XYkLx6dAwxWxvBmBIH7aUMJi8J3w0DnTVz7dyvX6KPzVBt+kL8cmzesRq9ps2Z48bRJmOIapS7E4zM2lXNt5CcU6ID7+ocSZkqY2NRN6ysnsHbJEpR8ZwV6t5Yg+iuLELf2KVd48VwXQf3BQGUMb4ZOuH9gKFEIYJfiNrEDcXZHHV4q3YRv5i7ikgM94RlETNgihrcgBHhccCiRCf7VhBK5rAPyr9I/Y/WKPEyfksH/9NjQ2dODhsYzwcLXsypkeBtCRGLRDUUMAMyKHxEx4dtrzyP97nQMygripiQiKi4aSbPvQmKW7+OXF69ntYvBa1iPCYklZEZECsGm4ja0Ops7EJsaj4SprlU+8IJiqIjAFga3Ikx4vvAYkTGALxyWFArlsnbBC9Sz6mI5zWKNRGh3JJY7mjte4GOz+r4tkRbxQQAAAABJRU5ErkJggg=="
}
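The Retrieval tool attached to this knowledge-base agent exposes `similarity_threshold`, `top_n`, `top_k`, and `keywords_similarity_weight`. As a rough guide to what those knobs mean, the sketch below blends keyword and vector similarity, drops low-scoring chunks, and keeps the best `top_n`; the scoring formula is an assumption for illustration and not RAGFlow's actual retrieval code.

```python
# Minimal sketch of how retrieval parameters like those above are commonly
# interpreted: blend keyword and vector similarity with a weight, discard
# chunks below a threshold, and return the top_n best matches.
# The blending formula is an assumption for illustration only.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    keyword_sim: float  # keyword-match similarity in [0, 1]
    vector_sim: float   # embedding similarity in [0, 1]

def rank_chunks(chunks, keywords_similarity_weight=0.7,
                similarity_threshold=0.2, top_n=8):
    w = keywords_similarity_weight
    scored = [(w * c.keyword_sim + (1 - w) * c.vector_sim, c) for c in chunks]
    kept = [(score, c) for score, c in scored if score >= similarity_threshold]
    kept.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in kept[:top_n]]
```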


@@ -0,0 +1,915 @@
{
"id": 12,
"title": "Generate SEO Blog",
"description": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You dont need any writing experience. Just provide a topic or short request — the system will handle the rest.",
"canvas_type": "Marketing",
"dsl": {
"components": {
"Agent:BetterSitesSend": {
"downstream": [
"Agent:EagerNailsRemain"
],
"obj": {
"component_name": "Agent",
"params": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.3,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 3,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Balance",
"presencePenaltyEnabled": false,
"presence_penalty": 0.2,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Outline_Agent**, responsible for generating a clear and SEO-optimized blog outline based on the user's parsed writing intent and keyword strategy.\n\n# Tool Access:\n\n- You have access to a search tool called `Tavily Search`.\n\n- If you are unsure how to structure a section, you may call this tool to search for related blog outlines or content from Google.\n\n- Do not overuse it. Your job is to extract **structure**, not to write paragraphs.\n\n\n# Goals\n\n1. Create a well-structured outline with appropriate H2 and H3 headings.\n\n2. Ensure logical flow from introduction to conclusion.\n\n3. Assign 1\u20132 suggested long-tail keywords to each major section for SEO alignment.\n\n4. Make the structure suitable for downstream paragraph writing.\n\n\n\n\n#Note\n\n- Use concise, scannable section titles.\n\n- Do not write full paragraphs.\n\n- Prioritize clarity, logical progression, and SEO alignment.\n\n\n\n- If the blog type is \u201cTutorial\u201d or \u201cHow-to\u201d, include step-based sections.\n\n\n# Input\n\nYou will receive:\n\n- Writing Type (e.g., Tutorial, Informative Guide)\n\n- Target Audience\n\n- User Intent Summary\n\n- 3\u20135 long-tail keywords\n\n\nUse this information to design a structure that both informs readers and maximizes search engine visibility.\n\n# Output Format\n\n```markdown\n\n## Blog Title (suggested)\n\n[Give a short, SEO-friendly title suggestion]\n\n## Outline\n\n### Introduction\n\n- Purpose of the article\n\n- Brief context\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 1]\n\n- [Short description of what this section will cover]\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 2]\n\n- [Short description of what this section will cover]\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 3]\n\n- [Optional H3 Subsection Title A]\n\n - [Explanation of sub-point]\n\n- [Optional H3 Subsection Title B]\n\n - [Explanation of sub-point]\n\n- **Suggested keywords**: [keyword1]\n\n### Conclusion\n\n- Recap key takeaways\n\n- Optional CTA (Call to Action)\n\n- **Suggested keywords**: [keyword3]\n\n",
"temperature": 0.5,
"temperatureEnabled": true,
"tools": [
{
"component_name": "TavilySearch",
"name": "TavilySearch",
"params": {
"api_key": "",
"days": 7,
"exclude_domains": [],
"include_answer": false,
"include_domains": [],
"include_image_descriptions": false,
"include_images": false,
"include_raw_content": true,
"max_results": 5,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"query": "sys.query",
"search_depth": "basic",
"topic": "general"
}
}
],
"topPEnabled": false,
"top_p": 0.85,
"user_prompt": "",
"visual_files_var": ""
}
},
"upstream": [
"Agent:ClearRabbitsScream"
]
},
"Agent:ClearRabbitsScream": {
"downstream": [
"Agent:BetterSitesSend"
],
"obj": {
"component_name": "Agent",
"params": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 1,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "The user query is {sys.query}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Parse_And_Keyword_Agent**, responsible for interpreting a user's blog writing request and generating a structured writing intent summary and keyword strategy for SEO-optimized content generation.\n\n# Goals\n\n1. Extract and infer the user's true writing intent, even if the input is informal or vague.\n\n2. Identify the writing type, target audience, and implied goal.\n\n3. Suggest 3\u20135 long-tail keywords based on the input and context.\n\n4. Output all data in a Markdown format for downstream agents.\n\n# Operating Guidelines\n\n\n- If the user's input lacks clarity, make reasonable and **conservative** assumptions based on SEO best practices.\n\n- Always choose one clear \"Writing Type\" from the list below.\n\n- Your job is not to write the blog \u2014 only to structure the brief.\n\n# Output Format\n\n```markdown\n## Writing Type\n\n[Choose one: Tutorial / Informative Guide / Marketing Content / Case Study / Opinion Piece / How-to / Comparison Article]\n\n## Target Audience\n\n[Try to be specific based on clues in the input: e.g., marketing managers, junior developers, SEO beginners]\n\n## User Intent Summary\n\n[A 1\u20132 sentence summary of what the user wants to achieve with the blog post]\n\n## Suggested Long-tail Keywords\n\n- keyword 1\n\n- keyword 2\n\n- keyword 3\n\n- keyword 4 (optional)\n\n- keyword 5 (optional)\n\n\n\n\n## Input Examples (and how to handle them)\n\nInput: \"I want to write about RAGFlow.\"\n\u2192 Output: Informative Guide, Audience: AI developers, Intent: explain what RAGFlow is and its use cases\n\nInput: \"Need a blog to promote our prompt design tool.\"\n\u2192 Output: Marketing Content, Audience: product managers or tool adopters, Intent: raise awareness and interest in the product\n\n\n\nInput: \"How to get more Google traffic using AI\"\n\u2192 Output: How-to, Audience: SEO marketers, Intent: guide readers on applying AI for SEO growth",
"temperature": 0.2,
"temperatureEnabled": true,
"tools": [],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "",
"visual_files_var": ""
}
},
"upstream": [
"begin"
]
},
"Agent:EagerNailsRemain": {
"downstream": [
"Agent:LovelyHeadsOwn"
],
"obj": {
"component_name": "Agent",
"params": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 5,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Outline agent output is {Agent:BetterSitesSend@content}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Body_Agent**, responsible for generating the full content of each section of an SEO-optimized blog based on the provided outline and keyword strategy.\n\n# Tool Access:\n\nYou can use the `Tavily Search` tool to retrieve relevant content, statistics, or examples to support each section you're writing.\n\nUse it **only** when the provided outline lacks enough information, or if the section requires factual grounding.\n\nAlways cite the original link or indicate source where possible.\n\n\n# Goals\n\n1. Write each section (based on H2/H3 structure) as a complete and natural blog paragraph.\n\n2. Integrate the suggested long-tail keywords naturally into each section.\n\n3. When appropriate, use the `Tavily Search` tool to enrich your writing with relevant facts, examples, or quotes.\n\n4. Ensure each section is clear, engaging, and informative, suitable for both human readers and search engines.\n\n\n# Style Guidelines\n\n- Write in a tone appropriate to the audience. Be explanatory, not promotional, unless it's a marketing blog.\n\n- Avoid generic filler content. Prioritize clarity, structure, and value.\n\n- Ensure SEO keywords are embedded seamlessly, not forcefully.\n\n\n\n- Maintain writing rhythm. Vary sentence lengths. Use transitions between ideas.\n\n\n# Input\n\n\nYou will receive:\n\n- Blog title\n\n- Structured outline (including section titles, keywords, and descriptions)\n\n- Target audience\n\n- Blog type and user intent\n\nYou must **follow the outline strictly**. Write content **section-by-section**, based on the structure.\n\n\n# Output Format\n\n```markdown\n\n## H2: [Section Title]\n\n[Your generated content for this section \u2014 500-600 words, using keywords naturally.]\n\n",
"temperature": 0.2,
"temperatureEnabled": true,
"tools": [
{
"component_name": "TavilySearch",
"name": "TavilySearch",
"params": {
"api_key": "",
"days": 7,
"exclude_domains": [],
"include_answer": false,
"include_domains": [],
"include_image_descriptions": false,
"include_images": false,
"include_raw_content": true,
"max_results": 5,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"query": "sys.query",
"search_depth": "basic",
"topic": "general"
}
}
],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "",
"visual_files_var": ""
}
},
"upstream": [
"Agent:BetterSitesSend"
]
},
"Agent:LovelyHeadsOwn": {
"downstream": [
"Message:LegalBeansBet"
],
"obj": {
"component_name": "Agent",
"params": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 5,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Outline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Editor_Agent**, responsible for finalizing the blog post for both human readability and SEO effectiveness.\n\n# Goals\n\n1. Polish the entire blog content for clarity, coherence, and style.\n\n2. Improve transitions between sections, ensure logical flow.\n\n3. Verify that keywords are used appropriately and effectively.\n\n4. Conduct a lightweight SEO audit \u2014 checking keyword density, structure (H1/H2/H3), and overall searchability.\n\n\n\n# Style Guidelines\n\n- Be precise. Avoid bloated or vague language.\n\n- Maintain an informative and engaging tone, suitable to the target audience.\n\n- Do not remove keywords unless absolutely necessary for clarity.\n\n- Ensure paragraph flow and section continuity.\n\n\n# Input\n\nYou will receive:\n\n- Full blog content, written section-by-section\n\n- Original outline with suggested keywords\n\n- Target audience and writing type\n\n# Output Format\n\n```markdown\n\n[The revised, fully polished blog post content goes here.]\n\n",
"temperature": 0.2,
"temperatureEnabled": true,
"tools": [],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "",
"visual_files_var": ""
}
},
"upstream": [
"Agent:EagerNailsRemain"
]
},
"Message:LegalBeansBet": {
"downstream": [],
"obj": {
"component_name": "Message",
"params": {
"content": [
"{Agent:LovelyHeadsOwn@content}"
]
}
},
"upstream": [
"Agent:LovelyHeadsOwn"
]
},
"begin": {
"downstream": [
"Agent:ClearRabbitsScream"
],
"obj": {
"component_name": "Begin",
"params": {
"enablePrologue": true,
"inputs": {},
"mode": "conversational",
"prologue": "Hi! I'm your SEO blog assistant.\n\nTo get started, please tell me:\n1. What topic you want the blog to cover\n2. Who is the target audience\n3. What you hope to achieve with this blog (e.g., SEO traffic, teaching beginners, promoting a product)\n"
}
},
"upstream": []
}
},
"globals": {
"sys.conversation_turns": 0,
"sys.files": [],
"sys.query": "",
"sys.user_id": ""
},
"graph": {
"edges": [
{
"data": {
"isHovered": false
},
"id": "xy-edge__beginstart-Agent:ClearRabbitsScreamend",
"source": "begin",
"sourceHandle": "start",
"target": "Agent:ClearRabbitsScream",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:ClearRabbitsScreamstart-Agent:BetterSitesSendend",
"source": "Agent:ClearRabbitsScream",
"sourceHandle": "start",
"target": "Agent:BetterSitesSend",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:BetterSitesSendtool-Tool:SharpPensBurnend",
"source": "Agent:BetterSitesSend",
"sourceHandle": "tool",
"target": "Tool:SharpPensBurn",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:BetterSitesSendstart-Agent:EagerNailsRemainend",
"source": "Agent:BetterSitesSend",
"sourceHandle": "start",
"target": "Agent:EagerNailsRemain",
"targetHandle": "end"
},
{
"id": "xy-edge__Agent:EagerNailsRemaintool-Tool:WickedDeerHealend",
"source": "Agent:EagerNailsRemain",
"sourceHandle": "tool",
"target": "Tool:WickedDeerHeal",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:EagerNailsRemainstart-Agent:LovelyHeadsOwnend",
"source": "Agent:EagerNailsRemain",
"sourceHandle": "start",
"target": "Agent:LovelyHeadsOwn",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:LovelyHeadsOwnstart-Message:LegalBeansBetend",
"source": "Agent:LovelyHeadsOwn",
"sourceHandle": "start",
"target": "Message:LegalBeansBet",
"targetHandle": "end"
}
],
"nodes": [
{
"data": {
"form": {
"enablePrologue": true,
"inputs": {},
"mode": "conversational",
"prologue": "Hi! I'm your SEO blog assistant.\n\nTo get started, please tell me:\n1. What topic you want the blog to cover\n2. Who is the target audience\n3. What you hope to achieve with this blog (e.g., SEO traffic, teaching beginners, promoting a product)\n"
},
"label": "Begin",
"name": "begin"
},
"id": "begin",
"measured": {
"height": 48,
"width": 200
},
"position": {
"x": 50,
"y": 200
},
"selected": false,
"sourcePosition": "left",
"targetPosition": "right",
"type": "beginNode"
},
{
"data": {
"form": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 1,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "The user query is {sys.query}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Parse_And_Keyword_Agent**, responsible for interpreting a user's blog writing request and generating a structured writing intent summary and keyword strategy for SEO-optimized content generation.\n\n# Goals\n\n1. Extract and infer the user's true writing intent, even if the input is informal or vague.\n\n2. Identify the writing type, target audience, and implied goal.\n\n3. Suggest 3\u20135 long-tail keywords based on the input and context.\n\n4. Output all data in a Markdown format for downstream agents.\n\n# Operating Guidelines\n\n\n- If the user's input lacks clarity, make reasonable and **conservative** assumptions based on SEO best practices.\n\n- Always choose one clear \"Writing Type\" from the list below.\n\n- Your job is not to write the blog \u2014 only to structure the brief.\n\n# Output Format\n\n```markdown\n## Writing Type\n\n[Choose one: Tutorial / Informative Guide / Marketing Content / Case Study / Opinion Piece / How-to / Comparison Article]\n\n## Target Audience\n\n[Try to be specific based on clues in the input: e.g., marketing managers, junior developers, SEO beginners]\n\n## User Intent Summary\n\n[A 1\u20132 sentence summary of what the user wants to achieve with the blog post]\n\n## Suggested Long-tail Keywords\n\n- keyword 1\n\n- keyword 2\n\n- keyword 3\n\n- keyword 4 (optional)\n\n- keyword 5 (optional)\n\n\n\n\n## Input Examples (and how to handle them)\n\nInput: \"I want to write about RAGFlow.\"\n\u2192 Output: Informative Guide, Audience: AI developers, Intent: explain what RAGFlow is and its use cases\n\nInput: \"Need a blog to promote our prompt design tool.\"\n\u2192 Output: Marketing Content, Audience: product managers or tool adopters, Intent: raise awareness and interest in the product\n\n\n\nInput: \"How to get more Google traffic using AI\"\n\u2192 Output: How-to, Audience: SEO marketers, Intent: guide readers on applying AI for SEO growth",
"temperature": 0.2,
"temperatureEnabled": true,
"tools": [],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "",
"visual_files_var": ""
},
"label": "Agent",
"name": "Parse And Keyword Agent"
},
"dragging": false,
"id": "Agent:ClearRabbitsScream",
"measured": {
"height": 84,
"width": 200
},
"position": {
"x": 344.7766966202233,
"y": 234.82202253184496
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.3,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 3,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Balance",
"presencePenaltyEnabled": false,
"presence_penalty": 0.2,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Outline_Agent**, responsible for generating a clear and SEO-optimized blog outline based on the user's parsed writing intent and keyword strategy.\n\n# Tool Access:\n\n- You have access to a search tool called `Tavily Search`.\n\n- If you are unsure how to structure a section, you may call this tool to search for related blog outlines or content from Google.\n\n- Do not overuse it. Your job is to extract **structure**, not to write paragraphs.\n\n\n# Goals\n\n1. Create a well-structured outline with appropriate H2 and H3 headings.\n\n2. Ensure logical flow from introduction to conclusion.\n\n3. Assign 1\u20132 suggested long-tail keywords to each major section for SEO alignment.\n\n4. Make the structure suitable for downstream paragraph writing.\n\n\n\n\n#Note\n\n- Use concise, scannable section titles.\n\n- Do not write full paragraphs.\n\n- Prioritize clarity, logical progression, and SEO alignment.\n\n\n\n- If the blog type is \u201cTutorial\u201d or \u201cHow-to\u201d, include step-based sections.\n\n\n# Input\n\nYou will receive:\n\n- Writing Type (e.g., Tutorial, Informative Guide)\n\n- Target Audience\n\n- User Intent Summary\n\n- 3\u20135 long-tail keywords\n\n\nUse this information to design a structure that both informs readers and maximizes search engine visibility.\n\n# Output Format\n\n```markdown\n\n## Blog Title (suggested)\n\n[Give a short, SEO-friendly title suggestion]\n\n## Outline\n\n### Introduction\n\n- Purpose of the article\n\n- Brief context\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 1]\n\n- [Short description of what this section will cover]\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 2]\n\n- [Short description of what this section will cover]\n\n- **Suggested keywords**: [keyword1, keyword2]\n\n### H2: [Section Title 3]\n\n- [Optional H3 Subsection Title A]\n\n - [Explanation of sub-point]\n\n- [Optional H3 Subsection Title B]\n\n - [Explanation of sub-point]\n\n- **Suggested keywords**: [keyword1]\n\n### Conclusion\n\n- Recap key takeaways\n\n- Optional CTA (Call to Action)\n\n- **Suggested keywords**: [keyword3]\n\n",
"temperature": 0.5,
"temperatureEnabled": true,
"tools": [
{
"component_name": "TavilySearch",
"name": "TavilySearch",
"params": {
"api_key": "",
"days": 7,
"exclude_domains": [],
"include_answer": false,
"include_domains": [],
"include_image_descriptions": false,
"include_images": false,
"include_raw_content": true,
"max_results": 5,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"query": "sys.query",
"search_depth": "basic",
"topic": "general"
}
}
],
"topPEnabled": false,
"top_p": 0.85,
"user_prompt": "",
"visual_files_var": ""
},
"label": "Agent",
"name": "Outline Agent"
},
"dragging": false,
"id": "Agent:BetterSitesSend",
"measured": {
"height": 84,
"width": 200
},
"position": {
"x": 613.4368763415628,
"y": 164.3074269048589
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"description": "This is an agent for a specific task.",
"user_prompt": "This is the order you need to send to the agent."
},
"label": "Tool",
"name": "flow.tool_0"
},
"dragging": false,
"id": "Tool:SharpPensBurn",
"measured": {
"height": 44,
"width": 200
},
"position": {
"x": 580.1877078861457,
"y": 287.7669662022325
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "toolNode"
},
{
"data": {
"form": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 5,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Outline agent output is {Agent:BetterSitesSend@content}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Body_Agent**, responsible for generating the full content of each section of an SEO-optimized blog based on the provided outline and keyword strategy.\n\n# Tool Access:\n\nYou can use the `Tavily Search` tool to retrieve relevant content, statistics, or examples to support each section you're writing.\n\nUse it **only** when the provided outline lacks enough information, or if the section requires factual grounding.\n\nAlways cite the original link or indicate source where possible.\n\n\n# Goals\n\n1. Write each section (based on H2/H3 structure) as a complete and natural blog paragraph.\n\n2. Integrate the suggested long-tail keywords naturally into each section.\n\n3. When appropriate, use the `Tavily Search` tool to enrich your writing with relevant facts, examples, or quotes.\n\n4. Ensure each section is clear, engaging, and informative, suitable for both human readers and search engines.\n\n\n# Style Guidelines\n\n- Write in a tone appropriate to the audience. Be explanatory, not promotional, unless it's a marketing blog.\n\n- Avoid generic filler content. Prioritize clarity, structure, and value.\n\n- Ensure SEO keywords are embedded seamlessly, not forcefully.\n\n\n\n- Maintain writing rhythm. Vary sentence lengths. Use transitions between ideas.\n\n\n# Input\n\n\nYou will receive:\n\n- Blog title\n\n- Structured outline (including section titles, keywords, and descriptions)\n\n- Target audience\n\n- Blog type and user intent\n\nYou must **follow the outline strictly**. Write content **section-by-section**, based on the structure.\n\n\n# Output Format\n\n```markdown\n\n## H2: [Section Title]\n\n[Your generated content for this section \u2014 500-600 words, using keywords naturally.]\n\n",
"temperature": 0.2,
"temperatureEnabled": true,
"tools": [
{
"component_name": "TavilySearch",
"name": "TavilySearch",
"params": {
"api_key": "",
"days": 7,
"exclude_domains": [],
"include_answer": false,
"include_domains": [],
"include_image_descriptions": false,
"include_images": false,
"include_raw_content": true,
"max_results": 5,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"query": "sys.query",
"search_depth": "basic",
"topic": "general"
}
}
],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "",
"visual_files_var": ""
},
"label": "Agent",
"name": "Body Agent"
},
"dragging": false,
"id": "Agent:EagerNailsRemain",
"measured": {
"height": 84,
"width": 200
},
"position": {
"x": 889.0614605692713,
"y": 247.00973041799065
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"description": "This is an agent for a specific task.",
"user_prompt": "This is the order you need to send to the agent."
},
"label": "Tool",
"name": "flow.tool_1"
},
"dragging": false,
"id": "Tool:WickedDeerHeal",
"measured": {
"height": 44,
"width": 200
},
"position": {
"x": 853.2006404239659,
"y": 364.37541577229143
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "toolNode"
},
{
"data": {
"form": {
"delay_after_error": 1,
"description": "",
"exception_comment": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": null,
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.5,
"llm_id": "deepseek-chat@DeepSeek",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 5,
"max_tokens": 4096,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"parameter": "Precise",
"presencePenaltyEnabled": false,
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Outline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"role": "user"
}
],
"sys_prompt": "# Role\n\nYou are the **Editor_Agent**, responsible for finalizing the blog post for both human readability and SEO effectiveness.\n\n# Goals\n\n1. Polish the entire blog content for clarity, coherence, and style.\n\n2. Improve transitions between sections, ensure logical flow.\n\n3. Verify that keywords are used appropriately and effectively.\n\n4. Conduct a lightweight SEO audit \u2014 checking keyword density, structure (H1/H2/H3), and overall searchability.\n\n\n\n# Style Guidelines\n\n- Be precise. Avoid bloated or vague language.\n\n- Maintain an informative and engaging tone, suitable to the target audience.\n\n- Do not remove keywords unless absolutely necessary for clarity.\n\n- Ensure paragraph flow and section continuity.\n\n\n# Input\n\nYou will receive:\n\n- Full blog content, written section-by-section\n\n- Original outline with suggested keywords\n\n- Target audience and writing type\n\n# Output Format\n\n```markdown\n\n[The revised, fully polished blog post content goes here.]\n\n",
"temperature": 0.2,
"temperatureEnabled": true,
"tools": [],
"topPEnabled": false,
"top_p": 0.75,
"user_prompt": "",
"visual_files_var": ""
},
"label": "Agent",
"name": "Editor Agent"
},
"dragging": false,
"id": "Agent:LovelyHeadsOwn",
"measured": {
"height": 84,
"width": 200
},
"position": {
"x": 1160.3332919804993,
"y": 149.50806732882472
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"content": [
"{Agent:LovelyHeadsOwn@content}"
]
},
"label": "Message",
"name": "Response"
},
"dragging": false,
"id": "Message:LegalBeansBet",
"measured": {
"height": 56,
"width": 200
},
"position": {
"x": 1370.6665839609984,
"y": 267.0323933738015
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "messageNode"
},
{
"data": {
"form": {
"text": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You don\u2019t need any writing experience. Just provide a topic or short request \u2014 the system will handle the rest.\n\nThe process includes the following key stages:\n\n1. **Understanding your topic and goals**\n2. **Designing the blog structure**\n3. **Writing high-quality content**\n\n\n"
},
"label": "Note",
"name": "Workflow Overall Description"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 205,
"id": "Note:SlimyGhostsWear",
"measured": {
"height": 205,
"width": 415
},
"position": {
"x": -284.3143151688742,
"y": 150.47632147913419
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 415
},
{
"data": {
"form": {
"text": "**Purpose**: \nThis agent reads the user\u2019s input and figures out what kind of blog needs to be written.\n\n**What it does**:\n- Understands the main topic you want to write about \n- Identifies who the blog is for (e.g., beginners, marketers, developers) \n- Determines the writing purpose (e.g., SEO traffic, product promotion, education) \n- Suggests 3\u20135 long-tail SEO keywords related to the topic"
},
"label": "Note",
"name": "Parse And Keyword Agent"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 152,
"id": "Note:EmptyChairsShake",
"measured": {
"height": 152,
"width": 340
},
"position": {
"x": 295.04147626768133,
"y": 372.2755718118446
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 340
},
{
"data": {
"form": {
"text": "**Purpose**: \nThis agent builds the blog structure \u2014 just like writing a table of contents before you start writing the full article.\n\n**What it does**:\n- Suggests a clear blog title that includes important keywords \n- Breaks the article into sections using H2 and H3 headings (like a professional blog layout) \n- Assigns 1\u20132 recommended keywords to each section to help with SEO \n- Follows the writing goal and target audience set in the previous step"
},
"label": "Note",
"name": "Outline Agent"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 146,
"id": "Note:TallMelonsNotice",
"measured": {
"height": 146,
"width": 343
},
"position": {
"x": 598.5644991893463,
"y": 5.801054564756448
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 343
},
{
"data": {
"form": {
"text": "**Purpose**: \nThis agent is responsible for writing the actual content of the blog \u2014 paragraph by paragraph \u2014 based on the outline created earlier.\n\n**What it does**:\n- Looks at each H2/H3 section in the outline \n- Writes 150\u2013220 words of clear, helpful, and well-structured content per section \n- Includes the suggested SEO keywords naturally (not keyword stuffing) \n- Uses real examples or facts if needed (by calling a web search tool like Tavily)"
},
"label": "Note",
"name": "Body Agent"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 137,
"id": "Note:RipeCougarsBuild",
"measured": {
"height": 137,
"width": 319
},
"position": {
"x": 860.4854129814981,
"y": 427.2196835690842
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 319
},
{
"data": {
"form": {
"text": "**Purpose**: \nThis agent reviews the entire blog draft to make sure it is smooth, professional, and SEO-friendly. It acts like a human editor before publishing.\n\n**What it does**:\n- Polishes the writing: improves sentence clarity, fixes awkward phrasing \n- Makes sure the content flows well from one section to the next \n- Double-checks keyword usage: are they present, natural, and not overused? \n- Verifies the blog structure (H1, H2, H3 headings) is correct \n- Adds two key SEO elements:\n - **Meta Title** (shows up in search results)\n - **Meta Description** (summary for Google and social sharing)"
},
"label": "Note",
"name": "Editor Agent"
},
"dragHandle": ".note-drag-handle",
"height": 146,
"id": "Note:OpenTurkeysSell",
"measured": {
"height": 146,
"width": 320
},
"position": {
"x": 1129,
"y": -30
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 320
}
]
},
"history": [],
"messages": [],
"path": [],
"retrieval": []
},
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAwADADASIAAhEBAxEB/8QAGQAAAwEBAQAAAAAAAAAAAAAABgkKBwUI/8QAMBAAAAYCAQIEBQQCAwAAAAAAAQIDBAUGBxEhCAkAEjFBFFFhcaETFiKRFyOx8PH/xAAaAQACAwEBAAAAAAAAAAAAAAACAwABBgQF/8QALBEAAgIBAgUCBAcAAAAAAAAAAQIDBBEFEgATITFRIkEGIzJhFBUWgaGx8P/aAAwDAQACEQMRAD8AfF2hez9089t7pvxgQMa1Gb6qZ6oQE9m/NEvCIStyPfJSOF/M1epzMugo/qtMqbiRc1mJjoJKCLMNIxKcsLJedfO1Ct9cI63x9fx6CA/19t+oh4LFA5HfuAgP/A8eOIsnsTBrkBHXA7+v53+Q+ficTgJft9gIgA+/P9/1r342O/YA8A8k3/if+IbAN7+2/f8AAiI6H19PGoPyESTMZQPKUAHkQEN+3r9dh78/YPGUTk2wb/qAZZIugH1OHH5DjkdfbnWw2DsOxPj+xjrnx2H39unBopJGBn9s+PHv1HXjPJtH+J+B40O9a16h/wB/92j/ALrPa/wR104UyAobHlXhuo2HrEtK4qy3CwjKOuJLRHJLSkXWrFKs/gVrJVrE8TUiH8bPrP20UEu8m4hNpMJJuTOfnbUw/kUqyZgMHGjAO9+mtDsQ53sdcB6eMhnpEjhNQxRKICAgHy5+/roOdjr7c+J6O4x07dx484/n7nzw1gexBGfIPkZ/3t39uGpqc6+fP5/Ht8vGFZCzJjWpWuBxvO2yPjrtclUUK7BqmUI4fuASeyhG5FzFI0Bw4aQ0iZNoDgzvRW4qtyFkI4XmwyEk2YNnDp0sVBu3IUyy5iqH8gqKERSIRNIii67hddRJs1at01Xbx2sgzZoLu10UFJR+4V1A5cxF3FqNcLvjwcno43uuLrOxZYjujaClcb4QQfxEizpFiQyM9olcueRnjC2ZMt9iY06zL0qytrMSqSOVGsfHMaGhZ3l4lSRI2MqE74zJvRTveNFWWIh3RWw+XCAM5icKQLrCH57T17FhErSlRXnWvyZXKQwWJ3eraD14p5YuZCFgacskK2oGkVuKO5GYTHzf7DaD12cBD3DgPOIDrWw9PnrXPgDkpVsUDGMG+DD6E9gHXIjrYjwUPQTCXYgHPhIV974+F6E1hpC14Yzmzj56YaQEeZhXsayD1zLPW7pygxaMf81Nzu1iJsnIuDIKnaJAkPldqrHaoORZ73tMVEbFdSXT9nVgRQgnBq6j8e/HCIEATpAnH5KlmRVkFRFJwks/bqImSXJ5VFyA3N6Ikh3bCW3YHp5cowOmCfTgA+xJCnrjtwHKcLvJj2ZGcTRFj19kEhckdzgEjKnABGSSzdc1Fe5byXXGNjKdvRcw5NxvLidNZFFCxUa62KrzMaChw8hhYScFJtROAgmuLByq1MsgkZYPaVVuDe0wraRaqAdJwgRQo+YR8xTlAQNx6b49w41vXiJpCalLh1jZhyrTqRM4+jstdRmYryNkydLQRWg1LNGcWd5jIFFvCythlIySa0mNu74sKRQtaWsTmupqPItw0lE52ufpyYzrSkx6cw5bLmBEpkTsz+dt8P5QFuCRtAIkBH9MuwKHICIaDQhnojMs9mKaeGcrMxXlQtAYkdVljimRrE5MqI4zL8oSqQ6wxjodBqK05qdK3Vo3aCSVkBW7bjuC1NFJJBPaqyx6fp6pWkliYLXK2XrukkRu2CCVoSWMgsdMyySKwoLFcIGWSTUMg4IBgTcICoBhRcplMcpFkhIqQp1ClMBTmA0Zfe1zpjvHfXff65bZlzXpB3jjGTgiirmPjAfs16PHqHeQ75Wbj3xxZpOEkV3LRJJSPdomUBZISJLncV2k+8D07dxXp7xsYuTapA9UkJUYWIzNhadnWEZeCXGLQQiJi1ViHfhHL2unWh+mlORsrW0JFpEFnGVfm1mU4kq0FY3eD6corJncv6dr5NLSMNXVaTUksjTiMnaq8uFfSVuDyiJ1iZpy0LOJtpa3YfkcQ5fdozyxI2m5qqcrHN61YYmHsh6v3o9ParYmYJEtlhIx6+gUbjgD23M6oqg92YL0JyF6Bps+qDValVA9h9Lj5SZI3SHXdEQlj1wiQtLLIe6pGzjO3BlBkK1hxpblLVH5wdW0BcFKf/JwRtjsot2z8omaSdxbzzk1iEjsE0AM9rrRZNRIrVyo7dGO6E+oh8axLlJ5H5VaJKx7ePRGFbW6vUeFfHQIWPTI9Tm7HHfuhqY7E6C7JFqUzM6iZXIoncNxX7+bIVdJnTT48x3OQU1krIDW3UeixVhyISzYz6cadY5Xph6TseRNTRsTElzzBn9Vlly0TAERsdgnMYyLROjyFbg5R4ZlsGaMT4yNi2Zlq1GwjZB3jq0PsaJfA3t0jL0W0Y9xf1V41lpWckXMLaZiwxuKYPqc6LlHdkeRF+Qxswx5ASDqBVrsL+2A/N6SiCbYymV2BywJiMZj3GRRMTnL+lVyHCll3R7Szv0vqXMtQ74T+HijljIScLaEpkKCB3rqMBIi0jPs5JeOKTZMZEi5VVnouzy0k3jXjWSMlY6UcVGDxlKMVDqx91SILWSi3D2KdgYy3kP8E9X/AE1SnRXBNdNRMlefT6g7aY6giK+cPLGNg0bY68rcnpsNh9PqIBve/EcPQ3WIq2dR9
3xpSgk5SAZ9R6MLAOZFUkpLSUDXp6/KPpGUkmTdswlnKnwbl5ITMdGwcXJi7LKsqzUmT5tWYmkXuF9wjBvb76b7dHheazJ9RElUJOCxViuMlUJC0Gtz6PKyjLBY4qMWUe12r1xZ6lOyT6XPEBKN2CkTDOlZd02TBdTMt7Upx2knrkdCv1UKjDKn1A7XBYH6SCOOrWn5Oi/DtRiu+GleRthDL8rXdVjZlcfWrSIxVlGGGCOnH//Z"
}
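The DSL above chains its components through `{ComponentId@field}` references: the Body and Editor agents pull `{Agent:ClearRabbitsScream@content}` and `{Agent:BetterSitesSend@content}` into their user prompts, and the final Message node renders `{Agent:LovelyHeadsOwn@content}`. As a rough illustration only, not RAGFlow's actual resolver, a substitution of such references could look like the sketch below, assuming each component's outputs are held in a plain dict.

```python
import re

# Hypothetical sketch only: RAGFlow's engine resolves these references internally;
# this just illustrates how "{ComponentId@field}" placeholders in the prompts and
# Message content above could be expanded from a dict of component outputs.
REF = re.compile(r"\{([^{}@]+)@([^{}]+)\}")

def resolve_refs(template: str, outputs: dict) -> str:
    """Replace each {ComponentId@field} with outputs[ComponentId][field]."""
    def _sub(match):
        comp, field = match.group(1), match.group(2)
        return str(outputs.get(comp, {}).get(field, ""))
    return REF.sub(_sub, template)

outputs = {
    "Agent:ClearRabbitsScream": {"content": "## Writing Type\nInformative Guide ..."},
    "Agent:BetterSitesSend": {"content": "## Outline\n### Introduction ..."},
}
prompt = ("The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n"
          "The Outline agent output is {Agent:BetterSitesSend@content}")
print(resolve_refs(prompt, outputs))
```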

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,724 @@
{
"id": 17,
"title": "SQL Assistant",
"description": "SQL Assistant is an AI-powered tool that lets business users turn plain-English questions into fully formed SQL queries. Simply type your question (e.g., “Show me last quarters top 10 products by revenue”) and SQL Assistant generates the exact SQL, runs it against your database, and returns the results in seconds. ",
"canvas_type": "Marketing",
"dsl": {
"components": {
"Agent:WickedGoatsDivide": {
"downstream": [
"ExeSQL:TiredShirtsPull"
],
"obj": {
"component_name": "Agent",
"params": {
"delay_after_error": 1,
"description": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": "",
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.7,
"llm_id": "qwen-max@Tongyi-Qianwen",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 5,
"max_tokens": 256,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"presencePenaltyEnabled": false,
"presence_penalty": 0.4,
"prompts": [
{
"content": "User's query: {sys.query}\n\nSchema: {Retrieval:HappyTiesFilm@formalized_content}\n\nSamples about question to SQL: {Retrieval:SmartNewsHammer@formalized_content}\n\nDescription about meanings of tables and files: {Retrieval:SweetDancersAppear@formalized_content}",
"role": "user"
}
],
"sys_prompt": "### ROLE\nYou are a Text-to-SQL assistant. \nGiven a relational database schema and a natural-language request, you must produce a **single, syntactically-correct MySQL query** that answers the request. \nReturn **nothing except the SQL statement itself**\u2014no code fences, no commentary, no explanations, no comments, no trailing semicolon if not required.\n\n\n### EXAMPLES \n-- Example 1 \nUser: List every product name and its unit price. \nSQL:\nSELECT name, unit_price FROM Products;\n\n-- Example 2 \nUser: Show the names and emails of customers who placed orders in January 2025. \nSQL:\nSELECT DISTINCT c.name, c.email\nFROM Customers c\nJOIN Orders o ON o.customer_id = c.id\nWHERE o.order_date BETWEEN '2025-01-01' AND '2025-01-31';\n\n-- Example 3 \nUser: How many orders have a status of \"Completed\" for each month in 2024? \nSQL:\nSELECT DATE_FORMAT(order_date, '%Y-%m') AS month,\n COUNT(*) AS completed_orders\nFROM Orders\nWHERE status = 'Completed'\n AND YEAR(order_date) = 2024\nGROUP BY month\nORDER BY month;\n\n-- Example 4 \nUser: Which products generated at least \\$10 000 in total revenue? \nSQL:\nSELECT p.id, p.name, SUM(oi.quantity * oi.unit_price) AS revenue\nFROM Products p\nJOIN OrderItems oi ON oi.product_id = p.id\nGROUP BY p.id, p.name\nHAVING revenue >= 10000\nORDER BY revenue DESC;\n\n\n### OUTPUT GUIDELINES\n1. Think through the schema and the request. \n2. Write **only** the final MySQL query. \n3. Do **not** wrap the query in back-ticks or markdown fences. \n4. Do **not** add explanations, comments, or additional text\u2014just the SQL.",
"temperature": 0.1,
"temperatureEnabled": false,
"tools": [],
"topPEnabled": false,
"top_p": 0.3,
"user_prompt": "",
"visual_files_var": ""
}
},
"upstream": [
"Retrieval:HappyTiesFilm",
"Retrieval:SmartNewsHammer",
"Retrieval:SweetDancersAppear"
]
},
"ExeSQL:TiredShirtsPull": {
"downstream": [
"Message:ShaggyMasksAttend"
],
"obj": {
"component_name": "ExeSQL",
"params": {
"database": "",
"db_type": "mysql",
"host": "",
"max_records": 1024,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"password": "20010812Yy!",
"port": 3306,
"sql": "Agent:WickedGoatsDivide@content",
"username": "13637682833@163.com"
}
},
"upstream": [
"Agent:WickedGoatsDivide"
]
},
"Message:ShaggyMasksAttend": {
"downstream": [],
"obj": {
"component_name": "Message",
"params": {
"content": [
"{ExeSQL:TiredShirtsPull@formalized_content}"
]
}
},
"upstream": [
"ExeSQL:TiredShirtsPull"
]
},
"Retrieval:HappyTiesFilm": {
"downstream": [
"Agent:WickedGoatsDivide"
],
"obj": {
"component_name": "Retrieval",
"params": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"ed31364c727211f0bdb2bafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
}
},
"upstream": [
"begin"
]
},
"Retrieval:SmartNewsHammer": {
"downstream": [
"Agent:WickedGoatsDivide"
],
"obj": {
"component_name": "Retrieval",
"params": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"0f968106727311f08357bafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
}
},
"upstream": [
"begin"
]
},
"Retrieval:SweetDancersAppear": {
"downstream": [
"Agent:WickedGoatsDivide"
],
"obj": {
"component_name": "Retrieval",
"params": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"4ad1f9d0727311f0827dbafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
}
},
"upstream": [
"begin"
]
},
"begin": {
"downstream": [
"Retrieval:HappyTiesFilm",
"Retrieval:SmartNewsHammer",
"Retrieval:SweetDancersAppear"
],
"obj": {
"component_name": "Begin",
"params": {
"enablePrologue": true,
"inputs": {},
"mode": "conversational",
"prologue": "Hi! I'm your SQL assistant. What can I do for you?"
}
},
"upstream": []
}
},
"globals": {
"sys.conversation_turns": 0,
"sys.files": [],
"sys.query": "",
"sys.user_id": ""
},
"graph": {
"edges": [
{
"data": {
"isHovered": false
},
"id": "xy-edge__beginstart-Retrieval:HappyTiesFilmend",
"source": "begin",
"sourceHandle": "start",
"target": "Retrieval:HappyTiesFilm",
"targetHandle": "end"
},
{
"id": "xy-edge__beginstart-Retrieval:SmartNewsHammerend",
"source": "begin",
"sourceHandle": "start",
"target": "Retrieval:SmartNewsHammer",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__beginstart-Retrieval:SweetDancersAppearend",
"source": "begin",
"sourceHandle": "start",
"target": "Retrieval:SweetDancersAppear",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Retrieval:HappyTiesFilmstart-Agent:WickedGoatsDivideend",
"source": "Retrieval:HappyTiesFilm",
"sourceHandle": "start",
"target": "Agent:WickedGoatsDivide",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Retrieval:SmartNewsHammerstart-Agent:WickedGoatsDivideend",
"markerEnd": "logo",
"source": "Retrieval:SmartNewsHammer",
"sourceHandle": "start",
"style": {
"stroke": "rgba(91, 93, 106, 1)",
"strokeWidth": 1
},
"target": "Agent:WickedGoatsDivide",
"targetHandle": "end",
"type": "buttonEdge",
"zIndex": 1001
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Retrieval:SweetDancersAppearstart-Agent:WickedGoatsDivideend",
"markerEnd": "logo",
"source": "Retrieval:SweetDancersAppear",
"sourceHandle": "start",
"style": {
"stroke": "rgba(91, 93, 106, 1)",
"strokeWidth": 1
},
"target": "Agent:WickedGoatsDivide",
"targetHandle": "end",
"type": "buttonEdge",
"zIndex": 1001
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:WickedGoatsDividestart-ExeSQL:TiredShirtsPullend",
"source": "Agent:WickedGoatsDivide",
"sourceHandle": "start",
"target": "ExeSQL:TiredShirtsPull",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__ExeSQL:TiredShirtsPullstart-Message:ShaggyMasksAttendend",
"source": "ExeSQL:TiredShirtsPull",
"sourceHandle": "start",
"target": "Message:ShaggyMasksAttend",
"targetHandle": "end"
}
],
"nodes": [
{
"data": {
"form": {
"enablePrologue": true,
"inputs": {},
"mode": "conversational",
"prologue": "Hi! I'm your SQL assistant. What can I do for you?"
},
"label": "Begin",
"name": "begin"
},
"id": "begin",
"measured": {
"height": 48,
"width": 200
},
"position": {
"x": 50,
"y": 200
},
"selected": false,
"sourcePosition": "left",
"targetPosition": "right",
"type": "beginNode"
},
{
"data": {
"form": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"ed31364c727211f0bdb2bafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
},
"label": "Retrieval",
"name": "Schema"
},
"dragging": false,
"id": "Retrieval:HappyTiesFilm",
"measured": {
"height": 96,
"width": 200
},
"position": {
"x": 414,
"y": 20.5
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "retrievalNode"
},
{
"data": {
"form": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"0f968106727311f08357bafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
},
"label": "Retrieval",
"name": "Question to SQL"
},
"dragging": false,
"id": "Retrieval:SmartNewsHammer",
"measured": {
"height": 96,
"width": 200
},
"position": {
"x": 406.5,
"y": 175.5
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "retrievalNode"
},
{
"data": {
"form": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"4ad1f9d0727311f0827dbafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
},
"label": "Retrieval",
"name": "Database Description"
},
"dragging": false,
"id": "Retrieval:SweetDancersAppear",
"measured": {
"height": 96,
"width": 200
},
"position": {
"x": 403.5,
"y": 328
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "retrievalNode"
},
{
"data": {
"form": {
"delay_after_error": 1,
"description": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": "",
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.7,
"llm_id": "qwen-max@Tongyi-Qianwen",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 5,
"max_tokens": 256,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"presencePenaltyEnabled": false,
"presence_penalty": 0.4,
"prompts": [
{
"content": "User's query: {sys.query}\n\nSchema: {Retrieval:HappyTiesFilm@formalized_content}\n\nSamples about question to SQL: {Retrieval:SmartNewsHammer@formalized_content}\n\nDescription about meanings of tables and files: {Retrieval:SweetDancersAppear@formalized_content}",
"role": "user"
}
],
"sys_prompt": "### ROLE\nYou are a Text-to-SQL assistant. \nGiven a relational database schema and a natural-language request, you must produce a **single, syntactically-correct MySQL query** that answers the request. \nReturn **nothing except the SQL statement itself**\u2014no code fences, no commentary, no explanations, no comments, no trailing semicolon if not required.\n\n\n### EXAMPLES \n-- Example 1 \nUser: List every product name and its unit price. \nSQL:\nSELECT name, unit_price FROM Products;\n\n-- Example 2 \nUser: Show the names and emails of customers who placed orders in January 2025. \nSQL:\nSELECT DISTINCT c.name, c.email\nFROM Customers c\nJOIN Orders o ON o.customer_id = c.id\nWHERE o.order_date BETWEEN '2025-01-01' AND '2025-01-31';\n\n-- Example 3 \nUser: How many orders have a status of \"Completed\" for each month in 2024? \nSQL:\nSELECT DATE_FORMAT(order_date, '%Y-%m') AS month,\n COUNT(*) AS completed_orders\nFROM Orders\nWHERE status = 'Completed'\n AND YEAR(order_date) = 2024\nGROUP BY month\nORDER BY month;\n\n-- Example 4 \nUser: Which products generated at least \\$10 000 in total revenue? \nSQL:\nSELECT p.id, p.name, SUM(oi.quantity * oi.unit_price) AS revenue\nFROM Products p\nJOIN OrderItems oi ON oi.product_id = p.id\nGROUP BY p.id, p.name\nHAVING revenue >= 10000\nORDER BY revenue DESC;\n\n\n### OUTPUT GUIDELINES\n1. Think through the schema and the request. \n2. Write **only** the final MySQL query. \n3. Do **not** wrap the query in back-ticks or markdown fences. \n4. Do **not** add explanations, comments, or additional text\u2014just the SQL.",
"temperature": 0.1,
"temperatureEnabled": false,
"tools": [],
"topPEnabled": false,
"top_p": 0.3,
"user_prompt": "",
"visual_files_var": ""
},
"label": "Agent",
"name": "SQL Generator "
},
"dragging": false,
"id": "Agent:WickedGoatsDivide",
"measured": {
"height": 84,
"width": 200
},
"position": {
"x": 981,
"y": 174
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"database": "",
"db_type": "mysql",
"host": "",
"max_records": 1024,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"password": "20010812Yy!",
"port": 3306,
"sql": "Agent:WickedGoatsDivide@content",
"username": "13637682833@163.com"
},
"label": "ExeSQL",
"name": "ExeSQL"
},
"dragging": false,
"id": "ExeSQL:TiredShirtsPull",
"measured": {
"height": 56,
"width": 200
},
"position": {
"x": 1211.5,
"y": 212.5
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "ragNode"
},
{
"data": {
"form": {
"content": [
"{ExeSQL:TiredShirtsPull@formalized_content}"
]
},
"label": "Message",
"name": "Message"
},
"dragging": false,
"id": "Message:ShaggyMasksAttend",
"measured": {
"height": 56,
"width": 200
},
"position": {
"x": 1447.3125,
"y": 181.5
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "messageNode"
},
{
"data": {
"form": {
"text": "Searches for relevant database creation statements.\n\nIt should label with a knowledgebase to which the schema is dumped in. You could use \" General \" as parsing method, \" 2 \" as chunk size and \" ; \" as delimiter."
},
"label": "Note",
"name": "Note Schema"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 188,
"id": "Note:ThickClubsFloat",
"measured": {
"height": 188,
"width": 392
},
"position": {
"x": 689,
"y": -180.31251144409183
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 392
},
{
"data": {
"form": {
"text": "Searches for samples about question to SQL. \n\nYou could use \" Q&A \" as parsing method.\n\nPlease check this dataset:\nhttps://huggingface.co/datasets/InfiniFlow/text2sql"
},
"label": "Note",
"name": "Note: Question to SQL"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 154,
"id": "Note:ElevenLionsJoke",
"measured": {
"height": 154,
"width": 345
},
"position": {
"x": 693.5,
"y": 138
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 345
},
{
"data": {
"form": {
"text": "Searches for description about meanings of tables and fields.\n\nYou could use \" General \" as parsing method, \" 2 \" as chunk size and \" ### \" as delimiter."
},
"label": "Note",
"name": "Note: Database Description"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 158,
"id": "Note:ManyRosesTrade",
"measured": {
"height": 158,
"width": 408
},
"position": {
"x": 691.5,
"y": 435.69736389555317
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 408
},
{
"data": {
"form": {
"text": "The Agent learns which tables may be available based on the responses from three knowledge bases and converts the user's input into SQL statements."
},
"label": "Note",
"name": "Note: SQL Generator"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 132,
"id": "Note:RudeHousesInvite",
"measured": {
"height": 132,
"width": 383
},
"position": {
"x": 1106.9254833678003,
"y": 290.5891036507015
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 383
},
{
"data": {
"form": {
"text": "Connect to your database to execute SQL statements."
},
"label": "Note",
"name": "Note: SQL Executor"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"id": "Note:HungryBatsLay",
"measured": {
"height": 136,
"width": 255
},
"position": {
"x": 1185,
"y": -30
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode"
}
]
},
"history": [],
"messages": [],
"path": [],
"retrieval": []
},
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAcFBQYFBAcGBQYIBwcIChELCgkJChUPEAwRGBUaGRgVGBcbHichGx0lHRcYIi4iJSgpKywrGiAvMy8qMicqKyr/2wBDAQcICAoJChQLCxQqHBgcKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKir/wAARCAAwADADAREAAhEBAxEB/8QAGgAAAwEBAQEAAAAAAAAAAAAABQYHBAMAAf/EADIQAAEDAwMCBAMHBQAAAAAAAAECAwQFESEABjESEyJBUYEUYXEHFSNSkaGxMjNictH/xAAZAQADAQEBAAAAAAAAAAAAAAACAwQBAAX/xAAlEQACAgICAgEEAwAAAAAAAAABAgARAyESMQRBEyIycYFCkbH/2gAMAwEAAhEDEQA/AKHt2DGpNHXDLrZdWtSrIub39tZ5GbGwPA+pmDFkX7x7idvra85xqQaFNkxUTVIVJQzf8QpBFjbgEenNs681MnA9WJ6fEOKJoxVpSpFLTCo6KEZlTlLcQBIJS20hAv1D1ve+qPk52b0IsYuIGtyt7ZkVVNP+H3A5GdlN2u7GQUBSfmkk8cXH10tmLD6Yl0CG5qmTXBMZiQEMuvupUoKdc6UeEi4FsqOeBxrsKnv1AY+hJ2l5yfu6qQ6/UZtPDRHZ+Eldpsqz1hSrXJGLXwRxqxUQizFs7galPYUFDKT+h15oMuImspQpFiL+2i1A3A1bgxmixUgwlT8ZfgJ/y8P8HXdRuPZoxaqtfkQKbKqF03jtEoDeFKV1lNgfK4H764XfccVUgipvdiwKpFaXMLklFg4juuqV0m3Izg/MaEZCDYMScYqiJOd6xmqfUVfBJcWwtHV1Elfi87k51ViyhrsxL4ivQj1KrFZjTGjTJ8aShdyph5SUqFhwPzX9jpC0dXUqZK3ViHNq7oNaVJjz2Vw5LCrdKknpULZyfMf801MfI1e5NmpAGHUL12EZNFWWlhXSUuWHKgk3xomwEDuDhzLysySU9EndEVyIz3GmxJR+KpBIdCLlRHn/AFEjjIF9AMJlZ8gLZ/qUiJSg1Tu0HO4plFj4FC1h9NYfHIU7kwzgnqCJlKLiCO2s6hKytWiPJoFdfnLW7HS0or6bqXbjg2AI99XjAa3NPlL6jFTduOR5sd1+oyfjQMONqI7QOMA4V7/pqjHjC9SLNn56I1HiqrqTUKM0hbq2lpst5CQSST54xjSPJbICOHUhawISiRQ02T2Uq6AAkqFj/GquJQks1iEr/INLU82bploKSFXusG9xfjHofXQuQUNRoQqQT0ZwVEST5687iZWGgpDsebNbaTDfKVL/ALnbQU/UkKNhjXpFt0BJBVXe/wAGGG6YMlvvNkjlBGmKeJimHIVc0TY89akCKspT28C5BKgDyR7fvrCFI+q/1DQsvVfudYcVyKw49KU6tZyQbmwHFhrOKr9s0uz0CAIpbr3RKo1Rbh02C4HJISp2ZIz0pJ8IQk5Nr/QXznSX6NSnGAwHI/gD/TM+3vtAj1arJpcpgtPdPSH0kFt5wDxAWOOLgamIAFwijCfD927N2tGXuNxlK2W0occUhJWpR+QzzrPjc+pvyqT3Ftf2zbObf7YYecb6CrrDAGfy20wYMkA5Vjbtev7b3nEcXRela27d1ogoWi/rnQsjrqZzHdwzKoKUsqWz3mOnJUlZJt8uokD621w+RdzgynUkUpoUafPZXMnSHlrKluyX1Eug8XF7GwxbgWxrubMO5WmNRsCKtLfcY3rAU0nIltkBP+w0X8Jjdz//2Q=="
}
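For orientation, the flow in this template is: the three Retrieval nodes fetch schema DDL, question-to-SQL samples, and table/field descriptions from their knowledge bases; the SQL Generator agent drops those results plus `{sys.query}` into its user prompt; and ExeSQL executes whatever arrives in `Agent:WickedGoatsDivide@content`. The sketch below only illustrates that prompt assembly; `retrieve()` and `generate_sql()` are hypothetical stand-ins, not RAGFlow APIs, while the template string and knowledge-base ids are copied from the DSL above.

```python
# Illustrative stand-ins only: retrieve() and generate_sql() are hypothetical
# helpers, not RAGFlow APIs. The user-prompt template mirrors the one used by
# the "SQL Generator" agent above; the kb ids are the ones configured in the DSL.
USER_PROMPT = (
    "User's query: {query}\n\n"
    "Schema: {schema}\n\n"
    "Samples about question to SQL: {samples}\n\n"
    "Description about meanings of tables and files: {descriptions}"
)

def retrieve(kb_id: str, query: str) -> str:
    """Stand-in for a Retrieval component querying one knowledge base."""
    return f"<top chunks from {kb_id} for: {query}>"

def generate_sql(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for the LLM call made by the SQL Generator agent."""
    return "SELECT name, unit_price FROM Products;"

query = "Show me last quarter's top 10 products by revenue"
prompt = USER_PROMPT.format(
    query=query,
    schema=retrieve("ed31364c727211f0bdb2bafe6e7908e6", query),
    samples=retrieve("0f968106727311f08357bafe6e7908e6", query),
    descriptions=retrieve("4ad1f9d0727311f0827dbafe6e7908e6", query),
)
sql = generate_sql("You are a Text-to-SQL assistant...", prompt)
# The ExeSQL component would then run `sql` against the configured MySQL host,
# and the Message node would print its formalized_content.
print(sql)
```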

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -15,9 +15,8 @@
 #
 import argparse
 import os
-from functools import partial
 from agent.canvas import Canvas
-from agent.settings import DEBUG
+from api import settings
 
 if __name__ == '__main__':
     parser = argparse.ArgumentParser()
@@ -31,19 +30,17 @@ if __name__ == '__main__':
     parser.add_argument('-m', '--stream', default=False, help="Stream output", action='store_true', required=False)
     args = parser.parse_args()
+    settings.init_settings()
     canvas = Canvas(open(args.dsl, "r").read(), args.tenant_id)
+    if canvas.get_prologue():
+        print(f"==================== Bot =====================\n> {canvas.get_prologue()}", end='')
+    query = ""
     while True:
-        ans = canvas.run(stream=args.stream)
+        canvas.reset(True)
+        query = input("\n==================== User =====================\n> ")
+        ans = canvas.run(query=query)
         print("==================== Bot =====================\n> ", end='')
-        if args.stream and isinstance(ans, partial):
-            cont = ""
-            for an in ans():
-                print(an["content"][len(cont):], end='', flush=True)
-                cont = an["content"]
-        else:
-            print(ans["content"])
-        if DEBUG:
-            print(canvas.path)
-        question = input("\n==================== User =====================\n> ")
-        canvas.add_user_input(question)
+        for ans in canvas.run(query=query):
+            print(ans, end='\n', flush=True)
+        print(canvas.path)
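Pieced together from the new-side lines of the hunks above, the updated test client reduces to the loop below. Treat it as a simplified sketch: the DSL path and tenant id are placeholders, the argparse wiring that the diff elides is omitted, and it only runs from inside the repository where `agent.canvas` and `api.settings` are importable.

```python
# Simplified sketch assembled from the new-side lines of the diff above.
# The DSL path and tenant id are placeholders; run it from inside the repo.
from agent.canvas import Canvas
from api import settings

settings.init_settings()
canvas = Canvas(open("path/to/dsl.json", "r").read(), "tenant-id")

if canvas.get_prologue():
    print(f"==================== Bot =====================\n> {canvas.get_prologue()}", end='')

while True:
    canvas.reset(True)
    query = input("\n==================== User =====================\n> ")
    print("==================== Bot =====================\n> ", end='')
    for ans in canvas.run(query=query):
        print(ans, end='\n', flush=True)
    print(canvas.path)
```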

View File

@@ -1,129 +0,0 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there!"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["baidu:0"],
"upstream": ["begin", "message:0","message:1"]
},
"baidu:0": {
"obj": {
"component_name": "Baidu",
"params": {}
},
"downstream": ["generate:0"],
"upstream": ["answer:0"]
},
"generate:0": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are an intelligent assistant. Please answer the user's question based on what Baidu searched. First, please output the user's question and the content searched by Baidu, and then answer yes, no, or i don't know.Here is the user's question:{user_input}The above is the user's question.Here is what Baidu searched for:{baidu}The above is the content searched by Baidu.",
"temperature": 0.2
},
"parameters": [
{
"component_id": "answer:0",
"id": "69415446-49bf-4d4b-8ec9-ac86066f7709",
"key": "user_input"
},
{
"component_id": "baidu:0",
"id": "83363c2a-00a8-402f-a45c-ddc4097d7d8b",
"key": "baidu"
}
]
},
"downstream": ["switch:0"],
"upstream": ["baidu:0"]
},
"switch:0": {
"obj": {
"component_name": "Switch",
"params": {
"conditions": [
{
"logical_operator" : "or",
"items" : [
{"cpn_id": "generate:0", "operator": "contains", "value": "yes"},
{"cpn_id": "generate:0", "operator": "contains", "value": "yeah"}
],
"to": "message:0"
},
{
"logical_operator" : "and",
"items" : [
{"cpn_id": "generate:0", "operator": "contains", "value": "no"},
{"cpn_id": "generate:0", "operator": "not contains", "value": "yes"},
{"cpn_id": "generate:0", "operator": "not contains", "value": "know"}
],
"to": "message:1"
},
{
"logical_operator" : "",
"items" : [
{"cpn_id": "generate:0", "operator": "contains", "value": "know"}
],
"to": "message:2"
}
],
"end_cpn_id": "answer:0"
}
},
"downstream": ["message:0","message:1"],
"upstream": ["generate:0"]
},
"message:0": {
"obj": {
"component_name": "Message",
"params": {
"messages": ["YES YES YES YES YES YES YES YES YES YES YES YES"]
}
},
"upstream": ["switch:0"],
"downstream": ["answer:0"]
},
"message:1": {
"obj": {
"component_name": "Message",
"params": {
"messages": ["NO NO NO NO NO NO NO NO NO NO NO NO NO NO"]
}
},
"upstream": ["switch:0"],
"downstream": ["answer:0"]
},
"message:2": {
"obj": {
"component_name": "Message",
"params": {
"messages": ["I DON'T KNOW---------------------------"]
}
},
"upstream": ["switch:0"],
"downstream": ["answer:0"]
}
},
"history": [],
"messages": [],
"reference": {},
"path": [],
"answer": []
}

View File

@@ -7,16 +7,8 @@
"prologue": "Hi there!" "prologue": "Hi there!"
} }
}, },
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["categorize:0"], "downstream": ["categorize:0"],
"upstream": ["begin"] "upstream": []
}, },
"categorize:0": { "categorize:0": {
"obj": { "obj": {
@@ -26,48 +18,68 @@
"category_description": { "category_description": {
"product_related": { "product_related": {
"description": "The question is about the product usage, appearance and how it works.", "description": "The question is about the product usage, appearance and how it works.",
"examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?", "to": ["agent:0"]
"to": "message:0"
}, },
"others": { "others": {
"description": "The question is not about the product usage, appearance and how it works.", "description": "The question is not about the product usage, appearance and how it works.",
"examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?", "to": ["message:0"]
"to": "message:1"
} }
} }
} }
}, },
"downstream": ["message:0","message:1"], "downstream": [],
"upstream": ["answer:0"] "upstream": ["begin"]
}, },
"message:0": { "message:0": {
"obj": { "obj":{
"component_name": "Message", "component_name": "Message",
"params": { "params": {
"messages": [ "content": [
"Message 0!!!!!!!" "Sorry, I don't know. I'm an AI bot."
] ]
} }
}, },
"downstream": ["answer:0"], "downstream": [],
"upstream": ["categorize:0"]
},
"agent:0": {
"obj": {
"component_name": "Agent",
"params": {
"llm_id": "deepseek-chat",
"sys_prompt": "You are a smart researcher. You could generate proper queries to search. According to the search results, you could deside next query if the result is not enough.",
"temperature": 0.2,
"llm_enabled_tools": [
{
"component_name": "TavilySearch",
"params": {
"api_key": "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"
}
}
]
}
},
"downstream": ["message:1"],
"upstream": ["categorize:0"] "upstream": ["categorize:0"]
}, },
"message:1": { "message:1": {
"obj": { "obj": {
"component_name": "Message", "component_name": "Message",
"params": { "params": {
"messages": [ "content": ["{agent:0@content}"]
"Message 1!!!!!!!"
]
} }
}, },
"downstream": ["answer:0"], "downstream": [],
"upstream": ["categorize:0"] "upstream": ["agent:0"]
} }
}, },
"history": [], "history": [],
"messages": [],
"path": [], "path": [],
"reference": [], "retrival": {"chunks": [], "doc_aggs": []},
"answer": [] "globals": {
} "sys.query": "",
"sys.user_id": "",
"sys.conversation_turns": 0,
"sys.files": []
}
}

View File

@@ -1,113 +0,0 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there!"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["categorize:0"],
"upstream": ["begin"]
},
"categorize:0": {
"obj": {
"component_name": "Categorize",
"params": {
"llm_id": "deepseek-chat",
"category_description": {
"product_related": {
"description": "The question is about the product usage, appearance and how it works.",
"examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?",
"to": "concentrator:0"
},
"others": {
"description": "The question is not about the product usage, appearance and how it works.",
"examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?",
"to": "concentrator:1"
}
}
}
},
"downstream": ["concentrator:0","concentrator:1"],
"upstream": ["answer:0"]
},
"concentrator:0": {
"obj": {
"component_name": "Concentrator",
"params": {}
},
"downstream": ["message:0"],
"upstream": ["categorize:0"]
},
"concentrator:1": {
"obj": {
"component_name": "Concentrator",
"params": {}
},
"downstream": ["message:1_0","message:1_1","message:1_2"],
"upstream": ["categorize:0"]
},
"message:0": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 0_0!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:0"]
},
"message:1_0": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_0!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
},
"message:1_1": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_1!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
},
"message:1_2": {
"obj": {
"component_name": "Message",
"params": {
"messages": [
"Message 1_2!!!!!!!"
]
}
},
"downstream": ["answer:0"],
"upstream": ["concentrator:1"]
}
},
"history": [],
"messages": [],
"path": [],
"reference": [],
"answer": []
}

View File

@@ -1,157 +0,0 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi! How can I help you?"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["categorize:0"],
"upstream": ["begin", "generate:0", "generate:casual", "generate:answer", "generate:complain", "generate:ask_contact", "message:get_contact"]
},
"categorize:0": {
"obj": {
"component_name": "Categorize",
"params": {
"llm_id": "deepseek-chat",
"category_description": {
"product_related": {
"description": "The question is about the product usage, appearance and how it works.",
"examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?\nException: Can't connect to ES cluster\nHow to build the RAGFlow image from scratch",
"to": "retrieval:0"
},
"casual": {
"description": "The question is not about the product usage, appearance and how it works. Just casual chat.",
"examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?",
"to": "generate:casual"
},
"complain": {
"description": "Complain even curse about the product or service you provide. But the comment is not specific enough.",
"examples": "How bad is it.\nIt's really sucks.\nDamn, for God's sake, can it be more steady?\nShit, I just can't use this shit.\nI can't stand it anymore.",
"to": "generate:complain"
},
"answer": {
"description": "This answer provide a specific contact information, like e-mail, phone number, wechat number, line number, twitter, discord, etc,.",
"examples": "My phone number is 203921\nkevinhu.hk@gmail.com\nThis is my discord number: johndowson_29384",
"to": "message:get_contact"
}
},
"message_history_window_size": 8
}
},
"downstream": ["retrieval:0", "generate:casual", "generate:complain", "message:get_contact"],
"upstream": ["answer:0"]
},
"generate:casual": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are a customer support. But the customer wants to have a casual chat with you instead of consulting about the product. Be nice, funny, enthusiasm and concern.",
"temperature": 0.9,
"message_history_window_size": 12,
"cite": false
}
},
"downstream": ["answer:0"],
"upstream": ["categorize:0"]
},
"generate:complain": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are a customer support. the Customers complain even curse about the products but not specific enough. You need to ask him/her what's the specific problem with the product. Be nice, patient and concern to soothe your customers emotions at first place.",
"temperature": 0.9,
"message_history_window_size": 12,
"cite": false
}
},
"downstream": ["answer:0"],
"upstream": ["categorize:0"]
},
"retrieval:0": {
"obj": {
"component_name": "Retrieval",
"params": {
"similarity_threshold": 0.2,
"keywords_similarity_weight": 0.3,
"top_n": 6,
"top_k": 1024,
"rerank_id": "BAAI/bge-reranker-v2-m3",
"kb_ids": ["869a236818b811ef91dffa163e197198"]
}
},
"downstream": ["relevant:0"],
"upstream": ["categorize:0"]
},
"relevant:0": {
"obj": {
"component_name": "Relevant",
"params": {
"llm_id": "deepseek-chat",
"temperature": 0.02,
"yes": "generate:answer",
"no": "generate:ask_contact"
}
},
"downstream": ["generate:answer", "generate:ask_contact"],
"upstream": ["retrieval:0"]
},
"generate:answer": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are an intelligent assistant. Please answer the question based on content of knowledge base. When all knowledge base content is irrelevant to the question, your answer must include the sentence \"The answer you are looking for is not found in the knowledge base!\". Answers need to consider chat history.\n Knowledge base content is as following:\n {input}\n The above is the content of knowledge base.",
"temperature": 0.02
}
},
"downstream": ["answer:0"],
"upstream": ["relevant:0"]
},
"generate:ask_contact": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are a customer support. But you can't answer to customers' question. You need to request their contact like E-mail, phone number, Wechat number, LINE number, twitter, discord, etc,. Product experts will contact them later. Please do not ask the same question twice.",
"temperature": 0.9,
"message_history_window_size": 12,
"cite": false
}
},
"downstream": ["answer:0"],
"upstream": ["relevant:0"]
},
"message:get_contact": {
"obj":{
"component_name": "Message",
"params": {
"messages": [
"Okay, I've already write this down. What else I can do for you?",
"Get it. What else I can do for you?",
"Thanks for your trust! Our expert will contact ASAP. So, anything else I can do for you?",
"Thanks! So, anything else I can do for you?"
]
}
},
"downstream": ["answer:0"],
"upstream": ["categorize:0"]
}
},
"history": [],
"messages": [],
"path": [],
"reference": [],
"answer": []
}

View File

@ -1,39 +0,0 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there! Please enter the text you want to translate in format like: 'text you want to translate' => target language. For an example: 您好! => English"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["generate:0"],
"upstream": ["begin", "generate:0"]
},
"generate:0": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are an professional interpreter.\n- Role: an professional interpreter.\n- Input format: content need to be translated => target language. \n- Answer format: => translated content in target language. \n- Examples:\n - user: 您好! => English. assistant: => How are you doing!\n - user: You look good today. => Japanese. assistant: => 今日は調子がいいですね 。\n",
"temperature": 0.5
}
},
"downstream": ["answer:0"],
"upstream": ["answer:0"]
}
},
"history": [],
"messages": [],
"reference": {},
"path": [],
"answer": []
}

View File

@ -1,39 +0,0 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there! Please enter the text you want to translate in format like: 'text you want to translate' => target language. For an example: 您好! => English"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["generate:0"],
"upstream": ["begin", "generate:0"]
},
"generate:0": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are an professional interpreter.\n- Role: an professional interpreter.\n- Input format: content need to be translated => target language. \n- Answer format: => translated content in target language. \n- Examples:\n - user: 您好! => English. assistant: => How are you doing!\n - user: You look good today. => Japanese. assistant: => 今日は調子がいいですね 。\n",
"temperature": 0.5
}
},
"downstream": ["answer:0"],
"upstream": ["answer:0"]
}
},
"history": [],
"messages": [],
"reference": {},
"path": [],
"answer": []
}

View File

@ -0,0 +1,92 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there!"
}
},
"downstream": ["generate:0"],
"upstream": []
},
"generate:0": {
"obj": {
"component_name": "Agent",
"params": {
"llm_id": "deepseek-chat",
"sys_prompt": "You are an helpful research assistant. \nPlease decompose user's topic: '{sys.query}' into several meaningful sub-topics. \nThe output format MUST be an string array like: [\"sub-topic1\", \"sub-topic2\", ...]. Redundant information is forbidden.",
"temperature": 0.2,
"cite":false,
"output_structure": ["sub-topic1", "sub-topic2", "sub-topic3"]
}
},
"downstream": ["iteration:0"],
"upstream": ["begin"]
},
"iteration:0": {
"obj": {
"component_name": "Iteration",
"params": {
"items_ref": "generate:0@structured_content"
}
},
"downstream": ["message:0"],
"upstream": ["generate:0"]
},
"iterationitem:0": {
"obj": {
"component_name": "IterationItem",
"params": {}
},
"parent_id": "iteration:0",
"downstream": ["tavily:0"],
"upstream": []
},
"tavily:0": {
"obj": {
"component_name": "TavilySearch",
"params": {
"api_key": "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",
"query": "iterationitem:0@result"
}
},
"parent_id": "iteration:0",
"downstream": ["generate:1"],
"upstream": ["iterationitem:0"]
},
"generate:1": {
"obj": {
"component_name": "Agent",
"params": {
"llm_id": "deepseek-chat",
"sys_prompt": "Your goal is to provide answers based on information from the internet. \nYou must use the provided search results to find relevant online information. \nYou should never use your own knowledge to answer questions.\nPlease include relevant url sources in the end of your answers.\n\n \"{tavily:0@formalized_content}\" \nUsing the above information, answer the following question or topic: \"{iterationitem:0@result} \"\nin a detailed report — The report should focus on the answer to the question, should be well structured, informative, in depth, with facts and numbers if available, a minimum of 200 words and with markdown syntax and apa format. Write all source urls at the end of the report in apa format. You should write your report only based on the given information and nothing else.",
"temperature": 0.9,
"cite":false
}
},
"parent_id": "iteration:0",
"downstream": ["iterationitem:0"],
"upstream": ["tavily:0"]
},
"message:0": {
"obj": {
"component_name": "Message",
"params": {
"content": ["{iteration:0@generate:1}"]
}
},
"downstream": [],
"upstream": ["iteration:0"]
}
},
"history": [],
"path": [],
"retrival": {"chunks": [], "doc_aggs": []},
"globals": {
"sys.query": "",
"sys.user_id": "",
"sys.conversation_turns": 0,
"sys.files": []
}
}
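Since the canvas above is plain JSON, its wiring can be sanity-checked offline. Below is a minimal, hypothetical validator (not part of RAGFlow) that loads such a DSL file and confirms that every `downstream`/`upstream` edge points at a declared component; the `dsl.json` filename is an assumption.

```python
import json


def validate_canvas(path: str) -> list[str]:
    """Check that every downstream/upstream edge in a canvas DSL
    refers to a component that is actually declared."""
    with open(path, encoding="utf-8") as f:
        dsl = json.load(f)

    components = dsl.get("components", {})
    problems = []
    for cid, comp in components.items():
        for direction in ("downstream", "upstream"):
            for target in comp.get(direction, []):
                if target not in components:
                    problems.append(f"{cid}.{direction} -> unknown component '{target}'")
    return problems


if __name__ == "__main__":
    for issue in validate_canvas("dsl.json"):  # hypothetical file name
        print(issue)
```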

View File

@ -1,62 +0,0 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there!"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["keyword:0"],
"upstream": ["begin"]
},
"keyword:0": {
"obj": {
"component_name": "KeywordExtract",
"params": {
"llm_id": "deepseek-chat",
"prompt": "- Role: You're a question analyzer.\n - Requirements:\n - Summarize user's question, and give top %s important keyword/phrase.\n - Use comma as a delimiter to separate keywords/phrases.\n - Answer format: (in language of user's question)\n - keyword: ",
"temperature": 0.2,
"top_n": 1
}
},
"downstream": ["wikipedia:0"],
"upstream": ["answer:0"]
},
"wikipedia:0": {
"obj":{
"component_name": "Wikipedia",
"params": {
"top_n": 10
}
},
"downstream": ["generate:0"],
"upstream": ["keyword:0"]
},
"generate:1": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are an intelligent assistant. Please answer the question based on content from Wikipedia. When the answer from Wikipedia is incomplete, you need to output the URL link of the corresponding content as well. When all the content searched from Wikipedia is irrelevant to the question, your answer must include the sentence, \"The answer you are looking for is not found in the Wikipedia!\". Answers need to consider chat history.\n The content of Wikipedia is as follows:\n {input}\n The above is the content of Wikipedia.",
"temperature": 0.2
}
},
"downstream": ["answer:0"],
"upstream": ["wikipedia:0"]
}
},
"history": [],
"path": [],
"messages": [],
"reference": {},
"answer": []
}

View File

@ -7,16 +7,8 @@
       "prologue": "Hi there!"
     }
   },
-  "downstream": ["answer:0"],
-  "upstream": []
- },
- "answer:0": {
-  "obj": {
-   "component_name": "Answer",
-   "params": {}
-  },
   "downstream": ["retrieval:0"],
-  "upstream": ["begin", "generate:0"]
+  "upstream": []
  },
  "retrieval:0": {
   "obj": {
@ -26,29 +18,44 @@
    "keywords_similarity_weight": 0.3,
    "top_n": 6,
    "top_k": 1024,
-   "rerank_id": "BAAI/bge-reranker-v2-m3",
-   "kb_ids": ["869a236818b811ef91dffa163e197198"]
+   "rerank_id": "",
+   "empty_response": "Nothing found in dataset",
+   "kb_ids": ["1a3d1d7afb0611ef9866047c16ec874f"]
   }
  },
  "downstream": ["generate:0"],
- "upstream": ["answer:0"]
+ "upstream": ["begin"]
 },
 "generate:0": {
  "obj": {
-  "component_name": "Generate",
+  "component_name": "LLM",
   "params": {
    "llm_id": "deepseek-chat",
-   "prompt": "You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence \"The answer you are looking for is not found in the knowledge base!\" Answers need to consider chat history.\n Here is the knowledge base:\n {input}\n The above is the knowledge base.",
+   "sys_prompt": "You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence \"The answer you are looking for is not found in the knowledge base!\" Answers need to consider chat history.\n Here is the knowledge base:\n {retrieval:0@formalized_content}\n The above is the knowledge base.",
    "temperature": 0.2
   }
  },
- "downstream": ["answer:0"],
+ "downstream": ["message:0"],
  "upstream": ["retrieval:0"]
+ },
+ "message:0": {
+  "obj": {
+   "component_name": "Message",
+   "params": {
+    "content": ["{generate:0@content}"]
+   }
+  },
+  "downstream": [],
+  "upstream": ["generate:0"]
  }
 },
 "history": [],
-"messages": [],
-"reference": {},
 "path": [],
-"answer": []
+"retrival": {"chunks": [], "doc_aggs": []},
+"globals": {
+ "sys.query": "",
+ "sys.user_id": "",
+ "sys.conversation_turns": 0,
+ "sys.files": []
+}
 }

View File

@ -7,16 +7,8 @@
       "prologue": "Hi there!"
     }
   },
-  "downstream": ["answer:0"],
-  "upstream": []
- },
- "answer:0": {
-  "obj": {
-   "component_name": "Answer",
-   "params": {}
-  },
   "downstream": ["categorize:0"],
-  "upstream": ["begin", "generate:0", "switch:0"]
+  "upstream": []
  },
  "categorize:0": {
   "obj": {
@ -26,30 +18,30 @@
     "category_description": {
      "product_related": {
       "description": "The question is about the product usage, appearance and how it works.",
-      "examples": "Why it always beaming?\nHow to install it onto the wall?\nIt leaks, what to do?",
-      "to": "retrieval:0"
+      "examples": [],
+      "to": ["retrieval:0"]
      },
      "others": {
       "description": "The question is not about the product usage, appearance and how it works.",
-      "examples": "How are you doing?\nWhat is your name?\nAre you a robot?\nWhat's the weather?\nWill it rain?",
-      "to": "message:0"
+      "examples": [],
+      "to": ["message:0"]
      }
     }
    }
   },
-  "downstream": ["retrieval:0", "message:0"],
-  "upstream": ["answer:0"]
+  "downstream": [],
+  "upstream": ["begin"]
  },
  "message:0": {
   "obj":{
    "component_name": "Message",
    "params": {
-    "messages": [
+    "content": [
      "Sorry, I don't know. I'm an AI bot."
     ]
    }
   },
-  "downstream": ["answer:0"],
+  "downstream": [],
   "upstream": ["categorize:0"]
  },
  "retrieval:0": {
@ -60,29 +52,44 @@
    "keywords_similarity_weight": 0.3,
    "top_n": 6,
    "top_k": 1024,
-   "rerank_id": "BAAI/bge-reranker-v2-m3",
-   "kb_ids": ["869a236818b811ef91dffa163e197198"]
+   "rerank_id": "",
+   "empty_response": "Nothing found in dataset",
+   "kb_ids": ["1a3d1d7afb0611ef9866047c16ec874f"]
   }
  },
  "downstream": ["generate:0"],
- "upstream": ["switch:0"]
+ "upstream": ["categorize:0"]
 },
 "generate:0": {
  "obj": {
-  "component_name": "Generate",
+  "component_name": "Agent",
   "params": {
    "llm_id": "deepseek-chat",
-   "prompt": "You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence \"The answer you are looking for is not found in the knowledge base!\" Answers need to consider chat history.\n Here is the knowledge base:\n {input}\n The above is the knowledge base.",
+   "sys_prompt": "You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence \"The answer you are looking for is not found in the knowledge base!\" Answers need to consider chat history.\n Here is the knowledge base:\n {retrieval:0@formalized_content}\n The above is the knowledge base.",
    "temperature": 0.2
   }
  },
- "downstream": ["answer:0"],
+ "downstream": ["message:1"],
  "upstream": ["retrieval:0"]
+ },
+ "message:1": {
+  "obj": {
+   "component_name": "Message",
+   "params": {
+    "content": ["{generate:0@content}"]
+   }
+  },
+  "downstream": [],
+  "upstream": ["generate:0"]
  }
 },
 "history": [],
-"messages": [],
-"reference": {},
 "path": [],
-"answer": []
+"retrival": {"chunks": [], "doc_aggs": []},
+"globals": {
+ "sys.query": "",
+ "sys.user_id": "",
+ "sys.conversation_turns": 0,
+ "sys.files": []
+}
 }

View File

@ -1,82 +0,0 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there!"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["retrieval:0"],
"upstream": ["begin", "generate:0", "switch:0"]
},
"retrieval:0": {
"obj": {
"component_name": "Retrieval",
"params": {
"similarity_threshold": 0.2,
"keywords_similarity_weight": 0.3,
"top_n": 6,
"top_k": 1024,
"rerank_id": "BAAI/bge-reranker-v2-m3",
"kb_ids": ["869a236818b811ef91dffa163e197198"],
"empty_response": "Sorry, knowledge base has noting related information."
}
},
"downstream": ["relevant:0"],
"upstream": ["answer:0"]
},
"relevant:0": {
"obj": {
"component_name": "Relevant",
"params": {
"llm_id": "deepseek-chat",
"temperature": 0.02,
"yes": "generate:0",
"no": "message:0"
}
},
"downstream": ["message:0", "generate:0"],
"upstream": ["retrieval:0"]
},
"generate:0": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are an intelligent assistant. Please answer the question based on content of knowledge base. When all knowledge base content is irrelevant to the question, your answer must include the sentence \"The answer you are looking for is not found in the knowledge base!\". Answers need to consider chat history.\n Knowledge base content is as following:\n {input}\n The above is the content of knowledge base.",
"temperature": 0.2
}
},
"downstream": ["answer:0"],
"upstream": ["relevant:0"]
},
"message:0": {
"obj":{
"component_name": "Message",
"params": {
"messages": [
"Sorry, I don't know. Please leave your contact, our experts will contact you later. What's your e-mail/phone/wechat?",
"I'm an AI bot and not quite sure about this question. Please leave your contact, our experts will contact you later. What's your e-mail/phone/wechat?",
"Can't find answer in my knowledge base. Please leave your contact, our experts will contact you later. What's your e-mail/phone/wechat?"
]
}
},
"downstream": ["answer:0"],
"upstream": ["relevant:0"]
}
},
"history": [],
"path": [],
"messages": [],
"reference": {},
"answer": []
}

View File

@ -1,103 +0,0 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there!"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["retrieval:0"],
"upstream": ["begin"]
},
"retrieval:0": {
"obj": {
"component_name": "Retrieval",
"params": {
"similarity_threshold": 0.2,
"keywords_similarity_weight": 0.3,
"top_n": 6,
"top_k": 1024,
"rerank_id": "BAAI/bge-reranker-v2-m3",
"kb_ids": ["21ca4e6a2c8911ef8b1e0242ac120006"],
"empty_response": "Sorry, knowledge base has noting related information."
}
},
"downstream": ["relevant:0"],
"upstream": ["answer:0"]
},
"relevant:0": {
"obj": {
"component_name": "Relevant",
"params": {
"llm_id": "deepseek-chat",
"temperature": 0.02,
"yes": "generate:0",
"no": "keyword:0"
}
},
"downstream": ["keyword:0", "generate:0"],
"upstream": ["retrieval:0"]
},
"generate:0": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are an intelligent assistant. Please answer the question based on content of knowledge base. When all knowledge base content is irrelevant to the question, your answer must include the sentence \"The answer you are looking for is not found in the knowledge base!\". Answers need to consider chat history.\n Knowledge base content is as following:\n {input}\n The above is the content of knowledge base.",
"temperature": 0.2
}
},
"downstream": ["answer:0"],
"upstream": ["relevant:0"]
},
"keyword:0": {
"obj": {
"component_name": "KeywordExtract",
"params": {
"llm_id": "deepseek-chat",
"prompt": "- Role: You're a question analyzer.\n - Requirements:\n - Summarize user's question, and give top %s important keyword/phrase.\n - Use comma as a delimiter to separate keywords/phrases.\n - Answer format: (in language of user's question)\n - keyword: ",
"temperature": 0.2,
"top_n": 1
}
},
"downstream": ["baidu:0"],
"upstream": ["relevant:0"]
},
"baidu:0": {
"obj":{
"component_name": "Baidu",
"params": {
"top_n": 10
}
},
"downstream": ["generate:1"],
"upstream": ["keyword:0"]
},
"generate:1": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are an intelligent assistant. Please answer the question based on content searched from Baidu. When the answer from a Baidu search is incomplete, you need to output the URL link of the corresponding content as well. When all the content searched from Baidu is irrelevant to the question, your answer must include the sentence, \"The answer you are looking for is not found in the Baidu search!\". Answers need to consider chat history.\n The content of Baidu search is as follows:\n {input}\n The above is the content of Baidu search.",
"temperature": 0.2
}
},
"downstream": ["answer:0"],
"upstream": ["baidu:0"]
}
},
"history": [],
"path": [],
"messages": [],
"reference": {},
"answer": []
}

View File

@ -1,79 +0,0 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there!"
}
},
"downstream": ["answer:0"],
"upstream": []
},
"answer:0": {
"obj": {
"component_name": "Answer",
"params": {}
},
"downstream": ["retrieval:0"],
"upstream": ["begin", "generate:0", "switch:0"]
},
"retrieval:0": {
"obj": {
"component_name": "Retrieval",
"params": {
"similarity_threshold": 0.2,
"keywords_similarity_weight": 0.3,
"top_n": 6,
"top_k": 1024,
"rerank_id": "BAAI/bge-reranker-v2-m3",
"kb_ids": ["869a236818b811ef91dffa163e197198"],
"empty_response": "Sorry, knowledge base has noting related information."
}
},
"downstream": ["relevant:0"],
"upstream": ["answer:0", "rewrite:0"]
},
"relevant:0": {
"obj": {
"component_name": "Relevant",
"params": {
"llm_id": "deepseek-chat",
"temperature": 0.02,
"yes": "generate:0",
"no": "rewrite:0"
}
},
"downstream": ["generate:0", "rewrite:0"],
"upstream": ["retrieval:0"]
},
"generate:0": {
"obj": {
"component_name": "Generate",
"params": {
"llm_id": "deepseek-chat",
"prompt": "You are an intelligent assistant. Please answer the question based on content of knowledge base. When all knowledge base content is irrelevant to the question, your answer must include the sentence \"The answer you are looking for is not found in the knowledge base!\". Answers need to consider chat history.\n Knowledge base content is as following:\n {input}\n The above is the content of knowledge base.",
"temperature": 0.02
}
},
"downstream": ["answer:0"],
"upstream": ["relevant:0"]
},
"rewrite:0": {
"obj":{
"component_name": "RewriteQuestion",
"params": {
"llm_id": "deepseek-chat",
"temperature": 0.8
}
},
"downstream": ["retrieval:0"],
"upstream": ["relevant:0"]
}
},
"history": [],
"messages": [],
"path": [],
"reference": [],
"answer": []
}

View File

@ -0,0 +1,55 @@
{
"components": {
"begin": {
"obj":{
"component_name": "Begin",
"params": {
"prologue": "Hi there!"
}
},
"downstream": ["tavily:0"],
"upstream": []
},
"tavily:0": {
"obj": {
"component_name": "TavilySearch",
"params": {
"api_key": "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"
}
},
"downstream": ["generate:0"],
"upstream": ["begin"]
},
"generate:0": {
"obj": {
"component_name": "LLM",
"params": {
"llm_id": "deepseek-chat",
"sys_prompt": "You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence \"The answer you are looking for is not found in the knowledge base!\" Answers need to consider chat history.\n Here is the knowledge base:\n {tavily:0@formalized_content}\n The above is the knowledge base.",
"temperature": 0.2
}
},
"downstream": ["message:0"],
"upstream": ["tavily:0"]
},
"message:0": {
"obj": {
"component_name": "Message",
"params": {
"content": ["{generate:0@content}"]
}
},
"downstream": [],
"upstream": ["generate:0"]
}
},
"history": [],
"path": [],
"retrival": {"chunks": [], "doc_aggs": []},
"globals": {
"sys.query": "",
"sys.user_id": "",
"sys.conversation_turns": 0,
"sys.files": []
}
}
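The new-style canvases reference component outputs with placeholders such as `{tavily:0@formalized_content}` and `{generate:0@content}`. The sketch below illustrates, with a made-up resolver and made-up output values, how a `{component_id@output_key}` placeholder could be expanded against stored component outputs; it is not RAGFlow's actual resolution code.

```python
import re

# Hypothetical outputs collected after the components above have run.
outputs = {
    "tavily:0": {"formalized_content": "...search results..."},
    "generate:0": {"content": "...LLM answer..."},
}

_PLACEHOLDER = re.compile(r"\{([A-Za-z0-9_:]+)@([A-Za-z0-9_]+)\}")


def resolve(template: str) -> str:
    """Replace {component_id@output_key} placeholders with stored outputs."""
    def _sub(match: re.Match) -> str:
        component_id, key = match.group(1), match.group(2)
        return str(outputs.get(component_id, {}).get(key, ""))
    return _PLACEHOLDER.sub(_sub, template)


print(resolve("{generate:0@content}"))  # -> ...LLM answer...
```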

48
agent/tools/__init__.py Normal file
View File

@ -0,0 +1,48 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import importlib
import inspect
from types import ModuleType
from typing import Dict, Type
_package_path = os.path.dirname(__file__)
__all_classes: Dict[str, Type] = {}
def _import_submodules() -> None:
for filename in os.listdir(_package_path): # noqa: F821
if filename.startswith("__") or not filename.endswith(".py") or filename.startswith("base"):
continue
module_name = filename[:-3]
try:
module = importlib.import_module(f".{module_name}", package=__name__)
_extract_classes_from_module(module) # noqa: F821
except ImportError as e:
print(f"Warning: Failed to import module {module_name}: {str(e)}")
def _extract_classes_from_module(module: ModuleType) -> None:
for name, obj in inspect.getmembers(module):
if (inspect.isclass(obj) and
obj.__module__ == module.__name__ and not name.startswith("_")):
__all_classes[name] = obj
globals()[name] = obj
_import_submodules()
__all__ = list(__all_classes.keys()) + ["__all_classes"]
del _package_path, _import_submodules, _extract_classes_from_module
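Because this `__init__.py` builds `__all_classes` dynamically and re-exports every discovered class, callers can enumerate or look up tools by class name without hard-coded imports. A short usage sketch, assuming the package is importable in your environment:

```python
# Minimal sketch: enumerate the dynamically registered tool classes.
from agent.tools import __all_classes

for name, cls in sorted(__all_classes.items()):
    print(name, "->", cls.__module__)

# A class can also be fetched by name, e.g. the ArXiv tool defined below.
arxiv_cls = __all_classes.get("ArXiv")
```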

102
agent/tools/arxiv.py Normal file
View File

@ -0,0 +1,102 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import os
import time
from abc import ABC
import arxiv
from agent.tools.base import ToolParamBase, ToolMeta, ToolBase
from api.utils.api_utils import timeout
class ArXivParam(ToolParamBase):
"""
Define the ArXiv component parameters.
"""
def __init__(self):
self.meta:ToolMeta = {
"name": "arxiv_search",
"description": """arXiv is a free distribution service and an open-access archive for nearly 2.4 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. Materials on this site are not peer-reviewed by arXiv.""",
"parameters": {
"query": {
"type": "string",
"description": "The search keywords to execute with arXiv. The keywords should be the most important words/terms(includes synonyms) from the original request.",
"default": "{sys.query}",
"required": True
}
}
}
super().__init__()
self.top_n = 12
self.sort_by = 'submittedDate'
def check(self):
self.check_positive_integer(self.top_n, "Top N")
self.check_valid_value(self.sort_by, "ArXiv Search Sort_by",
['submittedDate', 'lastUpdatedDate', 'relevance'])
def get_input_form(self) -> dict[str, dict]:
return {
"query": {
"name": "Query",
"type": "line"
}
}
class ArXiv(ToolBase, ABC):
component_name = "ArXiv"
@timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12))
def _invoke(self, **kwargs):
if not kwargs.get("query"):
self.set_output("formalized_content", "")
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
try:
sort_choices = {"relevance": arxiv.SortCriterion.Relevance,
"lastUpdatedDate": arxiv.SortCriterion.LastUpdatedDate,
'submittedDate': arxiv.SortCriterion.SubmittedDate}
arxiv_client = arxiv.Client()
search = arxiv.Search(
query=kwargs["query"],
max_results=self._param.top_n,
sort_by=sort_choices[self._param.sort_by]
)
self._retrieve_chunks(list(arxiv_client.results(search)),
get_title=lambda r: r.title,
get_url=lambda r: r.pdf_url,
get_content=lambda r: r.summary)
return self.output("formalized_content")
except Exception as e:
last_e = e
logging.exception(f"ArXiv error: {e}")
time.sleep(self._param.delay_after_error)
if last_e:
self.set_output("_ERROR", str(last_e))
return f"ArXiv error: {last_e}"
assert False, self.output()
def thoughts(self) -> str:
return """
Keywords: {}
Looking for the most relevant articles.
""".format(self.get_input().get("query", "-_-!"))

173
agent/tools/base.py Normal file
View File

@ -0,0 +1,173 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import re
import time
from copy import deepcopy
from functools import partial
from typing import TypedDict, List, Any
from agent.component.base import ComponentParamBase, ComponentBase
from api.utils import hash_str2int
from rag.llm.chat_model import ToolCallSession
from rag.prompts.prompts import kb_prompt
from rag.utils.mcp_tool_call_conn import MCPToolCallSession
from timeit import default_timer as timer
class ToolParameter(TypedDict):
type: str
description: str
displayDescription: str
enum: List[str]
required: bool
class ToolMeta(TypedDict):
name: str
displayName: str
description: str
displayDescription: str
parameters: dict[str, ToolParameter]
class LLMToolPluginCallSession(ToolCallSession):
def __init__(self, tools_map: dict[str, object], callback: partial):
self.tools_map = tools_map
self.callback = callback
def tool_call(self, name: str, arguments: dict[str, Any]) -> Any:
assert name in self.tools_map, f"LLM tool {name} does not exist"
st = timer()
if isinstance(self.tools_map[name], MCPToolCallSession):
resp = self.tools_map[name].tool_call(name, arguments, 60)
else:
resp = self.tools_map[name].invoke(**arguments)
self.callback(name, arguments, resp, elapsed_time=timer()-st)
return resp
def get_tool_obj(self, name):
return self.tools_map[name]
class ToolParamBase(ComponentParamBase):
def __init__(self):
#self.meta:ToolMeta = None
super().__init__()
self._init_inputs()
self._init_attr_by_meta()
def _init_inputs(self):
self.inputs = {}
for k,p in self.meta["parameters"].items():
self.inputs[k] = deepcopy(p)
def _init_attr_by_meta(self):
for k,p in self.meta["parameters"].items():
if not hasattr(self, k):
setattr(self, k, p.get("default"))
def get_meta(self):
params = {}
for k, p in self.meta["parameters"].items():
params[k] = {
"type": p["type"],
"description": p["description"]
}
if "enum" in p:
params[k]["enum"] = p["enum"]
desc = self.meta["description"]
if hasattr(self, "description"):
desc = self.description
function_name = self.meta["name"]
if hasattr(self, "function_name"):
function_name = self.function_name
return {
"type": "function",
"function": {
"name": function_name,
"description": desc,
"parameters": {
"type": "object",
"properties": params,
"required": [k for k, p in self.meta["parameters"].items() if p["required"]]
}
}
}
class ToolBase(ComponentBase):
def __init__(self, canvas, id, param: ComponentParamBase):
from agent.canvas import Canvas # Local import to avoid cyclic dependency
assert isinstance(canvas, Canvas), "canvas must be an instance of Canvas"
self._canvas = canvas
self._id = id
self._param = param
self._param.check()
def get_meta(self) -> dict[str, Any]:
return self._param.get_meta()
def invoke(self, **kwargs):
self.set_output("_created_time", time.perf_counter())
try:
res = self._invoke(**kwargs)
except Exception as e:
self._param.outputs["_ERROR"] = {"value": str(e)}
logging.exception(e)
res = str(e)
self._param.debug_inputs = []
self.set_output("_elapsed_time", time.perf_counter() - self.output("_created_time"))
return res
def _retrieve_chunks(self, res_list: list, get_title, get_url, get_content, get_score=None):
chunks = []
aggs = []
for r in res_list:
content = get_content(r)
if not content:
continue
content = re.sub(r"!?\[[a-z]+\]\(data:image/png;base64,[ 0-9A-Za-z/_=+-]+\)", "", content)
content = content[:10000]
if not content:
continue
id = str(hash_str2int(content))
title = get_title(r)
url = get_url(r)
score = get_score(r) if get_score else 1
chunks.append({
"chunk_id": id,
"content": content,
"doc_id": id,
"docnm_kwd": title,
"similarity": score,
"url": url
})
aggs.append({
"doc_name": title,
"doc_id": id,
"count": 1,
"url": url
})
self._canvas.add_refernce(chunks, aggs)
self.set_output("formalized_content", "\n".join(kb_prompt({"chunks": chunks, "doc_aggs": aggs}, 200000, True)))
def thoughts(self) -> str:
return self._canvas.get_component_name(self._id) + " is running..."
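`get_meta()` converts the `meta` declared by each `ToolParamBase` subclass into an OpenAI-style function-calling schema. For the `arxiv_search` tool above, the returned structure looks roughly like the following sketch (descriptions abbreviated, not captured output):

```python
# Approximate result of ArXivParam().get_meta() (sketch, not captured output).
arxiv_tool_schema = {
    "type": "function",
    "function": {
        "name": "arxiv_search",
        "description": "arXiv is a free distribution service and an open-access archive ...",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search keywords to execute with arXiv. ...",
                },
            },
            "required": ["query"],
        },
    },
}
```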

201
agent/tools/code_exec.py Normal file
View File

@ -0,0 +1,201 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import base64
import logging
import os
from abc import ABC
from strenum import StrEnum
from typing import Optional
from pydantic import BaseModel, Field, field_validator
from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
from api import settings
from api.utils.api_utils import timeout
class Language(StrEnum):
PYTHON = "python"
NODEJS = "nodejs"
class CodeExecutionRequest(BaseModel):
code_b64: str = Field(..., description="Base64 encoded code string")
language: str = Field(default=Language.PYTHON.value, description="Programming language")
arguments: Optional[dict] = Field(default={}, description="Arguments")
@field_validator("code_b64")
@classmethod
def validate_base64(cls, v: str) -> str:
try:
base64.b64decode(v, validate=True)
return v
except Exception as e:
raise ValueError(f"Invalid base64 encoding: {str(e)}")
@field_validator("language", mode="before")
@classmethod
def normalize_language(cls, v) -> str:
if isinstance(v, str):
low = v.lower()
if low in ("python", "python3"):
return "python"
elif low in ("javascript", "nodejs"):
return "nodejs"
raise ValueError(f"Unsupported language: {v}")
class CodeExecParam(ToolParamBase):
"""
Define the code sandbox component parameters.
"""
def __init__(self):
self.meta:ToolMeta = {
"name": "execute_code",
"description": """
This tool has a sandbox that can execute code written in 'Python'/'Javascript'. It receives a piece of code and returns a JSON string.
Here's a code example for Python(`main` function MUST be included):
def main() -> dict:
\"\"\"
Generate Fibonacci numbers within 100.
\"\"\"
def fibonacci_recursive(n):
if n <= 1:
return n
else:
return fibonacci_recursive(n-1) + fibonacci_recursive(n-2)
return {
"result": fibonacci_recursive(100),
}
Here's a code example for Javascript(`main` function MUST be included and exported):
const axios = require('axios');
async function main(args) {
try {
const response = await axios.get('https://github.com/infiniflow/ragflow');
console.log('Body:', response.data);
} catch (error) {
console.error('Error:', error.message);
}
}
module.exports = { main };
""",
"parameters": {
"lang": {
"type": "string",
"description": "The programming language of this piece of code.",
"enum": ["python", "javascript"],
"required": True,
},
"script": {
"type": "string",
"description": "A piece of code in right format. There MUST be main function.",
"required": True
}
}
}
super().__init__()
self.lang = Language.PYTHON.value
self.script = "def main(arg1: str, arg2: str) -> dict: return {\"result\": arg1 + arg2}"
self.arguments = {}
self.outputs = {"result": {"value": "", "type": "string"}}
def check(self):
self.check_valid_value(self.lang, "Support languages", ["python", "python3", "nodejs", "javascript"])
self.check_empty(self.script, "Script")
def get_input_form(self) -> dict[str, dict]:
res = {}
for k, v in self.arguments.items():
res[k] = {
"type": "line",
"name": k
}
return res
class CodeExec(ToolBase, ABC):
component_name = "CodeExec"
@timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))
def _invoke(self, **kwargs):
lang = kwargs.get("lang", self._param.lang)
script = kwargs.get("script", self._param.script)
arguments = {}
for k, v in self._param.arguments.items():
if kwargs.get(k):
arguments[k] = kwargs[k]
continue
arguments[k] = self._canvas.get_variable_value(v) if v else None
self._execute_code(
language=lang,
code=script,
arguments=arguments
)
def _execute_code(self, language: str, code: str, arguments: dict):
import requests
try:
code_b64 = self._encode_code(code)
code_req = CodeExecutionRequest(code_b64=code_b64, language=language, arguments=arguments).model_dump()
except Exception as e:
self.set_output("_ERROR", "construct code request error: " + str(e))
try:
resp = requests.post(url=f"http://{settings.SANDBOX_HOST}:9385/run", json=code_req, timeout=10)
logging.info(f"http://{settings.SANDBOX_HOST}:9385/run", code_req, resp.status_code)
if resp.status_code != 200:
resp.raise_for_status()
body = resp.json()
if body:
stderr = body.get("stderr")
if stderr:
self.set_output("_ERROR", stderr)
return
try:
rt = eval(body.get("stdout", ""))
except Exception:
rt = body.get("stdout", "")
logging.info(f"http://{settings.SANDBOX_HOST}:9385/run -> {rt}")
if isinstance(rt, tuple):
for i, (k, o) in enumerate(self._param.outputs.items()):
if k.find("_") == 0:
continue
o["value"] = rt[i]
elif isinstance(rt, dict):
for i, (k, o) in enumerate(self._param.outputs.items()):
if k not in rt or k.find("_") == 0:
continue
o["value"] = rt[k]
else:
for i, (k, o) in enumerate(self._param.outputs.items()):
if k.find("_") == 0:
continue
o["value"] = rt
else:
self.set_output("_ERROR", "There is no response from sandbox")
except Exception as e:
self.set_output("_ERROR", "Exception executing code: " + str(e))
return self.output()
def _encode_code(self, code: str) -> str:
return base64.b64encode(code.encode("utf-8")).decode("utf-8")
def thoughts(self) -> str:
return "Running a short script to process data."

View File

@ -16,11 +16,12 @@
 from abc import ABC
 import asyncio
 from crawl4ai import AsyncWebCrawler
-from agent.component.base import ComponentBase, ComponentParamBase
+from agent.tools.base import ToolParamBase, ToolBase
 from api.utils.web_utils import is_valid_url
 
-class CrawlerParam(ComponentParamBase):
+class CrawlerParam(ToolParamBase):
     """
     Define the Crawler component parameters.
     """
@ -34,7 +35,7 @@ class CrawlerParam(ComponentParamBase):
         self.check_valid_value(self.extract_type, "Type of content from the crawler", ['html', 'markdown', 'content'])
 
-class Crawler(ComponentBase, ABC):
+class Crawler(ToolBase, ABC):
     component_name = "Crawler"
 
     def _run(self, history, **kwargs):

120
agent/tools/duckduckgo.py Normal file
View File

@ -0,0 +1,120 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import os
import time
from abc import ABC
from duckduckgo_search import DDGS
from agent.tools.base import ToolMeta, ToolParamBase, ToolBase
from api.utils.api_utils import timeout
class DuckDuckGoParam(ToolParamBase):
"""
Define the DuckDuckGo component parameters.
"""
def __init__(self):
self.meta:ToolMeta = {
"name": "duckduckgo_search",
"description": "DuckDuckGo is a search engine focused on privacy. It offers search capabilities for web pages, images, and provides translation services. DuckDuckGo also features a private AI chat interface, providing users with an AI assistant that prioritizes data protection.",
"parameters": {
"query": {
"type": "string",
"description": "The search keywords to execute with DuckDuckGo. The keywords should be the most important words/terms(includes synonyms) from the original request.",
"default": "{sys.query}",
"required": True
},
"channel": {
"type": "string",
"description": "default:general. The category of the search. `news` is useful for retrieving real-time updates, particularly about politics, sports, and major current events covered by mainstream media sources. `general` is for broader, more general-purpose searches that may include a wide range of sources.",
"enum": ["general", "news"],
"default": "general",
"required": False,
},
}
}
super().__init__()
self.top_n = 10
self.channel = "text"
def check(self):
self.check_positive_integer(self.top_n, "Top N")
self.check_valid_value(self.channel, "Web Search or News", ["text", "news"])
def get_input_form(self) -> dict[str, dict]:
return {
"query": {
"name": "Query",
"type": "line"
},
"channel": {
"name": "Channel",
"type": "options",
"value": "general",
"options": ["general", "news"]
}
}
class DuckDuckGo(ToolBase, ABC):
component_name = "DuckDuckGo"
@timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 12))
def _invoke(self, **kwargs):
if not kwargs.get("query"):
self.set_output("formalized_content", "")
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
try:
if kwargs.get("topic", "general") == "general":
with DDGS() as ddgs:
# {'title': '', 'href': '', 'body': ''}
duck_res = ddgs.text(kwargs["query"], max_results=self._param.top_n)
self._retrieve_chunks(duck_res,
get_title=lambda r: r["title"],
get_url=lambda r: r.get("href", r.get("url")),
get_content=lambda r: r["body"])
self.set_output("json", duck_res)
return self.output("formalized_content")
else:
with DDGS() as ddgs:
# {'date': '', 'title': '', 'body': '', 'url': '', 'image': '', 'source': ''}
duck_res = ddgs.news(kwargs["query"], max_results=self._param.top_n)
self._retrieve_chunks(duck_res,
get_title=lambda r: r["title"],
get_url=lambda r: r.get("href", r.get("url")),
get_content=lambda r: r["body"])
self.set_output("json", duck_res)
return self.output("formalized_content")
except Exception as e:
last_e = e
logging.exception(f"DuckDuckGo error: {e}")
time.sleep(self._param.delay_after_error)
if last_e:
self.set_output("_ERROR", str(last_e))
return f"DuckDuckGo error: {last_e}"
assert False, self.output()
def thoughts(self) -> str:
return """
Keywords: {}
Looking for the most relevant articles.
""".format(self.get_input().get("query", "-_-!"))

215
agent/tools/email.py Normal file
View File

@ -0,0 +1,215 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import time
from abc import ABC
import json
import smtplib
import logging
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.header import Header
from email.utils import formataddr
from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
from api.utils.api_utils import timeout
class EmailParam(ToolParamBase):
"""
Define the Email component parameters.
"""
def __init__(self):
self.meta:ToolMeta = {
"name": "email",
"description": "The email is a method of electronic communication for sending and receiving information through the Internet. This tool helps users to send emails to one person or to multiple recipients with support for CC, BCC, file attachments, and markdown-to-HTML conversion.",
"parameters": {
"to_email": {
"type": "string",
"description": "The target email address.",
"default": "{sys.query}",
"required": True
},
"cc_email": {
"type": "string",
"description": "The other email addresses needs to be send to. Comma splited.",
"default": "",
"required": False
},
"content": {
"type": "string",
"description": "The content of the email.",
"default": "",
"required": False
},
"subject": {
"type": "string",
"description": "The subject/title of the email.",
"default": "",
"required": False
}
}
}
super().__init__()
# Fixed configuration parameters
self.smtp_server = "" # SMTP server address
self.smtp_port = 465 # SMTP port
self.email = "" # Sender email
self.password = "" # Email authorization code
self.sender_name = "" # Sender name
def check(self):
# Check required parameters
self.check_empty(self.smtp_server, "SMTP Server")
self.check_empty(self.email, "Email")
self.check_empty(self.password, "Password")
self.check_empty(self.sender_name, "Sender Name")
def get_input_form(self) -> dict[str, dict]:
return {
"to_email": {
"name": "To ",
"type": "line"
},
"subject": {
"name": "Subject",
"type": "line",
"optional": True
},
"cc_email": {
"name": "CC To",
"type": "line",
"optional": True
},
}
class Email(ToolBase, ABC):
component_name = "Email"
@timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 60))
def _invoke(self, **kwargs):
if not kwargs.get("to_email"):
self.set_output("success", False)
return ""
last_e = ""
for _ in range(self._param.max_retries+1):
try:
# Parse JSON string passed from upstream
email_data = kwargs
# Validate required fields
if "to_email" not in email_data:
return Email.be_output("Missing required field: to_email")
# Create email object
msg = MIMEMultipart('alternative')
# Properly handle sender name encoding
msg['From'] = formataddr((str(Header(self._param.sender_name,'utf-8')), self._param.email))
msg['To'] = email_data["to_email"]
if email_data.get("cc_email"):
msg['Cc'] = email_data["cc_email"]
msg['Subject'] = Header(email_data.get("subject", "No Subject"), 'utf-8').encode()
# Use content from email_data or default content
email_content = email_data.get("content", "No content provided")
# msg.attach(MIMEText(email_content, 'plain', 'utf-8'))
msg.attach(MIMEText(email_content, 'html', 'utf-8'))
# Connect to SMTP server and send
logging.info(f"Connecting to SMTP server {self._param.smtp_server}:{self._param.smtp_port}")
context = smtplib.ssl.create_default_context()
with smtplib.SMTP(self._param.smtp_server, self._param.smtp_port) as server:
server.ehlo()
server.starttls(context=context)
server.ehlo()
# Login
logging.info(f"Attempting to login with email: {self._param.email}")
server.login(self._param.email, self._param.password)
# Get all recipient list
recipients = [email_data["to_email"]]
if email_data.get("cc_email"):
recipients.extend(email_data["cc_email"].split(','))
# Send email
logging.info(f"Sending email to recipients: {recipients}")
try:
server.send_message(msg, self._param.email, recipients)
success = True
except Exception as e:
logging.error(f"Error during send_message: {str(e)}")
# Try alternative method
server.sendmail(self._param.email, recipients, msg.as_string())
success = True
try:
server.quit()
except Exception as e:
# Ignore errors when closing connection
logging.warning(f"Non-fatal error during connection close: {str(e)}")
self.set_output("success", success)
return success
except json.JSONDecodeError:
error_msg = "Invalid JSON format in input"
logging.error(error_msg)
self.set_output("_ERROR", error_msg)
self.set_output("success", False)
return False
except smtplib.SMTPAuthenticationError:
error_msg = "SMTP Authentication failed. Please check your email and authorization code."
logging.error(error_msg)
self.set_output("_ERROR", error_msg)
self.set_output("success", False)
return False
except smtplib.SMTPConnectError:
error_msg = f"Failed to connect to SMTP server {self._param.smtp_server}:{self._param.smtp_port}"
logging.error(error_msg)
last_e = error_msg
time.sleep(self._param.delay_after_error)
except smtplib.SMTPException as e:
error_msg = f"SMTP error occurred: {str(e)}"
logging.error(error_msg)
last_e = error_msg
time.sleep(self._param.delay_after_error)
except Exception as e:
error_msg = f"Unexpected error: {str(e)}"
logging.error(error_msg)
self.set_output("_ERROR", error_msg)
self.set_output("success", False)
return False
if last_e:
self.set_output("_ERROR", str(last_e))
return False
assert False, self.output()
def thoughts(self) -> str:
inputs = self.get_input()
return """
To: {}
Subject: {}
Your email is on its way—sit tight!
""".format(inputs.get("to_email", "-_-!"), inputs.get("subject", "-_-!"))

148
agent/tools/exesql.py Normal file
View File

@ -0,0 +1,148 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import re
from abc import ABC
import pandas as pd
import pymysql
import psycopg2
import pyodbc
from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
from api.utils.api_utils import timeout
class ExeSQLParam(ToolParamBase):
"""
Define the ExeSQL component parameters.
"""
def __init__(self):
self.meta:ToolMeta = {
"name": "execute_sql",
"description": "This is a tool that can execute SQL.",
"parameters": {
"sql": {
"type": "string",
"description": "The SQL needs to be executed.",
"default": "{sys.query}",
"required": True
}
}
}
super().__init__()
self.db_type = "mysql"
self.database = ""
self.username = ""
self.host = ""
self.port = 3306
self.password = ""
self.max_records = 1024
def check(self):
self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgresql', 'mariadb', 'mssql'])
self.check_empty(self.database, "Database name")
self.check_empty(self.username, "database username")
self.check_empty(self.host, "IP Address")
self.check_positive_integer(self.port, "IP Port")
self.check_empty(self.password, "Database password")
self.check_positive_integer(self.max_records, "Maximum number of records")
if self.database == "rag_flow":
if self.host == "ragflow-mysql":
raise ValueError("For the security reason, it dose not support database named rag_flow.")
if self.password == "infini_rag_flow":
raise ValueError("For the security reason, it dose not support database named rag_flow.")
def get_input_form(self) -> dict[str, dict]:
return {
"sql": {
"name": "SQL",
"type": "line"
}
}
class ExeSQL(ToolBase, ABC):
component_name = "ExeSQL"
@timeout(os.environ.get("COMPONENT_EXEC_TIMEOUT", 60))
def _invoke(self, **kwargs):
def convert_decimals(obj):
from decimal import Decimal
if isinstance(obj, Decimal):
return float(obj)  # or str(obj)
elif isinstance(obj, dict):
return {k: convert_decimals(v) for k, v in obj.items()}
elif isinstance(obj, list):
return [convert_decimals(item) for item in obj]
return obj
sql = kwargs.get("sql")
if not sql:
raise Exception("SQL for `ExeSQL` MUST not be empty.")
sqls = sql.split(";")
if self._param.db_type in ["mysql", "mariadb"]:
db = pymysql.connect(db=self._param.database, user=self._param.username, host=self._param.host,
port=self._param.port, password=self._param.password)
elif self._param.db_type == 'postgresql':
db = psycopg2.connect(dbname=self._param.database, user=self._param.username, host=self._param.host,
port=self._param.port, password=self._param.password)
elif self._param.db_type == 'mssql':
conn_str = (
r'DRIVER={ODBC Driver 17 for SQL Server};'
r'SERVER=' + self._param.host + ',' + str(self._param.port) + ';'
r'DATABASE=' + self._param.database + ';'
r'UID=' + self._param.username + ';'
r'PWD=' + self._param.password
)
db = pyodbc.connect(conn_str)
try:
cursor = db.cursor()
except Exception as e:
raise Exception("Database Connection Failed! \n" + str(e))
sql_res = []
formalized_content = []
for single_sql in sqls:
single_sql = single_sql.replace('```','')
if not single_sql:
continue
single_sql = re.sub(r"\[ID:[0-9]+\]", "", single_sql)
cursor.execute(single_sql)
if cursor.rowcount == 0:
sql_res.append({"content": "No record in the database!"})
break
if self._param.db_type == 'mssql':
single_res = pd.DataFrame.from_records(cursor.fetchmany(self._param.max_records),
columns=[desc[0] for desc in cursor.description])
else:
single_res = pd.DataFrame([i for i in cursor.fetchmany(self._param.max_records)])
single_res.columns = [i[0] for i in cursor.description]
for col in single_res.columns:
if pd.api.types.is_datetime64_any_dtype(single_res[col]):
single_res[col] = single_res[col].dt.strftime('%Y-%m-%d')
sql_res.append(convert_decimals(single_res.to_dict(orient='records')))
formalized_content.append(single_res.to_markdown(index=False, floatfmt=".6f"))
self.set_output("json", sql_res)
self.set_output("formalized_content", "\n\n".join(formalized_content))
return self.output("formalized_content")
def thoughts(self) -> str:
return "Query sent—waiting for the data."

Some files were not shown because too many files have changed in this diff.