Compare commits

...

115 Commits

Author SHA1 Message Date
da5cef0686 Refactor: Improve the float compare for LocalAIRerank (#9428)
### What problem does this PR solve?
Improve the float comparison for LocalAIRerank.
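
For illustration, a tolerance-based comparison of the kind this refers to (a sketch, not the actual LocalAIRerank code):

```python
import math

# Comparing floats with == is unreliable for similarity scores;
# a tolerance-based check avoids spurious mismatches.
def scores_equal(a: float, b: float, tol: float = 1e-9) -> bool:
    return math.isclose(a, b, rel_tol=tol, abs_tol=tol)
```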

### Type of change

- [x] Refactoring
2025-08-13 10:26:42 +08:00
9098efb8aa Feat: Fixed the issue where some fields in the chat configuration could not be displayed #3221 (#9430)
### What problem does this PR solve?

Feat: Fixed the issue where some fields in the chat configuration could
not be displayed #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-13 10:26:26 +08:00
421657f64b Feat: allows setting multiple types of default models in service config (#9404)
### What problem does this PR solve?

Allows setting multiple types of default models in the service config.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-13 09:46:05 +08:00
7ee5e0d152 Fix KeyError in session listing endpoint when accessing conversation reference (#9419)
- Add type and boundary checks for conv["reference"] access
- Prevent KeyError: 0 when reference list is empty or malformed
- Ensure reference is list type before indexing
- Handle cases where reference items are None or missing chunks
- Maintains backward compatibility with existing data structures

This resolves crashes in the /api/v1/agents/<agent_id>/sessions endpoint
when conversation reference data is not properly structured.

### What problem does this PR solve?

This PR fixes a critical `KeyError: 0` that occurs in the
`/api/v1/agents/<agent_id>/sessions` endpoint when the system attempts
to access conversation reference data that is not properly structured.

**Background Context:**
The `list_agent_session` method in `api/apps/sdk/session.py` assumes
that `conv["reference"]` is always a properly indexed list with valid
dictionary structures. However, in real-world scenarios, this data can
be:
- Not a list type (could be None, string, or other types)
- An empty list when `chunk_num` tries to access index 0
- Contains None values or malformed dictionary structures
- Missing expected "chunks" keys in reference items

**Impact Before Fix:**
When malformed reference data is encountered, the API crashes with:
```json
{
    "code": 100,
    "data": null,
    "message": "KeyError(0)"
}
```
**Solution:**
Added comprehensive safety checks including type validation, boundary
checking, null safety, and structure validation to ensure the API
gracefully handles all reference data formats while maintaining backward
compatibility.
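
A minimal sketch of the defensive access pattern described above (names are illustrative, not the exact RAGFlow code):

```python
def safe_chunks(conv: dict) -> list:
    # Guard against None, non-list values, empty lists, and malformed
    # items before indexing conv["reference"][0]["chunks"].
    reference = conv.get("reference")
    if not isinstance(reference, list) or not reference:
        return []
    first = reference[0]
    if not isinstance(first, dict):
        return []
    return first.get("chunks") or []
```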

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-08-13 09:23:52 +08:00
22915223d4 Fix: citation issue. (#9424)
### What problem does this PR solve?

#8474

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-12 18:53:34 +08:00
d7b4e84cda Refa: Update LLM stream response type to Generator (#9420)
### What problem does this PR solve?

Change return type of _generate_streamly from str to Generator[str,
None, None] to properly type hint streaming responses.
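
In sketch form (only the signature change comes from this PR; the body is illustrative):

```python
from typing import Generator

def _generate_streamly(prompt: str) -> Generator[str, None, None]:
    # The function yields partial answers chunk by chunk rather than
    # returning a single str, so the annotation now reflects that.
    for chunk in ("partial ", "response"):
        yield chunk
```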

### Type of change

- [x] Refactoring
2025-08-12 18:05:52 +08:00
e845d5f9f8 Fix: ValueError when a file is optional but no value exists (#9414)
### What problem does this PR solve?

When the Begin component has an optional file with no value, it raises an error.

### Type of change

- [x] Bug Fix

Co-authored-by: Popmio <zhengyihao036@gamil.com>
2025-08-12 17:39:03 +08:00
3d18284dd6 Feat: Added metadata to the chat configuration page #8531 (#9417)
### What problem does this PR solve?

Feat: Added metadata to the chat configuration page #8531

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-12 16:19:23 +08:00
96783aa82c Fix: remove doc error. (#9413)
### What problem does this PR solve?

Close #9407

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-12 15:55:04 +08:00
a0c2da1219 Fix: Patch LiteLLM (#9416)
### What problem does this PR solve?

Patch LiteLLM refactor. #9408

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-12 15:54:30 +08:00
79e2edc835 Fix "File contains no valid workbook part" (#9360)
### What problem does this PR solve?

fix "File contains no valid workbook part"

stacktrace:
```
Traceback (most recent call last):
  File "/ragflow/deepdoc/parser/excel_parser.py", line 54, in _load_excel_to_workbook
    return RAGFlowExcelParser._dataframe_to_workbook(df)
  File "/ragflow/deepdoc/parser/excel_parser.py", line 69, in _dataframe_to_workbook
    ws.cell(row=row_num, column=col_num, value=value)
  File "/ragflow/.venv/lib/python3.10/site-packages/openpyxl/worksheet/worksheet.py", line 246, in cell
    cell.value = value
  File "/ragflow/.venv/lib/python3.10/site-packages/openpyxl/cell/cell.py", line 218, in value
    self._bind_value(value)
  File "/ragflow/.venv/lib/python3.10/site-packages/openpyxl/cell/cell.py", line 197, in _bind_value
    value = self.check_string(value)
  File "/ragflow/.venv/lib/python3.10/site-packages/openpyxl/cell/cell.py", line 165, in check_string
    raise IllegalCharacterError(f"{value} cannot be used in worksheets.")
```
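
One common remedy is to sanitize cell values with openpyxl's own pattern before writing them (a sketch, not necessarily the exact change):

```python
from openpyxl.cell.cell import ILLEGAL_CHARACTERS_RE

def sanitize_cell_value(value):
    # openpyxl raises IllegalCharacterError for control characters,
    # so strip them from strings before writing the cell.
    if isinstance(value, str):
        return ILLEGAL_CHARACTERS_RE.sub("", value)
    return value
```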

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-08-12 14:58:36 +08:00
57b87fa9d9 Fix:TypeError: OllamaCV.chat() got an unexpected keyword argument 'stop' (#9363)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/9351
Filter out unsupported arguments before invoking.
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-12 14:55:27 +08:00
153e430b00 Feat: add metadata filter. (#9405)
### What problem does this PR solve?

#8531 
#7417 
#6761 
#6573
#6477

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-12 14:12:56 +08:00
3ccaa06031 Fix: Before executing the SQL, remove tags in the format [ID: number] to avoid execution errors. (#9326)
### What problem does this PR solve?

Before executing the SQL, remove tags in the format [ID: number] to
avoid execution errors.
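
In pattern form (the regex is illustrative; the tag format is `[ID: number]` as described):

```python
import re

def strip_id_tags(sql: str) -> str:
    # Remove citation tags like "[ID: 12]" that would otherwise be
    # sent to the database and break SQL execution.
    return re.sub(r"\[ID:\s*\d+\]", "", sql)
```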

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: wangyazhou <wangyazhou@sdibd.cn>
2025-08-12 12:42:28 +08:00
569ab011c4 Add fallback to use 'calamine' parse engine in excel_parser.py (#9374)
### What problem does this PR solve?

Add a fallback to the `calamine` engine when a parse error is raised by the
default `openpyxl`/`xlrd` engine, e.g. the following error:
```
Traceback (most recent call last):
  File "/ragflow/deepdoc/parser/excel_parser.py", line 53, in _load_excel_to_workbook
    df = pd.read_excel(file_like_object)
  File "/ragflow/.venv/lib/python3.10/site-packages/pandas/io/excel/_base.py", line 495, in read_excel
    io = ExcelFile(
  File "/ragflow/.venv/lib/python3.10/site-packages/pandas/io/excel/_base.py", line 1567, in __init__
    self._reader = self._engines[engine](
  File "/ragflow/.venv/lib/python3.10/site-packages/pandas/io/excel/_xlrd.py", line 46, in __init__
    super().__init__(
  File "/ragflow/.venv/lib/python3.10/site-packages/pandas/io/excel/_base.py", line 573, in __init__
    self.book = self.load_workbook(self.handles.handle, engine_kwargs)
  File "/ragflow/.venv/lib/python3.10/site-packages/pandas/io/excel/_xlrd.py", line 63, in load_workbook
    return open_workbook(file_contents=data, **engine_kwargs)
  File "/ragflow/.venv/lib/python3.10/site-packages/xlrd/__init__.py", line 172, in open_workbook
    bk = open_workbook_xls(
  File "/ragflow/.venv/lib/python3.10/site-packages/xlrd/book.py", line 68, in open_workbook_xls
    bk.biff2_8_load(
  File "/ragflow/.venv/lib/python3.10/site-packages/xlrd/book.py", line 641, in biff2_8_load
    cd.locate_named_stream(UNICODE_LITERAL(qname))
  File "/ragflow/.venv/lib/python3.10/site-packages/xlrd/compdoc.py", line 398, in locate_named_stream
    result = self._locate_stream(
  File "/ragflow/.venv/lib/python3.10/site-packages/xlrd/compdoc.py", line 429, in _locate_stream
    raise CompDocError("%s corruption: seen[%d] == %d" % (qname, s, self.seen[s]))
xlrd.compdoc.CompDocError: Workbook corruption: seen[2] == 4
```
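
A sketch of the fallback (assuming pandas 2.2+ with the `python-calamine` package installed; not the exact excel_parser.py code):

```python
import pandas as pd

def read_excel_with_fallback(file_like):
    # Try the default engine first (openpyxl for .xlsx, xlrd for .xls);
    # on corruption errors, retry with the more tolerant calamine engine.
    try:
        return pd.read_excel(file_like)
    except Exception:
        file_like.seek(0)
        return pd.read_excel(file_like, engine="calamine")
```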

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-12 12:41:33 +08:00
96b1538b3e Fix:HTTP request component failed to retrieve the corresponding value (#9399)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/9385
Based on my understanding, checking for an empty string is fine.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-12 12:27:22 +08:00
735570486f feat(next-search): Added AI summary functionality #3221 (#9402)
### What problem does this PR solve?

feat(next-search): Added AI summary functionality #3221

- Added the LlmSettingFieldItems component for AI summary settings
- Updated the SearchSetting component to integrate AI summary
functionality
- Added the updateSearch hook and related service methods
- Modified the ISearchAppDetailProps interface to add the llm_setting
field

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-08-12 12:27:00 +08:00
da68f541b6 Feat: add full list of supported AWS Bedrock regions (#9395)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-12 11:01:16 +08:00
83771e500c Refa: migrate chat models to LiteLLM (#9394)
### What problem does this PR solve?

All models pass the mock response tests, which means that if a model can
return the correct response, everything should work as expected.
However, not all models have been fully tested in a real environment with
a real API_KEY. I suggest actively monitoring the refactored models
over the coming period and fixing them step by step, or waiting to merge
until most have been tested in a practical environment.

### Type of change

- [x] Refactoring
2025-08-12 10:59:20 +08:00
a6d2119498 Refa: list canvas (#9341)
### What problem does this PR solve?

Refactor list canvas.

### Type of change

- [x] Refactoring
2025-08-12 10:58:06 +08:00
57b9f8cf52 Fix: Update test assertions and simplify test cases (#9400)
### What problem does this PR solve?

- Fix error message assertion in test_update_chunk.py to match new
ownership validation
- Simplify dataset listing test cases by removing lambda assertions for
sorting
- Fix actions:
https://github.com/infiniflow/ragflow/actions/runs/16885465524/job/47831942553

### Type of change

- [x] Fix test cases
2025-08-12 10:57:30 +08:00
5c3577c4c9 Python SDK: add meta_fields to Document class (#9387)
### What problem does this PR solve?

The Python class Document was missing "meta_fields"; e.g. when querying,
document instances came back without meta_fields.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-12 10:16:12 +08:00
76118000c1 Feat: Allow chat to use metadata #3221 (#9393)
### What problem does this PR solve?

Feat: Allow chat to use metadata #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-12 10:15:10 +08:00
9433f64fe2 Feat: added functionality to choose all datasets if no id is provided (#9184)
### What problem does this PR solve?

Using the MCP server in n8n sometimes (with smaller models) results in
errors because the LLM misses a character, or adds one, in the list of
dataset_ids provided. It first asks for the list of datasets, and with a
larger list it makes errors recalling the list completely. Adding the
option to search through all available datasets solves this and makes
retrieval more stable. The functionality to call specific datasets by id
is unchanged; dataset_ids is simply no longer required (only the
"question" is). As before, you can provide a list of datasets, an empty
list, or no list at all (see the sketch below).
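
A minimal sketch of the fallback (names are hypothetical):

```python
def resolve_dataset_ids(dataset_ids, all_dataset_ids):
    # A missing or empty list now means "search all datasets the tenant
    # can access"; an explicit list keeps the old behavior.
    return list(all_dataset_ids) if not dataset_ids else dataset_ids

print(resolve_dataset_ids(None, ["kb1", "kb2"]))     # ['kb1', 'kb2']
print(resolve_dataset_ids(["kb1"], ["kb1", "kb2"]))  # ['kb1']
```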

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
<img width="1897" height="880" alt="mcp error dataset id"
src="https://github.com/user-attachments/assets/71076d24-f875-4663-a69a-60839fc7a545"
/>
2025-08-11 17:20:35 +08:00
d7c9611d45 docs(sandbox): update /etc/hosts entry to include required services (#9144)
Fixes an issue where running the sandbox (code component) fails due to
unresolved hostnames. Added missing service names (es01, infinity,
mysql, minio, redis) to 127.0.0.1 in the /etc/hosts example.

Reference: https://github.com/infiniflow/ragflow/issues/8226

## What this PR does

Updates the sandbox quickstart documentation to fix a known issue where
the sandbox fails to resolve required service hostnames.

## Why

Following the original instruction leads to a `Failed to resolve 'none'`
error, as discussed in issue #8226. Adding the missing service names to
`127.0.0.1` resolves the problem.

## Related issue

https://github.com/infiniflow/ragflow/issues/8226

## Note

It might be better to add `127.0.0.1 es01 infinity mysql minio redis` to
docs/quickstart.mdx, but since no issues appeared at the time without this
line, and the problem occurred while working with the code component, I
added it here.

### Type of change

- [X] Documentation Update
2025-08-11 17:18:56 +08:00
79399f7f25 Support the case of one cell split by multiple columns. (#9225)
### What problem does this PR solve?
Support the case of one cell split across multiple columns. The code
remains compatible with the common cell case. This fixes #8606.
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

I provide a test case of one cell split across multiple columns:

[test.xlsx](https://github.com/user-attachments/files/21578693/test.xlsx)

The chunking result:
<img width="236" height="57" alt="2025-06-17 16-04-07 的屏幕截图"
src="https://github.com/user-attachments/assets/b0a499ac-349d-4c3d-8c6e-0931c8fc26de"
/>
2025-08-11 17:17:56 +08:00
23522f1ea8 Fix: handle missing dataset_ids when creating chat assistant (#9324)
- Root cause: accessing req.get("dataset_ids") returns None when the key
is absent, causing a KeyError.
- Fix: use req.get("dataset_ids", []) to default to an empty list (see the sketch below).
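
In pattern form:

```python
req = {}  # request body without the optional key
# req.get("dataset_ids") would return None and break downstream
# indexing; defaulting to an empty list avoids that.
dataset_ids = req.get("dataset_ids", [])
```
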
2025-08-11 17:17:20 +08:00
46dc3f1c48 Fix: Update test assertions and add GraphRAG config in dataset tests (#9386)
### What problem does this PR solve?

- Modify error message assertion in chunk update test to check for
document ownership
- Add GraphRAG configuration with `use_graphrag: False` in dataset
update tests
- Fix actions:
https://github.com/infiniflow/ragflow/actions/runs/16863637898/job/47767511582
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-11 17:15:48 +08:00
c9b156fa6d Fix: Remove default dataset_ids from Chat class initialization (#9381)
### What problem does this PR solve?

- The default dataset_ids "kb1" was removed from the Chat class. 
- The HTTP API response does not include the dataset_ids field.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-11 17:15:30 +08:00
83939b1a63 Feat: add full list of supported AWS Bedrock regions (#9378)
### What problem does this PR solve?

Add full list of supported AWS Bedrock regions.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 17:15:07 +08:00
7f08ba47d7 Fix "no tc element at grid_offset" (#9375)
### What problem does this PR solve?

Fix "no `tc` element at grid_offset": just log a warning and ignore it.
stacktrace:
```
Traceback (most recent call last):
  File "/ragflow/rag/svr/task_executor.py", line 620, in handle_task
    await do_handle_task(task)
  File "/ragflow/rag/svr/task_executor.py", line 553, in do_handle_task
    chunks = await build_chunks(task, progress_callback)
  File "/ragflow/rag/svr/task_executor.py", line 257, in build_chunks
    cks = await trio.to_thread.run_sync(lambda: chunker.chunk(task["name"], binary=binary, from_page=task["from_page"],
  File "/ragflow/.venv/lib/python3.10/site-packages/trio/_threads.py", line 447, in to_thread_run_sync
    return msg_from_thread.unwrap()
  File "/ragflow/.venv/lib/python3.10/site-packages/outcome/_impl.py", line 213, in unwrap
    raise captured_error
  File "/ragflow/.venv/lib/python3.10/site-packages/trio/_threads.py", line 373, in do_release_then_return_result
    return result.unwrap()
  File "/ragflow/.venv/lib/python3.10/site-packages/outcome/_impl.py", line 213, in unwrap
    raise captured_error
  File "/ragflow/.venv/lib/python3.10/site-packages/trio/_threads.py", line 392, in worker_fn
    ret = context.run(sync_fn, *args)
  File "/ragflow/rag/svr/task_executor.py", line 257, in <lambda>
    cks = await trio.to_thread.run_sync(lambda: chunker.chunk(task["name"], binary=binary, from_page=task["from_page"],
  File "/ragflow/rag/app/naive.py", line 384, in chunk
    sections, tables = Docx()(filename, binary)
  File "/ragflow/rag/app/naive.py", line 230, in __call__
    while i < len(r.cells):
  File "/ragflow/.venv/lib/python3.10/site-packages/docx/table.py", line 438, in cells
    return tuple(_iter_row_cells())
  File "/ragflow/.venv/lib/python3.10/site-packages/docx/table.py", line 436, in _iter_row_cells
    yield from iter_tc_cells(tc)
  File "/ragflow/.venv/lib/python3.10/site-packages/docx/table.py", line 424, in iter_tc_cells
    yield from iter_tc_cells(tc._tc_above)  # pyright: ignore[reportPrivateUsage]
  File "/ragflow/.venv/lib/python3.10/site-packages/docx/oxml/table.py", line 741, in _tc_above
    return self._tr_above.tc_at_grid_offset(self.grid_offset)
  File "/ragflow/.venv/lib/python3.10/site-packages/docx/oxml/table.py", line 98, in tc_at_grid_offset
    raise ValueError(f"no `tc` element at grid_offset={grid_offset}")
ValueError: no `tc` element at grid_offset=10
```
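
A sketch of the described handling (illustrative; the real change sits in the Docx table loop in rag/app/naive.py):

```python
import logging

def row_cells_or_skip(row):
    # python-docx raises "no `tc` element at grid_offset=..." for some
    # malformed tables; log a warning and skip the row instead of
    # failing the whole document.
    try:
        return list(row.cells)
    except ValueError as e:
        logging.warning("Skipping malformed table row: %s", e)
        return []
```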

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-11 17:13:10 +08:00
ce3dd019c3 Fix broken data stream when writing image file (#9354)
### What problem does this PR solve?

Fix "broken data stream when writing image file": just log a warning and
ignore it.

Close #8379 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-11 17:07:49 +08:00
476c56868d Agent plans tasks by referring to its own prompt. (#9315)
### What problem does this PR solve?

Fixes the issue in the analyze_task execution flow where the Lead Agent
was not utilizing its own sys_prompt during task analysis, resulting in
incorrect or incomplete task planning.
https://github.com/infiniflow/ragflow/issues/9294
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 17:05:06 +08:00
b9c4954c2f Fix: Replace StrEnum with strenum in code_exec.py (#9376)
### What problem does this PR solve?

- The enum import was changed from Python's built-in StrEnum to the
`strenum` package (see the sketch below).
- Fix error `Warning: Failed to import module code_exec: cannot import
name 'StrEnum' from 'enum' (/usr/lib/python3.10/enum.py)`
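
In sketch form (the enum definition is illustrative):

```python
# enum.StrEnum only exists on Python 3.11+; the strenum backport
# provides the same behavior on Python 3.10.
from strenum import StrEnum

class Language(StrEnum):
    PYTHON = "python"
    NODEJS = "nodejs"
```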

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-11 15:32:04 +08:00
a060672b31 Feat: Run eslint when the project is running to standardize everyone's code #9377 (#9379)
### What problem does this PR solve?

Feat: Run eslint when the project is running to standardize everyone's
code #9377

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 15:31:38 +08:00
f022504ef9 Support Russian in UI Update config.ts (#9361)
Add Russian (`ru`) to config.ts.

### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
2025-08-11 15:30:35 +08:00
1a78b8b295 Support Russian in UI (#9362)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 14:06:18 +08:00
017dd85ccf Feat: Modify the agent list return field name #3221 (#9373)
### What problem does this PR solve?

Feat: Modify the agent list return field name #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 14:03:52 +08:00
4c7b2ef46e Feat: New search page components and features (#9344)
### What problem does this PR solve?

Feat: New search page components and features #3221

- Added search homepage, search settings, and ongoing search components
- Implemented features such as search app list, creating search apps,
and deleting search apps
- Optimized the multi-select component, adding disabled state and suffix
display
- Adjusted navigation hooks to support search page navigation

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-11 10:34:22 +08:00
597d88bf9a Doc: updated supported model name (#9343)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-08-11 10:05:39 +08:00
9b026fc5b6 Refa: code format. (#9342)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2025-08-08 18:44:42 +08:00
90eb5fd31b Fix: canvas sharing bug. (#9339)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 18:31:51 +08:00
b9eeb8e64f Docs: Update version references to v0.20.1 in READMEs and docs (#9335)
### What problem does this PR solve?

- Update version tags in README files (including translations) from
v0.20.0 to v0.20.1
- Modify Docker image references and documentation to reflect new
version
- Update version badges and image descriptions
- Maintain consistency across all language variants of README files

### Type of change

- [x] Documentation Update
2025-08-08 18:17:25 +08:00
4c99988c3e Revert: revert token_required decorator of agent_bot completions and inputs (#9332)
### What problem does this PR solve?

Revert token_required decorator of agent_bot completions and inputs.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2025-08-08 17:45:53 +08:00
4f2e9ef248 feat(agent): Adds prologue functionality (#9336)
### What problem does this PR solve?

feat(agent): Adds prologue functionality #3221

- Add a prologue field to the IInputs type
- Initialize the prologue state in the chat container
- Use useEffect to monitor prologue changes and add prologue responses
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 17:45:37 +08:00
4a3871090d Docs: v0.20.1 release notes (#9331)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2025-08-08 17:35:12 +08:00
7ce64cb265 Update the sql assistant workflow (#9329)
### What problem does this PR solve?
Update the sql assistant workflow
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 17:06:37 +08:00
d102a6bb71 New workflow templates: choose your knowledge base (#9325)
### What problem does this PR solve?

New Agent templates: you can choose your knowledge base; both workflow
and Agent versions are provided.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-08 17:06:16 +08:00
a02ca16260 Fix: add prologue to api. (#9322)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 17:05:55 +08:00
cd3bb0ed7c Feat: Set the description of the agent, which can be null #3221 (#9327)
### What problem does this PR solve?

Feat: Set the description of the agent, which can be null #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-08 16:44:08 +08:00
86fb710e52 Feat: Add xai logo #1853 (#9321)
### What problem does this PR solve?

Feat: Add xai logo #1853

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-08 14:13:19 +08:00
7713e14d6a Update chat_model.py (#9318)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/9317
Based on
https://discuss.ai.google.dev/t/valueerror-invalid-operation-the-response-text-quick-accessor-requires-the-response-to-contain-a-valid-part-but-none-were-returned/42866
this should be handled by a retry.
### Type of change

- [x] Refactoring
2025-08-08 14:13:07 +08:00
392f5f4ce9 fix model type (#9250)
### What problem does this PR solve?
Fix an incorrect model type that caused errors.

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 13:43:53 +08:00
79481becea Feat: supports GPT-5 (#9320)
### What problem does this PR solve?

Supports GPT-5.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-08 11:54:40 +08:00
58a64000ea Feat: Render agent setting dialog #3221 (#9312)
### What problem does this PR solve?

Feat: Render agent setting dialog #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-08 11:00:55 +08:00
1bd64dafcb Fix: update broken agent completion due to v0.20.0 changes (#9309)
### What problem does this PR solve?

Update broken agent completion due to v0.20.0 changes. #9199

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-08 10:00:16 +08:00
07354f4a1a Contribute a new workflow template: SQL Assistant (#9311)
### What problem does this PR solve?

Contribute a new workflow template: SQL Assistant

### Type of change

- [x] Other (please describe): new workflow template
2025-08-07 18:06:49 +08:00
d628234942 Feat: Restore the button's background color #3221 (#9307)
### What problem does this PR solve?

Feat: Restore the button's background color #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-07 17:37:53 +08:00
5749aa30b0 Fix: model type error. (#9308)
### What problem does this PR solve?

#9240

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-07 16:14:47 +08:00
a2e1f5618d Fix: bytes style image issue. (#9304)
### What problem does this PR solve?

#9302

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-07 15:20:01 +08:00
dc48c3863d Feat: Replace color variables according to design draft #3221 (#9305)
### What problem does this PR solve?

Feat: Replace color variables according to design draft #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-07 15:19:45 +08:00
23062cb27a Feat: Configure colors according to the design draft#3221 (#9301)
### What problem does this PR solve?

Feat: Configure colors according to the design draft#3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-07 13:59:33 +08:00
63c2f5b821 Fix: virtual file cannot be displayed in KB (#9282)
### What problem does this PR solve?

Fix virtual file cannot be displayed in KB. #9265

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-07 11:08:03 +08:00
0a0bfc02a0 Refactor: naive_merge_with_images closes unused images (#9296)
### What problem does this PR solve?

naive_merge_with_images now closes images that are no longer used.

### Type of change

- [x] Refactoring
2025-08-07 11:07:29 +08:00
f0c34d4454 Feat: Render chat page #3221 (#9298)
### What problem does this PR solve?

Feat: Render chat page #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-07 11:07:15 +08:00
7c719f8365 fix: Optimized popups and the search page (#9297)
### What problem does this PR solve?

fix: Optimized popups and the search page #3221 
- Added a new PortalModal component
- Refactored the Modal component, adding show and hide methods to
support popups
- Updated the search page, adding a new query function and optimizing
the search card style
- Localization: added search-related translations

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-07 11:07:04 +08:00
4fc9e42e74 fix: add missing env vars and default values of service_conf.yaml (#9289)
### What problem does this PR solve?

Add missing env var `MYSQL_MAX_PACKET` to service_conf.yaml.template,
and add default values to the opendal config to fix an NPE.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-07 10:41:05 +08:00
35539092d0 Add **kwargs to model base class constructors (#9252)
Updated constructors for base and derived classes in chat, embedding,
rerank, sequence2txt, and tts models to accept **kwargs. This change
improves extensibility and allows passing additional parameters without
breaking existing interfaces.
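
A sketch of the constructor shape (illustrative signatures, not the exact model classes):

```python
class Base:
    def __init__(self, key, model_name, base_url=None, **kwargs):
        # Swallow unknown keyword arguments so callers can pass new
        # options without breaking existing subclasses.
        self.key = key
        self.model_name = model_name
        self.base_url = base_url

class ChatModel(Base):
    def __init__(self, key, model_name, base_url=None, **kwargs):
        super().__init__(key, model_name, base_url=base_url, **kwargs)
```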

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: IT: Sop.Son <sop.son@feavn.local>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-07 09:45:37 +08:00
581a54fbbb Feat: Search conversation by name #3221 (#9283)
### What problem does this PR solve?

Feat: Search conversation by name #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-07 09:41:03 +08:00
9ca86d801e Refa: add provider info while adding model. (#9273)
### What problem does this PR solve?
#9248

### Type of change

- [x] Refactoring
2025-08-07 09:40:42 +08:00
fb0426419e Feat: Create a conversation #3221 (#9269)
### What problem does this PR solve?

Feat: Create a conversation #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-06 11:42:40 +08:00
1409bb30df Refactor: Improve the logic so that it does not decode base64 for the test image each time (#9264)
### What problem does this PR solve?

Improve the logic so that it does not decode base64 for the test image
each time.

### Type of change

- [x] Refactoring
- [x] Performance Improvement

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-06 11:42:25 +08:00
7efeaf6548 Fix: remove an image close() call that cannot operate (#9267)
### What problem does this PR solve?


https://github.com/infiniflow/ragflow/issues/9149#issuecomment-3157129587

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-06 10:59:49 +08:00
46a35f44da Feat: add Claude Opus 4.1 (#9268)
### What problem does this PR solve?

Add Claude Opus 4.1.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Refactoring
2025-08-06 10:57:03 +08:00
a7eba61067 FIX: If chunk["content_with_weight"] contains one or more unpaired surrogate characters (such as incomplete emoji or other special characters), then calling .encode("utf-8") directly will raise a UnicodeEncodeError. (#9246)
FIX: If chunk["content_with_weight"] contains one or more unpaired
surrogate characters (such as incomplete emoji or other special
characters), then calling .encode("utf-8") directly will raise a
UnicodeEncodeError.

### What problem does this PR solve?
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
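
For illustration, one way to make the encode safe (a sketch; the actual fix may differ):

```python
def safe_utf8(text: str) -> bytes:
    # Unpaired surrogates (e.g. half of a split emoji) make a plain
    # .encode("utf-8") raise UnicodeEncodeError; replace them instead.
    return text.encode("utf-8", errors="replace")
```
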
2025-08-06 10:36:50 +08:00
465f7e036a Feat: advanced list dialogs (#9256)
### What problem does this PR solve?

Advanced list dialogs

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-06 10:33:52 +08:00
7a27d5e463 Feat: Added history management and paste handling features #3221 (#9266)
### What problem does this PR solve?

feat(agent): Added history management and paste handling features #3221

- Added a PasteHandlerPlugin to handle paste operations, optimizing the
multi-line text pasting experience
- Implemented the AgentHistoryManager class to manage history,
supporting undo and redo functionality
- Integrates history management functionality into the Agent component

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-06 10:29:44 +08:00
6a0d6d2565 Added French language support (#9173)
### What problem does this PR solve?
Implemented French UI translation

### Type of change
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: ramin cedric <>
Co-authored-by: Liu An <asiro@qq.com>
2025-08-06 10:22:32 +08:00
f359f2c44e Docs: fixed errors (#9259)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-08-05 21:29:46 +08:00
9295c23170 Update readme (#9260)
### What problem does this PR solve?

Update readme

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-08-05 20:27:43 +08:00
023b090fa4 Fix: template error. (#9258)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 19:52:59 +08:00
2124329e95 Fix: local variable issue. (#9255)
### What problem does this PR solve?

#9227

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 19:24:34 +08:00
ed9757b0c7 Feat: Limit the appearance of loops in operators in the agent canvas #3221 (#9253)
### What problem does this PR solve?
Feat: Limit the appearance of loops in operators in the agent canvas
#3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-05 19:21:24 +08:00
f235a38225 Fix: resolve the prompt problem of Customer Support Workflow (#9251)
### What problem does this PR solve?


### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 18:19:17 +08:00
550e65bb22 Fix: PlainParser using fix in presentation (#9239)
### What problem does this PR solve?

A tiny fix to the use of `deepdoc.pdf_parser.PlainParser` in
`rag.app.presentation.chunk`; I referred to other usages of this class.
The fix is so small that an issue seems unnecessary.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 17:48:18 +08:00
a264c629b5 Feat: Render dialog list #3221 (#9249)
### What problem does this PR solve?

Feat: Render dialog list #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-05 17:47:44 +08:00
e6bad45c6d Fix: update broken agent OpenAI-Compatible completion due to v0.20.0 changes (#9241)
### What problem does this PR solve?

Update broken agent OpenAI-Compatible completion due to v0.20.0 changes. #9199

Usage example:

**Referencing the input is important; otherwise the run will result in
empty output.**

<img width="1273" height="711" alt="Image"
src="https://github.com/user-attachments/assets/30740be8-f4d6-400d-9fda-d2616f89063f"
/>

<img width="622" height="247" alt="Image"
src="https://github.com/user-attachments/assets/0a2ca57a-9600-4cec-9362-0cafd0ab3aee"
/>

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 17:47:25 +08:00
0a303d9ae1 Refactor:Improve the chat stream logic for NvidiaCV (#9242)
### What problem does this PR solve?

Improve the chat stream logic for NvidiaCV

### Type of change


- [x] Refactoring
2025-08-05 17:47:00 +08:00
98a83543e8 Fix: mismatch of assistant message and its reference (#9233)
### What problem does this PR solve?

#9232

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

1. When creating a new session, initialize an empty reference in both
the app API and the SDK API.
2. Fix the logic when retrieving references for historical messages: the
number of dialogue messages and reference messages may differ, but it
should match the number of assistant messages.

Co-authored-by: Li Ye <liye@unittec.com>
2025-08-05 14:32:39 +08:00
afd3a508e5 Fix: Set the maximum number of rounds for the agent to 1 #3221 (#9238)
### What problem does this PR solve?

Fix: Fixed the issue where numbers could not be displayed in the numeric
input box under the white theme #3221
Fix: Set the maximum number of rounds for the agent to 1 #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-05 14:32:06 +08:00
1deb0a2d42 Fix:local variable 'response' referenced before assignment (#9230)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/9227
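
The unbound-local pattern being fixed, in minimal form (illustrative):

```python
import logging

def do_request():
    # Hypothetical call that fails before producing a result.
    raise RuntimeError("backend unavailable")

response = None  # assign first, so the later reference is never unbound
try:
    response = do_request()
except Exception:
    logging.exception("request failed")
print(response)  # prints None instead of raising UnboundLocalError
```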

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-05 11:00:06 +08:00
dd055deee9 Docs: Updated tips for max rounds (#9235)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-08-05 10:59:37 +08:00
a249803961 Refa: ensure Redis stream queue could be created properly (#9223)
### What problem does this PR solve?

Ensure the Redis stream queue can be created properly.

### Type of change

- [x] Refactoring
2025-08-05 09:54:31 +08:00
6ec3f18e22 Fix: self-deployed LLM error (#9217)
### What problem does this PR solve?

Close #9197
Close #9145

### Type of change

- [x] Refactoring
- [x] Bug fixing.
2025-08-05 09:49:47 +08:00
7724acbadb Perf Impr: Decouple reasoning and extraction for faster, more precise logic (#9191)
### What problem does this PR solve?

This commit refactors the core prompts to decouple the high-level
reasoning from the low-level information extraction. By making
REASON_PROMPT a dedicated strategist that only generates search queries
and re-tasking RELEVANT_EXTRACTION_PROMPT to be a specialized tool for
single-fact extraction, we eliminate redundant information
summarization. This clear separation of concerns makes the overall
reasoning process significantly faster and more precise, as each
component now has a single, well-defined responsibility.

### Type of change

- [x] Performance Improvement
2025-08-05 09:36:14 +08:00
a36ba95c1c Fix: Add prompt text to the form in the MCP module (#9222)
### What problem does this PR solve?

Fix: Add prompt text to the form in the MCP module #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 09:26:59 +08:00
30ccc4a66c Fix: correct single base64 image handling in image prompt (#9220)
### What problem does this PR solve?

Correct single base64 image handling in image prompt.


![img_v3_02or_ec4757c2-a9d4-4774-9a76-f7c6be633ebg](https://github.com/user-attachments/assets/872a86bf-e2a8-48d1-9b71-2a0c7a35ba9e)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 09:26:42 +08:00
dda5a0080a Fix: Fixed the issue where the agent's chat box could not automatically scroll to the bottom #3221 (#9219)
### What problem does this PR solve?

Fix: Fixed the issue where the agent's chat box could not automatically
scroll to the bottom #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-05 09:26:15 +08:00
9db999ccae v0.20.0 release notes (#9218)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-08-04 18:07:53 +08:00
5f5c6a7990 Fix: Fixed the loss of Await Response function on the share page and other style issues #3221 (#9216)
### What problem does this PR solve?

Fix: Fixed the loss of Await Response function on the share page and
other style issues #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 18:06:56 +08:00
53618d13bb Fix: Fixed the issue where the prompt word edit box had no scroll bar #3221 (#9215)
### What problem does this PR solve?
Fix: Fixed the issue where the prompt word edit box had no scroll bar
#3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 18:06:19 +08:00
60d652d2e1 Feat: list documents supports range filtering (#9214)
### What problem does this PR solve?

list_document supports range filtering.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-04 16:35:35 +08:00
448bdda73d Fix: Web Server Accepts Invalid Data That Could Cause Problems in uv.lock (#8966)
**Context and Purpose:**

This PR automatically remediates a security vulnerability:
- **Description:** h11: h11 accepts some malformed Chunked-Encoding
bodies
- **Rule ID:** CVE-2025-43859
- **Severity:** CRITICAL
- **File:** uv.lock
- **Lines Affected:** None - None

This change is necessary to protect the application from potential
security risks associated with this vulnerability.

**Solution Implemented:**

The automated remediation process has applied the necessary changes to
the affected code in `uv.lock` to resolve the identified issue.

Please review the changes to ensure they are correct and integrate as
expected.
2025-08-04 16:09:15 +08:00
26b85a10d1 Feat: New Agent startup parameters add knowledge base parameter #9194 (#9210)
### What problem does this PR solve?

Feat: New Agent startup parameters add knowledge base parameter #9194

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-04 16:08:41 +08:00
cae11201ef Fix "out of memory" when slide.get_thumbnail() produces a huge image (#9211)
### What problem does this PR solve?

Fix "out of memory" when slide.get_thumbnail() produces a huge image.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 16:08:24 +08:00
6ad8b54754 fix "TypeError: '<' not supported between instances of 'Emu' and 'NoneType'" (#9209)

### What problem does this PR solve?

fix "TypeError: '<' not supported between instances of 'Emu' and
'NoneType'"

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 16:07:03 +08:00
83aca2d07b fix #8424 NPE in dify_retrieval.py, add log exception (#9212)
### What problem does this PR solve?

fix #8424 NPE in dify_retrieval.py, add log exception

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 15:36:31 +08:00
34f829e1b1 docs(agent): Correct several spelling errors, such as: Ouline -> Outline (#9188)
### What problem does this PR solve?

Correct several spelling errors, such as: Ouline -> Outline

### Type of change

- [x] Documentation Update
2025-08-04 14:53:32 +08:00
52a349349d Fix: migrate deprecated Langfuse API from v2 to v3 (#9204)
### What problem does this PR solve?

Fix:

```bash
'Langfuse' object has no attribute 'trace'
```

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 14:45:43 +08:00
45bf294117 Refactor: support config strong test (#9198)
### What problem does this PR solve?


https://github.com/infiniflow/ragflow/issues/9189#issuecomment-3148920950

### Type of change
- [x] Refactoring

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-04 13:54:18 +08:00
667c5812d0 Fix: Repeated images when parsing markdown files with images (#9196)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/9149

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 13:35:58 +08:00
30e9212db9 Fix: enlarge the timeout limits. (#9201)
### What problem does this PR solve?

#9189

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 13:34:34 +08:00
e9cbf4611d Fix:Error when parsing files using Gemini: **ERROR**: GENERIC_ERROR - Unknown field for GenerationConfig: max_tokens (#9195)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/9177
The reason is likely that Gemini internally uses a different parameter
name:

```
max_output_tokens (int):
    Optional. The maximum number of tokens to include in a
    response candidate.

    Note: The default value varies by model, see the
    ``Model.output_token_limit`` attribute of the ``Model``
    returned from the ``getModel`` function.

    This field is a member of `oneof`_ ``_max_output_tokens``.
```
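
In pattern form, the rename looks like this (a sketch; `gen_conf` is an illustrative dict, not the exact chat_model.py code):

```python
gen_conf = {"max_tokens": 1024, "temperature": 0.7}  # OpenAI-style config

# Gemini's GenerationConfig expects max_output_tokens rather than the
# OpenAI-style max_tokens, so rename the key before building the config.
if "max_tokens" in gen_conf:
    gen_conf["max_output_tokens"] = gen_conf.pop("max_tokens")
print(gen_conf)  # {'temperature': 0.7, 'max_output_tokens': 1024}
```
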
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 10:06:09 +08:00
d4b1d163dd Fix: list tags api by using tenant id instead of user id (#9103)
### What problem does this PR solve?

The index name of the tag chunks is generated by the tenant id of the
knowledge base, so it should use the tenant id instead of the current
user id in the listing tags API.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-04 09:57:00 +08:00
fca94509e8 Feat: Add the migration script and its doc, added backup as default… (#8245)
### What problem does this PR solve?

This PR adds a data backup and migration solution for RAGFlow Docker
Compose deployments. Currently, users lack a standardized way to backup
and restore RAGFlow data volumes (MySQL, MinIO, Redis, Elasticsearch),
which is essential for data safety and environment migration.

**Solution:**
- **Migration Script** (`docker/migration.sh`) - Automates
backup/restore operations for all RAGFlow data volumes
- **Documentation**
(`docs/guides/migration/migrate_from_docker_compose.md`) - Usage guide
and best practices
- **Safety Features** - Container conflict detection and user
confirmations to prevent data loss

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update

Co-authored-by: treedy <treedy2022@icloud.com>
2025-08-04 09:43:43 +08:00
347 changed files with 13171 additions and 3359 deletions

.gitignore

@@ -193,3 +193,5 @@ dist
 # SvelteKit build / generate output
 .svelte-kit
+# Default backup dir
+backup

.trivyignore

@@ -0,0 +1,15 @@
+**/*.md
+**/*.min.js
+**/*.min.css
+**/*.svg
+**/*.png
+**/*.jpg
+**/*.jpeg
+**/*.gif
+**/*.woff
+**/*.woff2
+**/*.map
+**/*.webp
+**/*.ico
+**/*.ttf
+**/*.eot

@@ -22,7 +22,7 @@
 <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
 </a>
 <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.0">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
 </a>
 <a href="https://github.com/infiniflow/ragflow/releases/latest">
 <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@@ -87,7 +87,9 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
 ## 🔥 Latest Updates
-- 2025-08-01 Supports agentic workflow.
+- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
+- 2025-08-04 Supports new models, including Kimi K2 and Grok 4.
+- 2025-08-01 Supports agentic workflow and MCP.
 - 2025-05-23 Adds a Python/JavaScript code executor component to Agent.
 - 2025-05-05 Supports cross-language query.
 - 2025-03-19 Supports using a multi-modal model to make sense of images within PDF or DOCX files.
@@ -188,7 +190,7 @@ releases! 🌟
 > All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
 > If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
-> The command below downloads the `v0.20.0-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.0-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0` for the full edition `v0.20.0`.
+> The command below downloads the `v0.20.1-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1` for the full edition `v0.20.1`.
 ```bash
 $ cd ragflow/docker
@@ -201,8 +203,8 @@ releases! 🌟
 | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
 |-------------------|-----------------|-----------------------|--------------------------|
-| v0.20.0 | &approx;9 | :heavy_check_mark: | Stable release |
-| v0.20.0-slim | &approx;2 | ❌ | Stable release |
+| v0.20.1 | &approx;9 | :heavy_check_mark: | Stable release |
+| v0.20.1-slim | &approx;2 | ❌ | Stable release |
 | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
 | nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

@@ -22,7 +22,7 @@
 <img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99">
 </a>
 <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.0">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
 </a>
 <a href="https://github.com/infiniflow/ragflow/releases/latest">
 <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru">
@@ -80,7 +80,9 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
 ## 🔥 Pembaruan Terbaru
-- 2025-08-01 Mendukung Alur Kerja agen.
+- 2025-08-08 Mendukung model seri GPT-5 terbaru dari OpenAI.
+- 2025-08-04 Mendukung model baru, termasuk Kimi K2 dan Grok 4.
+- 2025-08-01 Mendukung alur kerja agen dan MCP.
 - 2025-05-23 Menambahkan komponen pelaksana kode Python/JS ke Agen.
 - 2025-05-05 Mendukung kueri lintas bahasa.
 - 2025-03-19 Mendukung penggunaan model multi-modal untuk memahami gambar di dalam file PDF atau DOCX.
@@ -179,7 +181,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
 > Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64.
 > Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image).
-> Perintah di bawah ini mengunduh edisi v0.20.0-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.20.0-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0 untuk edisi lengkap v0.20.0.
+> Perintah di bawah ini mengunduh edisi v0.20.1-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.20.1-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1 untuk edisi lengkap v0.20.1.
 ```bash
 $ cd ragflow/docker
@@ -192,8 +194,8 @@ $ docker compose -f docker-compose.yml up -d
 | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
 | ----------------- | --------------- | --------------------- | ------------------------ |
-| v0.20.0 | &approx;9 | :heavy_check_mark: | Stable release |
-| v0.20.0-slim | &approx;2 | ❌ | Stable release |
+| v0.20.1 | &approx;9 | :heavy_check_mark: | Stable release |
+| v0.20.1-slim | &approx;2 | ❌ | Stable release |
 | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
 | nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

@@ -22,7 +22,7 @@
 <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
 </a>
 <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.0">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
 </a>
 <a href="https://github.com/infiniflow/ragflow/releases/latest">
 <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@@ -60,7 +60,9 @@
 ## 🔥 最新情報
-- 2025-08-01 エージェントワークフローをサポートします。
+- 2025-08-08 OpenAI の最新 GPT-5 シリーズモデルをサポートします。
+- 2025-08-04 新モデル、キミK2およびGrok 4をサポート。
+- 2025-08-01 エージェントワークフローとMCPをサポート。
 - 2025-05-23 エージェントに Python/JS コードエグゼキュータコンポーネントを追加しました。
 - 2025-05-05 言語間クエリをサポートしました。
 - 2025-03-19 PDFまたはDOCXファイル内の画像を理解するために、多モーダルモデルを使用することをサポートします。
@@ -158,7 +160,7 @@
 > 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。
 > ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。
-> 以下のコマンドは、RAGFlow Docker イメージの v0.20.0-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.20.0-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.20.0 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0 と設定します。
+> 以下のコマンドは、RAGFlow Docker イメージの v0.20.1-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.20.1-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.20.1 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1 と設定します。
 ```bash
 $ cd ragflow/docker
@@ -171,8 +173,8 @@
 | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
 | ----------------- | --------------- | --------------------- | ------------------------ |
-| v0.20.0 | &approx;9 | :heavy_check_mark: | Stable release |
-| v0.20.0-slim | &approx;2 | ❌ | Stable release |
+| v0.20.1 | &approx;9 | :heavy_check_mark: | Stable release |
+| v0.20.1-slim | &approx;2 | ❌ | Stable release |
 | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
 | nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

@@ -22,7 +22,7 @@
 <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
 </a>
 <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.0">
+<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
 </a>
 <a href="https://github.com/infiniflow/ragflow/releases/latest">
 <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@@ -60,7 +60,9 @@
 ## 🔥 업데이트
-- 2025-08-01 에이전트 워크플로를 지원합니다.
+- 2025-08-08 OpenAI의 최신 GPT-5 시리즈 모델을 지원합니다.
+- 2025-08-04 새로운 모델인 Kimi K2와 Grok 4를 포함하여 지원합니다.
+- 2025-08-01 에이전트 워크플로우와 MCP를 지원합니다.
 - 2025-05-23 Agent에 Python/JS 코드 실행기 구성 요소를 추가합니다.
 - 2025-05-05 언어 간 쿼리를 지원합니다.
 - 2025-03-19 PDF 또는 DOCX 파일 내의 이미지를 이해하기 위해 다중 모드 모델을 사용하는 것을 지원합니다.
@@ -158,7 +160,7 @@
 > 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다.
 > ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image).
-> 아래 명령어는 RAGFlow Docker 이미지의 v0.20.0-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.20.0-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.20.0을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0로 설정합니다.
+> 아래 명령어는 RAGFlow Docker 이미지의 v0.20.1-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.20.1-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.20.1을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1로 설정합니다.
 ```bash
 $ cd ragflow/docker
@@ -171,8 +173,8 @@
 | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
 | ----------------- | --------------- | --------------------- | ------------------------ |
-| v0.20.0 | &approx;9 | :heavy_check_mark: | Stable release |
-| v0.20.0-slim | &approx;2 | ❌ | Stable release |
+| v0.20.1 | &approx;9 | :heavy_check_mark: | Stable release |
+| v0.20.1-slim | &approx;2 | ❌ | Stable release |
 | nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
 | nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão">
@ -80,7 +80,9 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Latest Updates
- 01-08-2025 Supports agentic workflows.
- 08-08-2025 Supports OpenAI's latest GPT-5 series.
- 04-08-2025 Supports new models, including Kimi K2 and Grok 4.
- 01-08-2025 Supports agentic workflows and MCP.
- 23-05-2025 Adds the Python/JS code executor component to the Agent.
- 05-05-2025 Supports cross-language queries.
- 19-03-2025 Supports using a multimodal model to understand images inside PDF or DOCX files.
@ -178,7 +180,7 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
> All Docker images are built for x86 platforms. We do not currently offer Docker images for ARM64.
> If you are using an ARM64 platform, please use [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
> The command below downloads the `v0.20.0-slim` edition of the RAGFlow Docker image. See the following table for descriptions of the different RAGFlow editions. To download a RAGFlow edition other than `v0.20.0-slim`, update the `RAGFLOW_IMAGE` variable as needed in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0` for the full `v0.20.0` edition.
> The command below downloads the `v0.20.1-slim` edition of the RAGFlow Docker image. See the following table for descriptions of the different RAGFlow editions. To download a RAGFlow edition other than `v0.20.1-slim`, update the `RAGFLOW_IMAGE` variable as needed in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1` for the full `v0.20.1` edition.
```bash
$ cd ragflow/docker
```
@ -191,8 +193,8 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| --------------------- | ---------------------- | ------------------------------- | ------------------------ |
| v0.20.0 | ~9 | :heavy_check_mark: | Stable release |
| v0.20.0-slim | ~2 | ❌ | Stable release |
| v0.20.1 | ~9 | :heavy_check_mark: | Stable release |
| v0.20.1-slim | ~2 | ❌ | Stable release |
| nightly | ~9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | ~2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -83,7 +83,9 @@
## 🔥 Recent Updates
- 2025-08-01 Supports agentic workflow
- 2025-08-08 Supports OpenAI's latest GPT-5 series of models.
- 2025-08-04 Supports models such as Kimi K2 and Grok 4.
- 2025-08-01 Supports agentic workflow and MCP
- 2025-05-23 Adds a Python/JS code executor component to the Agent.
- 2025-05-05 Supports cross-language queries.
- 2025-03-19 Supports using multimodal models to parse images in PDF and DOCX files into descriptions.
@ -181,7 +183,7 @@
> All Docker images are built for the x86 platform. We do not currently provide Docker images for ARM64.
> If you are on an ARM64 platform, please use [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image suited to your system.
> Running the following command automatically downloads the RAGFlow slim Docker image `v0.20.0-slim`. See the table below for descriptions of the different Docker editions. To download a Docker image other than `v0.20.0-slim`, update the `RAGFLOW_IMAGE` variable in **docker/.env** before starting the service with `docker compose`. For example, set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0` to download the full `v0.20.0` edition of the RAGFlow image.
> Running the following command automatically downloads the RAGFlow slim Docker image `v0.20.1-slim`. See the table below for descriptions of the different Docker editions. To download a Docker image other than `v0.20.1-slim`, update the `RAGFLOW_IMAGE` variable in **docker/.env** before starting the service with `docker compose`. For example, set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1` to download the full `v0.20.1` edition of the RAGFlow image.
```bash
$ cd ragflow/docker
```
@ -194,8 +196,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.20.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.20.0-slim | &approx;2 | ❌ | Stable release |
| v0.20.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.20.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.0">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.1">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -83,7 +83,9 @@
## 🔥 Recent Updates
- 2025-08-01 Supports agentic workflow.
- 2025-08-08 Supports OpenAI's latest GPT-5 series of models.
- 2025-08-04 Adds support for models such as Kimi K2 and Grok 4.
- 2025-08-01 Supports agentic workflow and MCP.
- 2025-05-23 Adds a Python/JS code executor component to the Agent.
- 2025-05-05 Supports cross-language queries.
- 2025-03-19 Supports using multimodal models to parse images in PDF and DOCX files into descriptions.
@ -181,7 +183,7 @@
> Note that all officially provided Docker images are currently built for the x86 architecture; ARM64-based Docker images are not provided.
> If your operating system is on the ARM64 architecture, please refer to [this document](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image yourself.
> Running the following command automatically downloads the RAGFlow slim Docker image `v0.20.0-slim`. See the table below for descriptions of the different Docker editions. To download a Docker image other than `v0.20.0-slim`, update the `RAGFLOW_IMAGE` variable in **docker/.env** before starting the service with `docker compose`. For example, set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0` to download the full `v0.20.0` edition of the RAGFlow image.
> Running the following command automatically downloads the RAGFlow slim Docker image `v0.20.1-slim`. See the table below for descriptions of the different Docker editions. To download a Docker image other than `v0.20.1-slim`, update the `RAGFLOW_IMAGE` variable in **docker/.env** before starting the service with `docker compose`. For example, set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1` to download the full `v0.20.1` edition of the RAGFlow image.
```bash
$ cd ragflow/docker
```
@ -194,8 +196,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.20.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.20.0-slim | &approx;2 | ❌ | Stable release |
| v0.20.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.20.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -165,7 +165,7 @@ class Agent(LLM, ToolBase):
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
use_tools = []
ans = ""
for delta_ans, tk in self._react_with_tools_streamly(msg, use_tools):
for delta_ans, tk in self._react_with_tools_streamly(prompt, msg, use_tools):
ans += delta_ans
if ans.find("**ERROR**") >= 0:
@ -185,7 +185,7 @@ class Agent(LLM, ToolBase):
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(self.chat_mdl.max_length * 0.97))
answer_without_toolcall = ""
use_tools = []
for delta_ans,_ in self._react_with_tools_streamly(msg, use_tools):
for delta_ans,_ in self._react_with_tools_streamly(prompt, msg, use_tools):
if delta_ans.find("**ERROR**") >= 0:
if self.get_exception_default_value():
self.set_output("content", self.get_exception_default_value())
@ -208,7 +208,7 @@ class Agent(LLM, ToolBase):
]):
yield delta_ans
def _react_with_tools_streamly(self, history: list[dict], use_tools):
def _react_with_tools_streamly(self, prompt, history: list[dict], use_tools):
token_count = 0
tool_metas = self.tool_meta
hist = deepcopy(history)
@ -221,7 +221,7 @@ class Agent(LLM, ToolBase):
def use_tool(name, args):
nonlocal hist, use_tools, token_count,last_calling,user_request
print(f"{last_calling=} == {name=}", )
logging.info(f"{last_calling=} == {name=}")
# Summarize of function calling
#if all([
# isinstance(self.toolcall_session.get_tool_obj(name), Agent),
@ -275,7 +275,7 @@ class Agent(LLM, ToolBase):
else:
hist.append({"role": "user", "content": content})
task_desc = analyze_task(self.chat_mdl, user_request, tool_metas)
task_desc = analyze_task(self.chat_mdl, prompt, user_request, tool_metas)
self.callback("analyze_task", {}, task_desc)
for _ in range(self._param.max_rounds + 1):
response, tk = next_step(self.chat_mdl, hist, tool_metas, task_desc)
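
As a hedged sketch of why `prompt` is now threaded through (hypothetical helper body; only the call shape comes from the diff), task analysis can now condition on the agent's system role as well as the raw user request:

```python
# Minimal sketch, not the project's implementation: the system prompt travels
# with the user request into the task-analysis step.
def analyze_task_sketch(chat_mdl, prompt: str, user_request: str, tool_metas: list) -> str:
    planning_msg = [
        {"role": "user", "content": f"Request: {user_request}\nAvailable tools: {tool_metas}"},
    ]
    # chat(system_content, messages, gen_conf) mirrors the call shape used elsewhere.
    return chat_mdl.chat(prompt, planning_msg, {"temperature": 0.1})
```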

View File

@ -39,7 +39,10 @@ class Begin(UserFillUp):
def _invoke(self, **kwargs):
for k, v in kwargs.get("inputs", {}).items():
if isinstance(v, dict) and v.get("type", "").lower().find("file") >=0:
v = self._canvas.get_files([v["value"]])
if v.get("optional") and v.get("value", None) is None:
v = None
else:
v = self._canvas.get_files([v["value"]])
else:
v = v.get("value")
self.set_output(k, v)
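
A minimal sketch of the guarded behavior introduced above, assuming file inputs of the shape `{"type": "file", "optional": true, "value": ...}`: an optional file input with no value now resolves to `None` instead of triggering a lookup error:

```python
# Sketch of the new guard, mirroring the diff above.
def resolve_file_input(canvas, v: dict):
    if v.get("optional") and v.get("value", None) is None:
        return None                            # optional and absent: skip the file lookup
    return canvas.get_files([v["value"]])      # required or present: fetch as before
```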

View File

@ -57,7 +57,7 @@ class Invoke(ComponentBase, ABC):
def _invoke(self, **kwargs):
args = {}
for para in self._param.variables:
if para.get("value") is not None:
if para.get("value"):
args[para["key"]] = para["value"]
else:
args[para["key"]] = self._canvas.get_variable_value(para["ref"])
@ -139,4 +139,4 @@ class Invoke(ComponentBase, ABC):
assert False, self.output()
def thoughts(self) -> str:
return "Waiting for the server respond..."
return "Waiting for the server respond..."

View File

@ -17,7 +17,7 @@ import json
import logging
import os
import re
from typing import Any
from typing import Any, Generator
import json_repair
from copy import deepcopy
@ -154,7 +154,7 @@ class LLM(ComponentBase):
return self.chat_mdl.chat(msg[0]["content"], msg[1:], self._param.gen_conf(), **kwargs)
return self.chat_mdl.chat(msg[0]["content"], msg[1:], self._param.gen_conf(), images=self.imgs, **kwargs)
def _generate_streamly(self, msg:list[dict], **kwargs) -> str:
def _generate_streamly(self, msg:list[dict], **kwargs) -> Generator[str, None, None]:
ans = ""
last_idx = 0
endswith_think = False
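
The corrected annotation matches what the method actually does: it yields incremental text chunks rather than returning a single string. A minimal conforming sketch:

```python
from typing import Generator

# Minimal sketch: a streaming generator yields str chunks and returns nothing,
# which is exactly Generator[str, None, None].
def generate_streamly_sketch(chunks: list[str]) -> Generator[str, None, None]:
    ans = ""
    for piece in chunks:
        ans += piece          # accumulate the full answer as deltas arrive
        yield piece           # stream each delta to the caller immediately
```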

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -89,11 +89,11 @@
"presence_penalty": 0.4,
"prompts": [
{
"content": "{sys.query}",
"content": "The user query is {sys.query}\n\nThe relevant document are {Retrieval:ShyPumasJoke@formalized_content}",
"role": "user"
}
],
"sys_prompt": "You are a highly professional product information advisor. \n\nYour only mission is to provide accurate, factual, and structured answers to all product-related queries.\n\nAbsolutely no assumptions, guesses, or fabricated content are allowed. \n\n**Key Principles:**\n\n1. **Strict Database Reliance:** \n\n - Every answer must be based solely on the verified product information stored in the database accessed through the Retrieval tool. \n\n - You are NOT allowed to invent, speculate, or infer details beyond what is retrieved. \n\n - If you cannot find relevant data, respond with: *\"I cannot find this information in our official product database. Please check back later or provide more details for further search.\"*\n\n2. **Information Accuracy and Structure:** \n\n - Provide information in a clear, concise, and professional way. \n\n - Use bullet points or numbered lists if there are multiple key points (e.g., features, price, warranty, technical specifications). \n\n - Always specify the version or model number when applicable to avoid confusion.\n\n3. **Tone and Style:** \n\n - Maintain a polite, professional, and helpful tone at all times. \n\n - Avoid marketing exaggeration or promotional language; stay strictly factual. \n\n - Do not express personal opinions; only cite official product data.\n\n4. **User Guidance:** \n\n - If the user\u2019s query is unclear or too broad, politely request clarification or guide them to provide more specific product details (e.g., product name, model, version). \n\n - Example: *\"Could you please specify the product model or category so I can retrieve the most relevant information for you?\"*\n\n5. **Response Length and Formatting:** \n\n - Keep each answer within 100\u2013150 words for general queries. \n\n - For complex or multi-step explanations, you may extend to 200\u2013250 words, but always remain clear and well-structured.\n\n6. **Critical Reminder:** \n\nYour authority and reliability depend entirely on database-driven responses. Any fabricated, speculative, or unverified content will be considered a critical failure of your role.\n\nAlways begin processing a query by accessing the Retrieval tool, confirming the data source, and then structuring your response according to the above principles.\n\n",
"sys_prompt": "You are a highly professional product information advisor. \n\nYour only mission is to provide accurate, factual, and structured answers to all product-related queries.\n\nAbsolutely no assumptions, guesses, or fabricated content are allowed. \n\n**Key Principles:**\n\n1. **Strict Database Reliance:** \n\n - Every answer must be based solely on the verified product information stored in the relevant documen.\n\n - You are NOT allowed to invent, speculate, or infer details beyond what is retrieved. \n\n - If you cannot find relevant data, respond with: *\"I cannot find this information in our official product database. Please check back later or provide more details for further search.\"*\n\n2. **Information Accuracy and Structure:** \n\n - Provide information in a clear, concise, and professional way. \n\n - Use bullet points or numbered lists if there are multiple key points (e.g., features, price, warranty, technical specifications). \n\n - Always specify the version or model number when applicable to avoid confusion.\n\n3. **Tone and Style:** \n\n - Maintain a polite, professional, and helpful tone at all times. \n\n - Avoid marketing exaggeration or promotional language; stay strictly factual. \n\n - Do not express personal opinions; only cite official product data.\n\n4. **User Guidance:** \n\n - If the user\u2019s query is unclear or too broad, politely request clarification or guide them to provide more specific product details (e.g., product name, model, version). \n\n - Example: *\"Could you please specify the product model or category so I can retrieve the most relevant information for you?\"*\n\n5. **Response Length and Formatting:** \n\n - Keep each answer within 100\u2013150 words for general queries. \n\n - For complex or multi-step explanations, you may extend to 200\u2013250 words, but always remain clear and well-structured.\n\n6. **Critical Reminder:** \n\nYour authority and reliability depend entirely on the relevant document responses. Any fabricated, speculative, or unverified content will be considered a critical failure of your role.\n\n\n",
"temperature": 0.1,
"temperatureEnabled": true,
"tools": [],
@ -699,7 +699,7 @@
"width": 200
},
"position": {
"x": 644.5771854408022,
"x": 645.6873721057459,
"y": 516.6923702571407
},
"selected": false,
@ -735,11 +735,11 @@
"presence_penalty": 0.4,
"prompts": [
{
"content": "{sys.query}",
"content": "The user query is {sys.query}\n\nThe relevant document are {Retrieval:ShyPumasJoke@formalized_content}",
"role": "user"
}
],
"sys_prompt": "You are a highly professional product information advisor. \n\nYour only mission is to provide accurate, factual, and structured answers to all product-related queries.\n\nAbsolutely no assumptions, guesses, or fabricated content are allowed. \n\n**Key Principles:**\n\n1. **Strict Database Reliance:** \n\n - Every answer must be based solely on the verified product information stored in the database accessed through the Retrieval tool. \n\n - You are NOT allowed to invent, speculate, or infer details beyond what is retrieved. \n\n - If you cannot find relevant data, respond with: *\"I cannot find this information in our official product database. Please check back later or provide more details for further search.\"*\n\n2. **Information Accuracy and Structure:** \n\n - Provide information in a clear, concise, and professional way. \n\n - Use bullet points or numbered lists if there are multiple key points (e.g., features, price, warranty, technical specifications). \n\n - Always specify the version or model number when applicable to avoid confusion.\n\n3. **Tone and Style:** \n\n - Maintain a polite, professional, and helpful tone at all times. \n\n - Avoid marketing exaggeration or promotional language; stay strictly factual. \n\n - Do not express personal opinions; only cite official product data.\n\n4. **User Guidance:** \n\n - If the user\u2019s query is unclear or too broad, politely request clarification or guide them to provide more specific product details (e.g., product name, model, version). \n\n - Example: *\"Could you please specify the product model or category so I can retrieve the most relevant information for you?\"*\n\n5. **Response Length and Formatting:** \n\n - Keep each answer within 100\u2013150 words for general queries. \n\n - For complex or multi-step explanations, you may extend to 200\u2013250 words, but always remain clear and well-structured.\n\n6. **Critical Reminder:** \n\nYour authority and reliability depend entirely on database-driven responses. Any fabricated, speculative, or unverified content will be considered a critical failure of your role.\n\nAlways begin processing a query by accessing the Retrieval tool, confirming the data source, and then structuring your response according to the above principles.\n\n",
"sys_prompt": "You are a highly professional product information advisor. \n\nYour only mission is to provide accurate, factual, and structured answers to all product-related queries.\n\nAbsolutely no assumptions, guesses, or fabricated content are allowed. \n\n**Key Principles:**\n\n1. **Strict Database Reliance:** \n\n - Every answer must be based solely on the verified product information stored in the relevant documen.\n\n - You are NOT allowed to invent, speculate, or infer details beyond what is retrieved. \n\n - If you cannot find relevant data, respond with: *\"I cannot find this information in our official product database. Please check back later or provide more details for further search.\"*\n\n2. **Information Accuracy and Structure:** \n\n - Provide information in a clear, concise, and professional way. \n\n - Use bullet points or numbered lists if there are multiple key points (e.g., features, price, warranty, technical specifications). \n\n - Always specify the version or model number when applicable to avoid confusion.\n\n3. **Tone and Style:** \n\n - Maintain a polite, professional, and helpful tone at all times. \n\n - Avoid marketing exaggeration or promotional language; stay strictly factual. \n\n - Do not express personal opinions; only cite official product data.\n\n4. **User Guidance:** \n\n - If the user\u2019s query is unclear or too broad, politely request clarification or guide them to provide more specific product details (e.g., product name, model, version). \n\n - Example: *\"Could you please specify the product model or category so I can retrieve the most relevant information for you?\"*\n\n5. **Response Length and Formatting:** \n\n - Keep each answer within 100\u2013150 words for general queries. \n\n - For complex or multi-step explanations, you may extend to 200\u2013250 words, but always remain clear and well-structured.\n\n6. **Critical Reminder:** \n\nYour authority and reliability depend entirely on the relevant document responses. Any fabricated, speculative, or unverified content will be considered a critical failure of your role.\n\n\n",
"temperature": 0.1,
"temperatureEnabled": true,
"tools": [],

View File

@ -170,7 +170,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Outline agent output is {Agent:BetterSitesSend@content}",
"role": "user"
}
],
@ -250,7 +250,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Outline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"role": "user"
}
],
@ -602,7 +602,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Outline agent output is {Agent:BetterSitesSend@content}",
"role": "user"
}
],
@ -715,7 +715,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Outline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"role": "user"
}
],

View File

@ -169,7 +169,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Outline agent output is {Agent:BetterSitesSend@content}",
"role": "user"
}
],
@ -249,7 +249,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Outline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"role": "user"
}
],
@ -601,7 +601,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Outline agent output is {Agent:BetterSitesSend@content}",
"role": "user"
}
],
@ -714,7 +714,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Outline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"role": "user"
}
],
@ -912,4 +912,4 @@
"retrieval": []
},
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAwADADASIAAhEBAxEB/8QAGQAAAwEBAQAAAAAAAAAAAAAABgkKBwUI/8QAMBAAAAYCAQIEBQQCAwAAAAAAAQIDBAUGBxEhCAkAEjFBFFFhcaETFiKRFyOx8PH/xAAaAQACAwEBAAAAAAAAAAAAAAACAwABBgQF/8QALBEAAgIBAgUCBAcAAAAAAAAAAQIDBBEFEgATITFRIkEGIzJhFBUWgaGx8P/aAAwDAQACEQMRAD8AfF2hez9089t7pvxgQMa1Gb6qZ6oQE9m/NEvCIStyPfJSOF/M1epzMugo/qtMqbiRc1mJjoJKCLMNIxKcsLJedfO1Ct9cI63x9fx6CA/19t+oh4LFA5HfuAgP/A8eOIsnsTBrkBHXA7+v53+Q+ficTgJft9gIgA+/P9/1r342O/YA8A8k3/if+IbAN7+2/f8AAiI6H19PGoPyESTMZQPKUAHkQEN+3r9dh78/YPGUTk2wb/qAZZIugH1OHH5DjkdfbnWw2DsOxPj+xjrnx2H39unBopJGBn9s+PHv1HXjPJtH+J+B40O9a16h/wB/92j/ALrPa/wR104UyAobHlXhuo2HrEtK4qy3CwjKOuJLRHJLSkXWrFKs/gVrJVrE8TUiH8bPrP20UEu8m4hNpMJJuTOfnbUw/kUqyZgMHGjAO9+mtDsQ53sdcB6eMhnpEjhNQxRKICAgHy5+/roOdjr7c+J6O4x07dx484/n7nzw1gexBGfIPkZ/3t39uGpqc6+fP5/Ht8vGFZCzJjWpWuBxvO2yPjrtclUUK7BqmUI4fuASeyhG5FzFI0Bw4aQ0iZNoDgzvRW4qtyFkI4XmwyEk2YNnDp0sVBu3IUyy5iqH8gqKERSIRNIii67hddRJs1at01Xbx2sgzZoLu10UFJR+4V1A5cxF3FqNcLvjwcno43uuLrOxZYjujaClcb4QQfxEizpFiQyM9olcueRnjC2ZMt9iY06zL0qytrMSqSOVGsfHMaGhZ3l4lSRI2MqE74zJvRTveNFWWIh3RWw+XCAM5icKQLrCH57T17FhErSlRXnWvyZXKQwWJ3eraD14p5YuZCFgacskK2oGkVuKO5GYTHzf7DaD12cBD3DgPOIDrWw9PnrXPgDkpVsUDGMG+DD6E9gHXIjrYjwUPQTCXYgHPhIV974+F6E1hpC14Yzmzj56YaQEeZhXsayD1zLPW7pygxaMf81Nzu1iJsnIuDIKnaJAkPldqrHaoORZ73tMVEbFdSXT9nVgRQgnBq6j8e/HCIEATpAnH5KlmRVkFRFJwks/bqImSXJ5VFyA3N6Ikh3bCW3YHp5cowOmCfTgA+xJCnrjtwHKcLvJj2ZGcTRFj19kEhckdzgEjKnABGSSzdc1Fe5byXXGNjKdvRcw5NxvLidNZFFCxUa62KrzMaChw8hhYScFJtROAgmuLByq1MsgkZYPaVVuDe0wraRaqAdJwgRQo+YR8xTlAQNx6b49w41vXiJpCalLh1jZhyrTqRM4+jstdRmYryNkydLQRWg1LNGcWd5jIFFvCythlIySa0mNu74sKRQtaWsTmupqPItw0lE52ufpyYzrSkx6cw5bLmBEpkTsz+dt8P5QFuCRtAIkBH9MuwKHICIaDQhnojMs9mKaeGcrMxXlQtAYkdVljimRrE5MqI4zL8oSqQ6wxjodBqK05qdK3Vo3aCSVkBW7bjuC1NFJJBPaqyx6fp6pWkliYLXK2XrukkRu2CCVoSWMgsdMyySKwoLFcIGWSTUMg4IBgTcICoBhRcplMcpFkhIqQp1ClMBTmA0Zfe1zpjvHfXff65bZlzXpB3jjGTgiirmPjAfs16PHqHeQ75Wbj3xxZpOEkV3LRJJSPdomUBZISJLncV2k+8D07dxXp7xsYuTapA9UkJUYWIzNhadnWEZeCXGLQQiJi1ViHfhHL2unWh+mlORsrW0JFpEFnGVfm1mU4kq0FY3eD6corJncv6dr5NLSMNXVaTUksjTiMnaq8uFfSVuDyiJ1iZpy0LOJtpa3YfkcQ5fdozyxI2m5qqcrHN61YYmHsh6v3o9ParYmYJEtlhIx6+gUbjgD23M6oqg92YL0JyF6Bps+qDValVA9h9Lj5SZI3SHXdEQlj1wiQtLLIe6pGzjO3BlBkK1hxpblLVH5wdW0BcFKf/JwRtjsot2z8omaSdxbzzk1iEjsE0AM9rrRZNRIrVyo7dGO6E+oh8axLlJ5H5VaJKx7ePRGFbW6vUeFfHQIWPTI9Tm7HHfuhqY7E6C7JFqUzM6iZXIoncNxX7+bIVdJnTT48x3OQU1krIDW3UeixVhyISzYz6cadY5Xph6TseRNTRsTElzzBn9Vlly0TAERsdgnMYyLROjyFbg5R4ZlsGaMT4yNi2Zlq1GwjZB3jq0PsaJfA3t0jL0W0Y9xf1V41lpWckXMLaZiwxuKYPqc6LlHdkeRF+Qxswx5ASDqBVrsL+2A/N6SiCbYymV2BywJiMZj3GRRMTnL+lVyHCll3R7Szv0vqXMtQ74T+HijljIScLaEpkKCB3rqMBIi0jPs5JeOKTZMZEi5VVnouzy0k3jXjWSMlY6UcVGDxlKMVDqx91SILWSi3D2KdgYy3kP8E9X/AE1SnRXBNdNRMlefT6g7aY6giK+cPLGNg0bY68rcnpsNh9PqIBve/EcPQ3WIq2dR9
3xpSgk5SAZ9R6MLAOZFUkpLSUDXp6/KPpGUkmTdswlnKnwbl5ITMdGwcXJi7LKsqzUmT5tWYmkXuF9wjBvb76b7dHheazJ9RElUJOCxViuMlUJC0Gtz6PKyjLBY4qMWUe12r1xZ6lOyT6XPEBKN2CkTDOlZd02TBdTMt7Upx2knrkdCv1UKjDKn1A7XBYH6SCOOrWn5Oi/DtRiu+GleRthDL8rXdVjZlcfWrSIxVlGGGCOnH//Z"
}
}

View File

@ -169,7 +169,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Outline agent output is {Agent:BetterSitesSend@content}",
"role": "user"
}
],
@ -249,7 +249,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Outline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"role": "user"
}
],
@ -601,7 +601,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\n\n\nThe Outline agent output is {Agent:BetterSitesSend@content}",
"role": "user"
}
],
@ -714,7 +714,7 @@
"presence_penalty": 0.5,
"prompts": [
{
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Ouline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"content": "The parse and keyword agent output is {Agent:ClearRabbitsScream@content}\n\nThe Outline agent output is {Agent:BetterSitesSend@content}\n\nThe Body agent output is {Agent:EagerNailsRemain@content}",
"role": "user"
}
],
@ -912,4 +912,4 @@
"retrieval": []
},
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAwADADASIAAhEBAxEB/8QAGQAAAwEBAQAAAAAAAAAAAAAABgkKBwUI/8QAMBAAAAYCAQIEBQQCAwAAAAAAAQIDBAUGBxEhCAkAEjFBFFFhcaETFiKRFyOx8PH/xAAaAQACAwEBAAAAAAAAAAAAAAACAwABBgQF/8QALBEAAgIBAgUCBAcAAAAAAAAAAQIDBBEFEgATITFRIkEGIzJhFBUWgaGx8P/aAAwDAQACEQMRAD8AfF2hez9089t7pvxgQMa1Gb6qZ6oQE9m/NEvCIStyPfJSOF/M1epzMugo/qtMqbiRc1mJjoJKCLMNIxKcsLJedfO1Ct9cI63x9fx6CA/19t+oh4LFA5HfuAgP/A8eOIsnsTBrkBHXA7+v53+Q+ficTgJft9gIgA+/P9/1r342O/YA8A8k3/if+IbAN7+2/f8AAiI6H19PGoPyESTMZQPKUAHkQEN+3r9dh78/YPGUTk2wb/qAZZIugH1OHH5DjkdfbnWw2DsOxPj+xjrnx2H39unBopJGBn9s+PHv1HXjPJtH+J+B40O9a16h/wB/92j/ALrPa/wR104UyAobHlXhuo2HrEtK4qy3CwjKOuJLRHJLSkXWrFKs/gVrJVrE8TUiH8bPrP20UEu8m4hNpMJJuTOfnbUw/kUqyZgMHGjAO9+mtDsQ53sdcB6eMhnpEjhNQxRKICAgHy5+/roOdjr7c+J6O4x07dx484/n7nzw1gexBGfIPkZ/3t39uGpqc6+fP5/Ht8vGFZCzJjWpWuBxvO2yPjrtclUUK7BqmUI4fuASeyhG5FzFI0Bw4aQ0iZNoDgzvRW4qtyFkI4XmwyEk2YNnDp0sVBu3IUyy5iqH8gqKERSIRNIii67hddRJs1at01Xbx2sgzZoLu10UFJR+4V1A5cxF3FqNcLvjwcno43uuLrOxZYjujaClcb4QQfxEizpFiQyM9olcueRnjC2ZMt9iY06zL0qytrMSqSOVGsfHMaGhZ3l4lSRI2MqE74zJvRTveNFWWIh3RWw+XCAM5icKQLrCH57T17FhErSlRXnWvyZXKQwWJ3eraD14p5YuZCFgacskK2oGkVuKO5GYTHzf7DaD12cBD3DgPOIDrWw9PnrXPgDkpVsUDGMG+DD6E9gHXIjrYjwUPQTCXYgHPhIV974+F6E1hpC14Yzmzj56YaQEeZhXsayD1zLPW7pygxaMf81Nzu1iJsnIuDIKnaJAkPldqrHaoORZ73tMVEbFdSXT9nVgRQgnBq6j8e/HCIEATpAnH5KlmRVkFRFJwks/bqImSXJ5VFyA3N6Ikh3bCW3YHp5cowOmCfTgA+xJCnrjtwHKcLvJj2ZGcTRFj19kEhckdzgEjKnABGSSzdc1Fe5byXXGNjKdvRcw5NxvLidNZFFCxUa62KrzMaChw8hhYScFJtROAgmuLByq1MsgkZYPaVVuDe0wraRaqAdJwgRQo+YR8xTlAQNx6b49w41vXiJpCalLh1jZhyrTqRM4+jstdRmYryNkydLQRWg1LNGcWd5jIFFvCythlIySa0mNu74sKRQtaWsTmupqPItw0lE52ufpyYzrSkx6cw5bLmBEpkTsz+dt8P5QFuCRtAIkBH9MuwKHICIaDQhnojMs9mKaeGcrMxXlQtAYkdVljimRrE5MqI4zL8oSqQ6wxjodBqK05qdK3Vo3aCSVkBW7bjuC1NFJJBPaqyx6fp6pWkliYLXK2XrukkRu2CCVoSWMgsdMyySKwoLFcIGWSTUMg4IBgTcICoBhRcplMcpFkhIqQp1ClMBTmA0Zfe1zpjvHfXff65bZlzXpB3jjGTgiirmPjAfs16PHqHeQ75Wbj3xxZpOEkV3LRJJSPdomUBZISJLncV2k+8D07dxXp7xsYuTapA9UkJUYWIzNhadnWEZeCXGLQQiJi1ViHfhHL2unWh+mlORsrW0JFpEFnGVfm1mU4kq0FY3eD6corJncv6dr5NLSMNXVaTUksjTiMnaq8uFfSVuDyiJ1iZpy0LOJtpa3YfkcQ5fdozyxI2m5qqcrHN61YYmHsh6v3o9ParYmYJEtlhIx6+gUbjgD23M6oqg92YL0JyF6Bps+qDValVA9h9Lj5SZI3SHXdEQlj1wiQtLLIe6pGzjO3BlBkK1hxpblLVH5wdW0BcFKf/JwRtjsot2z8omaSdxbzzk1iEjsE0AM9rrRZNRIrVyo7dGO6E+oh8axLlJ5H5VaJKx7ePRGFbW6vUeFfHQIWPTI9Tm7HHfuhqY7E6C7JFqUzM6iZXIoncNxX7+bIVdJnTT48x3OQU1krIDW3UeixVhyISzYz6cadY5Xph6TseRNTRsTElzzBn9Vlly0TAERsdgnMYyLROjyFbg5R4ZlsGaMT4yNi2Zlq1GwjZB3jq0PsaJfA3t0jL0W0Y9xf1V41lpWckXMLaZiwxuKYPqc6LlHdkeRF+Qxswx5ASDqBVrsL+2A/N6SiCbYymV2BywJiMZj3GRRMTnL+lVyHCll3R7Szv0vqXMtQ74T+HijljIScLaEpkKCB3rqMBIi0jPs5JeOKTZMZEi5VVnouzy0k3jXjWSMlY6UcVGDxlKMVDqx91SILWSi3D2KdgYy3kP8E9X/AE1SnRXBNdNRMlefT6g7aY6giK+cPLGNg0bY68rcnpsNh9PqIBve/EcPQ3WIq2dR9
3xpSgk5SAZ9R6MLAOZFUkpLSUDXp6/KPpGUkmTdswlnKnwbl5ITMdGwcXJi7LKsqzUmT5tWYmkXuF9wjBvb76b7dHheazJ9RElUJOCxViuMlUJC0Gtz6PKyjLBY4qMWUe12r1xZ6lOyT6XPEBKN2CkTDOlZd02TBdTMt7Upx2knrkdCv1UKjDKn1A7XBYH6SCOOrWn5Oi/DtRiu+GleRthDL8rXdVjZlcfWrSIxVlGGGCOnH//Z"
}
}

View File

@ -0,0 +1,724 @@
{
"id": 17,
"title": "SQL Assistant",
"description": "SQL Assistant is an AI-powered tool that lets business users turn plain-English questions into fully formed SQL queries. Simply type your question (e.g., “Show me last quarters top 10 products by revenue”) and SQL Assistant generates the exact SQL, runs it against your database, and returns the results in seconds. ",
"canvas_type": "Marketing",
"dsl": {
"components": {
"Agent:WickedGoatsDivide": {
"downstream": [
"ExeSQL:TiredShirtsPull"
],
"obj": {
"component_name": "Agent",
"params": {
"delay_after_error": 1,
"description": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": "",
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.7,
"llm_id": "qwen-max@Tongyi-Qianwen",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 5,
"max_tokens": 256,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"presencePenaltyEnabled": false,
"presence_penalty": 0.4,
"prompts": [
{
"content": "User's query: {sys.query}\n\nSchema: {Retrieval:HappyTiesFilm@formalized_content}\n\nSamples about question to SQL: {Retrieval:SmartNewsHammer@formalized_content}\n\nDescription about meanings of tables and files: {Retrieval:SweetDancersAppear@formalized_content}",
"role": "user"
}
],
"sys_prompt": "### ROLE\nYou are a Text-to-SQL assistant. \nGiven a relational database schema and a natural-language request, you must produce a **single, syntactically-correct MySQL query** that answers the request. \nReturn **nothing except the SQL statement itself**\u2014no code fences, no commentary, no explanations, no comments, no trailing semicolon if not required.\n\n\n### EXAMPLES \n-- Example 1 \nUser: List every product name and its unit price. \nSQL:\nSELECT name, unit_price FROM Products;\n\n-- Example 2 \nUser: Show the names and emails of customers who placed orders in January 2025. \nSQL:\nSELECT DISTINCT c.name, c.email\nFROM Customers c\nJOIN Orders o ON o.customer_id = c.id\nWHERE o.order_date BETWEEN '2025-01-01' AND '2025-01-31';\n\n-- Example 3 \nUser: How many orders have a status of \"Completed\" for each month in 2024? \nSQL:\nSELECT DATE_FORMAT(order_date, '%Y-%m') AS month,\n COUNT(*) AS completed_orders\nFROM Orders\nWHERE status = 'Completed'\n AND YEAR(order_date) = 2024\nGROUP BY month\nORDER BY month;\n\n-- Example 4 \nUser: Which products generated at least \\$10 000 in total revenue? \nSQL:\nSELECT p.id, p.name, SUM(oi.quantity * oi.unit_price) AS revenue\nFROM Products p\nJOIN OrderItems oi ON oi.product_id = p.id\nGROUP BY p.id, p.name\nHAVING revenue >= 10000\nORDER BY revenue DESC;\n\n\n### OUTPUT GUIDELINES\n1. Think through the schema and the request. \n2. Write **only** the final MySQL query. \n3. Do **not** wrap the query in back-ticks or markdown fences. \n4. Do **not** add explanations, comments, or additional text\u2014just the SQL.",
"temperature": 0.1,
"temperatureEnabled": false,
"tools": [],
"topPEnabled": false,
"top_p": 0.3,
"user_prompt": "",
"visual_files_var": ""
}
},
"upstream": [
"Retrieval:HappyTiesFilm",
"Retrieval:SmartNewsHammer",
"Retrieval:SweetDancersAppear"
]
},
"ExeSQL:TiredShirtsPull": {
"downstream": [
"Message:ShaggyMasksAttend"
],
"obj": {
"component_name": "ExeSQL",
"params": {
"database": "",
"db_type": "mysql",
"host": "",
"max_records": 1024,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"password": "20010812Yy!",
"port": 3306,
"sql": "Agent:WickedGoatsDivide@content",
"username": "13637682833@163.com"
}
},
"upstream": [
"Agent:WickedGoatsDivide"
]
},
"Message:ShaggyMasksAttend": {
"downstream": [],
"obj": {
"component_name": "Message",
"params": {
"content": [
"{ExeSQL:TiredShirtsPull@formalized_content}"
]
}
},
"upstream": [
"ExeSQL:TiredShirtsPull"
]
},
"Retrieval:HappyTiesFilm": {
"downstream": [
"Agent:WickedGoatsDivide"
],
"obj": {
"component_name": "Retrieval",
"params": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"ed31364c727211f0bdb2bafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
}
},
"upstream": [
"begin"
]
},
"Retrieval:SmartNewsHammer": {
"downstream": [
"Agent:WickedGoatsDivide"
],
"obj": {
"component_name": "Retrieval",
"params": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"0f968106727311f08357bafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
}
},
"upstream": [
"begin"
]
},
"Retrieval:SweetDancersAppear": {
"downstream": [
"Agent:WickedGoatsDivide"
],
"obj": {
"component_name": "Retrieval",
"params": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"4ad1f9d0727311f0827dbafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
}
},
"upstream": [
"begin"
]
},
"begin": {
"downstream": [
"Retrieval:HappyTiesFilm",
"Retrieval:SmartNewsHammer",
"Retrieval:SweetDancersAppear"
],
"obj": {
"component_name": "Begin",
"params": {
"enablePrologue": true,
"inputs": {},
"mode": "conversational",
"prologue": "Hi! I'm your SQL assistant, what can I do for you?"
}
},
"upstream": []
}
},
"globals": {
"sys.conversation_turns": 0,
"sys.files": [],
"sys.query": "",
"sys.user_id": ""
},
"graph": {
"edges": [
{
"data": {
"isHovered": false
},
"id": "xy-edge__beginstart-Retrieval:HappyTiesFilmend",
"source": "begin",
"sourceHandle": "start",
"target": "Retrieval:HappyTiesFilm",
"targetHandle": "end"
},
{
"id": "xy-edge__beginstart-Retrieval:SmartNewsHammerend",
"source": "begin",
"sourceHandle": "start",
"target": "Retrieval:SmartNewsHammer",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__beginstart-Retrieval:SweetDancersAppearend",
"source": "begin",
"sourceHandle": "start",
"target": "Retrieval:SweetDancersAppear",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Retrieval:HappyTiesFilmstart-Agent:WickedGoatsDivideend",
"source": "Retrieval:HappyTiesFilm",
"sourceHandle": "start",
"target": "Agent:WickedGoatsDivide",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Retrieval:SmartNewsHammerstart-Agent:WickedGoatsDivideend",
"markerEnd": "logo",
"source": "Retrieval:SmartNewsHammer",
"sourceHandle": "start",
"style": {
"stroke": "rgba(91, 93, 106, 1)",
"strokeWidth": 1
},
"target": "Agent:WickedGoatsDivide",
"targetHandle": "end",
"type": "buttonEdge",
"zIndex": 1001
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Retrieval:SweetDancersAppearstart-Agent:WickedGoatsDivideend",
"markerEnd": "logo",
"source": "Retrieval:SweetDancersAppear",
"sourceHandle": "start",
"style": {
"stroke": "rgba(91, 93, 106, 1)",
"strokeWidth": 1
},
"target": "Agent:WickedGoatsDivide",
"targetHandle": "end",
"type": "buttonEdge",
"zIndex": 1001
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__Agent:WickedGoatsDividestart-ExeSQL:TiredShirtsPullend",
"source": "Agent:WickedGoatsDivide",
"sourceHandle": "start",
"target": "ExeSQL:TiredShirtsPull",
"targetHandle": "end"
},
{
"data": {
"isHovered": false
},
"id": "xy-edge__ExeSQL:TiredShirtsPullstart-Message:ShaggyMasksAttendend",
"source": "ExeSQL:TiredShirtsPull",
"sourceHandle": "start",
"target": "Message:ShaggyMasksAttend",
"targetHandle": "end"
}
],
"nodes": [
{
"data": {
"form": {
"enablePrologue": true,
"inputs": {},
"mode": "conversational",
"prologue": "Hi! I'm your SQL assistant, what can I do for you?"
},
"label": "Begin",
"name": "begin"
},
"id": "begin",
"measured": {
"height": 48,
"width": 200
},
"position": {
"x": 50,
"y": 200
},
"selected": false,
"sourcePosition": "left",
"targetPosition": "right",
"type": "beginNode"
},
{
"data": {
"form": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"ed31364c727211f0bdb2bafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
},
"label": "Retrieval",
"name": "Schema"
},
"dragging": false,
"id": "Retrieval:HappyTiesFilm",
"measured": {
"height": 96,
"width": 200
},
"position": {
"x": 414,
"y": 20.5
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "retrievalNode"
},
{
"data": {
"form": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"0f968106727311f08357bafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
},
"label": "Retrieval",
"name": "Question to SQL"
},
"dragging": false,
"id": "Retrieval:SmartNewsHammer",
"measured": {
"height": 96,
"width": 200
},
"position": {
"x": 406.5,
"y": 175.5
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "retrievalNode"
},
{
"data": {
"form": {
"cross_languages": [],
"empty_response": "",
"kb_ids": [
"4ad1f9d0727311f0827dbafe6e7908e6"
],
"keywords_similarity_weight": 0.7,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
}
},
"query": "sys.query",
"rerank_id": "",
"similarity_threshold": 0.2,
"top_k": 1024,
"top_n": 8,
"use_kg": false
},
"label": "Retrieval",
"name": "Database Description"
},
"dragging": false,
"id": "Retrieval:SweetDancersAppear",
"measured": {
"height": 96,
"width": 200
},
"position": {
"x": 403.5,
"y": 328
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "retrievalNode"
},
{
"data": {
"form": {
"delay_after_error": 1,
"description": "",
"exception_default_value": "",
"exception_goto": [],
"exception_method": "",
"frequencyPenaltyEnabled": false,
"frequency_penalty": 0.7,
"llm_id": "qwen-max@Tongyi-Qianwen",
"maxTokensEnabled": false,
"max_retries": 3,
"max_rounds": 5,
"max_tokens": 256,
"mcp": [],
"message_history_window_size": 12,
"outputs": {
"content": {
"type": "string",
"value": ""
}
},
"presencePenaltyEnabled": false,
"presence_penalty": 0.4,
"prompts": [
{
"content": "User's query: {sys.query}\n\nSchema: {Retrieval:HappyTiesFilm@formalized_content}\n\nSamples about question to SQL: {Retrieval:SmartNewsHammer@formalized_content}\n\nDescription about meanings of tables and files: {Retrieval:SweetDancersAppear@formalized_content}",
"role": "user"
}
],
"sys_prompt": "### ROLE\nYou are a Text-to-SQL assistant. \nGiven a relational database schema and a natural-language request, you must produce a **single, syntactically-correct MySQL query** that answers the request. \nReturn **nothing except the SQL statement itself**\u2014no code fences, no commentary, no explanations, no comments, no trailing semicolon if not required.\n\n\n### EXAMPLES \n-- Example 1 \nUser: List every product name and its unit price. \nSQL:\nSELECT name, unit_price FROM Products;\n\n-- Example 2 \nUser: Show the names and emails of customers who placed orders in January 2025. \nSQL:\nSELECT DISTINCT c.name, c.email\nFROM Customers c\nJOIN Orders o ON o.customer_id = c.id\nWHERE o.order_date BETWEEN '2025-01-01' AND '2025-01-31';\n\n-- Example 3 \nUser: How many orders have a status of \"Completed\" for each month in 2024? \nSQL:\nSELECT DATE_FORMAT(order_date, '%Y-%m') AS month,\n COUNT(*) AS completed_orders\nFROM Orders\nWHERE status = 'Completed'\n AND YEAR(order_date) = 2024\nGROUP BY month\nORDER BY month;\n\n-- Example 4 \nUser: Which products generated at least \\$10 000 in total revenue? \nSQL:\nSELECT p.id, p.name, SUM(oi.quantity * oi.unit_price) AS revenue\nFROM Products p\nJOIN OrderItems oi ON oi.product_id = p.id\nGROUP BY p.id, p.name\nHAVING revenue >= 10000\nORDER BY revenue DESC;\n\n\n### OUTPUT GUIDELINES\n1. Think through the schema and the request. \n2. Write **only** the final MySQL query. \n3. Do **not** wrap the query in back-ticks or markdown fences. \n4. Do **not** add explanations, comments, or additional text\u2014just the SQL.",
"temperature": 0.1,
"temperatureEnabled": false,
"tools": [],
"topPEnabled": false,
"top_p": 0.3,
"user_prompt": "",
"visual_files_var": ""
},
"label": "Agent",
"name": "SQL Generator "
},
"dragging": false,
"id": "Agent:WickedGoatsDivide",
"measured": {
"height": 84,
"width": 200
},
"position": {
"x": 981,
"y": 174
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "agentNode"
},
{
"data": {
"form": {
"database": "",
"db_type": "mysql",
"host": "",
"max_records": 1024,
"outputs": {
"formalized_content": {
"type": "string",
"value": ""
},
"json": {
"type": "Array<Object>",
"value": []
}
},
"password": "20010812Yy!",
"port": 3306,
"sql": "Agent:WickedGoatsDivide@content",
"username": "13637682833@163.com"
},
"label": "ExeSQL",
"name": "ExeSQL"
},
"dragging": false,
"id": "ExeSQL:TiredShirtsPull",
"measured": {
"height": 56,
"width": 200
},
"position": {
"x": 1211.5,
"y": 212.5
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "ragNode"
},
{
"data": {
"form": {
"content": [
"{ExeSQL:TiredShirtsPull@formalized_content}"
]
},
"label": "Message",
"name": "Message"
},
"dragging": false,
"id": "Message:ShaggyMasksAttend",
"measured": {
"height": 56,
"width": 200
},
"position": {
"x": 1447.3125,
"y": 181.5
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "messageNode"
},
{
"data": {
"form": {
"text": "Searches for relevant database creation statements.\n\nIt should label with a knowledgebase to which the schema is dumped in. You could use \" General \" as parsing method, \" 2 \" as chunk size and \" ; \" as delimiter."
},
"label": "Note",
"name": "Note Schema"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 188,
"id": "Note:ThickClubsFloat",
"measured": {
"height": 188,
"width": 392
},
"position": {
"x": 689,
"y": -180.31251144409183
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 392
},
{
"data": {
"form": {
"text": "Searches for samples about question to SQL. \n\nYou could use \" Q&A \" as parsing method.\n\nPlease check this dataset:\nhttps://huggingface.co/datasets/InfiniFlow/text2sql"
},
"label": "Note",
"name": "Note: Question to SQL"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 154,
"id": "Note:ElevenLionsJoke",
"measured": {
"height": 154,
"width": 345
},
"position": {
"x": 693.5,
"y": 138
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 345
},
{
"data": {
"form": {
"text": "Searches for description about meanings of tables and fields.\n\nYou could use \" General \" as parsing method, \" 2 \" as chunk size and \" ### \" as delimiter."
},
"label": "Note",
"name": "Note: Database Description"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 158,
"id": "Note:ManyRosesTrade",
"measured": {
"height": 158,
"width": 408
},
"position": {
"x": 691.5,
"y": 435.69736389555317
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 408
},
{
"data": {
"form": {
"text": "The Agent learns which tables may be available based on the responses from three knowledge bases and converts the user's input into SQL statements."
},
"label": "Note",
"name": "Note: SQL Generator"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"height": 132,
"id": "Note:RudeHousesInvite",
"measured": {
"height": 132,
"width": 383
},
"position": {
"x": 1106.9254833678003,
"y": 290.5891036507015
},
"resizing": false,
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode",
"width": 383
},
{
"data": {
"form": {
"text": "Connect to your database to execute SQL statements."
},
"label": "Note",
"name": "Note: SQL Executor"
},
"dragHandle": ".note-drag-handle",
"dragging": false,
"id": "Note:HungryBatsLay",
"measured": {
"height": 136,
"width": 255
},
"position": {
"x": 1185,
"y": -30
},
"selected": false,
"sourcePosition": "right",
"targetPosition": "left",
"type": "noteNode"
}
]
},
"history": [],
"messages": [],
"path": [],
"retrieval": []
},
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAcFBQYFBAcGBQYIBwcIChELCgkJChUPEAwRGBUaGRgVGBcbHichGx0lHRcYIi4iJSgpKywrGiAvMy8qMicqKyr/2wBDAQcICAoJChQLCxQqHBgcKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKir/wAARCAAwADADAREAAhEBAxEB/8QAGgAAAwEBAQEAAAAAAAAAAAAABQYHBAMAAf/EADIQAAEDAwMCBAMHBQAAAAAAAAECAwQFESEABjESEyJBUYEUYXEHFSNSkaGxMjNictH/xAAZAQADAQEBAAAAAAAAAAAAAAACAwQBAAX/xAAlEQACAgICAgEEAwAAAAAAAAABAgARAyESMQRBEyIycYFCkbH/2gAMAwEAAhEDEQA/AKHt2DGpNHXDLrZdWtSrIub39tZ5GbGwPA+pmDFkX7x7idvra85xqQaFNkxUTVIVJQzf8QpBFjbgEenNs681MnA9WJ6fEOKJoxVpSpFLTCo6KEZlTlLcQBIJS20hAv1D1ve+qPk52b0IsYuIGtyt7ZkVVNP+H3A5GdlN2u7GQUBSfmkk8cXH10tmLD6Yl0CG5qmTXBMZiQEMuvupUoKdc6UeEi4FsqOeBxrsKnv1AY+hJ2l5yfu6qQ6/UZtPDRHZ+Eldpsqz1hSrXJGLXwRxqxUQizFs7galPYUFDKT+h15oMuImspQpFiL+2i1A3A1bgxmixUgwlT8ZfgJ/y8P8HXdRuPZoxaqtfkQKbKqF03jtEoDeFKV1lNgfK4H764XfccVUgipvdiwKpFaXMLklFg4juuqV0m3Izg/MaEZCDYMScYqiJOd6xmqfUVfBJcWwtHV1Elfi87k51ViyhrsxL4ivQj1KrFZjTGjTJ8aShdyph5SUqFhwPzX9jpC0dXUqZK3ViHNq7oNaVJjz2Vw5LCrdKknpULZyfMf801MfI1e5NmpAGHUL12EZNFWWlhXSUuWHKgk3xomwEDuDhzLysySU9EndEVyIz3GmxJR+KpBIdCLlRHn/AFEjjIF9AMJlZ8gLZ/qUiJSg1Tu0HO4plFj4FC1h9NYfHIU7kwzgnqCJlKLiCO2s6hKytWiPJoFdfnLW7HS0or6bqXbjg2AI99XjAa3NPlL6jFTduOR5sd1+oyfjQMONqI7QOMA4V7/pqjHjC9SLNn56I1HiqrqTUKM0hbq2lpst5CQSST54xjSPJbICOHUhawISiRQ02T2Uq6AAkqFj/GquJQks1iEr/INLU82bploKSFXusG9xfjHofXQuQUNRoQqQT0ZwVEST5687iZWGgpDsebNbaTDfKVL/ALnbQU/UkKNhjXpFt0BJBVXe/wAGGG6YMlvvNkjlBGmKeJimHIVc0TY89akCKspT28C5BKgDyR7fvrCFI+q/1DQsvVfudYcVyKw49KU6tZyQbmwHFhrOKr9s0uz0CAIpbr3RKo1Rbh02C4HJISp2ZIz0pJ8IQk5Nr/QXznSX6NSnGAwHI/gD/TM+3vtAj1arJpcpgtPdPSH0kFt5wDxAWOOLgamIAFwijCfD927N2tGXuNxlK2W0occUhJWpR+QzzrPjc+pvyqT3Ftf2zbObf7YYecb6CrrDAGfy20wYMkA5Vjbtev7b3nEcXRela27d1ogoWi/rnQsjrqZzHdwzKoKUsqWz3mOnJUlZJt8uokD621w+RdzgynUkUpoUafPZXMnSHlrKluyX1Eug8XF7GwxbgWxrubMO5WmNRsCKtLfcY3rAU0nIltkBP+w0X8Jjdz//2Q=="
}

View File

@ -17,7 +17,7 @@ import base64
import logging
import os
from abc import ABC
from enum import StrEnum
from strenum import StrEnum
from typing import Optional
from pydantic import BaseModel, Field, field_validator
from agent.tools.base import ToolParamBase, ToolBase, ToolMeta
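
`enum.StrEnum` only exists in the standard library from Python 3.11, which is presumably why the import moves to the `strenum` backport. A hedged compatibility sketch:

```python
# Compatibility sketch: prefer the stdlib StrEnum (Python 3.11+) and fall back
# to the third-party "strenum" backport on older interpreters.
try:
    from enum import StrEnum       # Python >= 3.11
except ImportError:
    from strenum import StrEnum    # pip install StrEnum

class Channel(StrEnum):
    EMAIL = "email"
    SMS = "sms"

assert Channel.EMAIL == "email"    # StrEnum members compare equal to their str value
```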

View File

@ -14,6 +14,7 @@
# limitations under the License.
#
import os
import re
from abc import ABC
import pandas as pd
import pymysql
@ -109,7 +110,7 @@ class ExeSQL(ToolBase, ABC):
single_sql = single_sql.replace('```','')
if not single_sql:
continue
single_sql = re.sub(r"\[ID:[0-9]+\]", "", single_sql)
cursor.execute(single_sql)
if cursor.rowcount == 0:
sql_res.append({"content": "No record in the database!"})

View File

@ -86,7 +86,7 @@ class Retrieval(ToolBase, ABC):
kb_ids.append(id)
continue
kb_nm = self._canvas.get_variable_value(id)
e, kb = KnowledgebaseService.get_by_name(kb_nm)
e, kb = KnowledgebaseService.get_by_name(kb_nm, self._canvas._tenant_id)
if not e:
raise Exception(f"Dataset({kb_nm}) does not exist.")
kb_ids.append(kb.id)
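
Dataset names are unique only within a tenant, so the lookup is now scoped by the canvas tenant id. A sketch with a hypothetical service object:

```python
# Sketch (hypothetical service layer): scope name lookups by tenant_id so one
# tenant's dataset name can never resolve to another tenant's dataset.
def resolve_kb_id(service, canvas, kb_name: str) -> str:
    found, kb = service.get_by_name(kb_name, canvas._tenant_id)
    if not found:
        raise Exception(f"Dataset({kb_name}) does not exist.")
    return kb.id
```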

View File

@ -20,94 +20,128 @@ BEGIN_SEARCH_RESULT = "<|begin_search_result|>"
END_SEARCH_RESULT = "<|end_search_result|>"
MAX_SEARCH_LIMIT = 6
REASON_PROMPT = (
"You are a reasoning assistant with the ability to perform dataset searches to help "
"you answer the user's question accurately. You have special tools:\n\n"
f"- To perform a search: write {BEGIN_SEARCH_QUERY} your query here {END_SEARCH_QUERY}.\n"
f"Then, the system will search and analyze relevant content, then provide you with helpful information in the format {BEGIN_SEARCH_RESULT} ...search results... {END_SEARCH_RESULT}.\n\n"
f"You can repeat the search process multiple times if necessary. The maximum number of search attempts is limited to {MAX_SEARCH_LIMIT}.\n\n"
"Once you have all the information you need, continue your reasoning.\n\n"
"-- Example 1 --\n" ########################################
"Question: \"Are both the directors of Jaws and Casino Royale from the same country?\"\n"
"Assistant:\n"
f" {BEGIN_SEARCH_QUERY}Who is the director of Jaws?{END_SEARCH_QUERY}\n\n"
"User:\n"
f" {BEGIN_SEARCH_RESULT}\nThe director of Jaws is Steven Spielberg...\n{END_SEARCH_RESULT}\n\n"
"Continues reasoning with the new information.\n"
"Assistant:\n"
f" {BEGIN_SEARCH_QUERY}Where is Steven Spielberg from?{END_SEARCH_QUERY}\n\n"
"User:\n"
f" {BEGIN_SEARCH_RESULT}\nSteven Allan Spielberg is an American filmmaker...\n{END_SEARCH_RESULT}\n\n"
"Continues reasoning with the new information...\n\n"
"Assistant:\n"
f" {BEGIN_SEARCH_QUERY}Who is the director of Casino Royale?{END_SEARCH_QUERY}\n\n"
"User:\n"
f" {BEGIN_SEARCH_RESULT}\nCasino Royale is a 2006 spy film directed by Martin Campbell...\n{END_SEARCH_RESULT}\n\n"
"Continues reasoning with the new information...\n\n"
"Assistant:\n"
f" {BEGIN_SEARCH_QUERY}Where is Martin Campbell from?{END_SEARCH_QUERY}\n\n"
"User:\n"
f" {BEGIN_SEARCH_RESULT}\nMartin Campbell (born 24 October 1943) is a New Zealand film and television director...\n{END_SEARCH_RESULT}\n\n"
"Continues reasoning with the new information...\n\n"
"Assistant:\nIt's enough to answer the question\n"
REASON_PROMPT = f"""You are an advanced reasoning agent. Your goal is to answer the user's question by breaking it down into a series of verifiable steps.
"-- Example 2 --\n" #########################################
"Question: \"When was the founder of craigslist born?\"\n"
"Assistant:\n"
f" {BEGIN_SEARCH_QUERY}Who was the founder of craigslist?{END_SEARCH_QUERY}\n\n"
"User:\n"
f" {BEGIN_SEARCH_RESULT}\nCraigslist was founded by Craig Newmark...\n{END_SEARCH_RESULT}\n\n"
"Continues reasoning with the new information.\n"
"Assistant:\n"
f" {BEGIN_SEARCH_QUERY} When was Craig Newmark born?{END_SEARCH_QUERY}\n\n"
"User:\n"
f" {BEGIN_SEARCH_RESULT}\nCraig Newmark was born on December 6, 1952...\n{END_SEARCH_RESULT}\n\n"
"Continues reasoning with the new information...\n\n"
"Assistant:\nIt's enough to answer the question\n"
"**Remember**:\n"
f"- You have a dataset to search, so you just provide a proper search query.\n"
f"- Use {BEGIN_SEARCH_QUERY} to request a dataset search and end with {END_SEARCH_QUERY}.\n"
"- The language of query MUST be as the same as 'Question' or 'search result'.\n"
"- If no helpful information can be found, rewrite the search query to be less and precise keywords.\n"
"- When done searching, continue your reasoning.\n\n"
'Please answer the following question. You should think step by step to solve it.\n\n'
)
You have access to a powerful search tool to find information.
RELEVANT_EXTRACTION_PROMPT = """**Task Instruction:**
**Your Task:**
1. Analyze the user's question.
2. If you need information, issue a search query to find a specific fact.
3. Review the search results.
4. Repeat the search process until you have all the facts needed to answer the question.
5. Once you have gathered sufficient information, synthesize the facts and provide the final answer directly.
You are tasked with reading and analyzing web pages based on the following inputs: **Previous Reasoning Steps**, **Current Search Query**, and **Searched Web Pages**. Your objective is to extract relevant and helpful information for **Current Search Query** from the **Searched Web Pages** and seamlessly integrate this information into the **Previous Reasoning Steps** to continue reasoning for the original question.
**Tool Usage:**
- To search, you MUST write your query between the special tokens: {BEGIN_SEARCH_QUERY}your query{END_SEARCH_QUERY}.
- The system will provide results between {BEGIN_SEARCH_RESULT}search results{END_SEARCH_RESULT}.
- You have a maximum of {MAX_SEARCH_LIMIT} search attempts.
**Guidelines:**
---
**Example 1: Multi-hop Question**
1. **Analyze the Searched Web Pages:**
- Carefully review the content of each searched web page.
- Identify factual information that is relevant to the **Current Search Query** and can aid in the reasoning process for the original question.
**Question:** "Are both the directors of Jaws and Casino Royale from the same country?"
2. **Extract Relevant Information:**
- Select the information from the Searched Web Pages that directly contributes to advancing the **Previous Reasoning Steps**.
- Ensure that the extracted information is accurate and relevant.
**Your Thought Process & Actions:**
First, I need to identify the director of Jaws.
{BEGIN_SEARCH_QUERY}who is the director of Jaws?{END_SEARCH_QUERY}
[System returns search results]
{BEGIN_SEARCH_RESULT}
Jaws is a 1975 American thriller film directed by Steven Spielberg.
{END_SEARCH_RESULT}
Okay, the director of Jaws is Steven Spielberg. Now I need to find out his nationality.
{BEGIN_SEARCH_QUERY}where is Steven Spielberg from?{END_SEARCH_QUERY}
[System returns search results]
{BEGIN_SEARCH_RESULT}
Steven Allan Spielberg is an American filmmaker. Born in Cincinnati, Ohio...
{END_SEARCH_RESULT}
So, Steven Spielberg is from the USA. Next, I need to find the director of Casino Royale.
{BEGIN_SEARCH_QUERY}who is the director of Casino Royale 2006?{END_SEARCH_QUERY}
[System returns search results]
{BEGIN_SEARCH_RESULT}
Casino Royale is a 2006 spy film directed by Martin Campbell.
{END_SEARCH_RESULT}
The director of Casino Royale is Martin Campbell. Now I need his nationality.
{BEGIN_SEARCH_QUERY}where is Martin Campbell from?{END_SEARCH_QUERY}
[System returns search results]
{BEGIN_SEARCH_RESULT}
Martin Campbell (born 24 October 1943) is a New Zealand film and television director.
{END_SEARCH_RESULT}
I have all the information. Steven Spielberg is from the USA, and Martin Campbell is from New Zealand. They are not from the same country.
3. **Output Format:**
- **If the web pages provide helpful information for current search query:** Present the information beginning with `**Final Information**` as shown below.
- The language of query **MUST BE** as the same as 'Search Query' or 'Web Pages'.\n"
**Final Information**
Final Answer: No, the directors of Jaws and Casino Royale are not from the same country. Steven Spielberg is from the USA, and Martin Campbell is from New Zealand.
[Helpful information]
---
**Example 2: Simple Fact Retrieval**
- **If the web pages do not provide any helpful information for current search query:** Output the following text.
**Question:** "When was the founder of craigslist born?"
**Final Information**
**Your Thought Process & Actions:**
First, I need to know who founded craigslist.
{BEGIN_SEARCH_QUERY}who founded craigslist?{END_SEARCH_QUERY}
[System returns search results]
{BEGIN_SEARCH_RESULT}
Craigslist was founded in 1995 by Craig Newmark.
{END_SEARCH_RESULT}
The founder is Craig Newmark. Now I need his birth date.
{BEGIN_SEARCH_QUERY}when was Craig Newmark born?{END_SEARCH_QUERY}
[System returns search results]
{BEGIN_SEARCH_RESULT}
Craig Newmark was born on December 6, 1952.
{END_SEARCH_RESULT}
I have found the answer.
No helpful information found.
Final Answer: The founder of craigslist, Craig Newmark, was born on December 6, 1952.
**Inputs:**
- **Previous Reasoning Steps:**
{prev_reasoning}
---
**Important Rules:**
- **One Fact at a Time:** Decompose the problem and issue one search query at a time to find a single, specific piece of information.
- **Be Precise:** Formulate clear and precise search queries. If a search fails, rephrase it.
- **Synthesize at the End:** Do not provide the final answer until you have completed all necessary searches.
- **Language Consistency:** Your search queries should be in the same language as the user's question.
- **Current Search Query:**
{search_query}
Now, begin your work. Please answer the following question by thinking step-by-step.
"""
- **Searched Web Pages:**
{document}
RELEVANT_EXTRACTION_PROMPT = """You are a highly efficient information extraction module. Your sole purpose is to extract the single most relevant piece of information from the provided `Searched Web Pages` that directly answers the `Current Search Query`.
"""
**Your Task:**
1. Read the `Current Search Query` to understand what specific information is needed.
2. Scan the `Searched Web Pages` to find the answer to that query.
3. Extract only the essential, factual information that answers the query. Be concise.
**Context (For Your Information Only):**
The `Previous Reasoning Steps` are provided to give you context on the overall goal, but your primary focus MUST be on answering the `Current Search Query`. Do not use information from the previous steps in your output.
**Output Format:**
Your response must follow one of two formats precisely.
1. **If a direct and relevant answer is found:**
- Start your response immediately with `Final Information`.
- Provide only the extracted fact(s). Do not add any extra conversational text.
*Example:*
`Current Search Query`: Where is Martin Campbell from?
`Searched Web Pages`: [Long article snippet about Martin Campbell's career, which includes the sentence "Martin Campbell (born 24 October 1943) is a New Zealand film and television director..."]
*Your Output:*
Final Information
Martin Campbell is a New Zealand film and television director.
2. **If no relevant answer that directly addresses the query is found in the web pages:**
- Start your response immediately with `Final Information`.
- Write the exact phrase: `No helpful information found.`
---
**BEGIN TASK**
**Inputs:**
- **Previous Reasoning Steps:**
{prev_reasoning}
- **Current Search Query:**
{search_query}
- **Searched Web Pages:**
{document}
"""

View File

@ -32,8 +32,7 @@ from api.db.services.user_service import TenantService
from api.db.services.user_canvas_version import UserCanvasVersionService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_json_result, server_error_response, validate_request, get_data_error_result, \
get_error_data_result
from api.utils.api_utils import get_json_result, server_error_response, validate_request, get_data_error_result
from agent.canvas import Canvas
from peewee import MySQLDatabase, PostgresqlDatabase
from api.db.db_models import APIToken
@ -62,7 +61,7 @@ def canvas_list():
@login_required
def rm():
for i in request.json["canvas_ids"]:
if not UserCanvasService.query(user_id=current_user.id,id=i):
if not UserCanvasService.accessible(i, current_user.id):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
@ -86,12 +85,12 @@ def save():
if not UserCanvasService.save(**req):
return get_data_error_result(message="Fail to save canvas.")
else:
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
if not UserCanvasService.accessible(req["id"], current_user.id):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
UserCanvasService.update_by_id(req["id"], req)
# save version
# save version
UserCanvasVersionService.insert( user_canvas_id=req["id"], dsl=req["dsl"], title="{0}_{1}".format(req["title"], time.strftime("%Y_%m_%d_%H_%M_%S")))
UserCanvasVersionService.delete_all_versions(req["id"])
return get_json_result(data=req)
@ -100,9 +99,9 @@ def save():
@manager.route('/get/<canvas_id>', methods=['GET']) # noqa: F821
@login_required
def get(canvas_id):
e, c = UserCanvasService.get_by_tenant_id(canvas_id)
if not e or c["user_id"] != current_user.id:
if not UserCanvasService.accessible(canvas_id, current_user.id):
return get_data_error_result(message="canvas not found.")
e, c = UserCanvasService.get_by_tenant_id(canvas_id)
return get_json_result(data=c)
@ -131,14 +130,15 @@ def run():
files = req.get("files", [])
inputs = req.get("inputs", {})
user_id = req.get("user_id", current_user.id)
e, cvs = UserCanvasService.get_by_id(req["id"])
if not e:
return get_data_error_result(message="canvas not found.")
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
if not UserCanvasService.accessible(req["id"], current_user.id):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
e, cvs = UserCanvasService.get_by_id(req["id"])
if not e:
return get_data_error_result(message="canvas not found.")
if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
@ -172,14 +172,14 @@ def run():
@login_required
def reset():
req = request.json
if not UserCanvasService.accessible(req["id"], current_user.id):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
try:
e, user_canvas = UserCanvasService.get_by_id(req["id"])
if not e:
return get_data_error_result(message="canvas not found.")
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
canvas = Canvas(json.dumps(user_canvas.dsl), current_user.id)
canvas.reset()
@ -290,15 +290,12 @@ def input_form():
@login_required
def debug():
req = request.json
if not UserCanvasService.accessible(req["id"], current_user.id):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
try:
e, user_canvas = UserCanvasService.get_by_id(req["id"])
if not e:
return get_data_error_result(message="canvas not found.")
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
canvas = Canvas(json.dumps(user_canvas.dsl), current_user.id)
canvas.reset()
canvas.message_id = get_uuid()
@ -350,7 +347,7 @@ def test_db_connect():
if req["db_type"] != 'mssql':
db.connect()
db.close()
return get_json_result(data="Database Connection Successful!")
except Exception as e:
return server_error_response(e)
@ -372,7 +369,7 @@ def getlistversion(canvas_id):
@login_required
def getversion( version_id):
try:
e, version = UserCanvasVersionService.get_by_id(version_id)
if version:
return get_json_result(data=version.to_dict())
@ -382,7 +379,7 @@ def getversion( version_id):
@manager.route('/listteam', methods=['GET']) # noqa: F821
@login_required
def list_kbs():
def list_canvas():
keywords = request.args.get("keywords", "")
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 150))
@ -390,10 +387,10 @@ def list_kbs():
desc = request.args.get("desc", True)
try:
tenants = TenantService.get_joined_tenants_by_user_id(current_user.id)
kbs, total = UserCanvasService.get_by_tenant_ids(
canvas, total = UserCanvasService.get_by_tenant_ids(
[m["tenant_id"] for m in tenants], current_user.id, page_number,
items_per_page, orderby, desc, keywords)
return get_json_result(data={"kbs": kbs, "total": total})
return get_json_result(data={"canvas": canvas, "total": total})
except Exception as e:
return server_error_response(e)
@ -404,6 +401,12 @@ def list_kbs():
def setting():
req = request.json
req["user_id"] = current_user.id
if not UserCanvasService.accessible(req["id"], current_user.id):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
e,flow = UserCanvasService.get_by_id(req["id"])
if not e:
return get_data_error_result(message="canvas not found.")
@ -415,10 +418,7 @@ def setting():
flow["permission"] = req["permission"]
if req["avatar"]:
flow["avatar"] = req["avatar"]
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
num= UserCanvasService.update_by_id(req["id"], flow)
return get_json_result(data=num)
@ -441,8 +441,10 @@ def trace():
@login_required
def sessions(canvas_id):
tenant_id = current_user.id
if not UserCanvasService.query(user_id=tenant_id, id=canvas_id):
return get_error_data_result(message=f"You don't own the agent {canvas_id}.")
if not UserCanvasService.accessible(canvas_id, tenant_id):
return get_json_result(
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
user_id = request.args.get("user_id")
page_number = int(request.args.get("page", 1))

View File

@ -66,7 +66,8 @@ def set_conversation():
e, dia = DialogService.get_by_id(req["dialog_id"])
if not e:
return get_data_error_result(message="Dialog not found")
conv = {"id": conv_id, "dialog_id": req["dialog_id"], "name": name, "message": [{"role": "assistant", "content": dia.prompt_config["prologue"]}],"user_id": current_user.id}
conv = {"id": conv_id, "dialog_id": req["dialog_id"], "name": name, "message": [{"role": "assistant", "content": dia.prompt_config["prologue"]}],"user_id": current_user.id,
"reference":[],}
ConversationService.save(**conv)
return get_json_result(data=conv)
except Exception as e:
@ -186,14 +187,7 @@ def completion():
if not conv.reference:
conv.reference = []
else:
for ref in conv.reference:
if isinstance(ref, list):
continue
ref["chunks"] = chunks_format(ref)
if not conv.reference:
conv.reference = []
conv.reference = [r for r in conv.reference if r]
conv.reference.append({"chunks": [], "doc_aggs": []})
def stream():

View File

@ -32,7 +32,8 @@ from api.utils.api_utils import get_json_result
@login_required
def set_dialog():
req = request.json
dialog_id = req.get("dialog_id")
dialog_id = req.get("dialog_id", "")
is_create = not dialog_id
name = req.get("name", "New Dialog")
if not isinstance(name, str):
return get_data_error_result(message="Dialog name must be string.")
@ -50,17 +51,19 @@ def set_dialog():
similarity_threshold = req.get("similarity_threshold", 0.1)
vector_similarity_weight = req.get("vector_similarity_weight", 0.3)
llm_setting = req.get("llm_setting", {})
meta_data_filter = req.get("meta_data_filter", {})
prompt_config = req["prompt_config"]
if not req.get("kb_ids", []) and not prompt_config.get("tavily_api_key") and "{knowledge}" in prompt_config['system']:
return get_data_error_result(message="Please remove `{knowledge}` in system prompt since no knowledge base/Tavily used here.")
if not is_create:
if not req.get("kb_ids", []) and not prompt_config.get("tavily_api_key") and "{knowledge}" in prompt_config['system']:
return get_data_error_result(message="Please remove `{knowledge}` in system prompt since no knowledge base/Tavily used here.")
for p in prompt_config["parameters"]:
if p["optional"]:
continue
if prompt_config["system"].find("{%s}" % p["key"]) < 0:
return get_data_error_result(
message="Parameter '{}' is not used".format(p["key"]))
for p in prompt_config["parameters"]:
if p["optional"]:
continue
if prompt_config["system"].find("{%s}" % p["key"]) < 0:
return get_data_error_result(
message="Parameter '{}' is not used".format(p["key"]))
try:
e, tenant = TenantService.get_by_id(current_user.id)
@ -83,6 +86,7 @@ def set_dialog():
"llm_id": llm_id,
"llm_setting": llm_setting,
"prompt_config": prompt_config,
"meta_data_filter": meta_data_filter,
"top_n": top_n,
"top_k": top_k,
"rerank_id": rerank_id,
@ -153,6 +157,43 @@ def list_dialogs():
return server_error_response(e)
@manager.route('/next', methods=['POST']) # noqa: F821
@login_required
def list_dialogs_next():
keywords = request.args.get("keywords", "")
page_number = int(request.args.get("page", 0))
items_per_page = int(request.args.get("page_size", 0))
parser_id = request.args.get("parser_id")
orderby = request.args.get("orderby", "create_time")
if request.args.get("desc", "true").lower() == "false":
desc = False
else:
desc = True
req = request.get_json()
owner_ids = req.get("owner_ids", [])
try:
if not owner_ids:
# tenants = TenantService.get_joined_tenants_by_user_id(current_user.id)
# tenants = [tenant["tenant_id"] for tenant in tenants]
tenants = [] # keep it here
dialogs, total = DialogService.get_by_tenant_ids(
tenants, current_user.id, page_number,
items_per_page, orderby, desc, keywords, parser_id)
else:
tenants = owner_ids
dialogs, total = DialogService.get_by_tenant_ids(
tenants, current_user.id, 0,
0, orderby, desc, keywords, parser_id)
dialogs = [dialog for dialog in dialogs if dialog["tenant_id"] in tenants]
total = len(dialogs)
if page_number and items_per_page:
dialogs = dialogs[(page_number-1)*items_per_page:page_number*items_per_page]
return get_json_result(data={"dialogs": dialogs, "total": total})
except Exception as e:
return server_error_response(e)
@manager.route('/rm', methods=['POST']) # noqa: F821
@login_required
@validate_request("dialog_ids")

View File

@ -166,6 +166,17 @@ def create():
if DocumentService.query(name=req["name"], kb_id=kb_id):
return get_data_error_result(message="Duplicated document name in the same knowledgebase.")
kb_root_folder = FileService.get_kb_folder(kb.tenant_id)
if not kb_root_folder:
return get_data_error_result(message="Cannot find the root folder.")
kb_folder = FileService.new_a_file_from_kb(
kb.tenant_id,
kb.name,
kb_root_folder["id"],
)
if not kb_folder:
return get_data_error_result(message="Cannot find the kb folder for this file.")
doc = DocumentService.insert(
{
"id": get_uuid(),
@ -180,6 +191,9 @@ def create():
"size": 0,
}
)
FileService.add_file_from_kb(doc.to_dict(), kb_folder["id"], kb.tenant_id)
return get_json_result(data=doc.to_json())
except Exception as e:
return server_error_response(e)
@ -206,6 +220,8 @@ def list_docs():
desc = False
else:
desc = True
create_time_from = int(request.args.get("create_time_from", 0))
create_time_to = int(request.args.get("create_time_to", 0))
req = request.get_json()
@ -226,6 +242,14 @@ def list_docs():
try:
docs, tol = DocumentService.get_by_kb_id(kb_id, page_number, items_per_page, orderby, desc, keywords, run_status, types, suffix)
if create_time_from or create_time_to:
filtered_docs = []
for doc in docs:
doc_create_time = doc.get("create_time", 0)
if (create_time_from == 0 or doc_create_time >= create_time_from) and (create_time_to == 0 or doc_create_time <= create_time_to):
filtered_docs.append(doc)
docs = filtered_docs
for doc_item in docs:
if doc_item["thumbnail"] and not doc_item["thumbnail"].startswith(IMG_BASE64_PREFIX):
doc_item["thumbnail"] = f"/v1/document/image/{kb_id}-{doc_item['thumbnail']}"
@ -657,6 +681,11 @@ def set_meta():
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
try:
meta = json.loads(req["meta"])
if not isinstance(meta, dict):
return get_json_result(data=False, message="Only dictionary type supported.", code=settings.RetCode.ARGUMENT_ERROR)
for k,v in meta.items():
if not isinstance(v, str) and not isinstance(v, int) and not isinstance(v, float):
return get_json_result(data=False, message=f"The type is not supported: {v}", code=settings.RetCode.ARGUMENT_ERROR)
except Exception as e:
return get_json_result(data=False, message=f"Json syntax error: {e}", code=settings.RetCode.ARGUMENT_ERROR)
if not isinstance(meta, dict):
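With the tightened validation, document metadata must be a flat JSON object whose values are scalars (`str`, `int`, or `float`). For example (payloads illustrative):

```python
import json

accepted = json.loads('{"author": "Alice", "year": 2024, "score": 4.5}')
rejected = json.loads('{"tags": ["a", "b"]}')  # list value: not str/int/float

assert all(isinstance(v, (str, int, float)) for v in accepted.values())
assert not all(isinstance(v, (str, int, float)) for v in rejected.values())
```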

View File

@ -247,7 +247,10 @@ def list_tags(kb_id):
code=settings.RetCode.AUTHENTICATION_ERROR
)
tags = settings.retrievaler.all_tags(current_user.id, [kb_id])
tenants = UserTenantService.get_tenants_by_user_id(current_user.id)
tags = []
for tenant in tenants:
tags += settings.retrievaler.all_tags(tenant["tenant_id"], [kb_id])
return get_json_result(data=tags)
@ -263,7 +266,10 @@ def list_tags_from_kbs():
code=settings.RetCode.AUTHENTICATION_ERROR
)
tags = settings.retrievaler.all_tags(current_user.id, kb_ids)
tenants = UserTenantService.get_tenants_by_user_id(current_user.id)
tags = []
for tenant in tenants:
tags += settings.retrievaler.all_tags(tenant["tenant_id"], kb_ids)
return get_json_result(data=tags)
@ -345,6 +351,7 @@ def knowledge_graph(kb_id):
obj["graph"]["edges"] = sorted(filtered_edges, key=lambda x: x.get("weight", 0), reverse=True)[:128]
return get_json_result(data=obj)
@manager.route('/<kb_id>/knowledge_graph', methods=['DELETE']) # noqa: F821
@login_required
def delete_knowledge_graph(kb_id):
@ -358,3 +365,17 @@ def delete_knowledge_graph(kb_id):
settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), kb_id)
return get_json_result(data=True)
@manager.route("/get_meta", methods=["GET"]) # noqa: F821
@login_required
def get_meta():
kb_ids = request.args.get("kb_ids", "").split(",")
for kb_id in kb_ids:
if not KnowledgebaseService.accessible(kb_id, current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
return get_json_result(data=DocumentService.get_meta_by_kbs(kb_ids))

View File

@ -15,7 +15,6 @@
#
import logging
import json
import base64
from flask import request
from flask_login import login_required, current_user
from api.db.services.llm_service import LLMFactoriesService, TenantLLMService, LLMService
@ -24,7 +23,7 @@ from api.utils.api_utils import server_error_response, get_data_error_result, va
from api.db import StatusEnum, LLMType
from api.db.db_models import TenantLLM
from api.utils.api_utils import get_json_result
from api.utils.base64_image import test_image_base64
from api.utils.base64_image import test_image
from rag.llm import EmbeddingModel, ChatModel, RerankModel, CvModel, TTSModel
@ -58,6 +57,7 @@ def set_api_key():
# test if api key works
chat_passed, embd_passed, rerank_passed = False, False, False
factory = req["llm_factory"]
extra = {"provider": factory}
msg = ""
for llm in LLMService.query(fid=factory):
if not embd_passed and llm.model_type == LLMType.EMBEDDING.value:
@ -74,7 +74,7 @@ def set_api_key():
elif not chat_passed and llm.model_type == LLMType.CHAT.value:
assert factory in ChatModel, f"Chat model from {factory} is not supported yet."
mdl = ChatModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
req["api_key"], llm.llm_name, base_url=req.get("base_url"), **extra)
try:
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}],
{"temperature": 0.9, 'max_tokens': 50})
@ -82,7 +82,7 @@ def set_api_key():
raise Exception(m)
chat_passed = True
except Exception as e:
msg += f"\nFail to access model({llm.llm_name}) using this api key." + str(
msg += f"\nFail to access model({llm.fid}/{llm.llm_name}) using this api key." + str(
e)
elif not rerank_passed and llm.model_type == LLMType.RERANK:
assert factory in RerankModel, f"Re-rank model from {factory} is not supported yet."
@ -95,7 +95,7 @@ def set_api_key():
rerank_passed = True
logging.debug(f'passed model rerank {llm.llm_name}')
except Exception as e:
msg += f"\nFail to access model({llm.llm_name}) using this api key." + str(
msg += f"\nFail to access model({llm.fid}/{llm.llm_name}) using this api key." + str(
e)
if any([embd_passed, chat_passed, rerank_passed]):
msg = ''
@ -205,6 +205,7 @@ def add_llm():
msg = ""
mdl_nm = llm["llm_name"].split("___")[0]
extra = {"provider": factory}
if llm["model_type"] == LLMType.EMBEDDING.value:
assert factory in EmbeddingModel, f"Embedding model from {factory} is not supported yet."
mdl = EmbeddingModel[factory](
@ -222,7 +223,8 @@ def add_llm():
mdl = ChatModel[factory](
key=llm['api_key'],
model_name=mdl_nm,
base_url=llm["api_base"]
base_url=llm["api_base"],
**extra,
)
try:
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}], {
@ -230,7 +232,7 @@ def add_llm():
if not tc and m.find("**ERROR**:") >= 0:
raise Exception(m)
except Exception as e:
msg += f"\nFail to access model({mdl_nm})." + str(
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(
e)
elif llm["model_type"] == LLMType.RERANK:
assert factory in RerankModel, f"RE-rank model from {factory} is not supported yet."
@ -244,9 +246,9 @@ def add_llm():
if len(arr) == 0:
raise Exception("Not known.")
except KeyError:
msg += f"{factory} dose not support this model({mdl_nm})"
msg += f"{factory} dose not support this model({factory}/{mdl_nm})"
except Exception as e:
msg += f"\nFail to access model({mdl_nm})." + str(
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(
e)
elif llm["model_type"] == LLMType.IMAGE2TEXT.value:
assert factory in CvModel, f"Image to text model from {factory} is not supported yet."
@ -256,12 +258,12 @@ def add_llm():
base_url=llm["api_base"]
)
try:
image_data = base64.b64decode(test_image_base64)
image_data = test_image
m, tc = mdl.describe(image_data)
if not m and not tc:
raise Exception(m)
except Exception as e:
msg += f"\nFail to access model({mdl_nm})." + str(e)
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(e)
elif llm["model_type"] == LLMType.TTS:
assert factory in TTSModel, f"TTS model from {factory} is not supported yet."
mdl = TTSModel[factory](
@ -271,7 +273,7 @@ def add_llm():
for resp in mdl.tts("Hello~ Ragflower!"):
pass
except RuntimeError as e:
msg += f"\nFail to access model({mdl_nm})." + str(e)
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(e)
else:
# TODO: check other type of models
pass
@ -313,12 +315,12 @@ def delete_factory():
def my_llms():
try:
include_details = request.args.get('include_details', 'false').lower() == 'true'
if include_details:
res = {}
objs = TenantLLMService.query(tenant_id=current_user.id)
factories = LLMFactoriesService.query(status=StatusEnum.VALID.value)
for o in objs:
o_dict = o.to_dict()
factory_tags = None
@ -326,13 +328,13 @@ def my_llms():
if f.name == o_dict["llm_factory"]:
factory_tags = f.tags
break
if o_dict["llm_factory"] not in res:
res[o_dict["llm_factory"]] = {
"tags": factory_tags,
"llm": []
}
res[o_dict["llm_factory"]]["llm"].append({
"type": o_dict["model_type"],
"name": o_dict["llm_name"],
@ -353,14 +355,12 @@ def my_llms():
"name": o["llm_name"],
"used_token": o["used_tokens"]
})
return get_json_result(data=res)
except Exception as e:
return server_error_response(e)
@manager.route('/list', methods=['GET']) # noqa: F821
@login_required
def list_app():

View File

@ -139,7 +139,7 @@ def create(tenant_id):
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
del res["kb_ids"]
res["dataset_ids"] = req["dataset_ids"]
res["dataset_ids"] = req.get("dataset_ids", [])
res["avatar"] = res.pop("icon")
return get_result(data=res)

View File

@ -1,4 +1,4 @@
#
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -13,6 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from flask import request, jsonify
from api.db import LLMType
@ -73,11 +75,13 @@ def retrieval(tenant_id):
for c in ranks["chunks"]:
e, doc = DocumentService.get_by_id( c["doc_id"])
c.pop("vector", None)
meta = getattr(doc, 'meta_fields', {})
meta["doc_id"] = c["doc_id"]
records.append({
"content": c["content_with_weight"],
"score": c["similarity"],
"title": c["docnm_kwd"],
"metadata": doc.meta_fields
"metadata": meta
})
return jsonify({"records": records})
@ -87,4 +91,5 @@ def retrieval(tenant_id):
message='No chunk found! Check the chunk status please!',
code=settings.RetCode.NOT_FOUND
)
logging.exception(e)
return build_error_result(message=str(e), code=settings.RetCode.SERVER_ERROR)

View File

@ -38,7 +38,7 @@ from api.utils.api_utils import check_duplicate_ids, construct_json_result, get_
from rag.app.qa import beAdoc, rmPrefix
from rag.app.tag import label_question
from rag.nlp import rag_tokenizer, search
from rag.prompts import keyword_extraction, cross_languages
from rag.prompts import cross_languages, keyword_extraction
from rag.utils import rmSpace
from rag.utils.storage_factory import STORAGE_IMPL
@ -456,6 +456,18 @@ def list_docs(dataset_id, tenant_id):
required: false
default: true
description: Order in descending.
- in: query
name: create_time_from
type: integer
required: false
default: 0
description: Unix timestamp for filtering documents created after this time. 0 means no filter.
- in: query
name: create_time_to
type: integer
required: false
default: 0
description: Unix timestamp for filtering documents created before this time. 0 means no filter.
- in: header
name: Authorization
type: string
@ -517,6 +529,17 @@ def list_docs(dataset_id, tenant_id):
desc = True
docs, tol = DocumentService.get_list(dataset_id, page, page_size, orderby, desc, keywords, id, name)
create_time_from = int(request.args.get("create_time_from", 0))
create_time_to = int(request.args.get("create_time_to", 0))
if create_time_from or create_time_to:
filtered_docs = []
for doc in docs:
doc_create_time = doc.get("create_time", 0)
if (create_time_from == 0 or doc_create_time >= create_time_from) and (create_time_to == 0 or doc_create_time <= create_time_to):
filtered_docs.append(doc)
docs = filtered_docs
# rename key's name
renamed_doc_list = []
key_mapping = {

View File

@ -51,6 +51,7 @@ def create(tenant_id, chat_id):
"name": req.get("name", "New session"),
"message": [{"role": "assistant", "content": dia[0].prompt_config.get("prologue")}],
"user_id": req.get("user_id", ""),
"reference": [{}],
}
if not conv.get("name"):
return get_error_data_result(message="`name` can not be empty.")
@ -435,14 +436,38 @@ def agents_completion_openai_compatibility(tenant_id, agent_id):
)
)
# Get the last user message as the question
question = next((m["content"] for m in reversed(messages) if m["role"] == "user"), "")
if req.get("stream", True):
return Response(completionOpenAI(tenant_id, agent_id, question, session_id=req.get("id", req.get("metadata", {}).get("id", "")), stream=True), mimetype="text/event-stream")
stream = req.pop("stream", False)
if stream:
resp = Response(
completionOpenAI(
tenant_id,
agent_id,
question,
session_id=req.get("id", req.get("metadata", {}).get("id", "")),
stream=True,
**req,
),
mimetype="text/event-stream",
)
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
else:
# For non-streaming, just return the response directly
response = next(completionOpenAI(tenant_id, agent_id, question, session_id=req.get("id", req.get("metadata", {}).get("id", "")), stream=False))
response = next(
completionOpenAI(
tenant_id,
agent_id,
question,
session_id=req.get("id", req.get("metadata", {}).get("id", "")),
stream=False,
**req,
)
)
return jsonify(response)
@ -450,41 +475,38 @@ def agents_completion_openai_compatibility(tenant_id, agent_id):
@token_required
def agent_completions(tenant_id, agent_id):
req = request.json
cvs = UserCanvasService.query(user_id=tenant_id, id=agent_id)
if not cvs:
return get_error_data_result(f"You don't own the agent {agent_id}")
if req.get("session_id"):
dsl = cvs[0].dsl
if not isinstance(dsl, str):
dsl = json.dumps(dsl)
conv = API4ConversationService.query(id=req["session_id"], dialog_id=agent_id)
if not conv:
return get_error_data_result(f"You don't own the session {req['session_id']}")
# If an update to UserCanvas is detected, update the API4Conversation.dsl
sync_dsl = req.get("sync_dsl", False)
if sync_dsl is True and cvs[0].update_time > conv[0].update_time:
current_dsl = conv[0].dsl
new_dsl = json.loads(dsl)
state_fields = ["history", "messages", "path", "reference"]
states = {field: current_dsl.get(field, []) for field in state_fields}
current_dsl.update(new_dsl)
current_dsl.update(states)
API4ConversationService.update_by_id(req["session_id"], {"dsl": current_dsl})
else:
req["question"] = ""
ans = {}
if req.get("stream", True):
resp = Response(agent_completion(tenant_id, agent_id, **req), mimetype="text/event-stream")
def generate():
for answer in agent_completion(tenant_id=tenant_id, agent_id=agent_id, **req):
if isinstance(answer, str):
try:
ans = json.loads(answer[5:]) # remove "data:"
except Exception:
continue
if ans.get("event") != "message":
continue
yield answer
yield "data:[DONE]\n\n"
resp = Response(generate(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
try:
for answer in agent_completion(tenant_id, agent_id, **req):
return get_result(data=answer)
except Exception as e:
return get_error_data_result(str(e))
for answer in agent_completion(tenant_id=tenant_id, agent_id=agent_id, **req):
try:
ans = json.loads(answer[5:]) # remove "data:"
except Exception as e:
return get_result(data=f"**ERROR**: {str(e)}")
return get_result(data=ans)
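The rewritten streaming branch forwards only `message` events and terminates the stream with a `data:[DONE]` sentinel. A sketch of a client consuming that stream (URL, payload, and auth header are placeholders, not documented values):

```python
import json
import requests

with requests.post(
    "<agent_completions_url>",
    json={"question": "Hi", "stream": True},
    headers={"Authorization": "Bearer <api-key>"},
    stream=True,
) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue
        body = line[len("data:"):].strip()
        if body == "[DONE]":
            break  # end-of-stream sentinel emitted by the handler above
        event = json.loads(body)
        print(event["data"]["content"], end="")
```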
@manager.route("/chats/<chat_id>/sessions", methods=["GET"]) # noqa: F821
@ -512,16 +534,16 @@ def list_session(tenant_id, chat_id):
if "prompt" in info:
info.pop("prompt")
conv["chat_id"] = conv.pop("dialog_id")
if conv["reference"]:
ref_messages = conv["reference"]
if ref_messages:
messages = conv["messages"]
message_num = 0
while message_num < len(messages) and message_num < len(conv["reference"]):
if message_num != 0 and messages[message_num]["role"] != "user":
if message_num >= len(conv["reference"]):
break
ref_num = 0
while message_num < len(messages) and ref_num < len(ref_messages):
if messages[message_num]["role"] != "user":
chunk_list = []
if "chunks" in conv["reference"][message_num]:
chunks = conv["reference"][message_num]["chunks"]
if "chunks" in ref_messages[ref_num]:
chunks = ref_messages[ref_num]["chunks"]
for chunk in chunks:
new_chunk = {
"id": chunk.get("chunk_id", chunk.get("id")),
@ -535,6 +557,7 @@ def list_session(tenant_id, chat_id):
chunk_list.append(new_chunk)
messages[message_num]["reference"] = chunk_list
ref_num += 1
message_num += 1
del conv["reference"]
return get_result(data=convs)
@ -566,14 +589,22 @@ def list_agent_session(tenant_id, agent_id):
if "prompt" in info:
info.pop("prompt")
conv["agent_id"] = conv.pop("dialog_id")
# Fix for session listing endpoint
if conv["reference"]:
messages = conv["messages"]
message_num = 0
chunk_num = 0
# Ensure reference is a list type to prevent KeyError
if not isinstance(conv["reference"], list):
conv["reference"] = []
while message_num < len(messages):
if message_num != 0 and messages[message_num]["role"] != "user":
chunk_list = []
if "chunks" in conv["reference"][chunk_num]:
# Add boundary and type checks to prevent KeyError
if (chunk_num < len(conv["reference"]) and
conv["reference"][chunk_num] is not None and
isinstance(conv["reference"][chunk_num], dict) and
"chunks" in conv["reference"][chunk_num]):
chunks = conv["reference"][chunk_num]["chunks"]
for chunk in chunks:
new_chunk = {
@ -848,10 +879,11 @@ def begin_inputs(agent_id):
return get_error_data_result(f"Can't find agent by ID: {agent_id}")
canvas = Canvas(json.dumps(cvs.dsl), objs[0].tenant_id)
return get_result(data={
"title": cvs.title,
"avatar": cvs.avatar,
"inputs": canvas.get_component_input_form("begin")
})
return get_result(
data={
"title": cvs.title,
"avatar": cvs.avatar,
"inputs": canvas.get_component_input_form("begin"),
"prologue": canvas.get_prologue()
}
)

View File

@ -620,18 +620,35 @@ def user_register(user_id, user):
"location": "",
}
tenant_llm = []
for llm in LLMService.query(fid=settings.LLM_FACTORY):
tenant_llm.append(
{
"tenant_id": user_id,
"llm_factory": settings.LLM_FACTORY,
"llm_name": llm.llm_name,
"model_type": llm.model_type,
"api_key": settings.API_KEY,
"api_base": settings.LLM_BASE_URL,
"max_tokens": llm.max_tokens if llm.max_tokens else 8192,
}
)
seen = set()
factory_configs = []
for factory_config in [
settings.CHAT_CFG,
settings.EMBEDDING_CFG,
settings.ASR_CFG,
settings.IMAGE2TEXT_CFG,
settings.RERANK_CFG,
]:
factory_name = factory_config["factory"]
if factory_name not in seen:
seen.add(factory_name)
factory_configs.append(factory_config)
for factory_config in factory_configs:
for llm in LLMService.query(fid=factory_config["factory"]):
tenant_llm.append(
{
"tenant_id": user_id,
"llm_factory": factory_config["factory"],
"llm_name": llm.llm_name,
"model_type": llm.model_type,
"api_key": factory_config["api_key"],
"api_base": factory_config["base_url"],
"max_tokens": llm.max_tokens if llm.max_tokens else 8192,
}
)
if settings.LIGHTEN != 1:
for buildin_embedding_model in settings.BUILTIN_EMBEDDING_MODELS:
mdlnm, fid = TenantLLMService.split_model_name_and_factory(buildin_embedding_model)
@ -647,6 +664,13 @@ def user_register(user_id, user):
}
)
unique = {}
for item in tenant_llm:
key = (item["tenant_id"], item["llm_factory"], item["llm_name"])
if key not in unique:
unique[key] = item
tenant_llm = list(unique.values())
if not UserService.save(**user):
return
TenantService.insert(**tenant)
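For reference, the per-type default-model configs iterated above all share one shape; an illustrative sketch (keys mirror the code's `factory` / `api_key` / `base_url` accesses, values are placeholders):

```python
CHAT_CFG = {"factory": "OpenAI", "api_key": "<key>", "base_url": "https://api.openai.com/v1"}
EMBEDDING_CFG = {"factory": "<embedding-factory>", "api_key": "<key>", "base_url": "<url>"}
# ASR_CFG, IMAGE2TEXT_CFG and RERANK_CFG follow the same shape.
```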

View File

@ -744,6 +744,7 @@ class Dialog(DataBaseModel):
null=False,
default={"system": "", "prologue": "Hi! I'm your assistant, what can I do for you?", "parameters": [], "empty_response": "Sorry! No relevant content was found in the knowledge base!"},
)
meta_data_filter = JSONField(null=True, default={})
similarity_threshold = FloatField(default=0.2)
vector_similarity_weight = FloatField(default=0.3)
@ -1015,4 +1016,8 @@ def migrate_db():
migrate(migrator.add_column("api_4_conversation", "errors", TextField(null=True, help_text="errors")))
except Exception:
pass
try:
migrate(migrator.add_column("dialog", "meta_data_filter", JSONField(null=True, default={})))
except Exception:
pass
logging.disable(logging.NOTSET)

View File

@ -63,12 +63,44 @@ def init_superuser():
"invited_by": user_info["id"],
"role": UserTenantRole.OWNER
}
user_id = user_info["id"]
tenant_llm = []
for llm in LLMService.query(fid=settings.LLM_FACTORY):
tenant_llm.append(
{"tenant_id": user_info["id"], "llm_factory": settings.LLM_FACTORY, "llm_name": llm.llm_name,
"model_type": llm.model_type,
"api_key": settings.API_KEY, "api_base": settings.LLM_BASE_URL})
seen = set()
factory_configs = []
for factory_config in [
settings.CHAT_CFG,
settings.EMBEDDING_CFG,
settings.ASR_CFG,
settings.IMAGE2TEXT_CFG,
settings.RERANK_CFG,
]:
factory_name = factory_config["factory"]
if factory_name not in seen:
seen.add(factory_name)
factory_configs.append(factory_config)
for factory_config in factory_configs:
for llm in LLMService.query(fid=factory_config["factory"]):
tenant_llm.append(
{
"tenant_id": user_id,
"llm_factory": factory_config["factory"],
"llm_name": llm.llm_name,
"model_type": llm.model_type,
"api_key": factory_config["api_key"],
"api_base": factory_config["base_url"],
"max_tokens": llm.max_tokens if llm.max_tokens else 8192,
}
)
unique = {}
for item in tenant_llm:
key = (item["tenant_id"], item["llm_factory"], item["llm_name"])
if key not in unique:
unique[key] = item
tenant_llm = list(unique.values())
if not UserService.save(**user_info):
logging.error("can't init admin.")
@ -103,7 +135,7 @@ def init_llm_factory():
except Exception:
pass
factory_llm_infos = settings.FACTORY_LLM_INFOS
factory_llm_infos = settings.FACTORY_LLM_INFOS
for factory_llm_info in factory_llm_infos:
info = deepcopy(factory_llm_info)
llm_infos = info.pop("llm")

View File

@ -16,7 +16,6 @@
import json
import logging
import time
import traceback
from uuid import uuid4
from agent.canvas import Canvas
from api.db import TenantPermission
@ -54,12 +53,12 @@ class UserCanvasService(CommonService):
agents = agents.paginate(page_number, items_per_page)
return list(agents.dicts())
@classmethod
@DB.connection_context()
def get_by_tenant_id(cls, pid):
try:
fields = [
cls.model.id,
cls.model.avatar,
@ -83,7 +82,7 @@ class UserCanvasService(CommonService):
except Exception as e:
logging.exception(e)
return False, None
@classmethod
@DB.connection_context()
def get_by_tenant_ids(cls, joined_tenant_ids, user_id,
@ -103,14 +102,14 @@ class UserCanvasService(CommonService):
]
if keywords:
agents = cls.model.select(*fields).join(User, on=(cls.model.user_id == User.id)).where(
((cls.model.user_id.in_(joined_tenant_ids) & (cls.model.permission ==
((cls.model.user_id.in_(joined_tenant_ids) & (cls.model.permission ==
TenantPermission.TEAM.value)) | (
cls.model.user_id == user_id)),
(fn.LOWER(cls.model.title).contains(keywords.lower()))
)
else:
agents = cls.model.select(*fields).join(User, on=(cls.model.user_id == User.id)).where(
((cls.model.user_id.in_(joined_tenant_ids) & (cls.model.permission ==
((cls.model.user_id.in_(joined_tenant_ids) & (cls.model.permission ==
TenantPermission.TEAM.value)) | (
cls.model.user_id == user_id))
)
@ -122,9 +121,21 @@ class UserCanvasService(CommonService):
agents = agents.paginate(page_number, items_per_page)
return list(agents.dicts()), count
@classmethod
@DB.connection_context()
def accessible(cls, canvas_id, tenant_id):
from api.db.services.user_service import UserTenantService
e, c = UserCanvasService.get_by_tenant_id(canvas_id)
if not e:
return False
tids = [t.tenant_id for t in UserTenantService.query(user_id=tenant_id)]
if c["user_id"] != canvas_id and c["user_id"] not in tids:
return False
return True
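A dependency-free restatement of the rule `accessible` implements (parameter names are ours): the caller may act on a canvas if they own it, or if its owner belongs to a tenant the caller has joined.

```python
def accessible(owner_id: str, caller_id: str, caller_tenant_ids: set[str]) -> bool:
    # Owner access, or team access via a shared tenant.
    return owner_id == caller_id or owner_id in caller_tenant_ids
```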
def completion(tenant_id, agent_id, session_id=None, **kwargs):
query = kwargs.get("query", "")
query = kwargs.get("query", "") or kwargs.get("question", "")
files = kwargs.get("files", [])
inputs = kwargs.get("inputs", {})
user_id = kwargs.get("user_id", "")
@ -173,223 +184,105 @@ def completion(tenant_id, agent_id, session_id=None, **kwargs):
conv.message.append({"role": "assistant", "content": txt, "created_at": time.time(), "id": message_id})
conv.reference = canvas.get_reference()
conv.errors = canvas.error
API4ConversationService.append_message(conv.id, conv.to_dict())
conv.dsl = str(canvas)
conv = conv.to_dict()
API4ConversationService.append_message(conv["id"], conv)
def completionOpenAI(tenant_id, agent_id, question, session_id=None, stream=True, **kwargs):
"""Main function for OpenAI-compatible completions, structured similarly to the completion function."""
tiktokenenc = tiktoken.get_encoding("cl100k_base")
e, cvs = UserCanvasService.get_by_id(agent_id)
if not e:
yield get_data_openai(
id=session_id,
model=agent_id,
content="**ERROR**: Agent not found."
)
return
if cvs.user_id != tenant_id:
yield get_data_openai(
id=session_id,
model=agent_id,
content="**ERROR**: You do not own the agent"
)
return
if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
canvas = Canvas(cvs.dsl, tenant_id)
canvas.reset()
message_id = str(uuid4())
# Handle new session creation
if not session_id:
query = canvas.get_preset_param()
if query:
for ele in query:
if not ele["optional"]:
if not kwargs.get(ele["key"]):
yield get_data_openai(
id=None,
model=agent_id,
content=f"`{ele['key']}` is required",
completion_tokens=len(tiktokenenc.encode(f"`{ele['key']}` is required")),
prompt_tokens=len(tiktokenenc.encode(question if question else ""))
)
return
ele["value"] = kwargs[ele["key"]]
if ele["optional"]:
if kwargs.get(ele["key"]):
ele["value"] = kwargs[ele['key']]
else:
if "value" in ele:
ele.pop("value")
cvs.dsl = json.loads(str(canvas))
session_id = get_uuid()
conv = {
"id": session_id,
"dialog_id": cvs.id,
"user_id": kwargs.get("user_id", "") if isinstance(kwargs, dict) else "",
"message": [{"role": "assistant", "content": canvas.get_prologue(), "created_at": time.time()}],
"source": "agent",
"dsl": cvs.dsl
}
canvas.messages.append({"role": "user", "content": question, "id": message_id})
canvas.add_user_input(question)
API4ConversationService.save(**conv)
conv = API4Conversation(**conv)
if not conv.message:
conv.message = []
conv.message.append({
"role": "user",
"content": question,
"id": message_id
})
if not conv.reference:
conv.reference = []
conv.reference.append({"chunks": [], "doc_aggs": []})
# Handle existing session
else:
e, conv = API4ConversationService.get_by_id(session_id)
if not e:
yield get_data_openai(
id=session_id,
model=agent_id,
content="**ERROR**: Session not found!"
)
return
canvas = Canvas(json.dumps(conv.dsl), tenant_id)
canvas.messages.append({"role": "user", "content": question, "id": message_id})
canvas.add_user_input(question)
if not conv.message:
conv.message = []
conv.message.append({
"role": "user",
"content": question,
"id": message_id
})
if not conv.reference:
conv.reference = []
conv.reference.append({"chunks": [], "doc_aggs": []})
# Process request based on stream mode
final_ans = {"reference": [], "content": ""}
prompt_tokens = len(tiktokenenc.encode(str(question)))
user_id = kwargs.get("user_id", "")
if stream:
completion_tokens = 0
try:
completion_tokens = 0
for ans in canvas.run(stream=True, bypass_begin=True):
if ans.get("running_status"):
completion_tokens += len(tiktokenenc.encode(ans.get("content", "")))
yield "data: " + json.dumps(
get_data_openai(
id=session_id,
model=agent_id,
content=ans["content"],
object="chat.completion.chunk",
completion_tokens=completion_tokens,
prompt_tokens=prompt_tokens
),
ensure_ascii=False
) + "\n\n"
for ans in completion(
tenant_id=tenant_id,
agent_id=agent_id,
session_id=session_id,
query=question,
user_id=user_id,
**kwargs
):
if isinstance(ans, str):
try:
ans = json.loads(ans[5:]) # remove "data:"
except Exception as e:
logging.exception(f"Agent OpenAI-Compatible completionOpenAI parse answer failed: {e}")
continue
if ans.get("event") != "message":
continue
for k in ans.keys():
final_ans[k] = ans[k]
completion_tokens += len(tiktokenenc.encode(final_ans.get("content", "")))
content_piece = ans["data"]["content"]
completion_tokens += len(tiktokenenc.encode(content_piece))
yield "data: " + json.dumps(
get_data_openai(
id=session_id,
id=session_id or str(uuid4()),
model=agent_id,
content=final_ans["content"],
object="chat.completion.chunk",
finish_reason="stop",
content=content_piece,
prompt_tokens=prompt_tokens,
completion_tokens=completion_tokens,
prompt_tokens=prompt_tokens
stream=True
),
ensure_ascii=False
) + "\n\n"
# Update conversation
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "created_at": time.time(), "id": message_id})
canvas.history.append(("assistant", final_ans["content"]))
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
conv.dsl = json.loads(str(canvas))
API4ConversationService.append_message(conv.id, conv.to_dict())
yield "data: [DONE]\n\n"
except Exception as e:
traceback.print_exc()
conv.dsl = json.loads(str(canvas))
API4ConversationService.append_message(conv.id, conv.to_dict())
yield "data: " + json.dumps(
get_data_openai(
id=session_id,
id=session_id or str(uuid4()),
model=agent_id,
content="**ERROR**: " + str(e),
content=f"**ERROR**: {str(e)}",
finish_reason="stop",
completion_tokens=len(tiktokenenc.encode("**ERROR**: " + str(e))),
prompt_tokens=prompt_tokens
prompt_tokens=prompt_tokens,
completion_tokens=len(tiktokenenc.encode(f"**ERROR**: {str(e)}")),
stream=True
),
ensure_ascii=False
) + "\n\n"
yield "data: [DONE]\n\n"
else: # Non-streaming mode
else:
try:
all_answer_content = ""
for answer in canvas.run(stream=False, bypass_begin=True):
if answer.get("running_status"):
all_content = ""
for ans in completion(
tenant_id=tenant_id,
agent_id=agent_id,
session_id=session_id,
query=question,
user_id=user_id,
**kwargs
):
if isinstance(ans, str):
ans = json.loads(ans[5:])
if ans.get("event") != "message":
continue
final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
final_ans["reference"] = answer.get("reference", [])
all_answer_content += final_ans["content"]
final_ans["content"] = all_answer_content
# Update conversation
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "created_at": time.time(), "id": message_id})
canvas.history.append(("assistant", final_ans["content"]))
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
conv.dsl = json.loads(str(canvas))
API4ConversationService.append_message(conv.id, conv.to_dict())
# Return the response in OpenAI format
all_content += ans["data"]["content"]
completion_tokens = len(tiktokenenc.encode(all_content))
yield get_data_openai(
id=session_id,
id=session_id or str(uuid4()),
model=agent_id,
content=final_ans["content"],
finish_reason="stop",
completion_tokens=len(tiktokenenc.encode(final_ans["content"])),
prompt_tokens=prompt_tokens,
param=canvas.get_preset_param() # Added param info like in completion
)
except Exception as e:
traceback.print_exc()
conv.dsl = json.loads(str(canvas))
API4ConversationService.append_message(conv.id, conv.to_dict())
yield get_data_openai(
id=session_id,
model=agent_id,
content="**ERROR**: " + str(e),
completion_tokens=completion_tokens,
content=all_content,
finish_reason="stop",
completion_tokens=len(tiktokenenc.encode("**ERROR**: " + str(e))),
prompt_tokens=prompt_tokens
param=None
)
except Exception as e:
yield get_data_openai(
id=session_id or str(uuid4()),
model=agent_id,
prompt_tokens=prompt_tokens,
completion_tokens=len(tiktokenenc.encode(f"**ERROR**: {str(e)}")),
content=f"**ERROR**: {str(e)}",
finish_reason="stop",
param=None
)

View File

@ -23,12 +23,14 @@ from functools import partial
from timeit import default_timer as timer
from langfuse import Langfuse
from peewee import fn
from agentic_reasoning import DeepResearcher
from api import settings
from api.db import LLMType, ParserType, StatusEnum
from api.db.db_models import DB, Dialog
from api.db.services.common_service import CommonService
from api.db.services.document_service import DocumentService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.langfuse_service import TenantLangfuseService
from api.db.services.llm_service import LLMBundle, TenantLLMService
@ -37,6 +39,7 @@ from rag.app.resume import forbidden_select_fields4resume
from rag.app.tag import label_question
from rag.nlp.search import index_name
from rag.prompts import chunks_format, citation_prompt, cross_languages, full_question, kb_prompt, keyword_extraction, message_fit_in
from rag.prompts.prompts import gen_meta_filter
from rag.utils import num_tokens_from_string, rmSpace
from rag.utils.tavily_conn import Tavily
@ -96,6 +99,67 @@ class DialogService(CommonService):
return list(chats.dicts())
@classmethod
@DB.connection_context()
def get_by_tenant_ids(cls, joined_tenant_ids, user_id, page_number, items_per_page, orderby, desc, keywords, parser_id=None):
from api.db.db_models import User
fields = [
cls.model.id,
cls.model.tenant_id,
cls.model.name,
cls.model.description,
cls.model.language,
cls.model.llm_id,
cls.model.llm_setting,
cls.model.prompt_type,
cls.model.prompt_config,
cls.model.similarity_threshold,
cls.model.vector_similarity_weight,
cls.model.top_n,
cls.model.top_k,
cls.model.do_refer,
cls.model.rerank_id,
cls.model.kb_ids,
cls.model.icon,
cls.model.status,
User.nickname,
User.avatar.alias("tenant_avatar"),
cls.model.update_time,
cls.model.create_time,
]
if keywords:
dialogs = (
cls.model.select(*fields)
.join(User, on=(cls.model.tenant_id == User.id))
.where(
(cls.model.tenant_id.in_(joined_tenant_ids) | (cls.model.tenant_id == user_id)) & (cls.model.status == StatusEnum.VALID.value),
(fn.LOWER(cls.model.name).contains(keywords.lower())),
)
)
else:
dialogs = (
cls.model.select(*fields)
.join(User, on=(cls.model.tenant_id == User.id))
.where(
(cls.model.tenant_id.in_(joined_tenant_ids) | (cls.model.tenant_id == user_id)) & (cls.model.status == StatusEnum.VALID.value),
)
)
if parser_id:
dialogs = dialogs.where(cls.model.parser_id == parser_id)
if desc:
dialogs = dialogs.order_by(cls.model.getter_by(orderby).desc())
else:
dialogs = dialogs.order_by(cls.model.getter_by(orderby).asc())
count = dialogs.count()
if page_number and items_per_page:
dialogs = dialogs.paginate(page_number, items_per_page)
return list(dialogs.dicts()), count
def chat_solo(dialog, messages, stream=True):
if TenantLLMService.llm_id2llm_type(dialog.llm_id) == "image2text":
chat_mdl = LLMBundle(dialog.tenant_id, LLMType.IMAGE2TEXT, dialog.llm_id)
@ -189,6 +253,46 @@ def repair_bad_citation_formats(answer: str, kbinfos: dict, idx: set):
return answer, idx
def meta_filter(metas: dict, filters: list[dict]):
doc_ids = []
def filter_out(v2docs, operator, value):
nonlocal doc_ids
for input,docids in v2docs.items():
try:
input = float(input)
value = float(value)
except Exception:
input = str(input)
value = str(value)
for conds in [
(operator == "contains", str(value).lower() in str(input).lower()),
(operator == "not contains", str(value).lower() not in str(input).lower()),
(operator == "start with", str(input).lower().startswith(str(value).lower())),
(operator == "end with", str(input).lower().endswith(str(value).lower())),
(operator == "empty", not input),
(operator == "not empty", input),
(operator == "=", input == value),
(operator == "", input != value),
(operator == ">", input > value),
(operator == "<", input < value),
(operator == "", input >= value),
(operator == "", input <= value),
]:
try:
if all(conds):
doc_ids.extend(docids)
except Exception:
pass
for k, v2docs in metas.items():
for f in filters:
if k != f["key"]:
continue
filter_out(v2docs, f["op"], f["value"])
return doc_ids
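A worked example of `meta_filter` with illustrative data, assuming the comparison operators are the Unicode symbols restored above ("≠", "≥", "≤"):

```python
# Keep documents whose "year" is at least 2023.
metas = {"year": {"2022": ["d1"], "2023": ["d2"], "2024": ["d3"]}}
filters = [{"key": "year", "op": "≥", "value": 2023}]
assert meta_filter(metas, filters) == ["d2", "d3"]
```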
def chat(dialog, messages, stream=True, **kwargs):
assert messages[-1]["role"] == "user", "The last content of this conversation is not from user."
if not dialog.kb_ids and not dialog.prompt_config.get("tavily_api_key"):
@ -208,12 +312,14 @@ def chat(dialog, messages, stream=True, **kwargs):
check_llm_ts = timer()
langfuse_tracer = None
trace_context = {}
langfuse_keys = TenantLangfuseService.filter_by_tenant(tenant_id=dialog.tenant_id)
if langfuse_keys:
langfuse = Langfuse(public_key=langfuse_keys.public_key, secret_key=langfuse_keys.secret_key, host=langfuse_keys.host)
if langfuse.auth_check():
langfuse_tracer = langfuse
langfuse.trace = langfuse_tracer.trace(name=f"{dialog.name}-{llm_model_config['llm_name']}")
trace_id = langfuse_tracer.create_trace_id()
trace_context = {"trace_id": trace_id}
check_langfuse_tracer_ts = timer()
kbs, embd_mdl, rerank_mdl, chat_mdl, tts_mdl = get_models(dialog)
@ -224,9 +330,10 @@ def chat(dialog, messages, stream=True, **kwargs):
retriever = settings.retrievaler
questions = [m["content"] for m in messages if m["role"] == "user"][-3:]
attachments = kwargs["doc_ids"].split(",") if "doc_ids" in kwargs else None
attachments = kwargs["doc_ids"].split(",") if "doc_ids" in kwargs else []
if "doc_ids" in messages[-1]:
attachments = messages[-1]["doc_ids"]
prompt_config = dialog.prompt_config
field_map = KnowledgebaseService.get_field_map(dialog.kb_ids)
# try to use sql if field mapping is good to go
@ -253,6 +360,14 @@ def chat(dialog, messages, stream=True, **kwargs):
if prompt_config.get("cross_languages"):
questions = [cross_languages(dialog.tenant_id, dialog.llm_id, questions[0], prompt_config["cross_languages"])]
if dialog.meta_data_filter:
metas = DocumentService.get_meta_by_kbs(dialog.kb_ids)
if dialog.meta_data_filter.get("method") == "auto":
filters = gen_meta_filter(chat_mdl, metas, questions[-1])
attachments.extend(meta_filter(metas, filters))
elif dialog.meta_data_filter.get("method") == "manual":
attachments.extend(meta_filter(metas, dialog.meta_data_filter["manual"]))
if prompt_config.get("keyword", False):
questions[-1] += keyword_extraction(chat_mdl, questions[-1])
@ -400,17 +515,19 @@ def chat(dialog, messages, stream=True, **kwargs):
f" - Token speed: {int(tk_num / (generate_result_time_cost / 1000.0))}/s"
)
langfuse_output = "\n" + re.sub(r"^.*?(### Query:.*)", r"\1", prompt, flags=re.DOTALL)
langfuse_output = {"time_elapsed:": re.sub(r"\n", " \n", langfuse_output), "created_at": time.time()}
# Add a condition check to call the end method only if langfuse_tracer exists
if langfuse_tracer and "langfuse_generation" in locals():
langfuse_generation.end(output=langfuse_output)
langfuse_output = "\n" + re.sub(r"^.*?(### Query:.*)", r"\1", prompt, flags=re.DOTALL)
langfuse_output = {"time_elapsed:": re.sub(r"\n", " \n", langfuse_output), "created_at": time.time()}
langfuse_generation.update(output=langfuse_output)
langfuse_generation.end()
return {"answer": think + answer, "reference": refs, "prompt": re.sub(r"\n", " \n", prompt), "created_at": time.time()}
if langfuse_tracer:
langfuse_generation = langfuse_tracer.trace.generation(name="chat", model=llm_model_config["llm_name"], input={"prompt": prompt, "prompt4citation": prompt4citation, "messages": msg})
langfuse_generation = langfuse_tracer.start_generation(
trace_context=trace_context, name="chat", model=llm_model_config["llm_name"], input={"prompt": prompt, "prompt4citation": prompt4citation, "messages": msg}
)
if stream:
last_ans = ""

View File

@ -243,7 +243,7 @@ class DocumentService(CommonService):
from api.db.services.task_service import TaskService
cls.clear_chunk_num(doc.id)
try:
TaskService.filter_delete(Task.doc_id == doc.id)
TaskService.filter_delete([Task.doc_id == doc.id])
page = 0
page_size = 1000
all_chunk_ids = []
@ -574,6 +574,25 @@ class DocumentService(CommonService):
def update_meta_fields(cls, doc_id, meta_fields):
return cls.update_by_id(doc_id, {"meta_fields": meta_fields})
@classmethod
@DB.connection_context()
def get_meta_by_kbs(cls, kb_ids):
fields = [
cls.model.id,
cls.model.meta_fields,
]
meta = {}
for r in cls.model.select(*fields).where(cls.model.kb_id.in_(kb_ids)):
doc_id = r.id
for k, v in r.meta_fields.items():
if k not in meta:
meta[k] = {}
v = str(v)
if v not in meta[k]:
meta[k][v] = []
meta[k][v].append(doc_id)
return meta
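A hand-rolled equivalent of the loop above (hypothetical rows standing in for the query results), showing the inverted index this method builds: each metadata value is stringified and mapped back to the documents carrying it, which is exactly the shape `meta_filter` consumes.
```python
# Hypothetical rows standing in for the Peewee query results.
rows = [("doc1", {"year": 2024, "team": "nlp"}), ("doc2", {"year": 2024})]

meta = {}
for doc_id, meta_fields in rows:
    for k, v in meta_fields.items():
        meta.setdefault(k, {}).setdefault(str(v), []).append(doc_id)

print(meta)  # {'year': {'2024': ['doc1', 'doc2']}, 'team': {'nlp': ['doc1']}}
```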
@classmethod
@DB.connection_context()
def update_progress(cls):

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import inspect
import logging
import re
from functools import partial
@ -141,6 +142,7 @@ class TenantLLMService(CommonService):
@DB.connection_context()
def model_instance(cls, tenant_id, llm_type, llm_name=None, lang="Chinese", **kwargs):
model_config = TenantLLMService.get_model_config(tenant_id, llm_type, llm_name)
kwargs.update({"provider": model_config["llm_factory"]})
if llm_type == LLMType.EMBEDDING.value:
if model_config["llm_factory"] not in EmbeddingModel:
return
@ -217,7 +219,7 @@ class TenantLLMService(CommonService):
return list(objs)
@staticmethod
def llm_id2llm_type(llm_id: str) ->str|None:
def llm_id2llm_type(llm_id: str) -> str | None:
llm_id, *_ = TenantLLMService.split_model_name_and_factory(llm_id)
llm_factories = settings.FACTORY_LLM_INFOS
for llm_factory in llm_factories:
@ -225,6 +227,15 @@ class TenantLLMService(CommonService):
if llm_id == llm["llm_name"]:
return llm["model_type"].split(",")[-1]
for llm in LLMService.query(llm_name=llm_id):
return llm.model_type
llm = TenantLLMService.get_or_none(llm_name=llm_id)
if llm:
return llm.model_type
for llm in TenantLLMService.query(llm_name=llm_id):
return llm.model_type
class LLMBundle:
def __init__(self, tenant_id, llm_type, llm_name=None, lang="Chinese", **kwargs):
@ -240,13 +251,13 @@ class LLMBundle:
self.verbose_tool_use = kwargs.get("verbose_tool_use")
langfuse_keys = TenantLangfuseService.filter_by_tenant(tenant_id=tenant_id)
self.langfuse = None
if langfuse_keys:
langfuse = Langfuse(public_key=langfuse_keys.public_key, secret_key=langfuse_keys.secret_key, host=langfuse_keys.host)
if langfuse.auth_check():
self.langfuse = langfuse
self.trace = self.langfuse.trace(name=f"{self.llm_type}-{self.llm_name}")
else:
self.langfuse = None
trace_id = self.langfuse.create_trace_id()
self.trace_context = {"trace_id": trace_id}
def bind_tools(self, toolcall_session, tools):
if not self.is_tools:
@ -256,7 +267,7 @@ class LLMBundle:
def encode(self, texts: list):
if self.langfuse:
generation = self.trace.generation(name="encode", model=self.llm_name, input={"texts": texts})
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="encode", model=self.llm_name, input={"texts": texts})
embeddings, used_tokens = self.mdl.encode(texts)
llm_name = getattr(self, "llm_name", None)
@ -264,13 +275,14 @@ class LLMBundle:
logging.error("LLMBundle.encode can't update token usage for {}/EMBEDDING used_tokens: {}".format(self.tenant_id, used_tokens))
if self.langfuse:
generation.end(usage_details={"total_tokens": used_tokens})
generation.update(usage_details={"total_tokens": used_tokens})
generation.end()
return embeddings, used_tokens
def encode_queries(self, query: str):
if self.langfuse:
generation = self.trace.generation(name="encode_queries", model=self.llm_name, input={"query": query})
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="encode_queries", model=self.llm_name, input={"query": query})
emd, used_tokens = self.mdl.encode_queries(query)
llm_name = getattr(self, "llm_name", None)
@ -278,65 +290,70 @@ class LLMBundle:
logging.error("LLMBundle.encode_queries can't update token usage for {}/EMBEDDING used_tokens: {}".format(self.tenant_id, used_tokens))
if self.langfuse:
generation.end(usage_details={"total_tokens": used_tokens})
generation.update(usage_details={"total_tokens": used_tokens})
generation.end()
return emd, used_tokens
def similarity(self, query: str, texts: list):
if self.langfuse:
generation = self.trace.generation(name="similarity", model=self.llm_name, input={"query": query, "texts": texts})
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="similarity", model=self.llm_name, input={"query": query, "texts": texts})
sim, used_tokens = self.mdl.similarity(query, texts)
if not TenantLLMService.increase_usage(self.tenant_id, self.llm_type, used_tokens):
logging.error("LLMBundle.similarity can't update token usage for {}/RERANK used_tokens: {}".format(self.tenant_id, used_tokens))
if self.langfuse:
generation.end(usage_details={"total_tokens": used_tokens})
generation.update(usage_details={"total_tokens": used_tokens})
generation.end()
return sim, used_tokens
def describe(self, image, max_tokens=300):
if self.langfuse:
generation = self.trace.generation(name="describe", metadata={"model": self.llm_name})
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="describe", metadata={"model": self.llm_name})
txt, used_tokens = self.mdl.describe(image)
if not TenantLLMService.increase_usage(self.tenant_id, self.llm_type, used_tokens):
logging.error("LLMBundle.describe can't update token usage for {}/IMAGE2TEXT used_tokens: {}".format(self.tenant_id, used_tokens))
if self.langfuse:
generation.end(output={"output": txt}, usage_details={"total_tokens": used_tokens})
generation.update(output={"output": txt}, usage_details={"total_tokens": used_tokens})
generation.end()
return txt
def describe_with_prompt(self, image, prompt):
if self.langfuse:
generation = self.trace.generation(name="describe_with_prompt", metadata={"model": self.llm_name, "prompt": prompt})
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="describe_with_prompt", metadata={"model": self.llm_name, "prompt": prompt})
txt, used_tokens = self.mdl.describe_with_prompt(image, prompt)
if not TenantLLMService.increase_usage(self.tenant_id, self.llm_type, used_tokens):
logging.error("LLMBundle.describe can't update token usage for {}/IMAGE2TEXT used_tokens: {}".format(self.tenant_id, used_tokens))
if self.langfuse:
generation.end(output={"output": txt}, usage_details={"total_tokens": used_tokens})
generation.update(output={"output": txt}, usage_details={"total_tokens": used_tokens})
generation.end()
return txt
def transcription(self, audio):
if self.langfuse:
generation = self.trace.generation(name="transcription", metadata={"model": self.llm_name})
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="transcription", metadata={"model": self.llm_name})
txt, used_tokens = self.mdl.transcription(audio)
if not TenantLLMService.increase_usage(self.tenant_id, self.llm_type, used_tokens):
logging.error("LLMBundle.transcription can't update token usage for {}/SEQUENCE2TXT used_tokens: {}".format(self.tenant_id, used_tokens))
if self.langfuse:
generation.end(output={"output": txt}, usage_details={"total_tokens": used_tokens})
generation.update(output={"output": txt}, usage_details={"total_tokens": used_tokens})
generation.end()
return txt
def tts(self, text: str) -> Generator[bytes, None, None]:
if self.langfuse:
span = self.trace.span(name="tts", input={"text": text})
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="tts", input={"text": text})
for chunk in self.mdl.tts(text):
if isinstance(chunk, int):
@ -346,7 +363,7 @@ class LLMBundle:
yield chunk
if self.langfuse:
span.end()
generation.end()
def _remove_reasoning_content(self, txt: str) -> str:
first_think_start = txt.find("<think>")
@ -361,16 +378,34 @@ class LLMBundle:
return txt
return txt[last_think_end + len("</think>") :]
@staticmethod
def _clean_param(chat_partial, **kwargs):
func = chat_partial.func
sig = inspect.signature(func)
keyword_args = []
support_var_args = False
for param in sig.parameters.values():
if param.kind == inspect.Parameter.VAR_KEYWORD or param.kind == inspect.Parameter.VAR_POSITIONAL:
support_var_args = True
elif param.kind == inspect.Parameter.KEYWORD_ONLY:
keyword_args.append(param.name)
def chat(self, system: str, history: list, gen_conf: dict={}, **kwargs) -> str:
use_kwargs = kwargs
if not support_var_args:
use_kwargs = {k: v for k, v in kwargs.items() if k in keyword_args}
return use_kwargs
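`_clean_param` inspects the wrapped model method and drops any keyword argument it cannot accept, unless the method declares `*args`/`**kwargs`. A self-contained sketch of the same inspection logic, using a stand-in function rather than a real model class:
```python
import inspect
from functools import partial

def fake_chat(system, history, gen_conf, *, temperature=0.7):
    # Stand-in for a model's chat method; accepts one keyword-only argument.
    return f"temperature={temperature}"

chat_partial = partial(fake_chat, "sys", [], {})
sig = inspect.signature(chat_partial.func)
keyword_args = [p.name for p in sig.parameters.values() if p.kind == inspect.Parameter.KEYWORD_ONLY]
support_var_args = any(
    p.kind in (inspect.Parameter.VAR_KEYWORD, inspect.Parameter.VAR_POSITIONAL)
    for p in sig.parameters.values()
)

kwargs = {"temperature": 0.2, "stream": True}  # "stream" is not accepted by fake_chat
use_kwargs = kwargs if support_var_args else {k: v for k, v in kwargs.items() if k in keyword_args}
print(chat_partial(**use_kwargs))  # temperature=0.2, with no TypeError for "stream"
```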
def chat(self, system: str, history: list, gen_conf: dict = {}, **kwargs) -> str:
if self.langfuse:
generation = self.trace.generation(name="chat", model=self.llm_name, input={"system": system, "history": history})
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="chat", model=self.llm_name, input={"system": system, "history": history})
chat_partial = partial(self.mdl.chat, system, history, gen_conf)
if self.is_tools and self.mdl.is_tools:
chat_partial = partial(self.mdl.chat_with_tools, system, history, gen_conf)
txt, used_tokens = chat_partial(**kwargs)
use_kwargs = self._clean_param(chat_partial, **kwargs)
txt, used_tokens = chat_partial(**use_kwargs)
txt = self._remove_reasoning_content(txt)
if not self.verbose_tool_use:
@ -380,25 +415,27 @@ class LLMBundle:
logging.error("LLMBundle.chat can't update token usage for {}/CHAT llm_name: {}, used_tokens: {}".format(self.tenant_id, self.llm_name, used_tokens))
if self.langfuse:
generation.end(output={"output": txt}, usage_details={"total_tokens": used_tokens})
generation.update(output={"output": txt}, usage_details={"total_tokens": used_tokens})
generation.end()
return txt
def chat_streamly(self, system: str, history: list, gen_conf: dict={}, **kwargs):
def chat_streamly(self, system: str, history: list, gen_conf: dict = {}, **kwargs):
if self.langfuse:
generation = self.trace.generation(name="chat_streamly", model=self.llm_name, input={"system": system, "history": history})
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="chat_streamly", model=self.llm_name, input={"system": system, "history": history})
ans = ""
chat_partial = partial(self.mdl.chat_streamly, system, history, gen_conf)
total_tokens = 0
if self.is_tools and self.mdl.is_tools:
chat_partial = partial(self.mdl.chat_streamly_with_tools, system, history, gen_conf)
for txt in chat_partial(**kwargs):
use_kwargs = self._clean_param(chat_partial, **kwargs)
for txt in chat_partial(**use_kwargs):
if isinstance(txt, int):
total_tokens = txt
if self.langfuse:
generation.end(output={"output": ans})
generation.update(output={"output": ans})
generation.end()
break
if txt.endswith("</think>"):

View File

@ -38,6 +38,11 @@ EMBEDDING_MDL = ""
RERANK_MDL = ""
ASR_MDL = ""
IMAGE2TEXT_MDL = ""
CHAT_CFG = ""
EMBEDDING_CFG = ""
RERANK_CFG = ""
ASR_CFG = ""
IMAGE2TEXT_CFG = ""
API_KEY = None
PARSERS = None
HOST_IP = None
@ -70,26 +75,26 @@ REGISTER_ENABLED = 1
# sandbox-executor-manager
SANDBOX_ENABLED = 0
SANDBOX_HOST = None
STRONG_TEST_COUNT = int(os.environ.get("STRONG_TEST_COUNT", "8"))
BUILTIN_EMBEDDING_MODELS = ["BAAI/bge-large-zh-v1.5@BAAI", "maidalun1020/bce-embedding-base_v1@Youdao"]
def get_or_create_secret_key():
secret_key = os.environ.get("RAGFLOW_SECRET_KEY")
if secret_key and len(secret_key) >= 32:
return secret_key
# Check if there's a configured secret key
configured_key = get_base_config(RAG_FLOW_SERVICE_NAME, {}).get("secret_key")
if configured_key and configured_key != str(date.today()) and len(configured_key) >= 32:
return configured_key
# Generate a new secure key and warn about it
import logging
new_key = secrets.token_hex(32)
logging.warning(
"SECURITY WARNING: Using auto-generated SECRET_KEY. "
f"Generated key: {new_key}"
)
logging.warning(f"SECURITY WARNING: Using auto-generated SECRET_KEY. Generated key: {new_key}")
return new_key
@ -98,10 +103,10 @@ def init_settings():
LIGHTEN = int(os.environ.get("LIGHTEN", "0"))
DATABASE_TYPE = os.getenv("DB_TYPE", "mysql")
DATABASE = decrypt_database_config(name=DATABASE_TYPE)
LLM = get_base_config("user_default_llm", {})
LLM_DEFAULT_MODELS = LLM.get("default_models", {})
LLM_FACTORY = LLM.get("factory")
LLM_BASE_URL = LLM.get("base_url")
LLM = get_base_config("user_default_llm", {}) or {}
LLM_DEFAULT_MODELS = LLM.get("default_models", {}) or {}
LLM_FACTORY = LLM.get("factory", "") or ""
LLM_BASE_URL = LLM.get("base_url", "") or ""
try:
REGISTER_ENABLED = int(os.environ.get("REGISTER_ENABLED", "1"))
except Exception:
@ -114,29 +119,34 @@ def init_settings():
FACTORY_LLM_INFOS = []
global CHAT_MDL, EMBEDDING_MDL, RERANK_MDL, ASR_MDL, IMAGE2TEXT_MDL
global CHAT_CFG, EMBEDDING_CFG, RERANK_CFG, ASR_CFG, IMAGE2TEXT_CFG
if not LIGHTEN:
EMBEDDING_MDL = BUILTIN_EMBEDDING_MODELS[0]
if LLM_DEFAULT_MODELS:
CHAT_MDL = LLM_DEFAULT_MODELS.get("chat_model", CHAT_MDL)
EMBEDDING_MDL = LLM_DEFAULT_MODELS.get("embedding_model", EMBEDDING_MDL)
RERANK_MDL = LLM_DEFAULT_MODELS.get("rerank_model", RERANK_MDL)
ASR_MDL = LLM_DEFAULT_MODELS.get("asr_model", ASR_MDL)
IMAGE2TEXT_MDL = LLM_DEFAULT_MODELS.get("image2text_model", IMAGE2TEXT_MDL)
# factory can be specified in the config name with "@". LLM_FACTORY will be used if not specified
CHAT_MDL = CHAT_MDL + (f"@{LLM_FACTORY}" if "@" not in CHAT_MDL and CHAT_MDL != "" else "")
EMBEDDING_MDL = EMBEDDING_MDL + (f"@{LLM_FACTORY}" if "@" not in EMBEDDING_MDL and EMBEDDING_MDL != "" else "")
RERANK_MDL = RERANK_MDL + (f"@{LLM_FACTORY}" if "@" not in RERANK_MDL and RERANK_MDL != "" else "")
ASR_MDL = ASR_MDL + (f"@{LLM_FACTORY}" if "@" not in ASR_MDL and ASR_MDL != "" else "")
IMAGE2TEXT_MDL = IMAGE2TEXT_MDL + (f"@{LLM_FACTORY}" if "@" not in IMAGE2TEXT_MDL and IMAGE2TEXT_MDL != "" else "")
global API_KEY, PARSERS, HOST_IP, HOST_PORT, SECRET_KEY
API_KEY = LLM.get("api_key")
PARSERS = LLM.get(
"parsers", "naive:General,qa:Q&A,resume:Resume,manual:Manual,table:Table,paper:Paper,book:Book,laws:Laws,presentation:Presentation,picture:Picture,one:One,audio:Audio,email:Email,tag:Tag"
)
chat_entry = _parse_model_entry(LLM_DEFAULT_MODELS.get("chat_model", CHAT_MDL))
embedding_entry = _parse_model_entry(LLM_DEFAULT_MODELS.get("embedding_model", EMBEDDING_MDL))
rerank_entry = _parse_model_entry(LLM_DEFAULT_MODELS.get("rerank_model", RERANK_MDL))
asr_entry = _parse_model_entry(LLM_DEFAULT_MODELS.get("asr_model", ASR_MDL))
image2text_entry = _parse_model_entry(LLM_DEFAULT_MODELS.get("image2text_model", IMAGE2TEXT_MDL))
CHAT_CFG = _resolve_per_model_config(chat_entry, LLM_FACTORY, API_KEY, LLM_BASE_URL)
EMBEDDING_CFG = _resolve_per_model_config(embedding_entry, LLM_FACTORY, API_KEY, LLM_BASE_URL)
RERANK_CFG = _resolve_per_model_config(rerank_entry, LLM_FACTORY, API_KEY, LLM_BASE_URL)
ASR_CFG = _resolve_per_model_config(asr_entry, LLM_FACTORY, API_KEY, LLM_BASE_URL)
IMAGE2TEXT_CFG = _resolve_per_model_config(image2text_entry, LLM_FACTORY, API_KEY, LLM_BASE_URL)
CHAT_MDL = CHAT_CFG.get("model", "") or ""
EMBEDDING_MDL = EMBEDDING_CFG.get("model", "") or ""
RERANK_MDL = RERANK_CFG.get("model", "") or ""
ASR_MDL = ASR_CFG.get("model", "") or ""
IMAGE2TEXT_MDL = IMAGE2TEXT_CFG.get("model", "") or ""
HOST_IP = get_base_config(RAG_FLOW_SERVICE_NAME, {}).get("host", "127.0.0.1")
HOST_PORT = get_base_config(RAG_FLOW_SERVICE_NAME, {}).get("http_port")
@ -169,6 +179,7 @@ def init_settings():
retrievaler = search.Dealer(docStoreConn)
from graphrag import search as kg_search
kg_retrievaler = kg_search.KGSearch(docStoreConn)
if int(os.environ.get("SANDBOX_ENABLED", "0")):
@ -209,3 +220,34 @@ class RetCode(IntEnum, CustomEnum):
SERVER_ERROR = 500
FORBIDDEN = 403
NOT_FOUND = 404
def _parse_model_entry(entry):
if isinstance(entry, str):
return {"name": entry, "factory": None, "api_key": None, "base_url": None}
if isinstance(entry, dict):
name = entry.get("name") or entry.get("model") or ""
return {
"name": name,
"factory": entry.get("factory"),
"api_key": entry.get("api_key"),
"base_url": entry.get("base_url"),
}
return {"name": "", "factory": None, "api_key": None, "base_url": None}
def _resolve_per_model_config(entry_dict, backup_factory, backup_api_key, backup_base_url):
name = (entry_dict.get("name") or "").strip()
m_factory = entry_dict.get("factory") or backup_factory or ""
m_api_key = entry_dict.get("api_key") or backup_api_key or ""
m_base_url = entry_dict.get("base_url") or backup_base_url or ""
if name and "@" not in name and m_factory:
name = f"{name}@{m_factory}"
return {
"model": name,
"factory": m_factory,
"api_key": m_api_key,
"base_url": m_base_url,
}
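Taken together, the two helpers let each `default_models` entry be either a bare string or a dict, fall back to the tenant-level factory, API key, and base URL, and append `@factory` when the name does not already carry one. A quick sketch with hypothetical values:
```python
# Hypothetical entry and fallbacks; mirrors the helpers above.
entry = _parse_model_entry({"name": "qwen2.5-7b-instruct", "api_key": "sk-model-specific"})
cfg = _resolve_per_model_config(entry, "Tongyi-Qianwen", "sk-tenant-default", "https://api.example.com")
print(cfg)
# {'model': 'qwen2.5-7b-instruct@Tongyi-Qianwen', 'factory': 'Tongyi-Qianwen',
#  'api_key': 'sk-model-specific', 'base_url': 'https://api.example.com'}
```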

View File

@ -402,8 +402,22 @@ def get_data_openai(
finish_reason=None,
object="chat.completion",
param=None,
stream=False
):
total_tokens = prompt_tokens + completion_tokens
if stream:
return {
"id": f"{id}",
"object": "chat.completion.chunk",
"model": model,
"choices": [{
"delta": {"content": content},
"finish_reason": finish_reason,
"index": 0,
}],
}
return {
"id": f"{id}",
"object": object,
@ -414,9 +428,21 @@ def get_data_openai(
"prompt_tokens": prompt_tokens,
"completion_tokens": completion_tokens,
"total_tokens": total_tokens,
"completion_tokens_details": {"reasoning_tokens": 0, "accepted_prediction_tokens": 0, "rejected_prediction_tokens": 0},
"completion_tokens_details": {
"reasoning_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0,
},
},
"choices": [{"message": {"role": "assistant", "content": content}, "logprobs": None, "finish_reason": finish_reason, "index": 0}],
"choices": [{
"message": {
"role": "assistant",
"content": content
},
"logprobs": None,
"finish_reason": finish_reason,
"index": 0,
}],
}
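With the new `stream` flag, the same helper can emit OpenAI-compatible `chat.completion.chunk` deltas while streaming and the full `chat.completion` object otherwise. A hypothetical call for each mode; the `id`/`model`/`content`/token-count parameter names are assumed from the function body shown above, not from the full signature:
```python
# Streaming mode: one chunk per delta (hypothetical arguments).
chunk = get_data_openai(id="conv-1", model="gpt-4.1", content="Hel",
                        prompt_tokens=0, completion_tokens=0, stream=True)
assert chunk["object"] == "chat.completion.chunk"
assert chunk["choices"][0]["delta"]["content"] == "Hel"

# Non-streaming mode: full completion object with token accounting.
full = get_data_openai(id="conv-1", model="gpt-4.1", content="Hello",
                       prompt_tokens=12, completion_tokens=3, finish_reason="stop")
assert full["usage"]["total_tokens"] == 15
```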
@ -687,7 +713,13 @@ def timeout(seconds: float | int = None, attempts: int = 2, *, exception: Option
async def is_strong_enough(chat_model, embedding_model):
@timeout(30, 2)
count = settings.STRONG_TEST_COUNT
if not chat_model or not embedding_model:
return
if isinstance(count, int) and count <= 0:
return
@timeout(60, 2)
async def _is_strong_enough():
nonlocal chat_model, embedding_model
if embedding_model:
@ -701,5 +733,5 @@ async def is_strong_enough(chat_model, embedding_model):
# Pressure test for GraphRAG task
async with trio.open_nursery() as nursery:
for _ in range(32):
for _ in range(count):
nursery.start_soon(_is_strong_enough)
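The pressure test now fans out `settings.STRONG_TEST_COUNT` concurrent probes (default 8, via the `STRONG_TEST_COUNT` environment variable) instead of a hard-coded 32. A minimal trio sketch of the same fan-out pattern, with a stand-in coroutine:
```python
import trio

async def _probe(i):
    # Stand-in for _is_strong_enough: pretend to exercise the models.
    await trio.sleep(0.01)
    print(f"probe {i} ok")

async def main(count=8):  # mirrors STRONG_TEST_COUNT's default
    async with trio.open_nursery() as nursery:
        for i in range(count):
            nursery.start_soon(_probe, i)

trio.run(main)
```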

View File

@ -1 +1,3 @@
import base64
test_image_base64 = "iVBORw0KGgoAAAANSUhEUgAAAGQAAABkCAIAAAD/gAIDAAAA6ElEQVR4nO3QwQ3AIBDAsIP9d25XIC+EZE8QZc18w5l9O+AlZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBWYFZgVmBT+IYAHHLHkdEgAAAABJRU5ErkJggg=="
test_image = base64.b64decode(test_image_base64)

View File

@ -6,6 +6,34 @@
"tags": "LLM,TEXT EMBEDDING,TTS,TEXT RE-RANK,SPEECH2TEXT,MODERATION",
"status": "1",
"llm": [
{
"llm_name": "gpt-5",
"tags": "LLM,CHAT,400k,IMAGE2TEXT",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-5-mini",
"tags": "LLM,CHAT,400k,IMAGE2TEXT",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-5-nano",
"tags": "LLM,CHAT,400k,IMAGE2TEXT",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-5-chat-latest",
"tags": "LLM,CHAT,400k,IMAGE2TEXT",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": false
},
{
"llm_name": "gpt-4.1",
"tags": "LLM,CHAT,1M,IMAGE2TEXT",
@ -2598,234 +2626,255 @@
"tags": "LLM,TEXT EMBEDDING,TEXT RE-RANK,IMAGE2TEXT",
"status": "1",
"llm": [
{
"llm_name": "Qwen3-Embedding-8B",
"tags": "TEXT EMBEDDING,TEXT RE-RANK,32k",
"max_tokens": 32000,
"model_type": "embedding",
"is_tools": false
},
{
"llm_name": "Qwen3-Embedding-4B",
"tags": "TEXT EMBEDDING,TEXT RE-RANK,32k",
"max_tokens": 32000,
"model_type": "embedding",
"is_tools": false
},
{
"llm_name": "Qwen3-Embedding-0.6B",
"tags": "TEXT EMBEDDING,TEXT RE-RANK,32k",
"max_tokens": 32000,
"model_type": "embedding",
"is_tools": false
},
{
"llm_name": "Qwen/Qwen3-235B-A22B",
"tags": "LLM,CHAT,128k",
"max_tokens": 8192,
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen3-30B-A3B",
"tags": "LLM,CHAT,128k",
"max_tokens": 8192,
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen3-32B",
"tags": "LLM,CHAT,128k",
"max_tokens": 8192,
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen3-14B",
"tags": "LLM,CHAT,128k",
"max_tokens": 8192,
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen3-8B",
"tags": "LLM,CHAT,64k",
"max_tokens": 8192,
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/QVQ-72B-Preview",
"tags": "LLM,CHAT,IMAGE2TEXT,32k",
"max_tokens": 16384,
"max_tokens": 32000,
"model_type": "image2text",
"is_tools": false
},
{
"llm_name": "Pro/deepseek-ai/DeepSeek-R1",
"tags": "LLM,CHAT,64k",
"max_tokens": 16384,
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-ai/DeepSeek-R1",
"tags": "LLM,CHAT,64k",
"max_tokens": 16384,
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Pro/deepseek-ai/DeepSeek-V3",
"tags": "LLM,CHAT,64k",
"max_tokens": 8192,
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-ai/DeepSeek-V3",
"tags": "LLM,CHAT,64k",
"max_tokens": 8192,
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Pro/deepseek-ai/DeepSeek-V3-1226",
"tags": "LLM,CHAT,64k",
"max_tokens": 4096,
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"tags": "LLM,CHAT,32k",
"max_tokens": 16384,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"tags": "LLM,CHAT,32k",
"max_tokens": 16384,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Pro/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"tags": "LLM,CHAT,32k",
"max_tokens": 16384,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"tags": "LLM,CHAT,32k",
"max_tokens": 16384,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Pro/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"tags": "LLM,CHAT,32k",
"max_tokens": 16384,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"tags": "LLM,CHAT,32k",
"max_tokens": 16384,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-ai/DeepSeek-V2.5",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/QwQ-32B",
"tags": "LLM,CHAT,32k",
"max_tokens": 32768,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen2.5-VL-72B-Instruct",
"tags": "LLM,CHAT,IMAGE2TEXT,128k",
"max_tokens": 4096,
"max_tokens": 128000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "Pro/Qwen/Qwen2.5-VL-7B-Instruct",
"tags": "LLM,CHAT,IMAGE2TEXT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "THUDM/GLM-Z1-32B-0414",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "THUDM/GLM-4-32B-0414",
"tags": "LLM,CHAT,32k",
"max_tokens": 8192,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "THUDM/GLM-Z1-9B-0414",
"tags": "LLM,CHAT,32k",
"max_tokens": 8192,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "THUDM/GLM-4-9B-0414",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "THUDM/chatglm3-6b",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": false
},
{
"llm_name": "Pro/THUDM/glm-4-9b-chat",
"tags": "LLM,CHAT,128k",
"max_tokens": 4096,
"max_tokens": 128000,
"model_type": "chat",
"is_tools": false
},
{
"llm_name": "THUDM/GLM-Z1-Rumination-32B-0414",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": false
},
{
"llm_name": "THUDM/glm-4-9b-chat",
"tags": "LLM,CHAT,128k",
"max_tokens": 4096,
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/QwQ-32B-Preview",
"tags": "LLM,CHAT,32k",
"max_tokens": 8192,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": false
},
{
"llm_name": "Qwen/Qwen2.5-Coder-32B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": false
},
{
"llm_name": "Qwen/Qwen2-VL-72B-Instruct",
"tags": "LLM,IMAGE2TEXT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "image2text",
"is_tools": false
},
{
"llm_name": "Qwen/Qwen2.5-72B-Instruct-128Kt",
"tags": "LLM,IMAGE2TEXT,128k",
"max_tokens": 4096,
"max_tokens": 128000,
"model_type": "image2text",
"is_tools": false
},
@ -2839,98 +2888,98 @@
{
"llm_name": "Qwen/Qwen2.5-72B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen2.5-32B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen2.5-14B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen2.5-7B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen2.5-Coder-7B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "internlm/internlm2_5-20b-chat",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "internlm/internlm2_5-7b-chat",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen2-7B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen2-1.5B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Pro/Qwen/Qwen2.5-Coder-7B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": false
},
{
"llm_name": "Pro/Qwen/Qwen2-VL-7B-Instruct",
"tags": "LLM,CHAT,IMAGE2TEXT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "image2text",
"is_tools": false
},
{
"llm_name": "Pro/Qwen/Qwen2.5-7B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Pro/Qwen/Qwen2-7B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": false
},
{
"llm_name": "Pro/Qwen/Qwen2-1.5B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 4096,
"max_tokens": 32000,
"model_type": "chat",
"is_tools": false
},
@ -3267,45 +3316,52 @@
"status": "1",
"llm": [
{
"llm_name": "claude-opus-4-20250514",
"tags": "LLM,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "claude-sonnet-4-20250514",
"tags": "LLM,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "claude-3-7-sonnet-20250219",
"tags": "LLM,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "claude-3-5-sonnet-20241022",
"tags": "LLM,IMAGE2TEXT,200k",
"llm_name": "claude-opus-4-1-20250805",
"tags": "LLM,CHAT,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-3-opus-20240229",
"tags": "LLM,IMAGE2TEXT,200k",
"llm_name": "claude-opus-4-20250514",
"tags": "LLM,CHAT,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-sonnet-4-20250514",
"tags": "LLM,CHAT,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-3-7-sonnet-20250219",
"tags": "LLM,CHAT,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-3-5-sonnet-20241022",
"tags": "LLM,CHAT,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-3-5-haiku-20241022",
"tags": "LLM,CHAT,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-3-haiku-20240307",
"tags": "LLM,IMAGE2TEXT,200k",
"tags": "LLM,CHAT,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "image2text",
"model_type": "chat",
"is_tools": true
}
]

View File

@ -64,9 +64,21 @@ redis:
# config:
# oss_table: 'opendal_storage'
# user_default_llm:
# factory: 'Tongyi-Qianwen'
# api_key: 'sk-xxxxxxxxxxxxx'
# base_url: ''
# factory: 'BAAI'
# api_key: 'backup'
# base_url: 'backup_base_url'
# default_models:
# chat_model:
# name: 'qwen2.5-7b-instruct'
# factory: 'xxxx'
# api_key: 'xxxx'
# base_url: 'https://api.xx.com'
# embedding_model:
# name: 'bge-m3'
# rerank_model: 'bge-reranker-v2'
# asr_model:
# model: 'whisper-large-v3' # alias of name
# image2text_model: ''
# oauth:
# oauth2:
# display_name: "OAuth2"

View File

@ -12,6 +12,7 @@
#
import logging
import re
import sys
from io import BytesIO
@ -20,6 +21,8 @@ from openpyxl import Workbook, load_workbook
from rag.nlp import find_codec
# copied from `/openpyxl/cell/cell.py`
ILLEGAL_CHARACTERS_RE = re.compile(r'[\000-\010]|[\013-\014]|[\016-\037]')
class RAGFlowExcelParser:
@ -50,13 +53,29 @@ class RAGFlowExcelParser:
logging.info(f"openpyxl load error: {e}, try pandas instead")
try:
file_like_object.seek(0)
df = pd.read_excel(file_like_object)
return RAGFlowExcelParser._dataframe_to_workbook(df)
try:
df = pd.read_excel(file_like_object)
return RAGFlowExcelParser._dataframe_to_workbook(df)
except Exception as ex:
logging.info(f"pandas with default engine load error: {ex}, try calamine instead")
file_like_object.seek(0)
df = pd.read_excel(file_like_object, engine='calamine')
return RAGFlowExcelParser._dataframe_to_workbook(df)
except Exception as e_pandas:
raise Exception(f"pandas.read_excel error: {e_pandas}, original openpyxl error: {e}")
@staticmethod
def _clean_dataframe(df: pd.DataFrame):
def clean_string(s):
if isinstance(s, str):
return ILLEGAL_CHARACTERS_RE.sub(" ", s)
return s
return df.apply(lambda col: col.map(clean_string))
@staticmethod
def _dataframe_to_workbook(df):
df = RAGFlowExcelParser._clean_dataframe(df)
wb = Workbook()
ws = wb.active
ws.title = "Data"
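The pandas/calamine fallback can yield cells containing control characters that openpyxl refuses to write back into a Workbook, hence the scrub in `_clean_dataframe`. A standalone sketch of the same cleaning with hypothetical data:
```python
import re
import pandas as pd

# Same pattern as above, copied from openpyxl's cell module.
ILLEGAL_CHARACTERS_RE = re.compile(r'[\000-\010]|[\013-\014]|[\016-\037]')

df = pd.DataFrame({"col": ["ok", "bad\x07bell"]})  # \x07 is an illegal control char
cleaned = df.apply(lambda col: col.map(
    lambda s: ILLEGAL_CHARACTERS_RE.sub(" ", s) if isinstance(s, str) else s))
print(cleaned["col"].tolist())  # ['ok', 'bad bell']
```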

View File

@ -87,7 +87,7 @@ class RAGFlowPptParser:
break
texts = []
for shape in sorted(
slide.shapes, key=lambda x: ((x.top if x.top is not None else 0) // 10, x.left)):
slide.shapes, key=lambda x: ((x.top if x.top is not None else 0) // 10, x.left if x.left is not None else 0)):
try:
txt = self.__extract(shape)
if txt:
@ -96,4 +96,4 @@ class RAGFlowPptParser:
logging.exception(e)
txts.append("\n".join(texts))
return txts
return txts

View File

@ -62,6 +62,8 @@ MYSQL_DBNAME=rag_flow
# The port used to expose the MySQL service to the host machine,
# allowing EXTERNAL access to the MySQL database running inside the Docker container.
MYSQL_PORT=5455
# The maximum size of communication packets sent to the MySQL server
MYSQL_MAX_PACKET=1073741824
# The hostname where the MinIO service is exposed
MINIO_HOST=minio
@ -91,13 +93,13 @@ REDIS_PASSWORD=infini_rag_flow
SVR_HTTP_PORT=9380
# The RAGFlow Docker image to download.
# Defaults to the v0.20.0-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0-slim
# Defaults to the v0.20.1-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1-slim
#
# To download the RAGFlow Docker image with embedding models, uncomment the following line instead:
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1
#
# The Docker image of the v0.20.0 edition includes built-in embedding models:
# The Docker image of the v0.20.1 edition includes built-in embedding models:
# - BAAI/bge-large-zh-v1.5
# - maidalun1020/bce-embedding-base_v1
#

View File

@ -79,8 +79,8 @@ The [.env](./.env) file contains important environment variables for Docker.
- `RAGFLOW-IMAGE`
The Docker image edition. Available editions:
- `infiniflow/ragflow:v0.20.0-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.20.0`: The RAGFlow Docker image with embedding models including:
- `infiniflow/ragflow:v0.20.1-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.20.1`: The RAGFlow Docker image with embedding models including:
- Built-in embedding models:
- `BAAI/bge-large-zh-v1.5`
- `maidalun1020/bce-embedding-base_v1`

docker/migration.sh Normal file
View File

@ -0,0 +1,298 @@
#!/bin/bash
# RAGFlow Data Migration Script
# Usage: ./migration.sh [backup|restore] [backup_folder]
#
# This script helps you backup and restore RAGFlow Docker volumes
# including MySQL, MinIO, Redis, and Elasticsearch data.
set -e  # Exit on any error
# Individual checks below still report errors explicitly for a better debugging experience
# Default values
DEFAULT_BACKUP_FOLDER="backup"
VOLUMES=("docker_mysql_data" "docker_minio_data" "docker_redis_data" "docker_esdata01")
BACKUP_FILES=("mysql_backup.tar.gz" "minio_backup.tar.gz" "redis_backup.tar.gz" "es_backup.tar.gz")
# Function to display help information
show_help() {
echo "RAGFlow Data Migration Tool"
echo ""
echo "USAGE:"
echo " $0 <operation> [backup_folder]"
echo ""
echo "OPERATIONS:"
echo " backup - Create backup of all RAGFlow data volumes"
echo " restore - Restore RAGFlow data volumes from backup"
echo " help - Show this help message"
echo ""
echo "PARAMETERS:"
echo " backup_folder - Name of backup folder (default: '$DEFAULT_BACKUP_FOLDER')"
echo ""
echo "EXAMPLES:"
echo " $0 backup # Backup to './backup' folder"
echo " $0 backup my_backup # Backup to './my_backup' folder"
echo " $0 restore # Restore from './backup' folder"
echo " $0 restore my_backup # Restore from './my_backup' folder"
echo ""
echo "DOCKER VOLUMES:"
echo " - docker_mysql_data (MySQL database)"
echo " - docker_minio_data (MinIO object storage)"
echo " - docker_redis_data (Redis cache)"
echo " - docker_esdata01 (Elasticsearch indices)"
}
# Function to check if Docker is running
check_docker() {
if ! docker info >/dev/null 2>&1; then
echo "❌ Error: Docker is not running or not accessible"
echo "Please start Docker and try again"
exit 1
fi
}
# Function to check if volume exists
volume_exists() {
local volume_name=$1
docker volume inspect "$volume_name" >/dev/null 2>&1
}
# Function to check if any containers are using the target volumes
check_containers_using_volumes() {
echo "🔍 Checking for running containers that might be using target volumes..."
# Get all running containers
local running_containers=$(docker ps --format "{{.Names}}")
if [ -z "$running_containers" ]; then
echo "✅ No running containers found"
return 0
fi
# Check each running container for volume usage
local containers_using_volumes=()
local volume_usage_details=()
for container in $running_containers; do
# Get container's mount information
local mounts=$(docker inspect "$container" --format '{{range .Mounts}}{{.Source}}{{"|"}}{{end}}' 2>/dev/null || echo "")
# Check if any of our target volumes are used by this container
for volume in "${VOLUMES[@]}"; do
if echo "$mounts" | grep -q "$volume"; then
containers_using_volumes+=("$container")
volume_usage_details+=("$container -> $volume")
break
fi
done
done
# If any containers are using our volumes, show error and exit
if [ ${#containers_using_volumes[@]} -gt 0 ]; then
echo ""
echo "❌ ERROR: Found running containers using target volumes!"
echo ""
echo "📋 Running containers status:"
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Image}}"
echo ""
echo "🔗 Volume usage details:"
for detail in "${volume_usage_details[@]}"; do
echo " - $detail"
done
echo ""
echo "🛑 SOLUTION: Stop the containers before performing backup/restore operations:"
echo " docker-compose -f docker/<your-docker-compose-file>.yml down"
echo ""
echo "💡 After backup/restore, you can restart with:"
echo " docker-compose -f docker/<your-docker-compose-file>.yml up -d"
echo ""
exit 1
fi
echo "✅ No containers are using target volumes, safe to proceed"
return 0
}
# Function to confirm user action
confirm_action() {
local message=$1
echo -n "$message (y/N): "
read -r response
case "$response" in
[yY]|[yY][eE][sS]) return 0 ;;
*) return 1 ;;
esac
}
# Function to perform backup
perform_backup() {
local backup_folder=$1
echo "🚀 Starting RAGFlow data backup..."
echo "📁 Backup folder: $backup_folder"
echo ""
# Check if any containers are using the volumes
check_containers_using_volumes
# Create backup folder if it doesn't exist
mkdir -p "$backup_folder"
# Backup each volume
for i in "${!VOLUMES[@]}"; do
local volume="${VOLUMES[$i]}"
local backup_file="${BACKUP_FILES[$i]}"
local step=$((i + 1))
echo "📦 Step $step/4: Backing up $volume..."
if volume_exists "$volume"; then
docker run --rm \
-v "$volume":/source \
-v "$(pwd)/$backup_folder":/backup \
alpine tar czf "/backup/$backup_file" -C /source .
echo "✅ Successfully backed up $volume to $backup_folder/$backup_file"
else
echo "⚠️ Warning: Volume $volume does not exist, skipping..."
fi
echo ""
done
echo "🎉 Backup completed successfully!"
echo "📍 Backup location: $(pwd)/$backup_folder"
# List backup files with sizes
echo ""
echo "📋 Backup files created:"
for backup_file in "${BACKUP_FILES[@]}"; do
if [ -f "$backup_folder/$backup_file" ]; then
local size=$(ls -lh "$backup_folder/$backup_file" | awk '{print $5}')
echo " - $backup_file ($size)"
fi
done
}
# Function to perform restore
perform_restore() {
local backup_folder=$1
echo "🔄 Starting RAGFlow data restore..."
echo "📁 Backup folder: $backup_folder"
echo ""
# Check if any containers are using the volumes
check_containers_using_volumes
# Check if backup folder exists
if [ ! -d "$backup_folder" ]; then
echo "❌ Error: Backup folder '$backup_folder' does not exist"
exit 1
fi
# Check if all backup files exist
local missing_files=()
for backup_file in "${BACKUP_FILES[@]}"; do
if [ ! -f "$backup_folder/$backup_file" ]; then
missing_files+=("$backup_file")
fi
done
if [ ${#missing_files[@]} -gt 0 ]; then
echo "❌ Error: Missing backup files:"
for file in "${missing_files[@]}"; do
echo " - $file"
done
echo "Please ensure all backup files are present in '$backup_folder'"
exit 1
fi
# Check for existing volumes and warn user
local existing_volumes=()
for volume in "${VOLUMES[@]}"; do
if volume_exists "$volume"; then
existing_volumes+=("$volume")
fi
done
if [ ${#existing_volumes[@]} -gt 0 ]; then
echo "⚠️ WARNING: The following Docker volumes already exist:"
for volume in "${existing_volumes[@]}"; do
echo " - $volume"
done
echo ""
echo "🔴 IMPORTANT: Restoring will OVERWRITE existing data!"
echo "💡 Recommendation: Create a backup of your current data first:"
echo " $0 backup current_backup_$(date +%Y%m%d_%H%M%S)"
echo ""
if ! confirm_action "Do you want to continue with the restore operation?"; then
echo "❌ Restore operation cancelled by user"
exit 0
fi
fi
# Create volumes and restore data
for i in "${!VOLUMES[@]}"; do
local volume="${VOLUMES[$i]}"
local backup_file="${BACKUP_FILES[$i]}"
local step=$((i + 1))
echo "🔧 Step $step/4: Restoring $volume..."
# Create volume if it doesn't exist
if ! volume_exists "$volume"; then
echo " 📋 Creating Docker volume: $volume"
docker volume create "$volume"
else
echo " 📋 Using existing Docker volume: $volume"
fi
# Restore data
echo " 📥 Restoring data from $backup_file..."
docker run --rm \
-v "$volume":/target \
-v "$(pwd)/$backup_folder":/backup \
alpine tar xzf "/backup/$backup_file" -C /target
echo "✅ Successfully restored $volume"
echo ""
done
echo "🎉 Restore completed successfully!"
echo "💡 You can now start your RAGFlow services"
}
# Main script logic
main() {
# Check if Docker is available
check_docker
# Parse command line arguments
local operation=${1:-}
local backup_folder=${2:-$DEFAULT_BACKUP_FOLDER}
# Handle help or no arguments
if [ -z "$operation" ] || [ "$operation" = "help" ] || [ "$operation" = "-h" ] || [ "$operation" = "--help" ]; then
show_help
exit 0
fi
# Validate operation
case "$operation" in
backup)
perform_backup "$backup_folder"
;;
restore)
perform_restore "$backup_folder"
;;
*)
echo "❌ Error: Invalid operation '$operation'"
echo ""
show_help
exit 1
;;
esac
}
# Run main function with all arguments
main "$@"

View File

@ -9,6 +9,7 @@ mysql:
port: 3306
max_connections: 900
stale_timeout: 300
max_allowed_packet: ${MYSQL_MAX_PACKET:-1073741824}
minio:
user: '${MINIO_USER:-rag_flow}'
password: '${MINIO_PASSWORD:-infini_rag_flow}'

View File

@ -99,8 +99,8 @@ RAGFlow utilizes MinIO as its object storage solution, leveraging its scalabilit
- `RAGFLOW-IMAGE`
The Docker image edition. Available editions:
- `infiniflow/ragflow:v0.20.0-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.20.0`: The RAGFlow Docker image with embedding models including:
- `infiniflow/ragflow:v0.20.1-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.20.1`: The RAGFlow Docker image with embedding models including:
- Built-in embedding models:
- `BAAI/bge-large-zh-v1.5`
- `maidalun1020/bce-embedding-base_v1`

View File

@ -77,7 +77,7 @@ After building the infiniflow/ragflow:nightly-slim image, you are ready to launc
1. Edit Docker Compose Configuration
Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.20.0-slim` to `infiniflow/ragflow:nightly-slim` to use the pre-built image.
Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.20.1-slim` to `infiniflow/ragflow:nightly-slim` to use the pre-built image.
2. Launch the Service

View File

@ -30,17 +30,17 @@ The "garbage in garbage out" status quo remains unchanged despite the fact that
Each RAGFlow release is available in two editions:
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.20.0-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.20.0`
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.20.1-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.20.1`
---
### Which embedding models can be deployed locally?
RAGFlow offers two Docker image editions, `v0.20.0-slim` and `v0.20.0`:
RAGFlow offers two Docker image editions, `v0.20.1-slim` and `v0.20.1`:
- `infiniflow/ragflow:v0.20.0-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.20.0`: The RAGFlow Docker image with embedding models including:
- `infiniflow/ragflow:v0.20.1-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.20.1`: The RAGFlow Docker image with embedding models including:
- Built-in embedding models:
- `BAAI/bge-large-zh-v1.5`
- `maidalun1020/bce-embedding-base_v1`

View File

@ -9,7 +9,7 @@ The component equipped with reasoning, tool usage, and multi-agent collaboration
---
An **Agent** component fine-tunes the LLM and sets its prompt. From v0.20.0 onwards, an **Agent** component is able to work independently, with the following capabilities:
An **Agent** component fine-tunes the LLM and sets its prompt. From v0.20.1 onwards, an **Agent** component is able to work independently, with the following capabilities:
- Autonomous reasoning with reflection and adjustment based on environmental feedback.
- Use of tools or subagents to complete tasks.
@ -82,7 +82,7 @@ An integer specifying the number of previous dialogue rounds to input into the L
This feature is used for multi-turn dialogue *only*.
:::
### Max retrieves
### Max retries
Defines the maximum number of attempts the agent will make to retry a failed task or operation before stopping or reporting failure.
@ -92,7 +92,11 @@ The waiting period in seconds that the agent observes before retrying a failed t
### Max rounds
Defines the maximum number of reflection rounds of the selected chat model. Defaults to 5 rounds.
Defines the maximum number of reflection rounds of the selected chat model. Defaults to 1 round.
:::tip NOTE
Increasing this value will significantly extend your agent's response time.
:::
### Output

View File

@ -9,7 +9,7 @@ A component that retrieves information from specified datasets.
## Scenarios
A **Retrieval** component is essential in most RAG scenarios, where information is extracted from designated knowledge bases before being sent to the LLM for content generation. As of v0.20.0, a **Retrieval** component can operate either as a workflow component or as a tool of an **Agent**, enabling the Agent to control its invocation and search queries.
A **Retrieval** component is essential in most RAG scenarios, where information is extracted from designated knowledge bases before being sent to the LLM for content generation. As of v0.20.1, a **Retrieval** component can operate either as a workflow component or as a tool of an **Agent**, enabling the Agent to control its invocation and search queries.
## Configurations

View File

@ -63,7 +63,7 @@ docker build -t sandbox-executor-manager:latest ./executor_manager
3. Add the following entry to your /etc/hosts file to resolve the executor manager service:
```bash
127.0.0.1 sandbox-executor-manager
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
```
4. Start the RAGFlow service as usual.

View File

@ -48,7 +48,7 @@ You start an AI conversation by creating an assistant.
- If no target language is selected, the system will search only in the language of your query, which may cause relevant information in other languages to be missed.
- **Variable** refers to the variables (keys) to be used in the system prompt. `{knowledge}` is a reserved variable. Click **Add** to add more variables for the system prompt.
- If you are uncertain about the logic behind **Variable**, leave it *as-is*.
- As of v0.20.0, if you add custom variables here, the only way you can pass in their values is to call:
- As of v0.20.1, if you add custom variables here, the only way you can pass in their values is to call:
- HTTP method [Converse with chat assistant](../../references/http_api_reference.md#converse-with-chat-assistant), or
- Python method [Converse with chat assistant](../../references/python_api_reference.md#converse-with-chat-assistant).

View File

@ -128,7 +128,7 @@ See [Run retrieval test](./run_retrieval_test.md) for details.
## Search for knowledge base
As of RAGFlow v0.20.0, the search feature is still in a rudimentary form, supporting only knowledge base search by name.
As of RAGFlow v0.20.1, the search feature is still in a rudimentary form, supporting only knowledge base search by name.
![search knowledge base](https://github.com/infiniflow/ragflow/assets/93570324/836ae94c-2438-42be-879e-c7ad2a59693e)

View File

@ -87,4 +87,4 @@ RAGFlow's file management allows you to download an uploaded file:
![download_file](https://github.com/infiniflow/ragflow/assets/93570324/cf3b297f-7d9b-4522-bf5f-4f45743e4ed5)
> As of RAGFlow v0.20.0, bulk download is not supported, nor can you download an entire folder.
> As of RAGFlow v0.20.1, bulk download is not supported, nor can you download an entire folder.

View File

@ -0,0 +1,108 @@
# Data Migration Guide
A common scenario is processing large datasets on a powerful instance (e.g., with a GPU) and then migrating the entire RAGFlow service to a different production environment (e.g., a CPU-only server). This guide explains how to safely back up and restore your data using our provided migration script.
## Identifying Your Data
By default, RAGFlow uses Docker volumes to store all persistent data, including your database, uploaded files, and search indexes. You can see these volumes by running:
```bash
docker volume ls
```
The output will look similar to this:
```text
DRIVER VOLUME NAME
local docker_esdata01
local docker_minio_data
local docker_mysql_data
local docker_redis_data
```
These volumes contain all the data you need to migrate.
## Step 1: Stop RAGFlow Services
Before starting the migration, you must stop all running RAGFlow services on the **source machine**. Navigate to the project's root directory and run:
```bash
docker-compose -f docker/docker-compose.yml down
```
**Important:** Do **not** use the `-v` flag (e.g., `docker-compose down -v`), as this will delete all your data volumes. The migration script includes a check and will prevent you from running it if services are active.
## Step 2: Back Up Your Data
We provide a convenient script to package all your data volumes into a single backup folder.
For a quick reference of the script's commands and options, you can run:
```bash
bash docker/migration.sh help
```
To create a backup, run the following command from the project's root directory:
```bash
bash docker/migration.sh backup
```
This will create a `backup/` folder in your project root containing compressed archives of your data volumes.
You can also specify a custom name for your backup folder:
```bash
bash docker/migration.sh backup my_ragflow_backup
```
This will create a folder named `my_ragflow_backup/` instead.
## Step 3: Transfer the Backup Folder
Copy the entire backup folder (e.g., `backup/` or `my_ragflow_backup/`) from your source machine to the RAGFlow project directory on your **target machine**. You can use tools like `scp`, `rsync`, or a physical drive for the transfer.
## Step 4: Restore Your Data
On the **target machine**, ensure that RAGFlow services are not running. Then, use the migration script to restore your data from the backup folder.
If your backup folder is named `backup/`, run:
```bash
bash docker/migration.sh restore
```
If you used a custom name, specify it in the command:
```bash
bash docker/migration.sh restore my_ragflow_backup
```
The script will automatically create the necessary Docker volumes and unpack the data.
**Note:** If the script detects that Docker volumes with the same names already exist on the target machine, it will warn you that restoring will overwrite the existing data and ask for confirmation before proceeding.
## Step 5: Start RAGFlow Services
Once the restore process is complete, you can start the RAGFlow services on your new machine:
```bash
docker-compose -f docker/docker-compose.yml up -d
```
**Note:** If you have previously brought up a RAGFlow service with docker-compose on the target machine, you may need to back up that machine's data first (as described in this guide) and then reset its volumes:
```bash
# Back up first with `sh docker/migration.sh backup backup_dir_name` before running the lines below.
# !!! The -v flag on the next line deletes the original Docker volumes.
docker-compose -f docker/docker-compose.yml down -v
docker-compose -f docker/docker-compose.yml up -d
```
Your RAGFlow instance is now running with all the data from your original machine.

View File

@ -18,7 +18,7 @@ RAGFlow ships with a built-in [Langfuse](https://langfuse.com) integration so th
Langfuse stores traces, spans and prompt payloads in a purpose-built observability backend and offers filtering and visualisations on top.
:::info NOTE
• RAGFlow **≥ 0.20.0** (contains the Langfuse connector)
• RAGFlow **≥ 0.20.1** (contains the Langfuse connector)
• A Langfuse workspace (cloud or self-hosted) with a _Project Public Key_ and _Secret Key_
:::

View File

@ -66,10 +66,10 @@ To upgrade RAGFlow, you must upgrade **both** your code **and** your Docker imag
git clone https://github.com/infiniflow/ragflow.git
```
2. Switch to the latest, officially published release, e.g., `v0.20.0`:
2. Switch to the latest, officially published release, e.g., `v0.20.1`:
```bash
git checkout -f v0.20.0
git checkout -f v0.20.1
```
3. Update **ragflow/docker/.env**:
@ -83,14 +83,14 @@ To upgrade RAGFlow, you must upgrade **both** your code **and** your Docker imag
<TabItem value="slim">
```bash
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0-slim
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1-slim
```
</TabItem>
<TabItem value="full">
```bash
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1
```
</TabItem>
@ -114,10 +114,10 @@ No, you do not need to. Upgrading RAGFlow in itself will *not* remove your uploa
1. From an environment with Internet access, pull the required Docker image.
2. Save the Docker image to a **.tar** file.
```bash
docker save -o ragflow.v0.20.0.tar infiniflow/ragflow:v0.20.0
docker save -o ragflow.v0.20.1.tar infiniflow/ragflow:v0.20.1
```
3. Copy the **.tar** file to the target server.
4. Load the **.tar** file into Docker:
```bash
docker load -i ragflow.v0.20.0.tar
docker load -i ragflow.v0.20.1.tar
```

View File

@ -44,7 +44,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
`vm.max_map_count`. This value sets the maximum number of memory map areas a process may have. Its default value is 65530. While most applications require fewer than a thousand maps, reducing this value can result in abnormal behaviors, and the system will throw out-of-memory errors when a process reaches the limitation.
RAGFlow v0.20.0 uses Elasticsearch or [Infinity](https://github.com/infiniflow/infinity) for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.
RAGFlow v0.20.1 uses Elasticsearch or [Infinity](https://github.com/infiniflow/infinity) for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.
<Tabs
defaultValue="linux"
@ -184,13 +184,13 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
```bash
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/docker
$ git checkout -f v0.20.0
$ git checkout -f v0.20.1
```
3. Use the pre-built Docker images and start up the server:
:::tip NOTE
The command below downloads the `v0.20.0-slim` edition of the RAGFlow Docker image. Refer to the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.0-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.0` for the full edition `v0.20.0`.
The command below downloads the `v0.20.1-slim` edition of the RAGFlow Docker image. Refer to the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.1` for the full edition `v0.20.1`.
:::
```bash
@ -207,8 +207,8 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
| RAGFlow image tag | Image size (GB) | Has embedding models and Python packages? | Stable? |
| ------------------- | --------------- | ----------------------------------------- | ------------------------ |
| `v0.20.0` | &approx;9 | :heavy_check_mark: | Stable release |
| `v0.20.0-slim` | &approx;2 | ❌ | Stable release |
| `v0.20.1` | &approx;9 | :heavy_check_mark: | Stable release |
| `v0.20.1-slim` | &approx;2 | ❌ | Stable release |
| `nightly` | &approx;9 | :heavy_check_mark: | *Unstable* nightly build |
| `nightly-slim` | &approx;2 | ❌ | *Unstable* nightly build |
@ -217,7 +217,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
```
:::danger IMPORTANT
The embedding models included in `v0.20.0` and `nightly` are:
The embedding models included in `v0.20.1` and `nightly` are:
- BAAI/bge-large-zh-v1.5
- maidalun1020/bce-embedding-base_v1

View File

@ -19,7 +19,7 @@ import TOCInline from '@theme/TOCInline';
### Cross-language search
Cross-language search (also known as cross-lingual retrieval) is a feature introduced in version 0.20.0. It enables users to submit queries in one language (for example, English) and retrieve relevant documents written in other languages such as Chinese or Spanish. This feature is powered by the system's default chat model, which translates queries to ensure accurate matching of semantic meaning across languages.
Cross-language search (also known as cross-lingual retrieval) is a feature introduced in version 0.20.1. It enables users to submit queries in one language (for example, English) and retrieve relevant documents written in other languages such as Chinese or Spanish. This feature is powered by the system's default chat model, which translates queries to ensure accurate matching of semantic meaning across languages.
By enabling cross-language search, users can effortlessly access a broader range of information regardless of language barriers, significantly enhancing the system's usability and inclusiveness.

View File

@ -1118,14 +1118,14 @@ Failure:
### List documents
**GET** `/api/v1/datasets/{dataset_id}/documents?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&keywords={keywords}&id={document_id}&name={document_name}`
**GET** `/api/v1/datasets/{dataset_id}/documents?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&keywords={keywords}&id={document_id}&name={document_name}&create_time_from={timestamp}&create_time_to={timestamp}`
Lists documents in a specified dataset.
#### Request
- Method: GET
- URL: `/api/v1/datasets/{dataset_id}/documents?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&keywords={keywords}&id={document_id}&name={document_name}`
- URL: `/api/v1/datasets/{dataset_id}/documents?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&keywords={keywords}&id={document_id}&name={document_name}&create_time_from={timestamp}&create_time_to={timestamp}`
- Headers:
- `'content-Type: application/json'`
- `'Authorization: Bearer <YOUR_API_KEY>'`
@ -1134,7 +1134,7 @@ Lists documents in a specified dataset.
```bash
curl --request GET \
--url http://{address}/api/v1/datasets/{dataset_id}/documents?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&keywords={keywords}&id={document_id}&name={document_name} \
--url http://{address}/api/v1/datasets/{dataset_id}/documents?page={page}&page_size={page_size}&orderby={orderby}&desc={desc}&keywords={keywords}&id={document_id}&name={document_name}&create_time_from={timestamp}&create_time_to={timestamp} \
--header 'Authorization: Bearer <YOUR_API_KEY>'
```
@ -1156,6 +1156,10 @@ curl --request GET \
Indicates whether the retrieved documents should be sorted in descending order. Defaults to `true`.
- `id`: (*Filter parameter*), `string`
The ID of the document to retrieve.
- `create_time_from`: (*Filter parameter*), `integer`
The earliest creation time to include, as a Unix timestamp. `0` disables the filter. Defaults to `0`.
- `create_time_to`: (*Filter parameter*), `integer`
The latest creation time to include, as a Unix timestamp. `0` disables the filter. Defaults to `0`. See the example below.
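For instance, the sketch below lists documents created during July 2025. The address, dataset ID, API key, and timestamps are illustrative placeholders, and `requests` is an assumed HTTP client, not part of this reference:
```python
import requests  # assumed HTTP client; any client works

resp = requests.get(
    "http://127.0.0.1:9380/api/v1/datasets/<dataset_id>/documents",
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    params={
        "page": 1,
        "page_size": 30,
        "create_time_from": 1751328000,  # 2025-07-01 00:00:00 UTC
        "create_time_to": 1754006400,    # 2025-08-01 00:00:00 UTC
    },
)
print(resp.json())
```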
#### Response

View File

@ -507,7 +507,16 @@ print(doc)
### List documents
```python
Dataset.list_documents(id:str =None, keywords: str=None, page: int=1, page_size:int = 30, order_by:str = "create_time", desc: bool = True) -> list[Document]
Dataset.list_documents(
id: str = None,
keywords: str = None,
page: int = 1,
page_size: int = 30,
order_by: str = "create_time",
desc: bool = True,
create_time_from: int = 0,
create_time_to: int = 0
) -> list[Document]
```
Lists documents in the current dataset.
@ -541,6 +550,12 @@ The field by which documents should be sorted. Available options:
Indicates whether the retrieved documents should be sorted in descending order. Defaults to `True`.
##### create_time_from: `int`
The earliest creation time to include, as a Unix timestamp. `0` disables the filter. Defaults to `0`.
##### create_time_to: `int`
The latest creation time to include, as a Unix timestamp. `0` disables the filter. Defaults to `0`. See the example below.
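For example, to list only documents created within the past week (a minimal sketch; the dataset name and API key are placeholders):
```python
import time
from ragflow_sdk import RAGFlow

rag = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://127.0.0.1:9380")
dataset = rag.list_datasets(name="my_dataset")[0]

now = int(time.time())
docs = dataset.list_documents(
    create_time_from=now - 7 * 24 * 3600,  # one week ago
    create_time_to=now,
)
for doc in docs:
    print(doc.name)
```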
#### Returns
- Success: A list of `Document` objects.

View File

@ -9,8 +9,8 @@ Key features, improvements and bug fixes in the latest releases.
:::info
Each RAGFlow release is available in two editions:
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.19.1-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.19.1`
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.20.1-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.20.1`
:::
:::danger IMPORTANT
@ -22,6 +22,65 @@ The embedding models included in a full edition are:
These two embedding models are optimized specifically for English and Chinese, so performance may be compromised if you use them to embed documents in other languages.
:::
## v0.20.1
Released on August 8, 2025.
### New Features
- The **Retrieval** component now supports the dynamic specification of knowledge base names using variables.
- The user interface now includes a French language option.
### Added Models
- GPT-5
- Claude 4.1
### New agent templates (both workflow and agentic)
- SQL Assistant Workflow: Empowers non-technical teams (e.g., operations, product) to independently query business data.
- Choose Your Knowledge Base Workflow: Lets users select a knowledge base to query during conversations. [#9325](https://github.com/infiniflow/ragflow/pull/9325)
- Choose Your Knowledge Base Agent: Delivers higher-quality responses with extended reasoning time, suited for complex queries. [#9325](https://github.com/infiniflow/ragflow/pull/9325)
### Fixed Issues
- The **Agent** component was unable to invoke models installed via vLLM.
- Agents could not be shared with the team.
- Embedding an Agent into a webpage was not functioning properly.
## v0.20.0
Released on August 4, 2025.
### Compatibility changes
From v0.20.0 onwards, Agents are no longer compatible with earlier versions, and all existing Agents from previous versions must be rebuilt following the upgrade.
### New features
- Unified orchestration of both Agents and Workflows.
- A comprehensive refactor of the Agent, greatly enhancing its capabilities and usability, with support for Multi-Agent configurations, planning and reflection, and visual functionalities.
- Fully implemented MCP functionality, allowing for MCP Server import, Agents functioning as MCP Clients, and RAGFlow itself operating as an MCP Server.
- Access to runtime logs for Agents.
- Chat histories with Agents available through the management panel.
- Integration of a new, more robust version of Infinity, enabling the auto-tagging functionality with Infinity as the underlying document engine.
- An OpenAI-compatible API that supports file reference information.
- Support for new models, including Kimi K2, Grok 4, and Voyage embedding.
- RAGFlow's codebase is now mirrored on Gitee.
- Introduction of a new model provider, Gitee AI.
### New agent templates introduced
- Multi-Agent based Deep Research: Collaborative Agent teamwork led by a Lead Agent with multiple Subagents, distinct from traditional workflow orchestration.
- An intelligent Q&A chatbot leveraging internal knowledge bases, designed for customer service and training scenarios.
- A resume analysis template used by the RAGFlow team to screen, analyze, and record candidate information.
- A blog generation workflow that transforms raw ideas into SEO-friendly blog content.
- An intelligent customer service workflow.
- A user feedback analysis template that directs user feedback to appropriate teams through semantic analysis.
- Trip Planner: Uses web search and map MCP servers to assist with travel planning.
- Image Lingo: Translates content from uploaded photos.
- An information search assistant that retrieves answers from both internal knowledge bases and the web.
## v0.19.1
Released on June 23, 2025.

View File

@ -47,7 +47,7 @@ class Extractor:
self._language = language
self._entity_types = entity_types or DEFAULT_ENTITY_TYPES
@timeout(60*3)
@timeout(60*5)
def _chat(self, system, history, gen_conf={}):
hist = deepcopy(history)
conf = deepcopy(gen_conf)

View File

@ -33,13 +33,13 @@ env:
REDIS_PASSWORD: infini_rag_flow_helm
# The RAGFlow Docker image to download.
# Defaults to the v0.20.0-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE: infiniflow/ragflow:v0.20.0-slim
# Defaults to the v0.20.1-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE: infiniflow/ragflow:v0.20.1-slim
#
# To download the RAGFlow Docker image with embedding models, uncomment the following line instead:
# RAGFLOW_IMAGE: infiniflow/ragflow:v0.20.0
# RAGFLOW_IMAGE: infiniflow/ragflow:v0.20.1
#
# The Docker image of the v0.20.0 edition includes:
# The Docker image of the v0.20.1 edition includes:
# - Built-in embedding models:
# - BAAI/bge-large-zh-v1.5
# - BAAI/bge-reranker-v2-m3

View File

@ -180,7 +180,7 @@ async def list_tools(*, connector) -> list[types.Tool]:
return [
types.Tool(
name="ragflow_retrieval",
description="Retrieve relevant chunks from the RAGFlow retrieve interface based on the question, using the specified dataset_ids and optionally document_ids. Below is the list of all available datasets, including their descriptions and IDs. If you're unsure which datasets are relevant to the question, simply pass all dataset IDs to the function."
description="Retrieve relevant chunks from the RAGFlow retrieve interface based on the question. You can optionally specify dataset_ids to search only specific datasets, or omit dataset_ids entirely to search across ALL available datasets. You can also optionally specify document_ids to search within specific documents. When dataset_ids is not provided or is empty, the system will automatically search across all available datasets. Below is the list of all available datasets, including their descriptions and IDs:"
+ dataset_description,
inputSchema={
"type": "object",
@ -188,14 +188,16 @@ async def list_tools(*, connector) -> list[types.Tool]:
"dataset_ids": {
"type": "array",
"items": {"type": "string"},
"description": "Optional array of dataset IDs to search. If not provided or empty, all datasets will be searched."
},
"document_ids": {
"type": "array",
"items": {"type": "string"},
"description": "Optional array of document IDs to search within."
},
"question": {"type": "string"},
"question": {"type": "string", "description": "The question or query to search for."},
},
"required": ["dataset_ids", "question"],
"required": ["question"],
},
),
]
@ -206,8 +208,26 @@ async def list_tools(*, connector) -> list[types.Tool]:
async def call_tool(name: str, arguments: dict, *, connector) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
if name == "ragflow_retrieval":
document_ids = arguments.get("document_ids", [])
dataset_ids = arguments.get("dataset_ids", [])
# If no dataset_ids provided or empty list, get all available dataset IDs
if not dataset_ids:
dataset_list_str = connector.list_datasets()
dataset_ids = []
# Parse the dataset list to extract IDs
if dataset_list_str:
for line in dataset_list_str.strip().split('\n'):
if line.strip():
try:
dataset_info = json.loads(line.strip())
dataset_ids.append(dataset_info["id"])
except (json.JSONDecodeError, KeyError):
# Skip malformed lines
continue
return connector.retrieval(
dataset_ids=arguments["dataset_ids"],
dataset_ids=dataset_ids,
document_ids=document_ids,
question=arguments["question"],
)
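With this change a client can omit `dataset_ids` entirely. A hypothetical call through the Python MCP SDK might look like this (assumes an established `mcp.ClientSession` named `session`; the question is illustrative):
```python
# Hypothetical client-side call; only "question" is required now.
result = await session.call_tool(
    "ragflow_retrieval",
    arguments={"question": "What is vm.max_map_count?"},  # no dataset_ids: all datasets are searched
)
```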

View File

@ -1,6 +1,6 @@
[project]
name = "ragflow"
version = "0.20.0"
version = "0.20.1"
description = "[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data."
authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }]
license-files = ["LICENSE"]
@ -24,7 +24,7 @@ dependencies = [
"chardet==5.2.0",
"cn2an==0.5.22",
"cohere==5.6.2",
"Crawl4AI==0.3.8",
"Crawl4AI>=0.3.8",
"dashscope==1.20.11",
"deepl==1.18.0",
"demjson3==3.0.6",
@ -43,7 +43,7 @@ dependencies = [
"groq==0.9.0",
"hanziconv==0.3.2",
"html-text==0.6.2",
"httpx==0.27.0",
"httpx==0.27.2",
"huggingface-hub>=0.25.0,<0.26.0",
"infinity-sdk==0.6.0-dev4",
"infinity-emb>=0.0.66,<0.0.67",
@ -58,7 +58,7 @@ dependencies = [
"ollama==0.2.1",
"onnxruntime==1.19.2; sys_platform == 'darwin' or platform_machine != 'x86_64'",
"onnxruntime-gpu==1.19.2; sys_platform != 'darwin' and platform_machine == 'x86_64'",
"openai==1.45.0",
"openai>=1.45.0",
"opencv-python==4.10.0.84",
"opencv-python-headless==4.10.0.84",
"openpyxl>=3.1.0,<4.0.0",
@ -128,6 +128,8 @@ dependencies = [
"opensearch-py==2.7.1",
"pluginlib==0.9.4",
"click>=8.1.8",
"python-calamine>=0.4.0",
"litellm>=1.74.15.post1",
]
[project.optional-dependencies]

View File

@ -226,17 +226,20 @@ class Docx(DocxParser):
for r in tb.rows:
html += "<tr>"
i = 0
while i < len(r.cells):
span = 1
c = r.cells[i]
for j in range(i + 1, len(r.cells)):
if c.text == r.cells[j].text:
span += 1
i = j
else:
break
i += 1
html += f"<td>{c.text}</td>" if span == 1 else f"<td colspan='{span}'>{c.text}</td>"
try:
while i < len(r.cells):
span = 1
c = r.cells[i]
for j in range(i + 1, len(r.cells)):
if c.text == r.cells[j].text:
span += 1
i = j
else:
break
i += 1
html += f"<td>{c.text}</td>" if span == 1 else f"<td colspan='{span}'>{c.text}</td>"
except Exception as e:
logging.warning(f"Error parsing table, ignore: {e}")
html += "</tr>"
html += "</table>"
tbls.append(((None, html), ""))

View File

@ -42,7 +42,7 @@ class Ppt(PptParser):
try:
with BytesIO() as buffered:
slide.get_thumbnail(
0.5, 0.5).save(
0.1, 0.1).save(
buffered, drawing.imaging.ImageFormat.jpeg)
buffered.seek(0)
imgs.append(Image.open(buffered).copy())
@ -135,7 +135,8 @@ def chunk(filename, binary=None, from_page=0, to_page=100000,
sections = pdf_parser(filename, binary, from_page=from_page, to_page=to_page, callback=callback)
elif layout_recognizer == "Plain Text":
pdf_parser = PlainParser()
sections, _ = pdf_parser(filename, binary, from_page=from_page, to_page=to_page, callback=callback)
sections, _ = pdf_parser(filename if not binary else binary, from_page=from_page, to_page=to_page,
callback=callback)
else:
vision_model = LLMBundle(kwargs["tenant_id"], LLMType.IMAGE2TEXT, llm_name=layout_recognizer, lang=lang)
pdf_parser = VisionParser(vision_model=vision_model, **kwargs)

View File

@ -40,7 +40,6 @@ class Excel(ExcelParser):
total = 0
for sheetname in wb.sheetnames:
total += len(list(wb[sheetname].rows))
res, fails, done = [], [], 0
rn = 0
for sheetname in wb.sheetnames:
@ -48,31 +47,204 @@ class Excel(ExcelParser):
rows = list(ws.rows)
if not rows:
continue
headers = [cell.value for cell in rows[0]]
missed = set([i for i, h in enumerate(headers) if h is None])
headers = [cell.value for i, cell in enumerate(rows[0]) if i not in missed]
headers, header_rows = self._parse_headers(ws, rows)
if not headers:
continue
data = []
for i, r in enumerate(rows[1:]):
for i, r in enumerate(rows[header_rows:]):
rn += 1
if rn - 1 < from_page:
continue
if rn - 1 >= to_page:
break
row = [cell.value for ii, cell in enumerate(r) if ii not in missed]
if len(row) != len(headers):
row_data = self._extract_row_data(ws, r, header_rows + i, len(headers))
if row_data is None:
fails.append(str(i))
continue
data.append(row)
if self._is_empty_row(row_data):
continue
data.append(row_data)
done += 1
if np.array(data).size == 0:
if len(data) == 0:
continue
res.append(pd.DataFrame(np.array(data), columns=headers))
df = pd.DataFrame(data, columns=headers)
res.append(df)
callback(0.3, ("Extract records: {}~{}".format(from_page + 1, min(to_page, from_page + rn)) + (f"{len(fails)} failure, line: %s..." % (",".join(fails[:3])) if fails else "")))
return res
def _parse_headers(self, ws, rows):
if len(rows) == 0:
return [], 0
has_complex_structure = self._has_complex_header_structure(ws, rows)
if has_complex_structure:
return self._parse_multi_level_headers(ws, rows)
else:
return self._parse_simple_headers(rows)
def _has_complex_header_structure(self, ws, rows):
if len(rows) < 1:
return False
merged_ranges = list(ws.merged_cells.ranges)
# Check whether any merged cells touch the first two rows
for rng in merged_ranges:
if rng.min_row <= 2: # any merged range covering row 1 or 2 counts
return True
return False
def _row_looks_like_header(self, row):
header_like_cells = 0
data_like_cells = 0
non_empty_cells = 0
for cell in row:
if cell.value is not None:
non_empty_cells += 1
val = str(cell.value).strip()
if self._looks_like_header(val):
header_like_cells += 1
elif self._looks_like_data(val):
data_like_cells += 1
if non_empty_cells == 0:
return False
return header_like_cells >= data_like_cells
def _parse_simple_headers(self, rows):
if not rows:
return [], 0
header_row = rows[0]
final_headers = []
for i, cell in enumerate(header_row):
# Use the cell text when present; otherwise fall back to a positional name.
header_value = str(cell.value).strip() if cell.value is not None else ""
final_headers.append(header_value if header_value else f"Column_{i + 1}")
return final_headers, 1
def _parse_multi_level_headers(self, ws, rows):
if len(rows) < 2:
return [], 0
header_rows = self._detect_header_rows(rows)
if header_rows == 1:
return self._parse_simple_headers(rows)
else:
return self._build_hierarchical_headers(ws, rows, header_rows), header_rows
def _detect_header_rows(self, rows):
if len(rows) < 2:
return 1
header_rows = 1
max_check_rows = min(5, len(rows))
for i in range(1, max_check_rows):
row = rows[i]
if self._row_looks_like_header(row):
header_rows = i + 1
else:
break
return header_rows
def _looks_like_header(self, value):
if len(value) < 1:
return False
if any(ord(c) > 127 for c in value):
return True
if len([c for c in value if c.isalpha()]) >= 2:
return True
if any(c in value for c in ["(", ")", "（", ":", "）", "：", "_", "-"]): # half- and full-width punctuation common in headers
return True
return False
def _looks_like_data(self, value):
if len(value) == 1 and value.upper() in ["Y", "N", "M", "X", "/", "-"]:
return True
if value.replace(".", "").replace("-", "").replace(",", "").isdigit():
return True
if value.startswith("0x") and len(value) <= 10:
return True
return False
def _build_hierarchical_headers(self, ws, rows, header_rows):
headers = []
max_col = max(len(row) for row in rows[:header_rows]) if header_rows > 0 else 0
merged_ranges = list(ws.merged_cells.ranges)
for col_idx in range(max_col):
header_parts = []
for row_idx in range(header_rows):
if col_idx < len(rows[row_idx]):
cell_value = rows[row_idx][col_idx].value
merged_value = self._get_merged_cell_value(ws, row_idx + 1, col_idx + 1, merged_ranges)
if merged_value is not None:
cell_value = merged_value
if cell_value is not None:
cell_value = str(cell_value).strip()
if cell_value and cell_value not in header_parts and self._is_valid_header_part(cell_value):
header_parts.append(cell_value)
if header_parts:
header = "-".join(header_parts)
headers.append(header)
else:
headers.append(f"Column_{col_idx + 1}")
final_headers = [h for h in headers if h and h != "-"]
return final_headers
def _is_valid_header_part(self, value):
if len(value) == 1 and value.upper() in ["Y", "N", "M", "X"]:
return False
if value.replace(".", "").replace("-", "").replace(",", "").isdigit():
return False
if value in ["/", "-", "+", "*", "="]:
return False
return True
def _get_merged_cell_value(self, ws, row, col, merged_ranges):
for merged_range in merged_ranges:
if merged_range.min_row <= row <= merged_range.max_row and merged_range.min_col <= col <= merged_range.max_col:
return ws.cell(merged_range.min_row, merged_range.min_col).value
return None
def _extract_row_data(self, ws, row, absolute_row_idx, expected_cols):
row_data = []
merged_ranges = list(ws.merged_cells.ranges)
actual_row_num = absolute_row_idx + 1
for col_idx in range(expected_cols):
cell_value = None
actual_col_num = col_idx + 1
try:
cell_value = ws.cell(row=actual_row_num, column=actual_col_num).value
except ValueError:
if col_idx < len(row):
cell_value = row[col_idx].value
if cell_value is None:
merged_value = self._get_merged_cell_value(ws, actual_row_num, actual_col_num, merged_ranges)
if merged_value is not None:
cell_value = merged_value
else:
cell_value = self._get_inherited_value(ws, actual_row_num, actual_col_num, merged_ranges)
row_data.append(cell_value)
return row_data
def _get_inherited_value(self, ws, row, col, merged_ranges):
for merged_range in merged_ranges:
if merged_range.min_row <= row <= merged_range.max_row and merged_range.min_col <= col <= merged_range.max_col:
return ws.cell(merged_range.min_row, merged_range.min_col).value
return None
def _is_empty_row(self, row_data):
for val in row_data:
if val is not None and str(val).strip() != "":
return False
return True
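To illustrate the hierarchical-header path on a hypothetical sheet (not a test from this change), the two-row merged header below should flatten to `Product`, `Specs-Color`, `Specs-Size`:
```python
from openpyxl import Workbook

# row 1: | Product (merged A1:A2) | Specs (merged B1:C1) |
# row 2: |                        | Color    | Size      |
wb = Workbook()
ws = wb.active
ws["A1"] = "Product"
ws.merge_cells("A1:A2")
ws["B1"] = "Specs"
ws.merge_cells("B1:C1")
ws["B2"] = "Color"
ws["C2"] = "Size"
# _parse_headers(ws, list(ws.rows)) detects the merged ranges in the first
# two rows, treats both rows as headers, and joins the levels with "-".
```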
def trans_datatime(s):
try:

View File

@ -19,6 +19,48 @@
import importlib
import inspect
from strenum import StrEnum
class SupportedLiteLLMProvider(StrEnum):
Tongyi_Qianwen = "Tongyi-Qianwen"
Dashscope = "Dashscope"
Bedrock = "Bedrock"
Moonshot = "Moonshot"
xAI = "xAI"
DeepInfra = "DeepInfra"
Groq = "Groq"
Cohere = "Cohere"
Gemini = "Gemini"
DeepSeek = "DeepSeek"
Nvidia = "NVIDIA"
TogetherAI = "TogetherAI"
Anthropic = "Anthropic"
FACTORY_DEFAULT_BASE_URL = {
SupportedLiteLLMProvider.Tongyi_Qianwen: "https://dashscope.aliyuncs.com/compatible-mode/v1",
SupportedLiteLLMProvider.Dashscope: "https://dashscope.aliyuncs.com/compatible-mode/v1",
SupportedLiteLLMProvider.Moonshot: "https://api.moonshot.cn/v1",
}
LITELLM_PROVIDER_PREFIX = {
SupportedLiteLLMProvider.Tongyi_Qianwen: "dashscope/",
SupportedLiteLLMProvider.Dashscope: "dashscope/",
SupportedLiteLLMProvider.Bedrock: "bedrock/",
SupportedLiteLLMProvider.Moonshot: "moonshot/",
SupportedLiteLLMProvider.xAI: "xai/",
SupportedLiteLLMProvider.DeepInfra: "deepinfra/",
SupportedLiteLLMProvider.Groq: "groq/",
SupportedLiteLLMProvider.Cohere: "", # don't need a prefix
SupportedLiteLLMProvider.Gemini: "gemini/",
SupportedLiteLLMProvider.DeepSeek: "deepseek/",
SupportedLiteLLMProvider.Nvidia: "nvidia_nim/",
SupportedLiteLLMProvider.TogetherAI: "together_ai/",
SupportedLiteLLMProvider.Anthropic: "", # don't need a prefix
}
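As a sketch of how these tables might be consumed (the helper below is illustrative and not part of the module):
```python
def to_litellm_model(factory: str, model_name: str) -> str:
    """Prepend the LiteLLM routing prefix for a provider, when one is defined."""
    prefix = LITELLM_PROVIDER_PREFIX.get(factory, "")
    return f"{prefix}{model_name}"

# to_litellm_model(SupportedLiteLLMProvider.Moonshot, "moonshot-v1-8k")
# -> "moonshot/moonshot-v1-8k"; Cohere and Anthropic pass through unprefixed.
```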
ChatModel = globals().get("ChatModel", {})
CvModel = globals().get("CvModel", {})
EmbeddingModel = globals().get("EmbeddingModel", {})
@ -26,6 +68,7 @@ RerankModel = globals().get("RerankModel", {})
Seq2txtModel = globals().get("Seq2txtModel", {})
TTSModel = globals().get("TTSModel", {})
MODULE_MAPPING = {
"chat_model": ChatModel,
"cv_model": CvModel,
@ -42,20 +85,30 @@ for module_name, mapping_dict in MODULE_MAPPING.items():
module = importlib.import_module(full_module_name)
base_class = None
lite_llm_base_class = None
for name, obj in inspect.getmembers(module):
if inspect.isclass(obj) and name == "Base":
base_class = obj
break
if base_class is None:
continue
if inspect.isclass(obj):
if name == "Base":
base_class = obj
elif name == "LiteLLMBase":
lite_llm_base_class = obj
assert hasattr(obj, "_FACTORY_NAME"), "LiteLLMbase should have _FACTORY_NAME field."
if hasattr(obj, "_FACTORY_NAME"):
if isinstance(obj._FACTORY_NAME, list):
for factory_name in obj._FACTORY_NAME:
mapping_dict[factory_name] = obj
else:
mapping_dict[obj._FACTORY_NAME] = obj
if base_class is not None:
for _, obj in inspect.getmembers(module):
if inspect.isclass(obj) and issubclass(obj, base_class) and obj is not base_class and hasattr(obj, "_FACTORY_NAME"):
if isinstance(obj._FACTORY_NAME, list):
for factory_name in obj._FACTORY_NAME:
mapping_dict[factory_name] = obj
else:
mapping_dict[obj._FACTORY_NAME] = obj
for _, obj in inspect.getmembers(module):
if inspect.isclass(obj) and issubclass(obj, base_class) and obj is not base_class and hasattr(obj, "_FACTORY_NAME"):
if isinstance(obj._FACTORY_NAME, list):
for factory_name in obj._FACTORY_NAME:
mapping_dict[factory_name] = obj
else:
mapping_dict[obj._FACTORY_NAME] = obj
__all__ = [
"ChatModel",

File diff suppressed because it is too large.

View File

@ -59,6 +59,10 @@ class Base(ABC):
def _image_prompt(self, text, images):
if not images:
return text
if isinstance(images, str) or "bytes" in type(images).__name__:
images = [images]
pmpt = [{"type": "text", "text": text}]
for img in images:
pmpt.append({
@ -518,6 +522,7 @@ class GeminiCV(Base):
def chat_streamly(self, system, history, gen_conf, images=[]):
from transformers import GenerationConfig
ans = ""
response = None
try:
response = self.model.generate_content(
self._form_history(system, history, images),
@ -533,8 +538,11 @@ class GeminiCV(Base):
except Exception as e:
yield ans + "\n**ERROR**: " + str(e)
yield response._chunks[-1].usage_metadata.total_token_count
if response and hasattr(response, "usage_metadata") and hasattr(response.usage_metadata, "total_token_count"):
yield response.usage_metadata.total_token_count
else:
yield 0
class NvidiaCV(Base):
_FACTORY_NAME = "NVIDIA"
@ -616,15 +624,18 @@ class NvidiaCV(Base):
return "**ERROR**: " + str(e), 0
def chat_streamly(self, system, history, gen_conf, images=[], **kwargs):
total_tokens = 0
try:
response = self._request(self._form_history(system, history, images), gen_conf)
cnt = response["choices"][0]["message"]["content"]
if "usage" in response and "total_tokens" in response["usage"]:
total_tokens += response["usage"]["total_tokens"]
for resp in cnt:
yield resp
except Exception as e:
yield "\n**ERROR**: " + str(e)
yield response["usage"]["total_tokens"]
yield total_tokens
class AnthropicCV(Base):
@ -795,4 +806,4 @@ class GoogleCV(AnthropicCV, GeminiCV):
yield ans
else:
for ans in GeminiCV.chat_streamly(self, system, history, gen_conf, images):
yield ans
yield ans

View File

@ -37,7 +37,12 @@ from rag.utils import num_tokens_from_string, truncate
class Base(ABC):
def __init__(self, key, model_name):
def __init__(self, key, model_name, **kwargs):
"""
Constructor for abstract base class.
Parameters are accepted for interface consistency but are not stored.
Subclasses should implement their own initialization as needed.
"""
pass
def encode(self, texts: list):
@ -864,7 +869,7 @@ class VoyageEmbed(Base):
class HuggingFaceEmbed(Base):
_FACTORY_NAME = "HuggingFace"
def __init__(self, key, model_name, base_url=None):
def __init__(self, key, model_name, base_url=None, **kwargs):
if not model_name:
raise ValueError("Model name cannot be None")
self.key = key
@ -946,4 +951,4 @@ class Ai302Embed(Base):
def __init__(self, key, model_name, base_url="https://api.302.ai/v1/embeddings"):
if not base_url:
base_url = "https://api.302.ai/v1/embeddings"
super().__init__(key, model_name, base_url)
super().__init__(key, model_name, base_url)

View File

@ -33,7 +33,11 @@ from api.utils.log_utils import log_exception
from rag.utils import num_tokens_from_string, truncate
class Base(ABC):
def __init__(self, key, model_name):
def __init__(self, key, model_name, **kwargs):
"""
Abstract base class constructor.
Parameters are not stored; initialization is left to subclasses.
"""
pass
def similarity(self, query: str, texts: list):
@ -264,7 +268,7 @@ class LocalAIRerank(Base):
max_rank = np.max(rank)
# Avoid division by zero if all ranks are identical
if max_rank - min_rank != 0:
if not np.isclose(min_rank, max_rank, atol=1e-3):
rank = (rank - min_rank) / (max_rank - min_rank)
else:
rank = np.zeros_like(rank)
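For reference, `np.isclose` with `atol=1e-3` treats rank vectors whose spread falls below one thousandth as flat, so the near-zero divisor is skipped (illustrative values):
```python
import numpy as np

rank = np.array([0.5001, 0.5002, 0.5003])
min_rank, max_rank = np.min(rank), np.max(rank)
print(np.isclose(min_rank, max_rank, atol=1e-3))  # True: spread is 2e-4
# The branch above therefore returns np.zeros_like(rank) instead of dividing.
```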
@ -315,7 +319,7 @@ class NvidiaRerank(Base):
class LmStudioRerank(Base):
_FACTORY_NAME = "LM-Studio"
def __init__(self, key, model_name, base_url):
def __init__(self, key, model_name, base_url, **kwargs):
pass
def similarity(self, query: str, texts: list):
@ -396,7 +400,7 @@ class CoHereRerank(Base):
class TogetherAIRerank(Base):
_FACTORY_NAME = "TogetherAI"
def __init__(self, key, model_name, base_url):
def __init__(self, key, model_name, base_url, **kwargs):
pass
def similarity(self, query: str, texts: list):

View File

@ -28,7 +28,11 @@ from rag.utils import num_tokens_from_string
class Base(ABC):
def __init__(self, key, model_name):
def __init__(self, key, model_name, **kwargs):
"""
Abstract base class constructor.
Parameters are not stored; initialization is left to subclasses.
"""
pass
def transcription(self, audio, **kwargs):

View File

@ -63,7 +63,11 @@ class ServeTTSRequest(BaseModel):
class Base(ABC):
def __init__(self, key, model_name, base_url):
def __init__(self, key, model_name, base_url, **kwargs):
"""
Abstract base class constructor.
Parameters are not stored; subclasses should handle their own initialization.
"""
pass
def tts(self, audio):

View File

@ -611,6 +611,10 @@ def naive_merge_with_images(texts, images, chunk_token_num=128, delimiter="\n。
if re.match(f"^{dels}$", sub_sec):
continue
add_chunk(sub_sec, image)
for img in images:
if isinstance(img, Image.Image):
img.close()
return cks, result_images
@ -634,6 +638,16 @@ def concat_img(img1, img2):
return img2
if not img1 and not img2:
return None
if img1 is img2:
return img1
if isinstance(img1, Image.Image) and isinstance(img2, Image.Image):
pixel_data1 = img1.tobytes()
pixel_data2 = img2.tobytes()
if pixel_data1 == pixel_data2:
return img1
width1, height1 = img1.size
width2, height2 = img2.size
@ -643,7 +657,6 @@ def concat_img(img1, img2):
new_image.paste(img1, (0, 0))
new_image.paste(img2, (0, height1))
return new_image

View File

@ -383,8 +383,6 @@ class Dealer:
vector_column = f"q_{dim}_vec"
zero_vector = [0.0] * dim
sim_np = np.array(sim)
if doc_ids:
similarity_threshold = 0
filtered_count = (sim_np >= similarity_threshold).sum()
ranks["total"] = int(filtered_count) # Convert from np.int64 to Python int otherwise JSON serializable error
for i in idx:

View File

@ -4,6 +4,9 @@ Task: {{ task }}
Context: {{ context }}
**Agent Prompt**
{{ agent_prompt }}
**Analysis Requirements:**
1. Is it just small talk? (If yes, no further planning or analysis is needed.)
2. What is the core objective of the task?

View File

@ -0,0 +1,53 @@
You are a metadata filtering condition generator. Analyze the user's question and available document metadata to output a JSON array of filter objects. Follow these rules:
1. **Metadata Structure**:
- Metadata is provided as JSON where keys are attribute names (e.g., "color"), and values are objects mapping attribute values to document IDs.
- Example:
{
"color": {"red": ["doc1"], "blue": ["doc2"]},
"listing_date": {"2025-07-11": ["doc1"], "2025-08-01": ["doc2"]}
}
2. **Output Requirements**:
- Always output a JSON array of filter objects
- Each object must have:
"key": (metadata attribute name),
"value": (string value to compare),
"op": (operator from allowed list)
3. **Operator Guide**:
- Use these operators only: ["contains", "not contains", "start with", "end with", "empty", "not empty", "=", "≠", ">", "<", "≥", "≤"]
- Date ranges: Break into two conditions (≥ start_date AND < next_month_start)
- Negations: Always use "≠" for exclusion terms ("not", "except", "exclude", "≠")
- Implicit logic: Derive unstated filters (e.g., "July" → [≥ YYYY-07-01, < YYYY-08-01])
4. **Processing Steps**:
a) Identify ALL filterable attributes in the query (both explicit and implicit)
b) For dates:
- Infer missing year from current date if needed
- Always format dates as "YYYY-MM-DD"
- Convert ranges: [≥ start, < end]
c) For values: Match EXACTLY to metadata's value keys
d) Skip conditions if:
- Attribute doesn't exist in metadata
- Value has no match in metadata
5. **Example**:
- User query: "上市日期七月份的有哪些商品不要蓝色的" (i.e., "Which products were listed in July? Exclude the blue ones.")
- Metadata: { "color": {...}, "listing_date": {...} }
- Output:
[
{"key": "listing_date", "value": "2025-07-01", "op": "≥"},
{"key": "listing_date", "value": "2025-08-01", "op": "<"},
{"key": "color", "value": "blue", "op": "≠"}
]
6. **Final Output**:
- ONLY output valid JSON array
- NO additional text/explanations
**Current Task**:
- Today's date: {{current_date}}
- Available metadata keys: {{metadata_keys}}
- User query: "{{user_question}}"

View File

@ -149,6 +149,7 @@ NEXT_STEP = load_prompt("next_step")
REFLECT = load_prompt("reflect")
SUMMARY4MEMORY = load_prompt("summary4memory")
RANK_MEMORY = load_prompt("rank_memory")
META_FILTER = load_prompt("meta_filter")
PROMPT_JINJA_ENV = jinja2.Environment(autoescape=False, trim_blocks=True, lstrip_blocks=True)
@ -335,13 +336,13 @@ def form_history(history, limit=-6):
return context
def analyze_task(chat_mdl, task_name, tools_description: list[dict]):
def analyze_task(chat_mdl, prompt, task_name, tools_description: list[dict]):
tools_desc = tool_schema(tools_description)
context = ""
template = PROMPT_JINJA_ENV.from_string(ANALYZE_TASK_USER)
kwd = chat_mdl.chat(ANALYZE_TASK_SYSTEM,[{"role": "user", "content": template.render(task=task_name, context=context, tools_desc=tools_desc)}], {})
context = template.render(task=task_name, context=context, agent_prompt=prompt, tools_desc=tools_desc)
kwd = chat_mdl.chat(ANALYZE_TASK_SYSTEM,[{"role": "user", "content": context}], {})
if isinstance(kwd, tuple):
kwd = kwd[0]
kwd = re.sub(r"^.*</think>", "", kwd, flags=re.DOTALL)
@ -413,3 +414,20 @@ def rank_memories(chat_mdl, goal:str, sub_goal:str, tool_call_summaries: list[st
ans = chat_mdl.chat(msg[0]["content"], msg[1:], stop="<|stop|>")
return re.sub(r"^.*</think>", "", ans, flags=re.DOTALL)
def gen_meta_filter(chat_mdl, meta_data:dict, query: str) -> list:
sys_prompt = PROMPT_JINJA_ENV.from_string(META_FILTER).render(
current_date=datetime.datetime.today().strftime('%Y-%m-%d'),
metadata_keys=json.dumps(meta_data),
user_question=query
)
user_prompt = "Generate filters:"
ans = chat_mdl.chat(sys_prompt, [{"role": "user", "content": user_prompt}])
ans = re.sub(r"(^.*</think>|```json\n|```\n*$)", "", ans, flags=re.DOTALL)
try:
ans = json_repair.loads(ans)
assert isinstance(ans, list), ans
return ans
except Exception:
logging.exception(f"Loading json failure: {ans}")
return []
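A minimal usage sketch (hypothetical metadata and chat-model handle; the expected output shape follows the prompt's own example):
```python
meta = {
    "color": {"red": ["doc1"], "blue": ["doc2"]},
    "listing_date": {"2025-07-11": ["doc1"], "2025-08-01": ["doc2"]},
}
# chat_mdl stands in for whatever chat-model bundle the caller already holds.
filters = gen_meta_filter(chat_mdl, meta, "Which products were listed in July? Exclude blue ones.")
# Expected shape:
# [{"key": "listing_date", "value": "2025-07-01", "op": "≥"},
#  {"key": "listing_date", "value": "2025-08-01", "op": "<"},
#  {"key": "color", "value": "blue", "op": "≠"}]
```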

View File

@ -231,7 +231,7 @@ async def get_storage_binary(bucket, name):
return await trio.to_thread.run_sync(lambda: STORAGE_IMPL.get(bucket, name))
@timeout(60*40, 1)
@timeout(60*80, 1)
async def build_chunks(task, progress_callback):
if task["size"] > DOC_MAXIMUM_SIZE:
set_progress(task["id"], prog=-1, msg="File size exceeds( <= %dMb )" %
@ -284,7 +284,7 @@ async def build_chunks(task, progress_callback):
try:
d = copy.deepcopy(document)
d.update(chunk)
d["id"] = xxhash.xxh64((chunk["content_with_weight"] + str(d["doc_id"])).encode("utf-8")).hexdigest()
d["id"] = xxhash.xxh64((chunk["content_with_weight"] + str(d["doc_id"])).encode("utf-8", "surrogatepass")).hexdigest()
d["create_time"] = str(datetime.now()).replace("T", " ")[:19]
d["create_timestamp_flt"] = datetime.now().timestamp()
if not d.get("image"):
@ -304,7 +304,11 @@ async def build_chunks(task, progress_callback):
converted_image = d["image"].convert("RGB")
d["image"].close() # Close original image
d["image"] = converted_image
d["image"].save(output_buffer, format='JPEG')
try:
d["image"].save(output_buffer, format='JPEG')
except OSError as e:
logging.warning(
"Saving image of chunk {}/{}/{} got exception, ignore: {}".format(task["location"], task["name"], d["id"], str(e)))
async with minio_limiter:
await trio.to_thread.run_sync(lambda: STORAGE_IMPL.put(task["kb_id"], d["id"], output_buffer.getvalue()))
@ -420,7 +424,6 @@ def init_kb(row, vector_size: int):
return settings.docStoreConn.createIdx(idxnm, row.get("kb_id", ""), vector_size)
@timeout(60*20)
async def embedding(docs, mdl, parser_config=None, callback=None):
if parser_config is None:
parser_config = {}
@ -441,10 +444,15 @@ async def embedding(docs, mdl, parser_config=None, callback=None):
tts = np.concatenate([vts for _ in range(len(tts))], axis=0)
tk_count += c
@timeout(60)
def batch_encode(txts):
nonlocal mdl
return mdl.encode([truncate(c, mdl.max_length-10) for c in txts])
cnts_ = np.array([])
for i in range(0, len(cnts), EMBEDDING_BATCH_SIZE):
async with embed_limiter:
vts, c = await trio.to_thread.run_sync(lambda: mdl.encode([truncate(c, mdl.max_length-10) for c in cnts[i: i + EMBEDDING_BATCH_SIZE]]))
vts, c = await trio.to_thread.run_sync(lambda: batch_encode(cnts[i: i + EMBEDDING_BATCH_SIZE]))
if len(cnts_) == 0:
cnts_ = vts
else:

View File

@ -23,7 +23,7 @@ SET GLOBAL max_allowed_packet={}
def get_opendal_config():
try:
opendal_config = get_base_config('opendal', {})
if opendal_config.get("scheme") == 'mysql':
if opendal_config.get("scheme", "mysql") == 'mysql':
mysql_config = get_base_config('mysql', {})
max_packet = mysql_config.get("max_allowed_packet", 134217728)
kwargs = {
@ -33,7 +33,7 @@ def get_opendal_config():
"user": mysql_config.get("user", "root"),
"password": mysql_config.get("password", ""),
"database": mysql_config.get("name", "test_open_dal"),
"table": opendal_config.get("config").get("oss_table", "opendal_storage"),
"table": opendal_config.get("config", {}).get("oss_table", "opendal_storage"),
"max_allowed_packet": str(max_packet)
}
kwargs["connection_string"] = f"mysql://{kwargs['user']}:{quote_plus(kwargs['password'])}@{kwargs['host']}:{kwargs['port']}/{kwargs['database']}?max_allowed_packet={max_packet}"

View File

@ -227,9 +227,20 @@ class RedisDB:
"""https://redis.io/docs/latest/commands/xreadgroup/"""
for _ in range(3):
try:
group_info = self.REDIS.xinfo_groups(queue_name)
if not any(gi["name"] == group_name for gi in group_info):
self.REDIS.xgroup_create(queue_name, group_name, id="0", mkstream=True)
try:
group_info = self.REDIS.xinfo_groups(queue_name)
if not any(gi["name"] == group_name for gi in group_info):
self.REDIS.xgroup_create(queue_name, group_name, id="0", mkstream=True)
except redis.exceptions.ResponseError as e:
if "no such key" in str(e).lower():
self.REDIS.xgroup_create(queue_name, group_name, id="0", mkstream=True)
elif "busygroup" in str(e).lower():
logging.warning("Group already exists, continue.")
pass
else:
raise
args = {
"groupname": group_name,
"consumername": consumer_name,
@ -338,8 +349,8 @@ class RedisDB:
logging.warning("RedisDB.delete " + str(key) + " got exception: " + str(e))
self.__open__()
return False
REDIS_CONN = RedisDB()

View File

@ -30,7 +30,8 @@ class RAGFlowS3:
self.s3_config = settings.S3
self.access_key = self.s3_config.get('access_key', None)
self.secret_key = self.s3_config.get('secret_key', None)
self.region = self.s3_config.get('region', None)
self.session_token = self.s3_config.get('session_token', None)
self.region_name = self.s3_config.get('region_name', None)
self.endpoint_url = self.s3_config.get('endpoint_url', None)
self.signature_version = self.s3_config.get('signature_version', None)
self.addressing_style = self.s3_config.get('addressing_style', None)
@ -73,31 +74,32 @@ class RAGFlowS3:
s3_params = {
'aws_access_key_id': self.access_key,
'aws_secret_access_key': self.secret_key,
'aws_session_token': self.session_token,
}
if self.region in self.s3_config:
s3_params['region_name'] = self.region
if 'endpoint_url' in self.s3_config:
if self.region_name:
s3_params['region_name'] = self.region_name
if self.endpoint_url:
s3_params['endpoint_url'] = self.endpoint_url
if 'signature_version' in self.s3_config:
config_kwargs['signature_version'] = self.signature_version
if 'addressing_style' in self.s3_config:
config_kwargs['addressing_style'] = self.addressing_style
if self.signature_version:
config_kwargs['signature_version'] = self.signature_version
if self.addressing_style:
# botocore expects addressing_style nested under the "s3" config key
config_kwargs['s3'] = {'addressing_style': self.addressing_style}
if config_kwargs:
s3_params['config'] = Config(**config_kwargs)
self.conn = boto3.client('s3', **s3_params)
self.conn = [boto3.client('s3', **s3_params)]
except Exception:
logging.exception(f"Fail to connect at region {self.region} or endpoint {self.endpoint_url}")
logging.exception(f"Fail to connect at region {self.region_name} or endpoint {self.endpoint_url}")
def __close__(self):
del self.conn
del self.conn[0]
self.conn = None
@use_default_bucket
def bucket_exists(self, bucket):
def bucket_exists(self, bucket, *args, **kwargs):
try:
logging.debug(f"head_bucket bucketname {bucket}")
self.conn.head_bucket(Bucket=bucket)
self.conn[0].head_bucket(Bucket=bucket)
exists = True
except ClientError:
logging.exception(f"head_bucket error {bucket}")
@ -109,10 +111,10 @@ class RAGFlowS3:
fnm = "txtxtxtxt1"
fnm, binary = f"{self.prefix_path}/{fnm}" if self.prefix_path else fnm, b"_t@@@1"
if not self.bucket_exists(bucket):
self.conn.create_bucket(Bucket=bucket)
self.conn[0].create_bucket(Bucket=bucket)
logging.debug(f"create bucket {bucket} ********")
r = self.conn.upload_fileobj(BytesIO(binary), bucket, fnm)
r = self.conn[0].upload_fileobj(BytesIO(binary), bucket, fnm)
return r
def get_properties(self, bucket, key):
@ -123,14 +125,14 @@ class RAGFlowS3:
@use_prefix_path
@use_default_bucket
def put(self, bucket, fnm, binary):
def put(self, bucket, fnm, binary, *args, **kwargs):
logging.debug(f"bucket name {bucket}; filename :{fnm}:")
for _ in range(1):
try:
if not self.bucket_exists(bucket):
self.conn.create_bucket(Bucket=bucket)
self.conn[0].create_bucket(Bucket=bucket)
logging.info(f"create bucket {bucket} ********")
r = self.conn.upload_fileobj(BytesIO(binary), bucket, fnm)
r = self.conn[0].upload_fileobj(BytesIO(binary), bucket, fnm)
return r
except Exception:
@ -140,18 +142,18 @@ class RAGFlowS3:
@use_prefix_path
@use_default_bucket
def rm(self, bucket, fnm):
def rm(self, bucket, fnm, *args, **kwargs):
try:
self.conn.delete_object(Bucket=bucket, Key=fnm)
self.conn[0].delete_object(Bucket=bucket, Key=fnm)
except Exception:
logging.exception(f"Fail rm {bucket}/{fnm}")
@use_prefix_path
@use_default_bucket
def get(self, bucket, fnm):
def get(self, bucket, fnm, *args, **kwargs):
for _ in range(1):
try:
r = self.conn.get_object(Bucket=bucket, Key=fnm)
r = self.conn[0].get_object(Bucket=bucket, Key=fnm)
object_data = r['Body'].read()
return object_data
except Exception:
@ -162,9 +164,9 @@ class RAGFlowS3:
@use_prefix_path
@use_default_bucket
def obj_exist(self, bucket, fnm):
def obj_exist(self, bucket, fnm, *args, **kwargs):
try:
if self.conn.head_object(Bucket=bucket, Key=fnm):
if self.conn[0].head_object(Bucket=bucket, Key=fnm):
return True
except ClientError as e:
if e.response['Error']['Code'] == '404':
@ -174,10 +176,10 @@ class RAGFlowS3:
@use_prefix_path
@use_default_bucket
def get_presigned_url(self, bucket, fnm, expires):
def get_presigned_url(self, bucket, fnm, expires, *args, **kwargs):
for _ in range(10):
try:
r = self.conn.generate_presigned_url('get_object',
r = self.conn[0].generate_presigned_url('get_object',
Params={'Bucket': bucket,
'Key': fnm},
ExpiresIn=expires)
@ -188,3 +190,17 @@ class RAGFlowS3:
self.__open__()
time.sleep(1)
return
@use_prefix_path
@use_default_bucket
def rm_bucket(self, bucket, *args, **kwargs):
for conn in self.conn:
try:
if not self.bucket_exists(bucket):
continue
# boto3 returns a dict; the object summaries live under "Contents"
for o in conn.list_objects_v2(Bucket=bucket).get("Contents", []):
conn.delete_object(Bucket=bucket, Key=o["Key"])
conn.delete_bucket(Bucket=bucket)
return
except Exception as e:
logging.error(f"Fail rm {bucket}: " + str(e))

View File

@ -1,6 +1,6 @@
[project]
name = "ragflow-sdk"
version = "0.20.0"
version = "0.20.1"
description = "Python client sdk of [RAGFlow](https://github.com/infiniflow/ragflow). RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding."
authors = [{ name = "Zhichang Yu", email = "yuzhichang@gmail.com" }]
license = { text = "Apache License, Version 2.0" }

View File

@ -24,7 +24,6 @@ class Chat(Base):
self.id = ""
self.name = "assistant"
self.avatar = "path/to/avatar"
self.dataset_ids = ["kb1"]
self.llm = Chat.LLM(rag, {})
self.prompt = Chat.Prompt(rag, {})
super().__init__(rag, res_dict)

View File

@ -63,8 +63,30 @@ class DataSet(Base):
return doc_list
raise Exception(res.get("message"))
def list_documents(self, id: str | None = None, name: str | None = None, keywords: str | None = None, page: int = 1, page_size: int = 30, orderby: str = "create_time", desc: bool = True):
res = self.get(f"/datasets/{self.id}/documents", params={"id": id, "name": name, "keywords": keywords, "page": page, "page_size": page_size, "orderby": orderby, "desc": desc})
def list_documents(
self,
id: str | None = None,
name: str | None = None,
keywords: str | None = None,
page: int = 1,
page_size: int = 30,
orderby: str = "create_time",
desc: bool = True,
create_time_from: int = 0,
create_time_to: int = 0,
):
params = {
"id": id,
"name": name,
"keywords": keywords,
"page": page,
"page_size": page_size,
"orderby": orderby,
"desc": desc,
"create_time_from": create_time_from,
"create_time_to": create_time_to,
}
res = self.get(f"/datasets/{self.id}/documents", params=params)
res = res.json()
documents = []
if res.get("code") == 0:

View File

@ -44,6 +44,7 @@ class Document(Base):
self.process_duration = 0.0
self.run = "0"
self.status = "1"
self.meta_fields = {}
for k in list(res_dict.keys()):
if k not in self.__dict__:
res_dict.pop(k)

sdk/python/uv.lock (generated)
View File

@ -342,7 +342,7 @@ wheels = [
[[package]]
name = "ragflow-sdk"
version = "0.20.0"
version = "0.20.1"
source = { virtual = "." }
dependencies = [
{ name = "beartype" },

View File

@ -245,4 +245,4 @@ class TestUpdatedChunk:
delete_documents(HttpApiAuth, dataset_id, {"ids": [document_id]})
res = update_chunk(HttpApiAuth, dataset_id, document_id, chunk_ids[0])
assert res["code"] == 102
assert res["message"] == f"Can't find this chunk {chunk_ids[0]}"
assert res["message"] == f"You don't own the document {document_id}."

View File

@ -163,9 +163,9 @@ class TestDatasetsList:
[
{"orderby": ""},
{"orderby": "unknown"},
({"orderby": "CREATE_TIME"}, lambda r: (is_sorted(r["data"], "create_time", True))),
({"orderby": "UPDATE_TIME"}, lambda r: (is_sorted(r["data"], "update_time", True))),
({"orderby": " create_time "}, lambda r: (is_sorted(r["data"], "update_time", True))),
{"orderby": "CREATE_TIME"},
{"orderby": "UPDATE_TIME"},
{"orderby": " create_time "},
],
ids=["empty", "unknown", "orderby_create_time_upper", "orderby_update_time_upper", "whitespace"],
)

Some files were not shown because too many files have changed in this diff.