Compare commits


216 Commits

Author SHA1 Message Date
43ea312144 Fix: search highlight. (#10616)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-16 18:45:43 +08:00
ce05696d95 Fix: Open the parser operator configuration, save it, and run the agent. An error will be reported. #10615 (#10619)
### What problem does this PR solve?

Fix: Opening the parser operator configuration, saving it, and then running
the agent reports an error. #10615

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-16 18:45:27 +08:00
0f62bfda21 Feat: add forgot password reset (update naming style), solve #8547 (#10606)
### What problem does this PR solve?

Feat: add forgot password reset (update naming style), solve #8547

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2025-10-16 17:48:20 +08:00
70ffe2b4e8 Feat: The bottom anchor of the agent node is only displayed when there is a downstream node #9869 (#10611)
### What problem does this PR solve?

Feat: The bottom anchor of the agent node is only displayed when there
is a downstream node #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-16 17:47:55 +08:00
e76db6e222 Fix: Bug fixes #9869 (#10600)
### What problem does this PR solve?

Fix: Bug fixes #9869
- Added the disabled attribute to control the modal confirmation button
state
- Conditionally rendered the catalog enhancement toggle component
- Replaced the selector component and removed unused imports
- Removed redundant catalog enhancement text in the Chinese language
pack

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-16 15:43:05 +08:00
7b664b5a84 Feat: Collapse the excess portion of the tool node and retrieval node #9869 (#10604)
### What problem does this PR solve?

Feat: Collapse the excess portion of the tool node and retrieval node
#9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-16 15:17:13 +08:00
8a41057236 Fix: Add RetryingPooledPostgresqlDatabase to handle max_retries param (#10524)
## What problem does this PR solve?

Fixes the PostgreSQL connection error that prevents RAGFlow from
starting:

`peewee.ProgrammingError: invalid dsn: invalid connection option "max_retries"`


## Problem Analysis

The `BaseDataBase` class in `api/db/db_models.py` adds `max_retries` and
`retry_delay` to the database configuration dict before passing it to
the database connection constructor.

- **MySQL**: Has `RetryingPooledMySQLDatabase` class that properly
extracts these custom parameters using `kwargs.pop()` before calling the
parent constructor
- **PostgreSQL**: Was using the base `PooledPostgresqlDatabase` class
which passes all parameters directly to `psycopg2.connect()`, which
doesn't recognize `max_retries` as a valid connection option

## Solution

Created `RetryingPooledPostgresqlDatabase` class that:
- Extracts `max_retries` and `retry_delay` parameters before
initialization
- Implements retry logic with exponential backoff for connection
failures
- Handles PostgreSQL-specific connection errors (connection refused,
server closed, etc.)
- Mirrors the existing `RetryingPooledMySQLDatabase` implementation

Updated the `PooledDatabase` enum to use the new retrying class for
PostgreSQL.
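A minimal sketch of what such a class can look like (the real implementation lives in `api/db/db_models.py`; the defaults and logging here are assumptions):

```python
import logging
import time

from playhouse.pool import PooledPostgresqlDatabase


class RetryingPooledPostgresqlDatabase(PooledPostgresqlDatabase):
    """Pool that strips the custom retry options before psycopg2 ever sees them."""

    def __init__(self, *args, **kwargs):
        # Pop the custom parameters so they are not forwarded to psycopg2.connect().
        self.max_retries = kwargs.pop("max_retries", 3)
        self.retry_delay = kwargs.pop("retry_delay", 1)
        super().__init__(*args, **kwargs)

    def connect(self, *args, **kwargs):
        # Retry transient failures (connection refused, server closed, ...) with
        # exponential backoff, mirroring the MySQL counterpart.
        for attempt in range(self.max_retries):
            try:
                return super().connect(*args, **kwargs)
            except Exception as exc:
                if attempt == self.max_retries - 1:
                    raise
                delay = self.retry_delay * (2 ** attempt)
                logging.warning("PostgreSQL connection failed (%s); retrying in %ss", exc, delay)
                time.sleep(delay)
```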

## Benefits

- Prevents invalid connection parameters from being passed to psycopg2
- Adds automatic retry logic for PostgreSQL connection failures
- Provides better error logging for PostgreSQL-specific issues
- Maintains consistency between MySQL and PostgreSQL database handling

## Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

## Testing

Tested with PostgreSQL database configuration and verified:
- Server starts without the "invalid dsn" error
- Database connections are established successfully
- Retry logic works correctly on connection failures

Co-authored-by: Andrea Bugeja <andrea.bugeja@gig.com>
2025-10-16 15:08:41 +08:00
447041d265 Feat: add forgot password reset, solve #8547 (#10586)
### What problem does this PR solve?

Feat: add forgot password reset, solve #8547

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2025-10-16 15:07:49 +08:00
f0375c4acd Update architecture image and ragflow-cli version (#10605)
### What problem does this PR solve?

1. Update architecture image
2. ragflow-cli doesn't indicate the version

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-16 14:30:55 +08:00
8af769de41 Fix: add toc_kwd field and update page_num_int type (#10596)
### What problem does this PR solve?

- Added new field 'toc_kwd' to infinity_mapping.json for table of
contents keyword support
- Changed page_num_int from integer to array type in task_executor.py to
handle multiple page numbers

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-16 12:47:24 +08:00
f808bc32ba Fix (dataset setting): Remove the introduction and use of TagItems in the configuration. #9869 (#10595)
### What problem does this PR solve?

Fix (dataset setting): Remove the introduction and use of TagItems in
the configuration. #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-16 12:46:54 +08:00
e8cb1d8fc4 Build(deps): Bump axios from 1.7.2 to 1.12.0 in /web (#10393)
Bumps [axios](https://github.com/axios/axios) from 1.7.2 to 1.12.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/axios/axios/releases">axios's
releases</a>.</em></p>
<blockquote>
<h2>Release v1.12.0</h2>
<h2>Release notes:</h2>
<h3>Bug Fixes</h3>
<ul>
<li>adding build artifacts (<a
href="9ec86de257">9ec86de</a>)</li>
<li>dont add dist on release (<a
href="a2edc3606a">a2edc36</a>)</li>
<li><strong>fetch-adapter:</strong> set correct Content-Type for Node
FormData (<a
href="https://redirect.github.com/axios/axios/issues/6998">#6998</a>)
(<a
href="a9f47afbf3">a9f47af</a>)</li>
<li><strong>node:</strong> enforce maxContentLength for data: URLs (<a
href="https://redirect.github.com/axios/axios/issues/7011">#7011</a>)
(<a
href="945435fc51">945435f</a>)</li>
<li>package exports (<a
href="https://redirect.github.com/axios/axios/issues/5627">#5627</a>)
(<a
href="aa78ac23fc">aa78ac2</a>)</li>
<li><strong>params:</strong> removing '[' and ']' from URL encode
exclude characters (<a
href="https://redirect.github.com/axios/axios/issues/3316">#3316</a>)
(<a
href="https://redirect.github.com/axios/axios/issues/5715">#5715</a>)
(<a
href="6d84189349">6d84189</a>)</li>
<li>release pr run (<a
href="fd7f404488">fd7f404</a>)</li>
<li><strong>types:</strong> change the type guard on isCancel (<a
href="https://redirect.github.com/axios/axios/issues/5595">#5595</a>)
(<a
href="0dbb7fd4f6">0dbb7fd</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li><strong>adapter:</strong> surface low‑level network error details;
attach original error via cause (<a
href="https://redirect.github.com/axios/axios/issues/6982">#6982</a>)
(<a
href="78b290c57c">78b290c</a>)</li>
<li><strong>fetch:</strong> add fetch, Request, Response env config
variables for the adapter; (<a
href="https://redirect.github.com/axios/axios/issues/7003">#7003</a>)
(<a
href="c959ff2901">c959ff2</a>)</li>
<li>support reviver on JSON.parse (<a
href="https://redirect.github.com/axios/axios/issues/5926">#5926</a>)
(<a
href="2a9763426e">2a97634</a>),
closes <a
href="https://redirect.github.com/axios/axios/issues/5924">#5924</a></li>
<li><strong>types:</strong> extend AxiosResponse interface to include
custom headers type (<a
href="https://redirect.github.com/axios/axios/issues/6782">#6782</a>)
(<a
href="7960d34ede">7960d34</a>)</li>
</ul>
<h3>Contributors to this release</h3>
<ul>
<li><a href="https://github.com/WillianAgostini" title="+132/-16760 ([#7002](https://github.com/axios/axios/issues/7002) [#5926](https://github.com/axios/axios/issues/5926) [#6782](https://github.com/axios/axios/issues/6782) )">Willian Agostini</a></li>
<li><a href="https://github.com/DigitalBrainJS" title="+4263/-293 ([#7006](https://github.com/axios/axios/issues/7006) [#7003](https://github.com/axios/axios/issues/7003) )">Dmitriy Mozgovoy</a></li>
<li><a href="https://github.com/mkhani01" title="+111/-15 ([#6982](https://github.com/axios/axios/issues/6982) )">khani</a></li>
<li><a href="https://github.com/AmeerAssadi" title="+123/-0 ([#7011](https://github.com/axios/axios/issues/7011) )">Ameer Assadi</a></li>
<li><a href="https://github.com/emiedonmokumo" title="+55/-35 ([#6998](https://github.com/axios/axios/issues/6998) )">Emiedonmokumo Dick-Boro</a></li>
<li><a href="https://github.com/opsysdebug" title="+8/-8 ([#6980](https://github.com/axios/axios/issues/6980) )">Zeroday BYTE</a></li>
<li><a href="https://github.com/jasonsaayman" title="+7/-7 ([#6985](https://github.com/axios/axios/issues/6985) )">Jason Saayman</a></li>
<li><a href="https://github.com/HealGaren" title="+5/-7 ([#5715](https://github.com/axios/axios/issues/5715) )">최예찬</a></li>
<li><a href="https://github.com/gligorkot" title="+3/-1 ([#5627](https://github.com/axios/axios/issues/5627) )">Gligor Kotushevski</a></li>
<li><a href="https://github.com/adimit" title="+2/-1 ([#5595](https://github.com/axios/axios/issues/5595) )">Aleksandar Dimitrov</a></li>
</ul>
<h2>Release v1.11.0</h2>
<h2>Release notes:</h2>
<h3>Bug Fixes</h3>
<ul>
<li>form-data npm pakcage (<a
href="https://redirect.github.com/axios/axios/issues/6970">#6970</a>)
(<a
href="e72c193722">e72c193</a>)</li>
<li>prevent RangeError when using large Buffers (<a
href="https://redirect.github.com/axios/axios/issues/6961">#6961</a>)
(<a
href="a2214ca1bc">a2214ca</a>)</li>
<li><strong>types:</strong> resolve type discrepancies between ESM and
CJS TypeScript declaration files (<a
href="https://redirect.github.com/axios/axios/issues/6956">#6956</a>)
(<a
href="8517aa16f8">8517aa1</a>)</li>
</ul>
<h3>Contributors to this release</h3>
<ul>
<li><a href="https://github.com/izzygld" title="+186/-93 ([#6970](https://github.com/axios/axios/issues/6970) )">izzy goldman</a></li>
<li><a href="https://github.com/manishsahanidev" title="+70/-0 ([#6961](https://github.com/axios/axios/issues/6961) )">Manish Sahani</a></li>
<li><a href="https://github.com/noritaka1166" title="+12/-10 ([#6938](https://github.com/axios/axios/issues/6938) [#6939](https://github.com/axios/axios/issues/6939) )">Noritaka Kobayashi</a></li>
<li><a href="https://github.com/jrnail23" title="+13/-2 ([#6956](https://github.com/axios/axios/issues/6956) )">James Nail</a></li>
<li><a href="https://github.com/Tejaswi1305" title="+1/-1 ([#6894](https://github.com/axios/axios/issues/6894) )">Tejaswi1305</a></li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0d8ad6e1de"><code>0d8ad6e</code></a>
chore(release): v1.12.0 (<a
href="https://redirect.github.com/axios/axios/issues/7013">#7013</a>)</li>
<li><a
href="fd7f404488"><code>fd7f404</code></a>
fix: release pr run</li>
<li><a
href="a2edc3606a"><code>a2edc36</code></a>
fix: dont add dist on release</li>
<li><a
href="9ec86de257"><code>9ec86de</code></a>
fix: adding build artifacts</li>
<li><a
href="945435fc51"><code>945435f</code></a>
fix(node): enforce maxContentLength for data: URLs (<a
href="https://redirect.github.com/axios/axios/issues/7011">#7011</a>)</li>
<li><a
href="28e5e3016d"><code>28e5e30</code></a>
chore(sponsor): update sponsor block (<a
href="https://redirect.github.com/axios/axios/issues/7005">#7005</a>)</li>
<li><a
href="d03f245a40"><code>d03f245</code></a>
chore(CI): fixed release info script to use npm registry instead of git
as fi...</li>
<li><a
href="a0bc911379"><code>a0bc911</code></a>
chore: removing dist files from src (<a
href="https://redirect.github.com/axios/axios/issues/7002">#7002</a>)</li>
<li><a
href="c959ff2901"><code>c959ff2</code></a>
feat(fetch): add fetch, Request, Response env config variables for the
adapte...</li>
<li><a
href="a9f47afbf3"><code>a9f47af</code></a>
fix(fetch-adapter): set correct Content-Type for Node FormData (<a
href="https://redirect.github.com/axios/axios/issues/6998">#6998</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/axios/axios/compare/v1.7.2...v1.12.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=axios&package-manager=npm_and_yarn&previous-version=1.7.2&new-version=1.12.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/infiniflow/ragflow/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-16 09:40:18 +08:00
4e86ee4ff9 Feat: Support Specifying OpenRouter Model Provider (#10550)
### What problem does this PR solve?
issue:
[#5787](https://github.com/infiniflow/ragflow/issues/5787)
change:
Support Specifying OpenRouter Model Provider

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-16 09:39:59 +08:00
c99034f717 Update admin client README and doc (#10594)
### What problem does this PR solve?

As title

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-16 09:39:10 +08:00
86b254d214 Improve file management (#10577)
### What problem does this PR solve?

Improve file management. #10287.

Passed tests:

1. Create folder `A` and `B`.
2. Upload a file inside `A`, called `file`.
3. Create a KB, called `K`.
4. Link `file` to `K`.
5. Parse `file` inside of `K`. (OK)
6. Move `file` from `A` to `B`.
7. Parse `file` inside of `K`. (OK)
8. Move `file` from `B` to `A`.
9. Parse `file` inside of `K`. (OK)
10. Move entire folder `A` into `B`. (B -> A -> file)
11. Parse `file` inside of `K`. (OK)
12. Delete folder `B`.
13. All clear. (There is no document inside of `K`)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-16 09:38:25 +08:00
1c38f4cefb Use relative path to import same module (#10587)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-15 21:04:17 +08:00
74c195cd36 Doc: Added Long context RAG guide (#10591)
### What problem does this PR solve?

### Type of change


- [x] Documentation Update
2025-10-15 21:00:19 +08:00
e48bec1cbf Don't rerank for infinity (#10579)
### What problem does this PR solve?

Reranking is not needed for Infinity, since Infinity normalizes each way's score
before fusion.
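For illustration only (this is not RAGFlow's actual code), the reasoning is that once each retrieval way's scores are scaled to a common range, a simple weighted fusion already yields a usable ranking, so a separate rerank pass adds little:

```python
def min_max_normalize(scores):
    """Scale scores into [0, 1]; a constant list maps to all zeros."""
    lo, hi = min(scores), max(scores)
    return [0.0 if hi == lo else (s - lo) / (hi - lo) for s in scores]


def fuse(text_scores, vector_scores, vector_weight=0.5):
    """Weighted fusion of two already-normalized score lists, per document."""
    text_n = min_max_normalize(text_scores)
    vec_n = min_max_normalize(vector_scores)
    return [(1 - vector_weight) * t + vector_weight * v for t, v in zip(text_n, vec_n)]


# e.g. fuse([12.0, 7.5, 3.1], [0.82, 0.40, 0.77]) -> fused per-document scores
```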

### Type of change

- [x] Refactoring
2025-10-15 20:15:49 +08:00
205a5eb9f5 Docs: Updated dataset configuration, KG building and RAPTOR building for v0.21.0 (#10584)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2025-10-15 16:39:26 +08:00
8844826208 Refactor admin client for message prompts (#10583)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-15 16:22:07 +08:00
8fe4281d81 Fix (i18n): Update the Chinese and English description of RAPTOR functionality #9869 (#10581)

### What problem does this PR solve?

Fix (i18n): Update the Chinese and English description of RAPTOR
functionality #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-15 15:42:46 +08:00
fb1bedbd3c fix(_handle_entity_relation_summary): correctly calculate the descriptions_list (#10534)
### What problem does this PR solve?

Since `description_list` was a tuple containing a single element (which
was the actual list of descriptions), `len(description_list)` was always
**1**.

The subsequent check:
`if len(description_list) <= 12:` always evaluated to `True` (since $1
\le 12$), even if the inner list contained more than 12 descriptions.
This prevented the necessary summarization logic from running for long
lists.
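A minimal reproduction of this class of bug (variable names are illustrative, not the exact ones in the GraphRAG code):

```python
def summarize(items):
    # Stand-in for the LLM summarization step described above.
    return f"summary of {len(items)} descriptions"


descriptions = [f"desc {i}" for i in range(20)]

# Buggy: a trailing comma builds a one-element tuple, so len(...) is always 1
# and the `<= 12` check always passes, skipping summarization for long lists.
description_list = descriptions,
assert len(description_list) == 1

# Fixed: keep the list itself, so the length check sees all 20 descriptions.
description_list = descriptions
if len(description_list) <= 12:
    result = " ".join(description_list)
else:
    result = summarize(description_list)

print(result)  # -> "summary of 20 descriptions"
```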

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-15 15:30:06 +08:00
6e55b9146c Doc: update released tag. (#10578)
### What problem does this PR solve?

Update to released version tag in pyproject.toml 

### Type of change

- [x] Documentation Update
2025-10-15 15:14:52 +08:00
071ea9c493 Fix: support auto width when print table (#10575)
### What problem does this PR solve?

Print table support auto width.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-15 14:57:44 +08:00
5037a28e4d Fix problem with Google Cloud models with reasoning (like gemini) - Additional fix to issue #10474 (#10502)
### What problem does this PR solve?

Issue #10474  -  Update to PR #10477 

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-15 14:54:20 +08:00
fdac4afd10 Fix admin: can't read config and empty line error (#10574)
### What problem does this PR solve?

As title.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-15 13:07:16 +08:00
769d701f56 Fix: Optimize metadata filters, add Ingestion pipeline options to agent templates page #9869 (#10572)
### What problem does this PR solve?

Fix: Optimize metadata filters, add Ingestion pipeline options to agent
templates page

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-15 12:31:05 +08:00
8b512cdadf Feat: Creating a data flow from a template page #9869 (#10573)
### What problem does this PR solve?

Feat: Creating a data flow from a template page #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-15 12:22:41 +08:00
3ae126836a Docs: Update version references to v0.21.0 in READMEs and docs (#10565)
### What problem does this PR solve?

- Update version tags in README files (including translations) from
v0.20.5 to v0.21.0
- Modify Docker image references and documentation to reflect new
version
- Update version badges and image descriptions
- Maintain consistency across all language variants of README files

### Type of change

- [x] Documentation Update
2025-10-15 11:46:24 +08:00
e8bfda6020 Fix: Click the reset button on the agent page shared externally, and the greeting in conversation mode should not be deleted. #10567 (#10571)
### What problem does this PR solve?

Fix: When clicking the reset button on an externally shared agent page, the
greeting in conversation mode should not be deleted. #10567

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-15 11:22:37 +08:00
34c54cd459 Fix: agent templates... (#10564)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-15 10:31:30 +08:00
3d873d98fb Update README (#10563)
### Type of change

- [x] Documentation Update
2025-10-15 09:58:07 +08:00
fbe25b5add Fix release notes (#10560)
### What problem does this PR solve?

Use infinity 0.6.0

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-15 09:27:52 +08:00
0c6c7c8fe7 Update release notes 2025-10-14 23:22:51 +08:00
e266f9a66f Doc: Added v0.21.0 release notes (#10559)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-10-14 23:16:19 +08:00
fde6e5ab39 Feat: Create Stock_research_report.json (#10555)

### What problem does this PR solve?

Create Stock_research_report.json

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-10-14 21:35:20 +08:00
67529825e2 Feat: Contribute ingestion pipeline templates (#10551)
### Type of change

- [x] Other (please describe): contribute agent templates
2025-10-14 21:29:42 +08:00
738a7d5c24 Fix: Adding Ingestion Pipeline Classification to Agents Template #9869 (#10556)
### What problem does this PR solve?

Fix: Adding Ingestion Pipeline Classification to Agents Template #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-14 21:06:49 +08:00
83ec915d51 Feat: auto release (#10557)
### What problem does this PR solve?

Add cli build to release.yml.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-14 21:06:27 +08:00
e535099f36 bump infinity to v0.6.0 (#10558)
### What problem does this PR solve?

bump infinity to v0.6.0

### Type of change

- [x] Other (please describe): Infinity
2025-10-14 20:52:11 +08:00
16b5feadb7 Fix: canvas list with team. (#10549)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-14 19:38:54 +08:00
960f47c4d4 Fix: When I click to interrupt the chat, the page reports an error #10553 (#10554)
### What problem does this PR solve?
Fix: When I click to interrupt the chat, the page reports an error
#10553

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-14 19:07:18 +08:00
51139de178 Fix: Switch the default theme from light mode to dark mode and improve some styles #9869 (#10552)
### What problem does this PR solve?

Fix: Switch the default theme from light mode to dark mode and improve
some styles #9869
- Update UI component styles such as input boxes, tables, and prompt boxes
- Optimize login page layout and style details
- Revise some of the wording, such as uniformly changing "data flow" to "pipeline"
- Adjust the parser to support the markdown type

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-14 19:06:50 +08:00
1f5167f1ca Feat: Adjust the style of note nodes #9869 (#10547)
### What problem does this PR solve?

Feat: Adjust the style of note nodes #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-14 17:15:26 +08:00
578ea34b3e Feat: build ragflow-cli (#10544)
### What problem does this PR solve?

Build admin client.

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-14 16:28:43 +08:00
5fb3d2f55c Fix: update parser id for change_parser. (#10545)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-14 15:49:05 +08:00
d99d1e3518 Feat: Merge splitter and hierarchicalMerger into one node #9869 (#10543)
### What problem does this PR solve?

Feat: Merge splitter and hierarchicalMerger into one node #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-14 14:55:47 +08:00
5b387b68ba The 'cmd' module is introduced to make the CLI easy to use. (#10542)

### What problem does this PR solve?

To make the CLI easy to use.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-14 14:53:00 +08:00
f92a45dcc4 Feat: let toc run asynchronously... (#10513)
### What problem does this PR solve?

#10436 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-14 14:14:52 +08:00
c4b8e4845c Docs: The full edition has only two built-in embedding models (#10540)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update
2025-10-14 14:13:37 +08:00
87659dcd3a Fix: unexpected Auth return code (#10539)
### What problem does this PR solve?

Fix unexpected Auth return code.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-14 14:13:10 +08:00
6fd9508017 Docs: Updated parse_documents (#10536)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-10-14 13:40:56 +08:00
113851a692 Add 'status' field when list services (#10538)
### What problem does this PR solve?

```
admin> list services;
command: list services;
Listing all services
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+---------+
| extra                                                                                     | host      | id | name          | port  | service_type   | status  |
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+---------+
| {}                                                                                        | 0.0.0.0   | 0  | ragflow_0     | 9380  | ragflow_server | Timeout |
| {'meta_type': 'mysql', 'password': 'infini_rag_flow', 'username': 'root'}                 | localhost | 1  | mysql         | 5455  | meta_data      | Alive   |
| {'password': 'infini_rag_flow', 'store_type': 'minio', 'user': 'rag_flow'}                | localhost | 2  | minio         | 9000  | file_store     | Alive   |
| {'password': 'infini_rag_flow', 'retrieval_type': 'elasticsearch', 'username': 'elastic'} | localhost | 3  | elasticsearch | 1200  | retrieval      | Alive   |
| {'db_name': 'default_db', 'retrieval_type': 'infinity'}                                   | localhost | 4  | infinity      | 23817 | retrieval      | Timeout |
| {'database': 1, 'mq_type': 'redis', 'password': 'infini_rag_flow'}                        | localhost | 5  | redis         | 6379  | message_queue  | Alive   |
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+---------+
admin> 
Use '\q' to quit
admin> 
```

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-14 13:40:32 +08:00
66c69d10fe Fix: Update the parsing editor to support dynamic field names and optimize UI styles #9869 (#10535)
### What problem does this PR solve?

Fix: Update the parsing editor to support dynamic field names and
optimize UI styles #9869
- Modify the default background color of UI components such as Input and Select to 'bg bg base'
- Remove the TagItems component reference from the naive configuration page

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-14 13:31:48 +08:00
781d49cd0e Feat: Display the configuration of data flow operators on the node #9869 (#10533)
### What problem does this PR solve?

Feat: Display the configuration of data flow operators on the node #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-14 13:30:54 +08:00
aaae938f54 Add kibana tool in the docker compose file(#10525) (#10526)
### What problem does this PR solve?

add kibana tool in the docker compose file(#10525)

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: virgilwong <hyhvirgil@gmail.com>
2025-10-14 09:38:47 +08:00
9e73f799b2 Feat: add Zhipu GLM-ASR model (#10529)
### What problem does this PR solve?

Add Zhipu GLM-ASR model

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-14 09:32:45 +08:00
21a62130c8 Fix: empty references in agent conversation (#10528)
### What problem does this PR solve?
issue:
#10495
change:
fix empty references in agent conversation

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-14 09:32:13 +08:00
68e47c81d4 Feat: Add parse_document with feedback (#10523)
### What problem does this PR solve?

Solved: Sync Parse Document API #5635
Feat: Add parse_document with feedback; users can view the status of
each document after parsing finishes.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
2025-10-14 09:31:19 +08:00
f11d8af936 Fix: wrong Knowledgebase tasks_finish_at (#10521)
### What problem does this PR solve?

Wrong Knowledgebase tasks_finish_at.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-14 09:30:46 +08:00
74ec734d69 Feat: add admin server to docker (#10522)
### What problem does this PR solve?

Add admin server to docker.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-13 19:05:54 +08:00
8c75803b70 Fix: XSS vulnerability in Ragflow's chat view (#10519)
### What problem does this PR solve?

Fix: XSS vulnerability in Ragflow's chat view

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-13 19:04:25 +08:00
ff4239c7cf Docs: Updated descriptions on metadata filtering (#10518)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-10-13 17:33:04 +08:00
cf5867b146 Feat: Merge title splitter and token splitter into chunker category #9869 (#10517)
### What problem does this PR solve?

Feat: Merge title splitter and token splitter into chunker category
#9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-13 15:46:14 +08:00
77481ab3ab Fix: Optimized the login page and fixed some known issues. #9869 (#10514)
### What problem does this PR solve?

Fix: Optimized the login page and fixed some known issues. #9869

- Added the FlipCard3D component to implement a 3D flip effect on the
login/registration forms.
- Adjusted the Spotlight component to support custom positioning and
color configurations.
- Updated the route to point to the new login page /login-next.
- Added a cancel interface to the auto-generate function.
- Fixed scroll bar issues in PDF preview.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-13 15:31:36 +08:00
9c53b3336a Fix: The Context Generator(Transformer) node can only be followed by a Tokenizer(Indexer) and a Context Generator(Transformer). #9869 (#10515)
### What problem does this PR solve?

Fix: The Context Generator node can only be followed by a Tokenizer and
a Context Generator. #9869
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-13 14:37:30 +08:00
24481f0332 Fix: Update lm studio models support, refer to #8116 (#10509)
### What problem does this PR solve?

Fix: Update lm studio models support, refer to #8116

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Documentation Update
2025-10-13 13:58:08 +08:00
4e6b84bb41 Feat: add trino support (#10512)
### What problem does this PR solve?
issue:
[#10296](https://github.com/infiniflow/ragflow/issues/10296)
change:
- ExeSQL: support connecting to Trino.
- Validation: password can be empty only when db_type === "trino";
  all other database types keep the existing requirement (non-empty).

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-13 13:57:40 +08:00
65c3f0406c Fix: maintain backward compatibility for KB tasks (#10508)
### What problem does this PR solve?

Maintain backward compatibility for KB tasks

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-13 11:53:48 +08:00
7fb8b30cc2 fix: decode before format to json (#10506)
### What problem does this PR solve?

Decode bytes before format to json.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-13 11:11:06 +08:00
acca3640f7 Feat: Modify the background color of the canvas #9869 (#10507)
### What problem does this PR solve?

Feat: Modify the background color of the canvas #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-13 11:10:54 +08:00
58836d84fe Fix: Mcp reset error, #10497 (#10498)
### What problem does this PR solve?

Fix #10497

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-13 09:34:44 +08:00
ad56137a59 Feat: ​​OpenSearch's support for newly embedding models​​ (#10494)
### What problem does this PR solve?

Fixes issue: https://github.com/infiniflow/ragflow/issues/10402

Newly distributed embedding models support vector dimensions of up to 4096,
while OpenSearch's current max dimension support is 1536.
As I tested, a 4096-dimension vector will be treated as a float type,
which is unacceptable in OpenSearch.

Besides, OpenSearch supports up to 16000 dimensions by default with the
Faiss vector engine. According to:
https://docs.opensearch.org/2.19/field-types/supported-field-types/knn-methods-engines/

I added support for up to 10240 dimensions for OpenSearch, which I think will
be sufficient in the future.

As I tested, it worked well on my own server (treated as knn_vector) by
using qwen3-embedding:8b as the embedding model:
<img width="1338" height="790" alt="image"
src="https://github.com/user-attachments/assets/a9b2d284-fcf6-4cea-859a-6aadccf36ace"
/>
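As a hedged sketch of what a high-dimensional `knn_vector` mapping can look like (index and field names are illustrative, not RAGFlow's actual mapping file; the 10240 cap comes from the description above):

```python
from opensearchpy import OpenSearch  # pip install opensearch-py

DIMS = 4096  # e.g. qwen3-embedding:8b output size; must stay <= the 10240 cap

index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            f"q_{DIMS}_vec": {
                "type": "knn_vector",
                "dimension": DIMS,
                "method": {"name": "hnsw", "engine": "faiss", "space_type": "l2"},
            }
        }
    },
}

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
client.indices.create(index="ragflow_demo_chunks", body=index_body)
```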


### Type of change

- [x] New Feature (non-breaking change which adds functionality)


By the way, I will still focus on the stuff about
Elasticsearch/Opensearch as search engines and vector databases.

Co-authored-by: 张雨豪 <zhangyh80@chinatelecom.cn>
2025-10-11 19:58:12 +08:00
2828e321bc Fix: remove lang for audio. (#10496)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 19:38:07 +08:00
932781ea4e Fix: incorrect agent template #10393 (#10491)
### What problem does this PR solve?

Fix: incorrect agent template #10493

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 19:37:42 +08:00
5200711441 Feat: add support for multi-column PDF parsing (#10475)
### What problem does this PR solve?

Add support for multi-columns PDF parsing. #9878, #9919.

Two-column sample:
<img width="1885" height="1020" alt="image"
src="https://github.com/user-attachments/assets/0270c028-2db8-4ca6-a4b7-cd5830882d28"
/>

Three-column sample: 
<img width="1881" height="992" alt="image"
src="https://github.com/user-attachments/assets/9ee88844-d5b1-4927-9e4e-3bd810d6e03a"
/>

Single-column sample:
<img width="1883" height="1042" alt="image"
src="https://github.com/user-attachments/assets/e93d3d18-43c3-4067-b5fa-e454ed0ab093"
/>



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-10-11 18:46:09 +08:00
c21cea2038 Fix: Added table of contents extraction functionality and optimized form item layout #9869 (#10492)
### What problem does this PR solve?

Fix: Added table of contents extraction functionality and optimized form
item layout #9869

- Added `EnableTocToggle` component to toggle table of contents
extraction on and off
- Added multiple parser configuration components (such as naive, book,
laws, etc.), displaying different parser components based on built-in
slicing methods

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 18:45:55 +08:00
6a0f448419 Feat: Modify the default style of the agent node anchor #9869 (#10489)
### What problem does this PR solve?

Feat: Modify the default style of the agent node anchor #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-11 18:45:38 +08:00
7d2f65671f Feat: debugging toc part. (#10486)
### What problem does this PR solve?

#10436

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-11 18:45:21 +08:00
a0d5f81098 Feat: include author, journal name, volume, issue, page, and DOI in PubMed search results (#10481)
### What problem does this PR solve?

issue:
[#6571](https://github.com/infiniflow/ragflow/issues/6571)
change:
include author, journal name, volume, issue, page, and DOI in PubMed
search results

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-11 16:00:16 +08:00
52f26f4643 feat(login): Refactor the login page and add dynamic background and highlight effects #9869 (#10482)
### What problem does this PR solve?

Refactor(login): Refactor the login page and add dynamic background and
highlight effects #9869

### Type of change
- [x] Refactoring
2025-10-11 15:24:42 +08:00
313e92dd9b Fix: Fixed the issue where the connection lines of placeholder nodes in the agent canvas could not be displayed #9869 (#10485)
### What problem does this PR solve?

Fix: Fixed the issue where the connection lines of placeholder nodes in
the agent canvas could not be displayed #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 15:24:27 +08:00
fee757eb41 Fix: Disable reasoning on Gemini 2.5 Flash by default (#10477)
### What problem does this PR solve?

Gemini 2.5 Flash models use reasoning by default, and there is currently no
way to disable this behaviour. This leads to very long response times
(> 1 min). The default behaviour should be that reasoning is disabled and
configurable.

issue #10474 
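As a sketch of the fix direction with the google-genai Python SDK (RAGFlow's Google Cloud integration may wire this differently), a zero thinking budget disables reasoning for Gemini 2.5 Flash:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder credential

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize the attached meeting notes in three bullet points.",
    config=types.GenerateContentConfig(
        # thinking_budget=0 turns reasoning off, avoiding the >1 min latencies.
        thinking_config=types.ThinkingConfig(thinking_budget=0),
    ),
)
print(response.text)
```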

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 10:22:51 +08:00
b5ddc7ca05 fix: return type annotation for get_urls() in download_deps (#10478)
### What problem does this PR solve?

Fixes the return type annotation for the `get_urls` function in
`download_deps.py`

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 09:49:09 +08:00
534fa60b2a Fix: Agent.reset() argument wrong #10463 & Unable to converse with agent through Python API. #10415 (#10472)
### What problem does this PR solve?
Fix: Agent.reset() argument wrong #10463 & Unable to converse with agent
through Python API. #10415

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-10 20:44:05 +08:00
390b2b8f26 Feat: Adjust the style of the delete button on the agent side #9869 (#10473)
### What problem does this PR solve?

Feat: Adjust the style of the delete button on the agent side #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 19:52:08 +08:00
0283e4098f Fix #10408 (#10471)
### What problem does this PR solve?

Google Cloud model does not work correctly with gemini-2.5 models
Close #10408

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-10-10 19:18:24 +08:00
2cdba3d1e6 Fix: Fixed the issue where swagger apidocs could not be opened properly(#9522) (#10461)
### What problem does this PR solve?

Fix: Fixed the issue where swagger apidocs could not be opened
properly(#9522)

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: virgilwong <hyhvirgil@gmail.com>
2025-10-10 18:36:20 +08:00
eb0b37d7ee typo: fix _entity_index_dilimiter_key to entity_index_delimiter_key (#10465)
### What problem does this PR solve?

typo: fix _entity_index_dilimiter_key to entity_index_delimiter_key

### Type of change

- [x] Refactoring
2025-10-10 18:36:00 +08:00
198e52e990 Feat: Added toc enhance field to chat and retrieval operator configuration #10436 (#10470)
### What problem does this PR solve?

Feat: Added toc enhance field to chat and retrieval operator
configuration #10436

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 18:35:43 +08:00
a50ccf77f9 Fix: Fixed some discovered bugs #9869 (#10466)
### What problem does this PR solve?

Fix: Bug fixes #9869
- Adjusted the breadcrumb display logic on the data flow results page
- Added the default display of "Local Upload" to the Source field in the
dataset overview table
- Replaced the original Mindmap Task field with the GraphRAG Task field
on the dataset settings page
- Optimized the build button status check criteria and adjusted the
progress information display logic
- Introduced a Tooltip in the parsing status cell component and removed
redundant Button components

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-10 17:16:34 +08:00
deaf15a08b Fix: typo (#10469)
### What problem does this PR solve?

Fix some typo in document.

### Type of change

- [x] Documentation Update
2025-10-10 17:08:20 +08:00
0d8791936e Feat: TOC retrieval (#10456)
### What problem does this PR solve?

#10436

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 17:07:55 +08:00
5d167cd772 feat: support qwq reasoning models with non-stream output (#10468)
### What problem does this PR solve?
issue:
[#6193](https://github.com/infiniflow/ragflow/issues/6193)
change:
support qwq reasoning models with non-stream output

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 16:38:04 +08:00
f35c5ed119 Fix: Fixed an issue where parser configurations could be added infinitely #9869 (#10464)
### What problem does this PR solve?

Fix: Fixed an issue where parser configurations could be added
infinitely #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-10 16:30:13 +08:00
fc46d6bb87 Fix: Fixed the issue where the operator added by clicking the plus sign in the data flow would overlap with the original section #9886 (#10458)
### What problem does this PR solve?

Fix: Fixed the issue where the operator added by clicking the plus sign
in the data flow would overlap with the original section #9886

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-10 13:10:18 +08:00
8252b1c5c0 bump infinity (#10422)
### What problem does this PR solve?

bump infinity to v0.6.0-dev7
Needs https://github.com/infiniflow/infinity/pull/3016

### Type of change
- [x] Other (please describe): Infinity
2025-10-10 12:41:45 +08:00
c802a6ffdd Feat: Add prompts for toc relevance check according to #10436 (#10457)
### What problem does this PR solve?

Feat: Add prompts for toc relevance check according to #10436

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 11:44:46 +08:00
9b06734ced Feat: add total in List dataset API (#10448)
### What problem does this PR solve?

Feat: add total in List dataset API,  solved #10360 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 11:20:55 +08:00
6ab4c1a6e9 Refactor: improve how NvidiaCV calculate res total token counts (#10455)
### What problem does this PR solve?
Improve how NvidiaCV calculates the response's total token counts.

### Type of change
- [x] Refactoring
2025-10-10 11:03:40 +08:00
f631073ac2 Fix OCR GPU provider mem limit handling (#10407)
### What problem does this PR solve?

- Running DeepDoc OCR on large PDFs inside the GPU docker-compose setup
would intermittently fail with
`[ONNXRuntimeError] ... p2o.Clip.6 ... Available memory of 0 is smaller
than requested bytes ...`
- Root cause: `load_model()` in `deepdoc/vision/ocr.py` treated
`device_id=None` as-is. `torch.cuda.device_count() > device_id` then raised
a TypeError, the helper returned False, and ONNXRuntime quietly fell back to
`CPUExecutionProvider` with the hard-coded 512 MB limit, which then
triggered the allocator failure.
- Environment where this reproduces: Windows 11, AMD 5900x, 64 GB RAM,
RTX 3090 (24 GB), docker-compose-gpu.yml from upstream, default DeepDoc +
GraphRAG parser settings, ingesting a heavy PDF such as 《内科学》(第10版).pdf
(~180 MB).

Fixes:

- Normalize `device_id` to 0 when it is None before calling any CUDA APIs,
so the GPU path is considered available.
- Allow configuring the CUDA provider's memory cap via
`OCR_GPU_MEM_LIMIT_MB` (default 2048 MB) and expose
`OCR_ARENA_EXTEND_STRATEGY`; the calculated byte limit is logged to confirm
the effective settings.

After the change, ragflow_server.log shows, for example,
`load_model ... uses GPU (device 0, gpu_mem_limit=21474836480,
arena_strategy=kNextPowerOfTwo)` and the same document finishes OCR
without allocator errors.
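A minimal sketch of the two fixes, assuming the environment variable names from the description (the actual logic lives in `deepdoc/vision/ocr.py`):

```python
import os


def cuda_provider_options(device_id=None):
    """Build ONNXRuntime CUDAExecutionProvider options, tolerating device_id=None."""
    # Normalize None to GPU 0 before any CUDA API sees it.
    device_id = 0 if device_id is None else int(device_id)

    # Memory cap configurable via OCR_GPU_MEM_LIMIT_MB (2048 MB default),
    # arena growth strategy via OCR_ARENA_EXTEND_STRATEGY.
    mem_limit_mb = int(os.environ.get("OCR_GPU_MEM_LIMIT_MB", "2048"))
    strategy = os.environ.get("OCR_ARENA_EXTEND_STRATEGY", "kNextPowerOfTwo")

    return device_id, {
        "device_id": device_id,
        "gpu_mem_limit": mem_limit_mb * 1024 * 1024,  # bytes, logged for confirmation
        "arena_extend_strategy": strategy,
    }


# Usage sketch with onnxruntime:
#   device_id, opts = cuda_provider_options(device_id)
#   sess = onnxruntime.InferenceSession(
#       model_path, providers=[("CUDAExecutionProvider", opts)])
```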

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-10 11:03:12 +08:00
8aabc2807c Feat: Pipeline Docx file supports Markdown output (#10439)
### What problem does this PR solve?

Pipeline Docx file supports Markdown output.

<img width="1242" height="755" alt="image"
src="https://github.com/user-attachments/assets/63cca75b-20b9-4a90-a01c-c0c2fccf1f2a"
/>

<img width="1227" height="717" alt="image"
src="https://github.com/user-attachments/assets/0dcb94b2-7ba0-48d5-9231-dc6e5c4b4192"
/>


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-10 09:39:15 +08:00
d931c33ced Fix typos: retrievaler -> retriever (#10372)
### What problem does this PR solve?

Fix typos

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-10 09:17:36 +08:00
f4324e89d9 Feat: Importing data flow files from the list page #9869 (#10446)
### What problem does this PR solve?

Feat: Importing data flow files from the list page #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-09 19:03:29 +08:00
f04c9e2937 Fix: correctly update parser method & correct vllm pdf parser (#10441)
### What problem does this PR solve?

Fix: correctly update parser method

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-09 19:03:12 +08:00
1fc2889f98 Feat: Replace the collapse icon #9869 (#10442)
### What problem does this PR solve?

Feat: Replace the collapse icon #9869

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-09 17:21:38 +08:00
ee0c38da66 Fix: update searxng_url logic (#10440)
### What problem does this PR solve?
issue:
[#10417](https://github.com/infiniflow/ragflow/issues/10417)
change:
Adjusted the `searxng_url` priority logic to ensure the
frontend-provided URL takes precedence over the model’s default
configuration. This allows user-specified SearXNG endpoints to be
correctly applied during execution, improving flexibility across
different environments.
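A sketch of the priority rule being described (function and parameter names are illustrative, not RAGFlow's actual code):

```python
def resolve_searxng_url(frontend_url: str | None, configured_url: str | None) -> str:
    """The URL supplied by the frontend wins; fall back to the configured default."""
    for candidate in (frontend_url, configured_url):
        if candidate and candidate.strip():
            return candidate.strip()
    raise ValueError("No SearXNG endpoint provided")


# resolve_searxng_url("http://my-searxng:8080", None) -> "http://my-searxng:8080"
```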

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-09 16:56:23 +08:00
c1806e1ab2 Fix: Bug fixes and removed previous settings page code #9869 (#10435)
### What problem does this PR solve?

Fix: Bug fixes and removed previous settings page code
- Modified the default text color class name in the FileStatusBadge
component
- Adjusted the FilterType interface definition for ListFilterBar to
support JSX labels and an optional count field
- Removed redundant changeRaptor callback functions and comment blocks
in RaptorFormFields
- Corrected the isHorizontal check logic in SliderInputFormField
- Optimized the Command component's scrolling behavior and enhanced its
event handling
- Adjusted the TooltipContent style to support automatic scrollbar
display
- Removed the old settings page

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: jinhai <haijin.chn@gmail.com>
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: balibabu <cike8899@users.noreply.github.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: Lynn <lynn_inf@hotmail.com>
Co-authored-by: 纷繁下的无奈 <zhileihuang@126.com>
Co-authored-by: huangzl <huangzl@shinemo.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Wilmer <33392318@qq.com>
Co-authored-by: Adrian Weidig <adrianweidig@gmx.net>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Yongteng Lei <yongtengrey@outlook.com>
Co-authored-by: Liu An <asiro@qq.com>
Co-authored-by: buua436 <66937541+buua436@users.noreply.github.com>
Co-authored-by: BadwomanCraZY <511528396@qq.com>
Co-authored-by: cucusenok <31804608+cucusenok@users.noreply.github.com>
Co-authored-by: Russell Valentine <russ@coldstonelabs.org>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Billy Bao <newyorkupperbay@gmail.com>
Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: TensorNull <129579691+TensorNull@users.noreply.github.com>
Co-authored-by: TensorNull <tensor.null@gmail.com>
Co-authored-by: TeslaZY <TeslaZY@outlook.com>
Co-authored-by: Ajay <160579663+aybanda@users.noreply.github.com>
Co-authored-by: AB <aj@Ajays-MacBook-Air.local>
Co-authored-by: 天海蒼灆 <huangaoqin@tecpie.com>
Co-authored-by: He Wang <wanghechn@qq.com>
Co-authored-by: Atsushi Hatakeyama <atu729@icloud.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Mohamed Mathari <155896313+melmathari@users.noreply.github.com>
Co-authored-by: Mohamed Mathari <nocodeventure@Mac-mini-van-Mohamed.fritz.box>
Co-authored-by: Stephen Hu <stephenhu@seismic.com>
Co-authored-by: Shaun Zhang <zhangwfjh@users.noreply.github.com>
Co-authored-by: zhimeng123 <60221886+zhimeng123@users.noreply.github.com>
Co-authored-by: mxc <mxc@example.com>
Co-authored-by: Dominik Novotný <50611433+SgtMarmite@users.noreply.github.com>
Co-authored-by: EVGENY M <168018528+rjohny55@users.noreply.github.com>
Co-authored-by: mcoder6425 <mcoder64@gmail.com>
Co-authored-by: lemsn <lemsn@msn.com>
Co-authored-by: lemsn <lemsn@126.com>
Co-authored-by: Adrian Gora <47756404+adagora@users.noreply.github.com>
Co-authored-by: Womsxd <45663319+Womsxd@users.noreply.github.com>
Co-authored-by: FatMii <39074672+FatMii@users.noreply.github.com>
2025-10-09 16:55:27 +08:00
66d0d44a00 Feat: HTTP component supports variables (#10432)
### What problem does this PR solve?

HTTP component supports variables. #10382
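
A rough sketch of the kind of variable substitution this enables; the placeholder syntax and names below are assumptions, not the component's exact implementation:

```python
import json


def render_template(template: str, variables: dict[str, str]) -> str:
    """Replace {name} placeholders with their values; unknown names are left untouched.

    Illustrative only; the actual HTTP component may use a different placeholder syntax.
    """
    for name, value in variables.items():
        template = template.replace("{" + name + "}", str(value))
    return template


variables = {"city": "Berlin", "units": "metric"}
url = render_template("https://api.example.com/weather?q={city}&units={units}", variables)
body = render_template(json.dumps({"query": "{city}"}), variables)
print(url)
print(body)
```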




![http1](https://github.com/user-attachments/assets/196a2a5b-461c-455c-8896-ec2efe7c0a13)


![http2](https://github.com/user-attachments/assets/0ab97cb0-323c-456e-b556-6f416d52e59f)


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-09 16:05:58 +08:00
2078d88c28 Feat: Modify the translation file of the workflow #9869 (#10430)
### What problem does this PR solve?
Feat: Modify the translation file of the workflow #9869


### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-09 13:55:36 +08:00
7734ad7fcd Doc: guide for admin service (#10431)
### What problem does this PR solve?

Add guide document for admin cli and service.

### Type of change

- [x] Documentation Update
2025-10-09 13:48:02 +08:00
1a47e136e3 Feat: Adds a new feature that enables the LLM to extract a structured table of contents (TOC) directly from plain text. (#10428)
### What problem does this PR solve?

**Adds a new feature that enables the LLM to extract a structured table
of contents (TOC) directly from plain text.**
_This implementation prioritizes efficiency over reasoning — the model
runs in a strictly deterministic mode (thinking disabled) to minimize
latency.
As a result, overall performance may be less optimal, but the extraction
speed and consistency are guaranteed._
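
A hedged sketch of the idea, using a generic OpenAI-compatible client with deterministic settings; the endpoint, model name, and any provider-specific flag for disabling thinking are assumptions, not the actual implementation:

```python
import json
from openai import OpenAI  # assumes an OpenAI-compatible endpoint

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-...")  # hypothetical endpoint


def extract_toc(text: str) -> list[dict]:
    prompt = (
        "Extract a table of contents from the text below. "
        'Return only JSON: a list of {"title": str, "level": int} items.\n\n' + text
    )
    # Deterministic settings; a provider-specific "disable thinking" flag, where one
    # exists, would be passed as an extra parameter and is not shown here.
    resp = client.chat.completions.create(
        model="qwen2.5-7b-instruct",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)
```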

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-09 13:47:31 +08:00
cbf04ee470 Feat: Use data pipeline to visualize the parsing configuration of the knowledge base (#10423)
### What problem does this PR solve?

#9869

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: jinhai <haijin.chn@gmail.com>
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: chanx <1243304602@qq.com>
Co-authored-by: balibabu <cike8899@users.noreply.github.com>
Co-authored-by: Lynn <lynn_inf@hotmail.com>
Co-authored-by: 纷繁下的无奈 <zhileihuang@126.com>
Co-authored-by: huangzl <huangzl@shinemo.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Wilmer <33392318@qq.com>
Co-authored-by: Adrian Weidig <adrianweidig@gmx.net>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Yongteng Lei <yongtengrey@outlook.com>
Co-authored-by: Liu An <asiro@qq.com>
Co-authored-by: buua436 <66937541+buua436@users.noreply.github.com>
Co-authored-by: BadwomanCraZY <511528396@qq.com>
Co-authored-by: cucusenok <31804608+cucusenok@users.noreply.github.com>
Co-authored-by: Russell Valentine <russ@coldstonelabs.org>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Billy Bao <newyorkupperbay@gmail.com>
Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: TensorNull <129579691+TensorNull@users.noreply.github.com>
Co-authored-by: TensorNull <tensor.null@gmail.com>
Co-authored-by: TeslaZY <TeslaZY@outlook.com>
Co-authored-by: Ajay <160579663+aybanda@users.noreply.github.com>
Co-authored-by: AB <aj@Ajays-MacBook-Air.local>
Co-authored-by: 天海蒼灆 <huangaoqin@tecpie.com>
Co-authored-by: He Wang <wanghechn@qq.com>
Co-authored-by: Atsushi Hatakeyama <atu729@icloud.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Mohamed Mathari <155896313+melmathari@users.noreply.github.com>
Co-authored-by: Mohamed Mathari <nocodeventure@Mac-mini-van-Mohamed.fritz.box>
Co-authored-by: Stephen Hu <stephenhu@seismic.com>
Co-authored-by: Shaun Zhang <zhangwfjh@users.noreply.github.com>
Co-authored-by: zhimeng123 <60221886+zhimeng123@users.noreply.github.com>
Co-authored-by: mxc <mxc@example.com>
Co-authored-by: Dominik Novotný <50611433+SgtMarmite@users.noreply.github.com>
Co-authored-by: EVGENY M <168018528+rjohny55@users.noreply.github.com>
Co-authored-by: mcoder6425 <mcoder64@gmail.com>
Co-authored-by: lemsn <lemsn@msn.com>
Co-authored-by: lemsn <lemsn@126.com>
Co-authored-by: Adrian Gora <47756404+adagora@users.noreply.github.com>
Co-authored-by: Womsxd <45663319+Womsxd@users.noreply.github.com>
Co-authored-by: FatMii <39074672+FatMii@users.noreply.github.com>
2025-10-09 12:36:19 +08:00
ef0aecea3b Refactor: fix admin exception (#10400)
### What problem does this PR solve?

As title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-09 11:15:33 +08:00
dfc5fa1f4d Feat: add DeerAPI support (#10303)
### Related issues
#10078 

### What problem does this PR solve?
Integrate DeerAPI provider.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update

Co-authored-by: DeerAPI <tensor.null@gmail.com>
2025-10-09 11:14:49 +08:00
f341dc03b8 Feature/agent UI style optimization (#10385)
### What problem does this PR solve?

Hi team,  @ZhenhangTung @KevinHuSh  @cike8899 
About #10384 , I've completed the UI optimization adjustments for the
Agent page according to our previous discussions and the design draft
sketches provided by @Naomi. The main modifications include:

1. Adjusted the style and content of the placeholder-node.
2. Adjusted the location of the dropdown (to the right of the placeholder-node).
3. Adjusted the tooltip position spacing when the mouse hovers in the dropdown menu.
4. Hid the thick scroll bar on the dropdown component.
5. Highlighted the connection line when dragging to generate a placeholder-node.

<img width="1323" height="509" alt="Image"
src="https://github.com/user-attachments/assets/0d366f7f-477d-4c00-bb58-d5d58b3a745f"
/>

Please review the related code modifications when you have time. Let me
know if further adjustments are needed!
Thanks!

### Type of change

- [x] Other (please describe): UI Enhancement

---------

Co-authored-by: leonlai <leonlai@futurefab.ai>
2025-10-09 11:12:12 +08:00
4585edc20e Refactor: improve cv model logics (#10414)
Improve how to get the total token count.

### Type of change
- [x] Refactoring
2025-10-09 09:47:36 +08:00
dba9158f9a Fix MY_GITHUB_TOKEN (#10404)
### What problem does this PR solve?

Fix MY_GITHUB_TOKEN

### Type of change

- [x] Other (please describe): CI
2025-10-03 16:49:58 +08:00
82f572ff95 Check workflow duplication (#10399)
### What problem does this PR solve?

Check workflow duplication

### Type of change

- [x] Other (please describe): CI
2025-10-02 10:57:08 +08:00
518a00630e Fix highlight with infinity (#10345)
Fix highlight with infinity
Fix on OpenSUSE Tumbleweed

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 19:15:01 +08:00
aa61ae24dc Doc: Fixed a typo (#10391)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2025-09-30 18:57:34 +08:00
fb950079ef Feat/service manage (#10381)
### What problem does this PR solve?

- Admin service supports SHOW SERVICE <id>.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

issue: #10241
2025-09-30 16:23:09 +08:00
aec8c15e7e fix: reset chat state when creating new dialog (#10380)
### What problem does this PR solve?
issue:
[Question]: New Chat Creation Renames Edited Chat Instead of Creating a
New One #10373
change:
reset chat state when creating new dialog

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 16:05:38 +08:00
7c620bdc69 Fix: unexpected operation of document management (#10366)
### What problem does this PR solve?

 Unexpected operation of document management.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 15:20:38 +08:00
e7dde69584 Fix: reset tools in/out-put. (#10378)
### What problem does this PR solve?

#10361

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-30 15:13:18 +08:00
d6eded1959 Refactor: move exceptions to common folder (#10383)
### What problem does this PR solve?

as title

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-09-30 15:10:04 +08:00
80f851922a Feat: add support for LongCat-Flash-Thinking and Claude Sonnet 4.5 (#10374)
### What problem does this PR solve?

Add support for LongCat-Flash-Thinking and Claude Sonnet 4.5.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-30 12:04:14 +08:00
17757930a3 Feat: add support for international Dashscope service (#10356)
### What problem does this PR solve?

 Add support for international Dashscope service. #10340 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 14:49:45 +08:00
a8883905a7 Feat: Add baseUrl to the Tongyi Qianwen model configuration modal #10340 (#10357)
### What problem does this PR solve?

Feat: Add baseUrl to the Tongyi Qianwen model configuration modal #10340

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-29 12:44:18 +08:00
8426cbbd02 Feat: Keep connection status during generating agent by drag and drop … (#10141)
### What problem does this PR solve?

About issue #10140

In version 0.20.1, we implemented the generation of new nodes through mouse
drag and drop. If we could create a workflow module like in Coze, where after
the drag and drop is completed there is not only a dropdown menu but also an
intermediate (placeholder) node, this could improve the user experience.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 10:28:19 +08:00
0b759f559c Fix: invalid user can login from OSS (#10348)
### What problem does this PR solve?

An invalid user can log in from OSS
https://github.com/infiniflow/ragflow/issues/10293

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-29 10:16:31 +08:00
2d5d10ecbf Feat/admin drop user (#10342)
### What problem does this PR solve?

- Admin client supports dropping a user.

Issue: #10241 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-29 10:16:13 +08:00
954bd5a1c2 Fix: unexpected operation of file management (#10343)
### What problem does this PR solve?

Fix unexpected operation of file management.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 19:44:01 +08:00
ccb1c269e8 fix: Handling Null Values in SQL Execution Results (#10332)
### What problem does this PR solve?

Close #10324
The agent reported an error "undefined" after using the ExeSql tool.
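
A minimal sketch of the kind of null handling this implies (illustrative only, not the actual ExeSql code):

```python
def normalize_rows(columns, rows):
    """Replace None cells with an empty string so downstream rendering
    never receives null/undefined values. Illustrative sketch only."""
    cleaned = []
    for row in rows:
        cleaned.append({col: ("" if val is None else val) for col, val in zip(columns, row)})
    return cleaned


print(normalize_rows(["id", "name"], [(1, None), (2, "Alice")]))
# [{'id': 1, 'name': ''}, {'id': 2, 'name': 'Alice'}]
```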

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

now:
<img width="641" height="687" alt="img"
src="https://github.com/user-attachments/assets/eb3bbd27-4cf8-42ce-939c-fc012a93efa0"
/>
2025-09-28 15:40:06 +08:00
6dfb0c245c Fix: The dataset uses the new id to obtain the knowledge graph #10333 (#10339)
### What problem does this PR solve?

Fix: The dataset uses the new id to obtain the knowledge graph #10333

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 15:39:49 +08:00
72d1047a8f Fix: update Japanese translations in ja.ts (#10338)
### What problem does this PR solve?

This PR addresses [issue
#9962](https://github.com/infiniflow/ragflow/issues/9962).
It updates the Japanese translations in `web/src/locales/ja.ts`.  

For this contribution, the scope is intentionally limited to **Chat**
and **Knowledge Base** related UI texts, ensuring focused and
incremental improvement without affecting other modules.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 15:39:05 +08:00
bece37e6c8 Fix: floating widget match style with original one (#10317)
### What problem does this PR solve?

These changes are intended to implement the remaining functionalities of
the fullscreen widget.

The question arises: how to display document previews of PDFs in this
floating widget?
- simply enlarge the widget window
- implement zoom in/out
- render outside the iframe?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 13:58:10 +08:00
59cb0eb8bc fix: remove ibm-db dependency and refactor import order (#10330)
### What problem does this PR solve?
issue: 
#10326
change:
 remove ibm-db dependency and refactor import order

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 12:19:32 +08:00
fc56217eb3 Fix: The enterprise version of the knowledge graph cannot be displayed. #10333 (#10334)
### What problem does this PR solve?
Fix: The enterprise version of the knowledge graph cannot be displayed.
#10333
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-28 12:18:58 +08:00
723cf9443e Fix: After setting user's is_active to 0, the user can still log in to RAGFlow. (#10325)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/10293

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-28 12:18:01 +08:00
bd94b5dfb5 feat: add IBM DB2 support (#10306)
### What problem does this PR solve?

issue: #5617
change: add IBM DB2 support in ExeSQL

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-26 14:55:19 +08:00
ef59c5bab9 FIX: Rename the CometEmbed and CometSeq2txt classes to CometAPIEmbed and CometAPISeq2txt, and correct supported_models.mdx. (#10298)
### What problem does this PR solve?

Rename the CometEmbed and CometSeq2txt classes to CometAPIEmbed and
CometAPISeq2txt, and correct supported_models.mdx.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-26 10:50:56 +08:00
62b7c655c5 Refactor: migrate the function to specific file (#10201)
### What problem does this PR solve?

Move base64 related function to api/common/base64.py

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-09-25 23:37:50 +08:00
b0b866c8fd Refactor: move some functions out of api/utils/__init__.py (#10216)
### What problem does this PR solve?

Refactor import modules.

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-09-25 18:04:49 +08:00
3a831d0c28 Fixed the issue where database connections were interrupted under high concurrency (#10126)
### What problem does this PR solve?

Fixed the issue where database connections were interrupted under high
concurrency

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: lemsn <lemsn@126.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-09-25 17:03:43 +08:00
9e323a9351 Feat(nlp): add "怎么办" pattern to question word removal (#10284)
### What problem does this PR solve?

Added "怎么办" to the regex pattern in rmWWW method to improve query
cleaning by removing this common question phrase along with other
question words.
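
A small illustration of the effect; the pattern below is a simplified stand-in for the real rmWWW regex:

```python
import re

# Illustrative subset of question words; the actual rmWWW pattern is much larger.
QUESTION_WORDS = re.compile(r"(怎么办|怎么|如何|什么|吗|呢)")


def rm_question_words(query: str) -> str:
    return QUESTION_WORDS.sub("", query)


print(rm_question_words("服务器宕机了怎么办"))  # -> "服务器宕机了"
```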

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 16:47:56 +08:00
7ac95b759b Feat/admin service (#10233)
### What problem does this PR solve?

- Admin client supports the show user and create user commands.
- Admin client supports altering a user's password and active status.
- Admin client supports listing a user's datasets.

issue: #10241

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-25 16:15:15 +08:00
daea357940 Fix: invalid COMPONENT_EXEC_TIMEOUT (#10278)
### What problem does this PR solve?

Fix invalid COMPONENT_EXEC_TIMEOUT. #10273

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-25 14:11:09 +08:00
4aa1abd8e5 Refactor: move encrypt/decrypt to one file (#10203)
### What problem does this PR solve?

Move base64 related function to api/common/base64.py

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2025-09-25 12:53:03 +08:00
922b5c652d Refactor: fix typos (#10200)
### What problem does this PR solve?

1. Fix typos
2. Rename function
3. Write comments in English

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2025-09-25 12:05:43 +08:00
aaa97874c6 fix: replace traceback.print_exc() with logging.exception(e) in conversation_app.py (#10275)

### What problem does this PR solve?
issue:
#10188
change:
This PR replaces traceback.print_exc() with logging.exception(e) in
conversation_app.py to ensure that full error tracebacks are captured by
the logging system instead of being written only to stderr.
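
A minimal before/after illustration of the change:

```python
import logging


def handle_request():
    try:
        1 / 0
    except Exception as e:
        # Before: traceback.print_exc()  -- the traceback only went to stderr.
        # After: the full traceback is routed through the logging system.
        logging.exception(e)


handle_request()
```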

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-25 11:45:44 +08:00
193d93d820 Refactor: Improve the logic clean conf for ZhipuChat (#10274)
### What problem does this PR solve?
Improve the logic clean conf for ZhipuChat

### Type of change
- [x] Refactoring
2025-09-25 10:28:03 +08:00
4058715df7 Docs: Knowledge base renamed to dataset. (#10269)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-09-25 09:45:27 +08:00
3f595029d7 fix: Wrong Qwen model ID (#10272)
### What problem does this PR solve?
fix: Wrong Qwen model ID
[Bug]: ERROR: litellm.NotFoundError: DashscopeException - The model
Qwen/Qwen3-Omni-Flash does not exist or you do not have access to it.
change: delete the wrong Qwen model ID

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-25 09:43:44 +08:00
e8f5a4da56 add model: qwen3-max and qwen3-vl series (#10256)
### What problem does this PR solve?
qwen3-max and qwen3-vl series
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 20:00:53 +08:00
a9472e3652 add Qwen models (#10263)
### What problem does this PR solve?

add Qwen models

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 16:52:12 +08:00
4dd48b60f3 Fix: Russian language config.ts (#10250)
### What problem does this PR solve?

Fix ru language

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-24 12:48:41 +08:00
e4ab8ba2de UI: Update Russian language ru.ts (#10251)
### What problem does this PR solve?


### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 12:48:13 +08:00
a1f848bfe0 Fix:max_tokens must be at least 1, got -950, BadRequestError (#10252)
### What problem does this PR solve?
https://github.com/infiniflow/ragflow/issues/10235
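
The error suggests a negative completion budget after subtracting prompt tokens from the context limit; a plausible guard of this kind (illustrative, not necessarily the actual patch):

```python
def clamp_max_tokens(context_limit: int, prompt_tokens: int, floor: int = 1) -> int:
    """Never request fewer than `floor` completion tokens, even when the prompt
    already exceeds the model's context budget. Illustrative guard only."""
    return max(floor, context_limit - prompt_tokens)


print(clamp_max_tokens(4096, 5046))  # prompt too long -> 1 instead of -950
```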

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2025-09-24 10:49:34 +08:00
f2309ff93e UI: Add Russian language (#10249)
Add Russian language

### What problem does this PR solve?


### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 10:18:47 +08:00
38be53cf31 fix: prevent list index out of range in chat streaming (#10238)
### What problem does this PR solve?
issue:
[Bug]: ERROR: list index out of range #10188
change:
fix a potential list index out of range error in chat response parsing
by adding explicit checks for empty choices.
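
A minimal sketch of the guard, using plain dict chunks for illustration:

```python
def stream_answer(chunks):
    """Accumulate streamed content, skipping chunks whose "choices" list is empty
    (some providers send keep-alive or usage-only chunks). Sketch with dict chunks."""
    answer = ""
    for chunk in chunks:
        choices = chunk.get("choices") or []
        if not choices:  # explicit check prevents "list index out of range"
            continue
        answer += choices[0].get("delta", {}).get("content") or ""
    return answer


print(stream_answer([{"choices": []}, {"choices": [{"delta": {"content": "Hi"}}]}]))
```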

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-23 19:59:39 +08:00
65a06d62d8 Flow text processing bug (#10246)
### What problem does this PR solve?
@KevinHuSh 

Hello, my submission this morning did not fully resolve this issue.
After further investigation, I have decided to delete the two lines of
regular-expression processing that were added this morning.

```
removed 2 lines
modified 1 line
```
I have mounted the following code in Docker Compose and verified that it
no longer reports '\m' errors.

<img width="1050" height="447" alt="image"
src="https://github.com/user-attachments/assets/2aaf1b86-04ac-45ce-a2f1-052fed620e80"
/>

[my before pull](https://github.com/infiniflow/ragflow/pull/10211) 

<img width="1000" height="603" alt="image"
src="https://github.com/user-attachments/assets/fb3909ef-00ee-46c6-a26f-e64736777291"
/>

Thanks for your code Review

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: mxc <mxc@example.com>
2025-09-23 19:59:13 +08:00
10cbbb76f8 revert gpt5 integration (#10228)
### What problem does this PR solve?

  Revert back to chat.completions.

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe):
  Revert back to chat.completions.
2025-09-23 16:06:12 +08:00
1c84d1b562 Fix: azure OpenAI retry (#10213)
### What problem does this PR solve?

Currently, Azure OpenAI returns one-minute quota-limit responses when the
chat API is used. This change is needed in order to be able to process
almost any document using models deployed in Azure Foundry.
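
A generic sketch of retrying with exponential backoff on quota-limit responses; the actual fix may instead rely on the SDK's own retry options:

```python
import random
import time


def with_retries(call, max_retries: int = 5, base_delay: float = 2.0):
    """Retry a chat call when the provider signals a per-minute quota limit.

    Generic sketch only; ideally the except clause would catch the SDK's
    specific RateLimitError rather than inspecting the message."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as e:
            if "429" not in str(e) and "rate" not in str(e).lower():
                raise
            time.sleep(base_delay * (2 ** attempt) + random.random())
    return call()  # final attempt, surfacing the error if it still fails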

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-23 12:19:28 +08:00
4eb7659499 Fix bug: broken import from rag.prompts.prompts (#10217)
### What problem does this PR solve?

Fix broken imports

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2025-09-23 10:19:25 +08:00
46a61e5aff Fix: string merge bug about agent TextProcessing. (\m) (#10211)
### What problem does this PR solve?
An error occurred while merging strings containing '\m' in the agent's
Text Processing function.

The fix converts '\m' to 'm' using regular expressions.

In my example at least, this doesn't affect the original meaning; the
text still reads as "math".

<img width="1227" height="1056" alt="image"
src="https://github.com/user-attachments/assets/9306a8ca-bb97-47bf-b91f-77acfce49875"
/>


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: mxc <mxc@example.com>
2025-09-23 10:16:11 +08:00
da82566304 Fix: resolve hash collisions by switching to UUID & correct logic for always-true statements & Update GPT API integration & Support qianwen-deepresearch (#10208)
### What problem does this PR solve?

Fix: resolve hash collisions by switching to UUID and correct logic for
always-true statements, solved: #10165
Feat: Update GPT API integration, solved: #10204
Feat: Support qianwen-deepresearch, solved: #10163
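
For the hash-collision part, a minimal illustration of the switch (names are illustrative):

```python
import uuid

# Before (collision-prone): id = hash(name) -- Python's hash() is not unique and is
# salted per process, so distinct names can collide or change between runs.
# After: a random UUID is unique for practical purposes.
def new_id() -> str:
    return uuid.uuid4().hex


print(new_id())
```
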
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-09-23 09:34:30 +08:00
c8b79dfed4 The retrieval component needs to support returning JSON data (#10170) (#10171)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-22 17:28:29 +08:00
da80fa40bc fix python_api example (#10196)
### What problem does this PR solve?

Fix the code example in the Python API example.

### Type of change

- [x] Documentation Update
2025-09-22 17:27:25 +08:00
94dbd4aac9 Refactor: use the same implement for total token count from res (#10197)
### What problem does this PR solve?
Use the same implementation for getting the total token count from the response.

### Type of change

- [x] Refactoring
2025-09-22 17:17:06 +08:00
ca9f30e1a1 Add tree_merge for law parsers, significantly outperforming hierarchical_merge (#10202)
### What problem does this PR solve?
Add tree_merge for law parsers, significantly outperforming
hierarchical_merge, solved: #8637
1. Add tree_merge for law parsers, including build_tree and get_tree by DFS.
2. Add a copyright statement for health_utils.
### Type of change

- [x] Documentation Update
- [x] Performance Improvement
2025-09-22 16:33:21 +08:00
2e4295d5ca Chat Widget (#10187)
### What problem does this PR solve?

Add a chat widget. I'll probably need some assistance to get this ready
for merge!

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Mohamed Mathari <nocodeventure@Mac-mini-van-Mohamed.fritz.box>
2025-09-22 11:03:33 +08:00
d11b1628a1 Feat: add admin CLI and admin service (#10186)
### What problem does this PR solve?

Introduce new feature: RAGFlow system admin service and CLI

### Introduction

Admin Service is a dedicated management component designed to monitor,
maintain, and administrate the RAGFlow system. It provides comprehensive
tools for ensuring system stability, performing operational tasks, and
managing users and permissions efficiently.

The service offers monitoring of critical components, including the
RAGFlow server, Task Executor processes, and dependent services such as
MySQL, Infinity / Elasticsearch, Redis, and MinIO. It automatically
checks their health status, resource usage, and uptime, and performs
restarts in case of failures to minimize downtime.

For user and system management, it supports listing, creating,
modifying, and deleting users and their associated resources like
knowledge bases and Agents.

Built with scalability and reliability in mind, the Admin Service
ensures smooth system operation and simplifies maintenance workflows.

It consists of a server-side Service and a command-line client (CLI),
both implemented in Python. User commands are parsed using the Lark
parsing toolkit.

- **Admin Service**: A backend service that interfaces with the RAGFlow
system to execute administrative operations and monitor its status.
- **Admin CLI**: A command-line interface that allows users to connect
to the Admin Service and issue commands for system management.

### Starting the Admin Service

1. Before starting the Admin Service, please make sure the RAGFlow system is
already running.

2.  Run the service script:
    ```bash
    python admin/admin_server.py
    ```
The service will start and listen for incoming connections from the CLI
on the configured port.

### Using the Admin CLI

1.  Ensure the Admin Service is running.
2.  Launch the CLI client:
    ```bash
    python admin/admin_client.py -h 0.0.0.0 -p 9381
    ```

## Supported Commands
Commands are case-insensitive and must be terminated with a semicolon
(`;`).
### Service Management Commands

-  [x] `LIST SERVICES;`
    -   Lists all available services within the RAGFlow system.
-  [ ] `SHOW SERVICE <id>;`
    -   Shows detailed status information for the service identified by `<id>`.
-  [ ] `STARTUP SERVICE <id>;`
    -   Attempts to start the service identified by `<id>`.
-  [ ] `SHUTDOWN SERVICE <id>;`
    -   Attempts to gracefully shut down the service identified by `<id>`.
-  [ ] `RESTART SERVICE <id>;`
    -   Attempts to restart the service identified by `<id>`.

### User Management Commands

-  [x] `LIST USERS;`
    -   Lists all users known to the system.
-  [ ] `SHOW USER '<username>';`
    -   Shows details and permissions for the specified user. The username must be enclosed in single or double quotes.
-  [ ] `DROP USER '<username>';`
    -   Removes the specified user from the system. Use with caution.
-  [ ] `ALTER USER PASSWORD '<username>' '<new_password>';`
    -   Changes the password for the specified user.

### Data and Agent Commands

-  [ ] `LIST DATASETS OF '<username>';`
    -   Lists the datasets associated with the specified user.
-  [ ] `LIST AGENTS OF '<username>';`
    -   Lists the agents associated with the specified user.

### Meta-Commands

Meta-commands are prefixed with a backslash (`\`).

-   `\?` or `\help`
    -   Shows help information for the available commands.
-   `\q` or `\quit`
    -   Exits the CLI application.
## Examples
```commandline
admin> list users;
+-------------------------------+------------------------+-----------+-------------+
| create_date                   | email                  | is_active | nickname    |
+-------------------------------+------------------------+-----------+-------------+
| Fri, 22 Nov 2024 16:03:41 GMT | jeffery@infiniflow.org | 1         | Jeffery     |
| Fri, 22 Nov 2024 16:10:55 GMT | aya@infiniflow.org     | 1         | Waterdancer |
+-------------------------------+------------------------+-----------+-------------+
admin> list services;
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
| extra                                                                                     | host      | id | name          | port  | service_type   |
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
| {}                                                                                        | 0.0.0.0   | 0  | ragflow_0     | 9380  | ragflow_server |
| {'meta_type': 'mysql', 'password': 'infini_rag_flow', 'username': 'root'}                 | localhost | 1  | mysql         | 5455  | meta_data      |
| {'password': 'infini_rag_flow', 'store_type': 'minio', 'user': 'rag_flow'}                | localhost | 2  | minio         | 9000  | file_store     |
| {'password': 'infini_rag_flow', 'retrieval_type': 'elasticsearch', 'username': 'elastic'} | localhost | 3  | elasticsearch | 1200  | retrieval      |
| {'db_name': 'default_db', 'retrieval_type': 'infinity'}                                   | localhost | 4  | infinity      | 23817 | retrieval      |
| {'database': 1, 'mq_type': 'redis', 'password': 'infini_rag_flow'}                        | localhost | 5  | redis         | 6379  | message_queue  |
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
```

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2025-09-22 10:37:49 +08:00
45f9f428db Fix: enable scrolling at chat setting (#10184)
### What problem does this PR solve?

This PR is related to
[#9961](https://github.com/infiniflow/ragflow/issues/9961).
In the Chat Settings screen, the textarea did not support scrolling when
the content grew longer than its visible area, which made it less
convenient to use.
Also, there was no Japanese placeholder text to guide users on what to
enter in the field.

This PR improves the user experience by:
- Adding `overflow-y-auto` to the textarea so that long content can be
scrolled smoothly.
- Introducing a placeholder (`メッセージを入力してください...`) to provide clearer
guidance for users.


https://github.com/user-attachments/assets/95553331-087b-42c5-a41d-5dfe08047bae

### What has been considered

As an alternative solution, I explored replacing the textarea with the
existing `PromptEditor` component.
However, this approach triggered a `canvas not found.` alert.  
The current implementation of `PromptEditor` internally attempts to
fetch **agent (canvas) information**, but in the Chat Settings screen no
such ID exists. As a result, the API call fails and the backend returns
`canvas not found.`.

One possible workaround would be to extend `PromptEditor` with a
**“disable variable picker” flag**, ensuring that plugins are not loaded
in contexts like Chat Settings. While feasible, this would have a
broader impact across the codebase.

Given these considerations, I decided to address the issue in a simpler
way by applying a Tailwind utility (`overflow-y-auto`). Since the UI
design is expected to change in the future, this solution is considered
sufficient for now.
<img width="1501" height="794" alt="Screenshot 2025-09-20 at 15 00 12"
src="https://github.com/user-attachments/assets/85578ee8-489f-4ede-b3af-bafd7afe95bd"
/>


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)  
- [ ] New Feature (non-breaking change which adds functionality)  
- [ ] Documentation Update  
- [ ] Refactoring  
- [ ] Performance Improvement  
- [ ] Other (please describe):
2025-09-22 10:37:34 +08:00
902703d145 Fix: skip tag query if tag kbs are invalid (#10168)
### What problem does this PR solve?

Skip `tag_query` step if `tag_kbs` are empty. 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-19 19:12:18 +08:00
7ccca2143c perf: add get_all_kb_doc_count func to simplify kb.doc_num updating (#10169)
### What problem does this PR solve?

Add get_all_kb_doc_count func to simplify kb.doc_num updating.

### Type of change

- [x] Performance Improvement
2025-09-19 19:11:50 +08:00
70ce02faf4 Feat: add support for Anthropic third-party API (#10173)
### What problem does this PR solve?
issue:
[Bug]: anthropic model have not baseurl selecting,need add #8546
change:
This PR adds support for using Anthropic models through a third-party
API by allowing a custom base_url.
It ensures compatibility with both the official Anthropic endpoint and
external providers.
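
A minimal sketch of pointing the Anthropic SDK at a third-party endpoint via base_url; the endpoint and model name below are placeholders:

```python
import anthropic

# A custom base_url lets the same client talk to the official endpoint or a
# third-party, Anthropic-compatible provider. Values here are placeholders.
client = anthropic.Anthropic(
    api_key="sk-...",
    base_url="https://third-party-provider.example.com",  # omit to use the official API
)
resp = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.content[0].text)
```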

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 19:06:14 +08:00
3f1741c8c6 Docs: How to accelerate question answering (#10179)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-09-19 18:18:46 +08:00
6c24ad7966 fix: correct rerank_model condition logic (#10174)
### What problem does this PR solve?

fix the rerank_model condition logic by correcting the np.isclose check.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-19 16:02:10 +08:00
4846589599 Docs: Input and output variables defined in the Input and Output sections must also be implemented in your code. (#10162)
### What problem does this PR solve?
 
#10089 

### Type of change

- [x] Documentation Update
2025-09-19 11:35:58 +08:00
a24547aa66 Support server health check by http://localhost:<port>/v1/system/healthz (#10150)
### What problem does this PR solve?

Support server health check. Solved issue: #10106
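
A quick way to probe the endpoint, assuming the default API port 9380:

```python
import requests

# Adjust host/port if your deployment differs from the defaults.
resp = requests.get("http://localhost:9380/v1/system/healthz", timeout=5)
print(resp.status_code, resp.text)
```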

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 11:11:07 +08:00
a04c5247ab Feat: Add file convert to document API just like file2document_app.py (#10158)
### What problem does this PR solve?

Add a file-to-document conversion API, similar to file2document_app.py.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-19 09:59:54 +08:00
ed6a76dcc0 Add Firecrawl integration for RAGFlow (#10152)
## 🚀 Firecrawl Integration for RAGFlow

This PR implements the Firecrawl integration for RAGFlow as requested in
issue https://github.com/firecrawl/firecrawl/issues/2167

###  Features Implemented

- **Data Source Integration**: Firecrawl appears as a selectable data
source in RAGFlow
- **Configuration Management**: Users can input Firecrawl API keys
through RAGFlow's interface
- **Web Scraping**: Supports single URL scraping, website crawling, and
batch processing
- **Content Processing**: Converts scraped content to RAGFlow's document
format with chunking
- **Error Handling**: Comprehensive error handling for rate limits,
failed requests, and malformed content
- **UI Components**: Complete UI schema and workflow components for
RAGFlow integration

### 📁 Files Added

- `intergrations/firecrawl/` - Complete integration package
- `intergrations/firecrawl/integration.py` - RAGFlow integration entry
point
- `intergrations/firecrawl/firecrawl_connector.py` - API communication
- `intergrations/firecrawl/firecrawl_config.py` - Configuration
management
- `intergrations/firecrawl/firecrawl_processor.py` - Content processing
- `intergrations/firecrawl/firecrawl_ui.py` - UI components
- `intergrations/firecrawl/ragflow_integration.py` - Main integration
class
- `intergrations/firecrawl/README.md` - Complete documentation
- `intergrations/firecrawl/example_usage.py` - Usage examples

### 🧪 Testing

The integration has been thoroughly tested with:
- Configuration validation
- Connection testing
- Content processing and chunking
- UI component rendering
- Error handling scenarios

### 📋 Acceptance Criteria Met

-  Integration appears as selectable data source in RAGFlow's data
source options
-  Users can input Firecrawl API keys through RAGFlow's configuration
interface
-  Successfully scrapes content from provided URLs and imports into
RAGFlow's document store
-  Handles common edge cases (rate limits, failed requests, malformed
content)
-  Includes basic documentation and README updates
-  Code follows RAGFlow's existing patterns and coding standards

### Related Issue

https://github.com/firecrawl/firecrawl/issues/2167

---------

Co-authored-by: AB <aj@Ajays-MacBook-Air.local>
2025-09-19 09:58:17 +08:00
a0ccbec8bd Fix: knowledge base's embedded model form layout and dependency imports in the main branch. #9869 (#10160)
### What problem does this PR solve?

Fixed the knowledge base's embedding model form layout and dependency
imports in the main branch.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-19 09:57:21 +08:00
4693c5382a Feat: migrate OpenAI-compatible chats to LiteLLM (#10148)
### What problem does this PR solve?

Migrate OpenAI-compatible chats to LiteLLM.
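
A minimal sketch of calling an OpenAI-compatible endpoint through LiteLLM; the model name and endpoint are placeholders:

```python
import litellm

# litellm speaks the OpenAI chat format and can route to any OpenAI-compatible
# server via api_base; values below are placeholders.
resp = litellm.completion(
    model="openai/my-local-model",
    api_base="http://localhost:8000/v1",
    api_key="sk-...",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```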

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-18 17:16:59 +08:00
ff3b4d0dcd Fix: Merge different types of models from the same manufacturer #10146 (#10157)
### What problem does this PR solve?

Fix: Merge different types of models from the same manufacturer #10146

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-18 17:15:54 +08:00
62d35b1b73 Fix: handle zero (#10149)
### What problem does this PR solve?

Handle zero and NaN in calculations.
#10125

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-18 16:28:03 +08:00
91b609447d Fix: embedding model failure in CometAPI (#10137)
### What problem does this PR solve?

Related PR:
Feat: add CometAPI to LLMFactory and update related mappings #10119 

Change:
Fixes the issue where the embedding model in CometAPI was not being
called correctly

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: TensorNull <tensor.null@gmail.com>
2025-09-18 14:49:47 +08:00
c353840244 Feat: add support for KB document basic info (#10134)
### What problem does this PR solve?

Add support for KB document basic info

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-18 09:52:33 +08:00
f12b9fdcd4 Feat: add CometAPI to LLMFactory and update related mappings (#10119)
### Related issues
#10078

### What problem does this PR solve?
Integrate CometAPI provider.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
2025-09-18 09:51:29 +08:00
80ede65bbe Docs: Updated database types supported by the Execute SQL tool (#10113)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-09-18 09:47:35 +08:00
52cf186028 Correct the text of vectorSimilarityWeight in zh.ts (#10128)
### What problem does this PR solve?

The original text for vectorSimilarityWeight in Chinese version was
"相似度相似度权重," which is obviously a malformed phrase. It has now been
changed to "向量相似度权重". Also, align it with the English version 'Vector
similarity weight'.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-18 09:46:54 +08:00
ea0f1d47a5 Support image recognition for url links in Markdown file, fix log error in code_exec (#10139)
### What problem does this PR solve?

Support image recognition for image URL links in Markdown files, solved
issue: #8755
Fixed a log info error in code_exec, solved issue: #10064
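
For the Markdown part, a rough illustration of picking up image URL links before running OCR/VLM; the pattern is a simplification, not the parser's exact rule:

```python
import re

# Matches ![alt](url) image links with http(s) URLs; illustrative only.
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://[^)\s]+)\)")

text = "Intro ![diagram](https://example.com/figure1.png) and more text."
for url in MD_IMAGE.findall(text):
    print("would download and run OCR/VLM on:", url)
```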

### Type of change (8755)

- [x] New Feature (non-breaking change which adds functionality)

### Type of change (10064)

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-18 09:44:17 +08:00
9fe7c92217 Build(deps): Bump axios from 1.9.0 to 1.12.0 in /sandbox/sandbox_base_image/nodejs (#10091)
Bumps [axios](https://github.com/axios/axios) from 1.9.0 to 1.12.0.

Release notes for v1.12.0 (2025-09-11):

Bug fixes
- fetch-adapter: set correct Content-Type for Node FormData (#6998)
- node: enforce maxContentLength for data: URLs (#7011)
- package exports (#5627)
- params: removing '[' and ']' from URL encode exclude characters (#3316) (#5715)
- types: change the type guard on isCancel (#5595)
- release tooling fixes (build artifacts, dist handling, release PR run)

Features
- adapter: surface low-level network error details; attach original error via cause (#6982)
- fetch: add fetch, Request, Response env config variables for the adapter (#7003)
- support reviver on JSON.parse (#5926), closes #5924
- types: extend AxiosResponse interface to include custom headers type (#6782)

Release notes for v1.11.0 (2025-07-22):

Bug fixes
- form-data npm package (#6970)
- prevent RangeError when using large Buffers (#6961)
- types: resolve type discrepancies between ESM and CJS TypeScript declaration files (#6956)

Commits: see the [v1.9.0...v1.12.0 compare view](https://github.com/axios/axios/compare/v1.9.0...v1.12.0).


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=axios&package-manager=npm_and_yarn&previous-version=1.9.0&new-version=1.12.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-18 09:41:24 +08:00
d353f7f7f8 Feat/parse audio (#10133)
### What problem does this PR solve?

Dataflow supports audio. Also fixes GiteeAI's sequence2text model.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-09-18 09:31:32 +08:00
f3738b06f1 Fixes session_id passing in agent_openai completion. (#10124)
### What problem does this PR solve?

An exception happens if you pass session_id to the agent_openai completion,
because session_id is passed explicitly as well as via **req, so it is sent
twice. The logic for picking one of session_id, id, and metadata.id also
seemed odd, so it has been cleaned up a little.

See #10111 
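
A minimal sketch of the duplicate-keyword pattern this fix addresses, using a hypothetical `completion(**kwargs)` helper; the names and signature here are illustrative, not the actual RAGFlow API:

```python
# Illustrative only: passing session_id explicitly and again via **req
# raises "got multiple values for keyword argument 'session_id'".
def completion(session_id=None, **kwargs):  # hypothetical helper
    return {"session_id": session_id, **kwargs}

req = {"session_id": "abc", "question": "hi"}

# Buggy pattern: session_id is sent twice.
try:
    completion(session_id=req.get("session_id"), **req)
except TypeError as e:
    print(e)  # multiple values for keyword argument 'session_id'

# Cleaned-up pattern: pick one source, remove it from the dict, then forward.
session_id = req.pop("session_id", None) or req.pop("id", None)
completion(session_id=session_id, **req)
```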

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-09-17 17:54:06 +08:00
5a8bc88147 Docs: Removed /v1 from Ollama base URLs (#10067)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-09-17 13:48:29 +08:00
04ef5b2783 Fix: usage of postgresql -> postgres for db_type (#10120)
### What problem does this PR solve?

This PR fixes incorrect naming for PostgreSQL usage by replacing all
instances of `postgresql` with the correct `postgres` in the `db_type`
field. This resolves potential configuration errors and ensures
consistency when specifying the database type.

Also fixed the handling of `None` for `get_queue_length`.
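
A hedged sketch of the kind of normalization and guard involved; the mapping and helper names are assumptions for illustration, not the project's actual code:

```python
# Hypothetical normalization: accept the common "postgresql" spelling but
# return the value the rest of the system expects ("postgres").
DB_TYPE_ALIASES = {"postgresql": "postgres"}

def normalize_db_type(value: str) -> str:
    value = value.strip().lower()
    return DB_TYPE_ALIASES.get(value, value)

# Hypothetical guard mirroring the None handling mentioned above.
def safe_queue_length(length):
    return 0 if length is None else length

print(normalize_db_type("PostgreSQL"))  # -> postgres
print(safe_queue_length(None))          # -> 0
```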

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: cucusenok <BP-116: updated readme.md>
2025-09-17 10:30:45 +08:00
c9ea22ef69 Fix: set default chunk_token_num in html_parser (#10118)
### What problem does this PR solve?

issue:
[Bug]: Agent component (HTTP Request) "'>' not supported between
instances of 'int' and 'NoneType'"
[#10096](https://github.com/infiniflow/ragflow/issues/10096)

Change:
When the Invoke class instantiates HtmlParser without providing the
chunk_token_num parameter, the value defaults to None, leading to a
comparison error with block_token_count.

This change sets the default chunk_token_num to 512 to prevent such
errors.
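
A minimal sketch of the failure mode and the fix, using a simplified stand-in for `HtmlParser`; the class body is illustrative, not the actual deepdoc implementation:

```python
# Illustrative stand-in: comparing an int with None raises the reported error.
class HtmlParserSketch:
    def __init__(self, chunk_token_num=512):  # the fix: a sane default instead of None
        self.chunk_token_num = chunk_token_num

    def should_split(self, block_token_count: int) -> bool:
        # With a None default this comparison raised:
        # "'>' not supported between instances of 'int' and 'NoneType'"
        return block_token_count > self.chunk_token_num

parser = HtmlParserSketch()       # Invoke passes no chunk_token_num
print(parser.should_split(1024))  # True, no TypeError
```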
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: BadwomanCraZY <511528396@qq.com>
2025-09-17 09:36:31 +08:00
152111fd9d Feat/parse img (#10112)
### What problem does this PR solve?

Support parsing images with OCR or a VLM.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-16 17:53:37 +08:00
86f6da2f74 Feat: add support for the Ascend table structure recognizer (#10110)
### What problem does this PR solve?

Add support for the Ascend table structure recognizer.

Use the environment variable `TABLE_STRUCTURE_RECOGNIZER_TYPE=ascend` to
enable the Ascend table structure recognizer.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-16 13:57:06 +08:00
8c00cbc87a Fix(agent template): wrap template variables in curly braces (#10109)
### What problem does this PR solve?

Updated SQL assistant template to wrap variables like 'sys.query' and
'Agent:WickedGoatsDivide@content' in curly braces for better template
variable syntax consistency.
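
A before/after illustration of the change, expressed as plain Python strings; the surrounding prompt text is made up, only the variable names come from the PR:

```python
# Bare references are treated as literal text by the template engine.
before = "Generate SQL for: sys.query, context: Agent:WickedGoatsDivide@content"

# Curly braces mark them as template variables to be substituted.
after = "Generate SQL for: {sys.query}, context: {Agent:WickedGoatsDivide@content}"
```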

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-16 13:56:56 +08:00
41e808f4e6 Docs: Added an Execute SQL tool reference (#10108)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2025-09-16 11:39:56 +08:00
bc0281040b Feat: add support for the Ascend layout recognizer (#10105)
### What problem does this PR solve?

Supports the Ascend layout recognizer.

Use the environment variable `LAYOUT_RECOGNIZER_TYPE=ascend` to enable
the Ascend layout recognizer, and `ASCEND_LAYOUT_RECOGNIZER_DEVICE_ID=n`
(for example, n=0) to specify the Ascend device ID.

Ensure that you have installed the [ais
tools](https://gitee.com/ascend/tools/tree/master/ais-bench_workload/tool/ais_bench)
properly.
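
A small sketch of how such environment variables are typically read at startup; the default values and selection logic are assumptions, only the variable names come from this PR and the table-structure PR above:

```python
import os

# Hypothetical selection logic based on the documented variables.
recognizer_type = os.getenv("LAYOUT_RECOGNIZER_TYPE", "onnx")  # default is an assumption
device_id = int(os.getenv("ASCEND_LAYOUT_RECOGNIZER_DEVICE_ID", "0"))

if recognizer_type == "ascend":
    print(f"Using the Ascend layout recognizer on device {device_id}")
else:
    print(f"Using the default layout recognizer ({recognizer_type})")
```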

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-16 09:51:15 +08:00
341a7b1473 Fix: judge not empty before delete (#10099)
### What problem does this PR solve?

Check that the session is not empty before deleting it.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-15 17:49:52 +08:00
c29c395390 Fix: The same model appears twice in the drop-down box. #10102 (#10103)
### What problem does this PR solve?

Fix: The same model appears twice in the drop-down box. #10102

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-15 16:38:08 +08:00
a23a0f230c feat: add multiple docker tags (latest, latest_full, latest_slim) to … (#10040)
…release workflow (#10039)  
This change updates the GitHub Actions workflow to push additional
stable tags alongside version tags, enabling automated update tools like
Watchtower to detect and pull the latest images correctly.
Refs:
[https://github.com/infiniflow/ragflow/issues/10039](https://github.com/infiniflow/ragflow/issues/10039)

### What problem does this PR solve?  
Automated container update tools such as Watchtower rely on stable tags
like `latest` to identify the newest images. Previously, only
version-specific tags were pushed, which prevented these tools from
detecting new releases automatically. This PR adds multiple stable tags
(`latest-full`, `latest-slim`) alongside version tags to the Docker
image publishing workflow, ensuring smooth and reliable automated
updates without manual tag management.

### Type of change  
- [ ] Bug Fix (non-breaking change which fixes an issue)  
- [x] New Feature (non-breaking change which adds functionality)  
- [ ] Documentation Update  
- [ ] Refactoring  
- [ ] Performance Improvement  
- [ ] Other (please describe):

---------

Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-13 21:44:53 +08:00
2a88ce6be1 Fix: terminate onnx inference session manually (#10076)
### What problem does this PR solve?

Terminate the ONNX inference session and release its memory manually.

Issue #5050 
Issue #9992 
Issue #8805
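
One common pattern for releasing an ONNX Runtime session explicitly; this is a hedged sketch of the general idea, not necessarily the exact mechanism used in the PR:

```python
import gc
import onnxruntime as ort

def run_once(model_path: str, inputs: dict):
    session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    try:
        return session.run(None, inputs)
    finally:
        # Drop the session reference and force a collection so native memory
        # held by the inference session can be reclaimed promptly.
        del session
        gc.collect()
```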

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-12 17:18:26 +08:00
664b781d62 Feat: Translate the fields of the embedded dialog box on the agent page #3221 (#10072)
### What problem does this PR solve?

Feat: Translate the fields of the embedded dialog box on the agent page
#3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-09-12 16:01:12 +08:00
65571e5254 Feat: dataflow supports text (#10058)
### What problem does this PR solve?

Dataflow supports text.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-11 19:03:51 +08:00
aa30f20730 Feat: Agent component support inserting variables(#10048) (#10055)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-11 19:03:19 +08:00
b9b278d441 Docs: How to connect to an MCP server as a client (#10043)
### What problem does this PR solve?

#9769 

### Type of change


- [x] Documentation Update
2025-09-11 19:02:50 +08:00
e1d86cfee3 Feat: add TokenPony model provider (#9932)
### What problem does this PR solve?

Add TokenPony as an LLM provider.

Co-authored-by: huangzl <huangzl@shinemo.com>
2025-09-11 17:25:31 +08:00
8ebd07337f The chat dialog box cannot be fully displayed on a small screen #10034 (#10049)
### What problem does this PR solve?

The chat dialog box cannot be fully displayed on a small screen #10034

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-11 13:32:23 +08:00
dd584d57b0 Fix: Hide dataflow related functions #9869 (#10045)
### What problem does this PR solve?

Fix: Hide dataflow related functions #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-11 12:02:26 +08:00
3d39b96c6f Fix: token num exceed (#10046)
### What problem does this PR solve?

Fix text input exceeding the token limit when using SiliconFlow's embedding
models BAAI/bge-large-zh-v1.5 and BAAI/bge-large-en-v1.5; truncate the text
before input.
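
A hedged sketch of the truncation idea, using a whitespace split as a stand-in for the real tokenizer; the limit and helper name are assumptions, not SiliconFlow's actual API:

```python
# Illustrative truncation before sending text to an embedding model with a
# hard input limit; whitespace splitting only approximates real tokenization.
MAX_TOKENS = 512  # assumed limit for illustration

def truncate_for_embedding(text: str, max_tokens: int = MAX_TOKENS) -> str:
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

print(len(truncate_for_embedding("word " * 1000).split()))  # -> 512
```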

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-11 12:02:12 +08:00
397 changed files with 14325 additions and 10218 deletions

View File

@ -25,7 +25,7 @@ jobs:
- name: Check out code
uses: actions/checkout@v4
with:
token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
token: ${{ secrets.GITHUB_TOKEN }} # Use the secret as an environment variable
fetch-depth: 0
fetch-tags: true
@ -69,7 +69,7 @@ jobs:
# https://github.com/actions/upload-release-asset has been replaced by https://github.com/softprops/action-gh-release
uses: softprops/action-gh-release@v2
with:
token: ${{ secrets.MY_GITHUB_TOKEN }} # Use the secret as an environment variable
token: ${{ secrets.GITHUB_TOKEN }} # Use the secret as an environment variable
prerelease: ${{ env.PRERELEASE }}
tag_name: ${{ env.RELEASE_TAG }}
# The body field does not support environment variable substitution directly.
@ -120,3 +120,17 @@ jobs:
packages-dir: sdk/python/dist/
password: ${{ secrets.PYPI_API_TOKEN }}
verbose: true
- name: Build ragflow-cli
if: startsWith(github.ref, 'refs/tags/v')
run: |
cd admin/client && \
uv build
- name: Publish client package distributions to PyPI
if: startsWith(github.ref, 'refs/tags/v')
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: admin/client/dist/
password: ${{ secrets.PYPI_API_TOKEN }}
verbose: true

View File

@ -34,12 +34,10 @@ jobs:
# https://github.com/hmarr/debug-action
#- uses: hmarr/debug-action@v2
- name: Show who triggered this workflow
- name: Ensure workspace ownership
run: |
echo "Workflow triggered by ${{ github.event_name }}"
- name: Ensure workspace ownership
run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
# https://github.com/actions/checkout/issues/1781
- name: Check out code
@ -48,6 +46,44 @@ jobs:
fetch-depth: 0
fetch-tags: true
- name: Check workflow duplication
if: ${{ !cancelled() && !failure() && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci')) }}
run: |
if [[ ${{ github.event_name }} != 'pull_request' ]]; then
HEAD=$(git rev-parse HEAD)
# Find a PR that introduced a given commit
gh auth login --with-token <<< "${{ secrets.GITHUB_TOKEN }}"
PR_NUMBER=$(gh pr list --search ${HEAD} --state merged --json number --jq .[0].number)
echo "HEAD=${HEAD}"
echo "PR_NUMBER=${PR_NUMBER}"
if [[ -n ${PR_NUMBER} ]]; then
PR_SHA_FP=${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}/PR_${PR_NUMBER}
if [[ -f ${PR_SHA_FP} ]]; then
read -r PR_SHA PR_RUN_ID < "${PR_SHA_FP}"
# Calculate the hash of the current workspace content
HEAD_SHA=$(git rev-parse HEAD^{tree})
if [[ ${HEAD_SHA} == ${PR_SHA} ]]; then
echo "Cancel myself since the workspace content hash is the same with PR #${PR_NUMBER} merged. See ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${PR_RUN_ID} for details."
gh run cancel ${GITHUB_RUN_ID}
while true; do
status=$(gh run view ${GITHUB_RUN_ID} --json status -q .status)
[ "$status" = "completed" ] && break
sleep 5
done
exit 1
fi
fi
fi
else
PR_NUMBER=${{ github.event.pull_request.number }}
PR_SHA_FP=${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}/PR_${PR_NUMBER}
# Calculate the hash of the current workspace content
PR_SHA=$(git rev-parse HEAD^{tree})
echo "PR #${PR_NUMBER} workspace content hash: ${PR_SHA}"
mkdir -p ${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}
echo "${PR_SHA} ${GITHUB_RUN_ID}" > ${PR_SHA_FP}
fi
# https://github.com/astral-sh/ruff-action
- name: Static check with Ruff
uses: astral-sh/ruff-action@v3
@ -59,11 +95,11 @@ jobs:
run: |
RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-$HOME}
sudo docker pull ubuntu:22.04
sudo docker build --progress=plain --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
sudo DOCKER_BUILDKIT=1 docker build --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
- name: Build ragflow:nightly
run: |
sudo docker build --progress=plain --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
sudo DOCKER_BUILDKIT=1 docker build --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
- name: Start ragflow:nightly-slim
run: |

2
.gitignore vendored
View File

@ -149,7 +149,7 @@ out
# Nuxt.js build / generate output
.nuxt
dist
ragflow_cli.egg-info
# Gatsby files
.cache/
# Comment in the public line in if your project uses Gatsby and not Next.js

View File

@ -191,6 +191,7 @@ ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
ENV PYTHONPATH=/ragflow/
COPY web web
COPY admin admin
COPY api api
COPY conf conf
COPY deepdoc deepdoc

View File

@ -1,6 +1,6 @@
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.png" width="520" alt="ragflow logo">
<img src="web/src/assets/logo-with-text.svg" width="520" alt="ragflow logo">
</a>
</div>
@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -84,8 +84,8 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Latest Updates
- 2025-10-15 Supports orchestrable ingestion pipeline.
- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
- 2025-08-04 Supports new models, including Kimi K2 and Grok 4.
- 2025-08-01 Supports agentic workflow and MCP.
- 2025-05-23 Adds a Python/JavaScript code executor component to Agent.
- 2025-05-05 Supports cross-language query.
@ -135,7 +135,7 @@ releases! 🌟
## 🔎 System Architecture
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>
## 🎬 Get Started
@ -187,7 +187,7 @@ releases! 🌟
> All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
> If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
> The command below downloads the `v0.20.5-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.20.5-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5` for the full edition `v0.20.5`.
> The command below downloads the `v0.21.0-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.21.0-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0` for the full edition `v0.21.0`.
```bash
$ cd ragflow/docker
@ -200,8 +200,8 @@ releases! 🌟
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
|-------------------|-----------------|-----------------------|--------------------------|
| v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.20.5-slim | &approx;2 | ❌ | Stable release |
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
@ -341,11 +341,13 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
5. If your operating system does not have jemalloc, please install it as follows:
```bash
# ubuntu
# Ubuntu
sudo apt-get install libjemalloc-dev
# centos
# CentOS
sudo yum install jemalloc
# mac
# OpenSUSE
sudo zypper install jemalloc
# macOS
sudo brew install jemalloc
```

View File

@ -1,6 +1,6 @@
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.png" width="520" alt="Logo ragflow">
<img src="web/src/assets/logo-with-text.svg" width="520" alt="Logo ragflow">
</a>
</div>
@ -22,7 +22,7 @@
<img alt="Lencana Daring" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Rilis Terbaru">
@ -80,8 +80,8 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Pembaruan Terbaru
- 2025-10-15 Dukungan untuk jalur data yang terorkestrasi.
- 2025-08-08 Mendukung model seri GPT-5 terbaru dari OpenAI.
- 2025-08-04 Mendukung model baru, termasuk Kimi K2 dan Grok 4.
- 2025-08-01 Mendukung alur kerja agen dan MCP.
- 2025-05-23 Menambahkan komponen pelaksana kode Python/JS ke Agen.
- 2025-05-05 Mendukung kueri lintas bahasa.
@ -129,7 +129,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔎 Arsitektur Sistem
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>
## 🎬 Mulai
@ -181,7 +181,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
> Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64.
> Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image).
> Perintah di bawah ini mengunduh edisi v0.20.5-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.20.5-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5 untuk edisi lengkap v0.20.5.
> Perintah di bawah ini mengunduh edisi v0.21.0-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.21.0-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0 untuk edisi lengkap v0.21.0.
```bash
$ cd ragflow/docker
@ -194,8 +194,8 @@ $ docker compose -f docker-compose.yml up -d
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.20.5-slim | &approx;2 | ❌ | Stable release |
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -1,6 +1,6 @@
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.png" width="350" alt="ragflow logo">
<img src="web/src/assets/logo-with-text.svg" width="350" alt="ragflow logo">
</a>
</div>
@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -60,8 +60,8 @@
## 🔥 最新情報
- 2025-10-15 オーケストレーションされたデータパイプラインのサポート。
- 2025-08-08 OpenAI の最新 GPT-5 シリーズモデルをサポートします。
- 2025-08-04 新モデル、キミK2およびGrok 4をサポート。
- 2025-08-01 エージェントワークフローとMCPをサポート。
- 2025-05-23 エージェントに Python/JS コードエグゼキュータコンポーネントを追加しました。
- 2025-05-05 言語間クエリをサポートしました。
@ -109,7 +109,7 @@
## 🔎 システム構成
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>
## 🎬 初期設定
@ -160,7 +160,7 @@
> 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。
> ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。
> 以下のコマンドは、RAGFlow Docker イメージの v0.20.5-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.20.5-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.20.5 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5 と設定します。
> 以下のコマンドは、RAGFlow Docker イメージの v0.21.0-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.21.0-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.21.0 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0 と設定します。
```bash
$ cd ragflow/docker
@ -173,8 +173,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.20.5-slim | &approx;2 | ❌ | Stable release |
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -1,6 +1,6 @@
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.png" width="520" alt="ragflow logo">
<img src="web/src/assets/logo-with-text.svg" width="520" alt="ragflow logo">
</a>
</div>
@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -60,8 +60,8 @@
## 🔥 업데이트
- 2025-10-15 조정된 데이터 파이프라인 지원.
- 2025-08-08 OpenAI의 최신 GPT-5 시리즈 모델을 지원합니다.
- 2025-08-04 새로운 모델인 Kimi K2와 Grok 4를 포함하여 지원합니다.
- 2025-08-01 에이전트 워크플로우와 MCP를 지원합니다.
- 2025-05-23 Agent에 Python/JS 코드 실행기 구성 요소를 추가합니다.
- 2025-05-05 언어 간 쿼리를 지원합니다.
@ -109,7 +109,7 @@
## 🔎 시스템 아키텍처
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>
## 🎬 시작하기
@ -160,7 +160,7 @@
> 모든 Docker 이미지는 x86 플랫폼을 위해 빌드되었습니다. 우리는 현재 ARM64 플랫폼을 위한 Docker 이미지를 제공하지 않습니다.
> ARM64 플랫폼을 사용 중이라면, [시스템과 호환되는 Docker 이미지를 빌드하려면 이 가이드를 사용해 주세요](https://ragflow.io/docs/dev/build_docker_image).
> 아래 명령어는 RAGFlow Docker 이미지의 v0.20.5-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.20.5-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.20.5을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5로 설정합니다.
> 아래 명령어는 RAGFlow Docker 이미지의 v0.21.0-slim 버전을 다운로드합니다. 다양한 RAGFlow 버전에 대한 설명은 다음 표를 참조하십시오. v0.21.0-slim과 다른 RAGFlow 버전을 다운로드하려면, docker/.env 파일에서 RAGFLOW_IMAGE 변수를 적절히 업데이트한 후 docker compose를 사용하여 서버를 시작하십시오. 예를 들어, 전체 버전인 v0.21.0을 다운로드하려면 RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0로 설정합니다.
```bash
$ cd ragflow/docker
@ -173,8 +173,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.20.5-slim | &approx;2 | ❌ | Stable release |
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -1,6 +1,6 @@
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.png" width="520" alt="ragflow logo">
<img src="web/src/assets/logo-with-text.svg" width="520" alt="ragflow logo">
</a>
</div>
@ -22,7 +22,7 @@
<img alt="Badge Estático" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Última%20Relese" alt="Última Versão">
@ -80,8 +80,8 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Últimas Atualizações
- 10-15-2025 Suporte para pipelines de dados orquestrados.
- 08-08-2025 Suporta a mais recente série GPT-5 da OpenAI.
- 04-08-2025 Suporta novos modelos, incluindo Kimi K2 e Grok 4.
- 01-08-2025 Suporta fluxo de trabalho agente e MCP.
- 23-05-2025 Adicione o componente executor de código Python/JS ao Agente.
- 05-05-2025 Suporte a consultas entre idiomas.
@ -129,7 +129,7 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔎 Arquitetura do Sistema
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>
## 🎬 Primeiros Passos
@ -180,7 +180,7 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
> Todas as imagens Docker são construídas para plataformas x86. Atualmente, não oferecemos imagens Docker para ARM64.
> Se você estiver usando uma plataforma ARM64, por favor, utilize [este guia](https://ragflow.io/docs/dev/build_docker_image) para construir uma imagem Docker compatível com o seu sistema.
> O comando abaixo baixa a edição `v0.20.5-slim` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.20.5-slim`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. Por exemplo: defina `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5` para a edição completa `v0.20.5`.
> O comando abaixo baixa a edição `v0.21.0-slim` da imagem Docker do RAGFlow. Consulte a tabela a seguir para descrições de diferentes edições do RAGFlow. Para baixar uma edição do RAGFlow diferente da `v0.21.0-slim`, atualize a variável `RAGFLOW_IMAGE` conforme necessário no **docker/.env** antes de usar `docker compose` para iniciar o servidor. Por exemplo: defina `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0` para a edição completa `v0.21.0`.
```bash
$ cd ragflow/docker
@ -193,8 +193,8 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
| Tag da imagem RAGFlow | Tamanho da imagem (GB) | Possui modelos de incorporação? | Estável? |
| --------------------- | ---------------------- | ------------------------------- | ------------------------ |
| v0.20.5 | ~9 | :heavy_check_mark: | Lançamento estável |
| v0.20.5-slim | ~2 | ❌ | Lançamento estável |
| v0.21.0 | ~9 | :heavy_check_mark: | Lançamento estável |
| v0.21.0-slim | ~2 | ❌ | Lançamento estável |
| nightly | ~9 | :heavy_check_mark: | _Instável_ build noturno |
| nightly-slim | ~2 | ❌ | _Instável_ build noturno |

View File

@ -1,6 +1,6 @@
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.png" width="350" alt="ragflow logo">
<img src="web/src/assets/logo-with-text.svg" width="350" alt="ragflow logo">
</a>
</div>
@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -83,8 +83,8 @@
## 🔥 近期更新
- 2025-10-15 支援可編排的資料管道。
- 2025-08-08 支援 OpenAI 最新的 GPT-5 系列模型。
- 2025-08-04 支援 Kimi K2 和 Grok 4 等模型.
- 2025-08-01 支援 agentic workflow 和 MCP
- 2025-05-23 為 Agent 新增 Python/JS 程式碼執行器元件。
- 2025-05-05 支援跨語言查詢。
@ -132,7 +132,7 @@
## 🔎 系統架構
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>
## 🎬 快速開始
@ -183,7 +183,7 @@
> 所有 Docker 映像檔都是為 x86 平台建置的。目前,我們不提供 ARM64 平台的 Docker 映像檔。
> 如果您使用的是 ARM64 平台,請使用 [這份指南](https://ragflow.io/docs/dev/build_docker_image) 來建置適合您系統的 Docker 映像檔。
> 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.20.5-slim`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.20.5-slim` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。例如,你可以透過設定 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5` 來下載 RAGFlow 鏡像的 `v0.20.5` 完整發行版。
> 執行以下指令會自動下載 RAGFlow slim Docker 映像 `v0.21.0-slim`。請參考下表查看不同 Docker 發行版的說明。如需下載不同於 `v0.21.0-slim` 的 Docker 映像,請在執行 `docker compose` 啟動服務之前先更新 **docker/.env** 檔案內的 `RAGFLOW_IMAGE` 變數。例如,你可以透過設定 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0` 來下載 RAGFlow 鏡像的 `v0.21.0` 完整發行版。
```bash
$ cd ragflow/docker
@ -196,8 +196,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.20.5-slim | &approx;2 | ❌ | Stable release |
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

View File

@ -1,6 +1,6 @@
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.png" width="350" alt="ragflow logo">
<img src="web/src/assets/logo-with-text.svg" width="350" alt="ragflow logo">
</a>
</div>
@ -22,7 +22,7 @@
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.20.5">
<img src="https://img.shields.io/docker/pulls/infiniflow/ragflow?label=Docker%20Pulls&color=0db7ed&logo=docker&logoColor=white&style=flat-square" alt="docker pull infiniflow/ragflow:v0.21.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
@ -83,8 +83,8 @@
## 🔥 近期更新
- 2025-08-08 支持 OpenAI 最新的 GPT-5 系列模型.
- 2025-08-04 新增对 Kimi K2 和 Grok 4 等模型的支持.
- 2025-10-15 支持可编排的数据管道。
- 2025-08-08 支持 OpenAI 最新的 GPT-5 系列模型。
- 2025-08-01 支持 agentic workflow 和 MCP。
- 2025-05-23 Agent 新增 Python/JS 代码执行器组件。
- 2025-05-05 支持跨语言查询。
@ -132,7 +132,7 @@
## 🔎 系统架构
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
<img src="https://github.com/user-attachments/assets/31b0dd6f-ca4f-445a-9457-70cb44a381b2" width="1000"/>
</div>
## 🎬 快速开始
@ -183,7 +183,7 @@
> 请注意,目前官方提供的所有 Docker 镜像均基于 x86 架构构建,并不提供基于 ARM64 的 Docker 镜像。
> 如果你的操作系统是 ARM64 架构,请参考[这篇文档](https://ragflow.io/docs/dev/build_docker_image)自行构建 Docker 镜像。
> 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.20.5-slim`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.20.5-slim` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。比如,你可以通过设置 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5` 来下载 RAGFlow 镜像的 `v0.20.5` 完整发行版。
> 运行以下命令会自动下载 RAGFlow slim Docker 镜像 `v0.21.0-slim`。请参考下表查看不同 Docker 发行版的描述。如需下载不同于 `v0.21.0-slim` 的 Docker 镜像,请在运行 `docker compose` 启动服务之前先更新 **docker/.env** 文件内的 `RAGFLOW_IMAGE` 变量。比如,你可以通过设置 `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0` 来下载 RAGFlow 镜像的 `v0.21.0` 完整发行版。
```bash
$ cd ragflow/docker
@ -196,8 +196,8 @@
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.20.5 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.20.5-slim | &approx;2 | ❌ | Stable release |
| v0.21.0 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.0-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |

47
admin/build_cli_release.sh Executable file
View File

@ -0,0 +1,47 @@
#!/bin/bash
set -e
echo "🚀 Start building..."
echo "================================"
PROJECT_NAME="ragflow-cli"
RELEASE_DIR="release"
BUILD_DIR="dist"
SOURCE_DIR="src"
PACKAGE_DIR="ragflow_cli"
echo "🧹 Clean old build folder..."
rm -rf release/
echo "📁 Prepare source code..."
mkdir release/$PROJECT_NAME/$SOURCE_DIR -p
cp pyproject.toml release/$PROJECT_NAME/pyproject.toml
cp README.md release/$PROJECT_NAME/README.md
mkdir release/$PROJECT_NAME/$SOURCE_DIR/$PACKAGE_DIR -p
cp admin_client.py release/$PROJECT_NAME/$SOURCE_DIR/$PACKAGE_DIR/admin_client.py
if [ -d "release/$PROJECT_NAME/$SOURCE_DIR" ]; then
echo "✅ source dir: release/$PROJECT_NAME/$SOURCE_DIR"
else
echo "❌ source dir not exist: release/$PROJECT_NAME/$SOURCE_DIR"
exit 1
fi
echo "🔨 Make build file..."
cd release/$PROJECT_NAME
export PYTHONPATH=$(pwd)
python -m build
echo "✅ check build result..."
if [ -d "$BUILD_DIR" ]; then
echo "📦 Package generated:"
ls -la $BUILD_DIR/
else
echo "❌ Build Failed: $BUILD_DIR not exist."
exit 1
fi
echo "🎉 Build finished successfully!"

View File

@ -15,22 +15,55 @@ It consists of a server-side Service and a command-line client (CLI), both imple
- **Admin Service**: A backend service that interfaces with the RAGFlow system to execute administrative operations and monitor its status.
- **Admin CLI**: A command-line interface that allows users to connect to the Admin Service and issue commands for system management.
### Starting the Admin Service
1. Before start Admin Service, please make sure RAGFlow system is already started.
#### Launching from source code
1. Before start Admin Service, please make sure RAGFlow system is already started.
2. Launch from source code:
```bash
python admin/server/admin_server.py
```
The service will start and listen for incoming connections from the CLI on the configured port.
#### Using docker image
1. Before startup, please configure the `docker_compose.yml` file to enable admin server:
```bash
command:
- --enable-adminserver
```
2. Start the containers, the service will start and listen for incoming connections from the CLI on the configured port.
2. Run the service script:
```bash
python admin/admin_server.py
```
The service will start and listen for incoming connections from the CLI on the configured port.
### Using the Admin CLI
1. Ensure the Admin Service is running.
2. Launch the CLI client:
2. Install ragflow-cli.
```bash
python admin/admin_client.py -h 0.0.0.0 -p 9381
pip install ragflow-cli==0.21.0
```
3. Launch the CLI client:
```bash
ragflow-cli -h 127.0.0.1 -p 9381
```
You will be prompted to enter the superuser's password to log in.
The default password is admin.
**Parameters:**
- -h: RAGFlow admin server host address
- -p: RAGFlow admin server port
## Supported Commands
@ -42,12 +75,7 @@ Commands are case-insensitive and must be terminated with a semicolon (`;`).
- Lists all available services within the RAGFlow system.
- `SHOW SERVICE <id>;`
- Shows detailed status information for the service identified by `<id>`.
- `STARTUP SERVICE <id>;`
- Attempts to start the service identified by `<id>`.
- `SHUTDOWN SERVICE <id>;`
- Attempts to gracefully shut down the service identified by `<id>`.
- `RESTART SERVICE <id>;`
- Attempts to restart the service identified by `<id>`.
### User Management Commands
@ -55,10 +83,17 @@ Commands are case-insensitive and must be terminated with a semicolon (`;`).
- Lists all users known to the system.
- `SHOW USER '<username>';`
- Shows details and permissions for the specified user. The username must be enclosed in single or double quotes.
- `CREATE USER <username> <password>;`
- Create user by username and password. The username and password must be enclosed in single or double quotes.
- `DROP USER '<username>';`
- Removes the specified user from the system. Use with caution.
- `ALTER USER PASSWORD '<username>' '<new_password>';`
- Changes the password for the specified user.
- `ALTER USER ACTIVE <username> <on/off>;`
- Changes the user to active or inactive.
### Data and Agent Commands

View File

@ -1,13 +1,29 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import argparse
import base64
from cmd import Cmd
from Cryptodome.PublicKey import RSA
from Cryptodome.Cipher import PKCS1_v1_5 as Cipher_pkcs1_v1_5
from typing import Dict, List, Any
from lark import Lark, Transformer, Tree
from lark import Lark, Transformer, Tree, Token
import requests
from requests.auth import HTTPBasicAuth
from api.common.base64 import encode_to_base64
GRAMMAR = r"""
start: command
@ -176,12 +192,55 @@ def encrypt(input_string):
return base64.b64encode(cipher_text).decode("utf-8")
class AdminCommandParser:
def encode_to_base64(input_string):
base64_encoded = base64.b64encode(input_string.encode('utf-8'))
return base64_encoded.decode('utf-8')
class AdminCLI(Cmd):
def __init__(self):
super().__init__()
self.parser = Lark(GRAMMAR, start='start', parser='lalr', transformer=AdminTransformer())
self.command_history = []
self.is_interactive = False
self.admin_account = "admin@ragflow.io"
self.admin_password: str = "admin"
self.host: str = ""
self.port: int = 0
def parse_command(self, command_str: str) -> Dict[str, Any]:
intro = r"""Type "\h" for help."""
prompt = "admin> "
def onecmd(self, command: str) -> bool:
try:
result = self.parse_command(command)
if isinstance(result, dict):
if 'type' in result and result.get('type') == 'empty':
return False
self.execute_command(result)
if isinstance(result, Tree):
return False
if result.get('type') == 'meta' and result.get('command') in ['q', 'quit', 'exit']:
return True
except KeyboardInterrupt:
print("\nUse '\\q' to quit")
except EOFError:
print("\nGoodbye!")
return True
return False
def emptyline(self) -> bool:
return False
def default(self, line: str) -> bool:
return self.onecmd(line)
def parse_command(self, command_str: str) -> dict[str, str] | Tree[Token]:
if not command_str.strip():
return {'type': 'empty'}
@ -193,16 +252,6 @@ class AdminCommandParser:
except Exception as e:
return {'type': 'error', 'message': f'Parse error: {str(e)}'}
class AdminCLI:
def __init__(self):
self.parser = AdminCommandParser()
self.is_interactive = False
self.admin_account = "admin@ragflow.io"
self.admin_password: str = "admin"
self.host: str = ""
self.port: int = 0
def verify_admin(self, args):
conn_info = self._parse_connection_args(args)
@ -251,10 +300,25 @@ class AdminCLI:
columns = list(data[0].keys())
col_widths = {}
def get_string_width(text):
half_width_chars = (
" !\"#$%&'()*+,-./0123456789:;<=>?@"
"ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`"
"abcdefghijklmnopqrstuvwxyz{|}~"
"\t\n\r"
)
width = 0
for char in text:
if char in half_width_chars:
width += 1
else:
width += 2
return width
for col in columns:
max_width = len(str(col))
max_width = get_string_width(str(col))
for item in data:
value_len = len(str(item.get(col, '')))
value_len = get_string_width(str(item.get(col, '')))
if value_len > max_width:
max_width = value_len
col_widths[col] = max(2, max_width)
@ -273,9 +337,9 @@ class AdminCLI:
row = "|"
for col in columns:
value = str(item.get(col, ''))
if len(value) > col_widths[col]:
if get_string_width(value) > col_widths[col]:
value = value[:col_widths[col] - 3] + "..."
row += f" {value:<{col_widths[col]}} |"
row += f" {value:<{col_widths[col] - (get_string_width(value) - len(value))}} |"
print(row)
print(separator)
@ -292,7 +356,7 @@ class AdminCLI:
continue
print(f"command: {command}")
result = self.parser.parse_command(command)
result = self.parse_command(command)
self.execute_command(result)
if isinstance(result, Tree):
@ -384,12 +448,28 @@ class AdminCLI:
if response.status_code == 200:
self._print_table_simple(res_json['data'])
else:
print(f"Fail to get all users, code: {res_json['code']}, message: {res_json['message']}")
print(f"Fail to get all services, code: {res_json['code']}, message: {res_json['message']}")
def _handle_show_service(self, command):
service_id: int = command['number']
print(f"Showing service: {service_id}")
url = f'http://{self.host}:{self.port}/api/v1/admin/services/{service_id}'
response = requests.get(url, auth=HTTPBasicAuth(self.admin_account, self.admin_password))
res_json = response.json()
if response.status_code == 200:
res_data = res_json['data']
if 'status' in res_data and res_data['status'] == 'alive':
print(f"Service {res_data['service_name']} is alive, ")
if isinstance(res_data['message'], str):
print(res_data['message'])
else:
self._print_table_simple(res_data['message'])
else:
print(f"Service {res_data['service_name']} is down, {res_data['message']}")
else:
print(f"Fail to show service, code: {res_json['code']}, message: {res_json['message']}")
def _handle_restart_service(self, command):
service_id: int = command['number']
print(f"Restart service {service_id}")
@ -563,10 +643,17 @@ def main():
/_/ |_/_/ |_\____/_/ /_/\____/|__/|__/ /_/ |_\__,_/_/ /_/ /_/_/_/ /_/
""")
if cli.verify_admin(sys.argv):
cli.run_interactive()
cli.cmdloop()
else:
print(r"""
____ ___ ______________ ___ __ _
/ __ \/ | / ____/ ____/ /___ _ __ / | ____/ /___ ___ (_)___
/ /_/ / /| |/ / __/ /_ / / __ \ | /| / / / /| |/ __ / __ `__ \/ / __ \
/ _, _/ ___ / /_/ / __/ / / /_/ / |/ |/ / / ___ / /_/ / / / / / / / / / /
/_/ |_/_/ |_\____/_/ /_/\____/|__/|__/ /_/ |_\__,_/_/ /_/ /_/_/_/ /_/
""")
if cli.verify_admin(sys.argv):
cli.run_interactive()
cli.cmdloop()
# cli.run_single_command(sys.argv[1:])

View File

@ -0,0 +1,24 @@
[project]
name = "ragflow-cli"
version = "0.21.0"
description = "Admin Service's client of [RAGFlow](https://github.com/infiniflow/ragflow). The Admin Service provides user management and system monitoring. "
authors = [{ name = "Lynn", email = "lynn_inf@hotmail.com" }]
license = { text = "Apache License, Version 2.0" }
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"requests>=2.30.0,<3.0.0",
"beartype>=0.18.5,<0.19.0",
"pycryptodomex>=3.10.0",
"lark>=1.1.0",
]
[dependency-groups]
test = [
"pytest>=8.3.5",
"requests>=2.32.3",
"requests-toolbelt>=1.0.0",
]
[project.scripts]
ragflow-cli = "admin_client:main"

View File

24
admin/pyproject.toml Normal file
View File

@ -0,0 +1,24 @@
[project]
name = "ragflow-cli"
version = "0.21.0.dev2"
description = "Admin Service's client of [RAGFlow](https://github.com/infiniflow/ragflow). The Admin Service provides user management and system monitoring. "
authors = [{ name = "Lynn", email = "lynn_inf@hotmail.com" }]
license = { text = "Apache License, Version 2.0" }
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"requests>=2.30.0,<3.0.0",
"beartype>=0.18.5,<0.19.0",
"pycryptodomex>=3.10.0",
"lark>=1.1.0",
]
[dependency-groups]
test = [
"pytest>=8.3.5",
"requests>=2.32.3",
"requests-toolbelt>=1.0.0",
]
[project.scripts]
ragflow-cli = "ragflow_cli.admin_client:main"

View File

@ -1,15 +0,0 @@
from flask import jsonify
def success_response(data=None, message="Success", code = 0):
return jsonify({
"code": code,
"message": message,
"data": data
}), 200
def error_response(message="Error", code=-1, data=None):
return jsonify({
"code": code,
"message": message,
"data": data
}), 400

View File

@ -1,3 +1,18 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import signal

View File

@ -1,9 +1,26 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import uuid
from functools import wraps
from flask import request, jsonify
from exceptions import AdminException
from api.common.exceptions import AdminException
from api.db.init_data import encode_to_base64
from api.db.services import UserService

View File

@ -1,3 +1,20 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import threading
from enum import Enum
@ -9,6 +26,8 @@ from urllib.parse import urlparse
class ServiceConfigs:
configs = dict
def __init__(self):
self.configs = []
self.lock = threading.Lock()
@ -32,9 +51,11 @@ class BaseConfig(BaseModel):
host: str
port: int
service_type: str
detail_func_name: str
def to_dict(self) -> dict[str, Any]:
return {'id': self.id, 'name': self.name, 'host': self.host, 'port': self.port, 'service_type': self.service_type}
return {'id': self.id, 'name': self.name, 'host': self.host, 'port': self.port,
'service_type': self.service_type}
class MetaConfig(BaseConfig):
@ -209,7 +230,9 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
name: str = f'ragflow_{ragflow_count}'
host: str = v['host']
http_port: int = v['http_port']
config = RAGFlowServerConfig(id=id_count, name=name, host=host, port=http_port, service_type="ragflow_server")
config = RAGFlowServerConfig(id=id_count, name=name, host=host, port=http_port,
service_type="ragflow_server",
detail_func_name="check_ragflow_server_alive")
configurations.append(config)
id_count += 1
case "es":
@ -222,7 +245,8 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
password: str = v.get('password')
config = ElasticsearchConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval",
retrieval_type="elasticsearch",
username=username, password=password)
username=username, password=password,
detail_func_name="get_es_cluster_stats")
configurations.append(config)
id_count += 1
@ -233,8 +257,9 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
host = parts[0]
port = int(parts[1])
database: str = v.get('db_name', 'default_db')
config = InfinityConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval", retrieval_type="infinity",
db_name=database)
config = InfinityConfig(id=id_count, name=name, host=host, port=port, service_type="retrieval",
retrieval_type="infinity",
db_name=database, detail_func_name="get_infinity_status")
configurations.append(config)
id_count += 1
case "minio":
@ -245,8 +270,9 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
port = int(parts[1])
user = v.get('user')
password = v.get('password')
config = MinioConfig(id=id_count, name=name, host=host, port=port, user=user, password=password, service_type="file_store",
store_type="minio")
config = MinioConfig(id=id_count, name=name, host=host, port=port, user=user, password=password,
service_type="file_store",
store_type="minio", detail_func_name="check_minio_alive")
configurations.append(config)
id_count += 1
case "redis":
@ -258,7 +284,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
password = v.get('password')
db: int = v.get('db')
config = RedisConfig(id=id_count, name=name, host=host, port=port, password=password, database=db,
service_type="message_queue", mq_type="redis")
service_type="message_queue", mq_type="redis", detail_func_name="get_redis_info")
configurations.append(config)
id_count += 1
case "mysql":
@ -268,7 +294,7 @@ def load_configurations(config_path: str) -> list[BaseConfig]:
username = v.get('user')
password = v.get('password')
config = MySQLConfig(id=id_count, name=name, host=host, port=port, username=username, password=password,
service_type="meta_data", meta_type="mysql")
service_type="meta_data", meta_type="mysql", detail_func_name="get_mysql_status")
configurations.append(config)
id_count += 1
case "admin":

View File

@ -12,4 +12,4 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

34
admin/server/responses.py Normal file
View File

@ -0,0 +1,34 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import jsonify
def success_response(data=None, message="Success", code=0):
return jsonify({
"code": code,
"message": message,
"data": data
}), 200
def error_response(message="Error", code=-1, data=None):
return jsonify({
"code": code,
"message": message,
"data": data
}), 400

View File

@ -1,9 +1,26 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import Blueprint, request
from auth import login_verify
from responses import success_response, error_response
from services import UserMgr, ServiceMgr, UserServiceMgr
from exceptions import AdminException
from api.common.exceptions import AdminException
admin_bp = Blueprint('admin', __name__, url_prefix='/api/v1/admin')
@ -42,7 +59,7 @@ def create_user():
res = UserMgr.create_user(username, password, role)
if res["success"]:
user_info = res["user_info"]
user_info.pop("password") # do not return password
user_info.pop("password") # do not return password
return success_response(user_info, "User created successfully")
else:
return error_response("create user failed")
@ -102,6 +119,7 @@ def alter_user_activate_status(username):
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>', methods=['GET'])
@login_verify
def get_user_details(username):
@ -114,6 +132,7 @@ def get_user_details(username):
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/datasets', methods=['GET'])
@login_verify
def get_user_datasets(username):

View File

@ -1,3 +1,20 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import re
from werkzeug.security import check_password_hash
from api.db import ActiveEnum
@ -7,16 +24,20 @@ from api.db.services.canvas_service import UserCanvasService
from api.db.services.user_service import TenantService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils.crypt import decrypt
from exceptions import AdminException, UserAlreadyExistsError, UserNotFoundError
from api.utils import health_utils
from api.common.exceptions import AdminException, UserAlreadyExistsError, UserNotFoundError
from config import SERVICE_CONFIGS
class UserMgr:
@staticmethod
def get_all_users():
users = UserService.get_all_users()
result = []
for user in users:
result.append({'email': user.email, 'nickname': user.nickname, 'create_date': user.create_date, 'is_active': user.is_active})
result.append({'email': user.email, 'nickname': user.nickname, 'create_date': user.create_date,
'is_active': user.is_active})
return result
@staticmethod
@ -110,6 +131,7 @@ class UserMgr:
UserService.update_user(usr.id, {"is_active": target_status})
return f"Turn {_activate_status} user activate status successfully!"
class UserServiceMgr:
@staticmethod
@ -148,14 +170,24 @@ class UserServiceMgr:
'canvas_category': r['canvas_category']
} for r in res]
class ServiceMgr:
@staticmethod
def get_all_services():
result = []
configs = SERVICE_CONFIGS.configs
for config in configs:
result.append(config.to_dict())
for service_id, config in enumerate(configs):
config_dict = config.to_dict()
try:
service_detail = ServiceMgr.get_service_details(service_id)
if "status" in service_detail:
config_dict['status'] = service_detail['status']
else:
config_dict['status'] = 'timeout'
except Exception:
config_dict['status'] = 'timeout'
result.append(config_dict)
return result
@staticmethod
@ -164,7 +196,22 @@ class ServiceMgr:
@staticmethod
def get_service_details(service_id: int):
raise AdminException("get_service_details: not implemented")
service_id = int(service_id)
configs = SERVICE_CONFIGS.configs
service_config_mapping = {
c.id: {
'name': c.name,
'detail_func_name': c.detail_func_name
} for c in configs
}
service_info = service_config_mapping.get(service_id, {})
if not service_info:
raise AdminException(f"invalid service_id: {service_id}")
detail_func = getattr(health_utils, service_info.get('detail_func_name'))
res = detail_func()
res.update({'service_name': service_info.get('name')})
return res
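`get_all_services` and `get_service_details` above rely on each entry of `SERVICE_CONFIGS.configs` exposing `id`, `name`, `detail_func_name` and `to_dict()`, and on `api.utils.health_utils` providing a callable of that name. A sketch of that contract; only those attribute names come from the diff, the dataclass and the example checker are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ServiceConfig:
    id: int
    name: str
    detail_func_name: str  # resolved with getattr(health_utils, detail_func_name)

    def to_dict(self):
        return {"id": self.id, "name": self.name}

def get_es_detail():
    # Hypothetical health_utils checker; real ones should return a dict with a "status" key,
    # otherwise get_all_services falls back to 'timeout'.
    return {"status": "green", "latency_ms": 12}
```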
@staticmethod
def shutdown_service(service_id: int):

View File

@ -203,7 +203,6 @@ class Canvas(Graph):
self.history = []
self.retrieval = []
self.memory = []
for k in self.globals.keys():
if isinstance(self.globals[k], str):
self.globals[k] = ""
@ -292,7 +291,6 @@ class Canvas(Graph):
"thoughts": self.get_component_thoughts(self.path[i])
})
_run_batch(idx, to)
# post processing of components invocation
for i in range(idx, to):
cpn = self.get_component(self.path[i])
@ -393,7 +391,6 @@ class Canvas(Graph):
self.path = path
yield decorate("user_inputs", {"inputs": another_inputs, "tips": tips})
return
self.path = self.path[:idx]
if not self.error:
yield decorate("workflow_finished",

View File

@ -346,3 +346,11 @@ Respond immediately with your final comprehensive answer.
return "Error occurred."
def reset(self, temp=False):
"""
Reset all tools if they have a reset method. This avoids errors for tools like MCPToolCallSession.
"""
for k, cpn in self.tools.items():
if hasattr(cpn, "reset") and callable(cpn.reset):
cpn.reset()
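The `hasattr`/`callable` guard above means a tool participates in reset simply by exposing a no-argument `reset()`; stateless tools are skipped without error. A minimal sketch (the class is hypothetical):

```python
class EchoTool:
    """Hypothetical tool holding per-run state that should be cleared between executions."""

    def __init__(self):
        self._cache = {}

    def reset(self):
        self._cache.clear()
```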

View File

@ -19,11 +19,12 @@ import os
import re
import time
from abc import ABC
import requests
from agent.component.base import ComponentBase, ComponentParamBase
from api.utils.api_utils import timeout
from deepdoc.parser import HtmlParser
from agent.component.base import ComponentBase, ComponentParamBase
class InvokeParam(ComponentParamBase):
@ -43,11 +44,11 @@ class InvokeParam(ComponentParamBase):
self.datatype = "json" # New parameter to determine data posting type
def check(self):
self.check_valid_value(self.method.lower(), "Type of content from the crawler", ['get', 'post', 'put'])
self.check_valid_value(self.method.lower(), "Type of content from the crawler", ["get", "post", "put"])
self.check_empty(self.url, "End point URL")
self.check_positive_integer(self.timeout, "Timeout time in second")
self.check_boolean(self.clean_html, "Clean HTML")
self.check_valid_value(self.datatype.lower(), "Data post type", ['json', 'formdata']) # Check for valid datapost value
self.check_valid_value(self.datatype.lower(), "Data post type", ["json", "formdata"]) # Check for valid datapost value
class Invoke(ComponentBase, ABC):
@ -63,6 +64,18 @@ class Invoke(ComponentBase, ABC):
args[para["key"]] = self._canvas.get_variable_value(para["ref"])
url = self._param.url.strip()
def replace_variable(match):
var_name = match.group(1)
try:
value = self._canvas.get_variable_value(var_name)
return str(value or "")
except Exception:
return ""
# {base_url} or {component_id@variable_name}
url = re.sub(r"\{([a-zA-Z_][a-zA-Z0-9_.@-]*)\}", replace_variable, url)
if url.find("http") != 0:
url = "http://" + url
@ -75,52 +88,32 @@ class Invoke(ComponentBase, ABC):
proxies = {"http": self._param.proxy, "https": self._param.proxy}
last_e = ""
for _ in range(self._param.max_retries+1):
for _ in range(self._param.max_retries + 1):
try:
if method == 'get':
response = requests.get(url=url,
params=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if method == "get":
response = requests.get(url=url, params=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
if self._param.clean_html:
sections = HtmlParser()(None, response.content)
self.set_output("result", "\n".join(sections))
else:
self.set_output("result", response.text)
if method == 'put':
if self._param.datatype.lower() == 'json':
response = requests.put(url=url,
json=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if method == "put":
if self._param.datatype.lower() == "json":
response = requests.put(url=url, json=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
else:
response = requests.put(url=url,
data=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
response = requests.put(url=url, data=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
if self._param.clean_html:
sections = HtmlParser()(None, response.content)
self.set_output("result", "\n".join(sections))
else:
self.set_output("result", response.text)
if method == 'post':
if self._param.datatype.lower() == 'json':
response = requests.post(url=url,
json=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if method == "post":
if self._param.datatype.lower() == "json":
response = requests.post(url=url, json=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
else:
response = requests.post(url=url,
data=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
response = requests.post(url=url, data=args, headers=headers, proxies=proxies, timeout=self._param.timeout)
if self._param.clean_html:
self.set_output("result", "\n".join(sections))
else:

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -156,8 +156,8 @@ class CodeExec(ToolBase, ABC):
self.set_output("_ERROR", "construct code request error: " + str(e))
try:
resp = requests.post(url=f"http://{settings.SANDBOX_HOST}:9385/run", json=code_req, timeout=os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60))
logging.info(f"http://{settings.SANDBOX_HOST}:9385/run", code_req, resp.status_code)
resp = requests.post(url=f"http://{settings.SANDBOX_HOST}:9385/run", json=code_req, timeout=int(os.environ.get("COMPONENT_EXEC_TIMEOUT", 10*60)))
logging.info(f"http://{settings.SANDBOX_HOST}:9385/run, code_req: {code_req}, resp.status_code {resp.status_code}:")
if resp.status_code != 200:
resp.raise_for_status()
body = resp.json()

View File

@ -53,12 +53,13 @@ class ExeSQLParam(ToolParamBase):
self.max_records = 1024
def check(self):
self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgres', 'mariadb', 'mssql', 'IBM DB2'])
self.check_valid_value(self.db_type, "Choose DB type", ['mysql', 'postgres', 'mariadb', 'mssql', 'IBM DB2', 'trino'])
self.check_empty(self.database, "Database name")
self.check_empty(self.username, "database username")
self.check_empty(self.host, "IP Address")
self.check_positive_integer(self.port, "IP Port")
self.check_empty(self.password, "Database password")
if self.db_type != "trino":
self.check_empty(self.password, "Database password")
self.check_positive_integer(self.max_records, "Maximum number of records")
if self.database == "rag_flow":
if self.host == "ragflow-mysql":
@ -123,6 +124,45 @@ class ExeSQL(ToolBase, ABC):
r'PWD=' + self._param.password
)
db = pyodbc.connect(conn_str)
elif self._param.db_type == 'trino':
try:
import trino
from trino.auth import BasicAuthentication
except Exception:
raise Exception("Missing dependency 'trino'. Please install: pip install trino")
def _parse_catalog_schema(db: str):
if not db:
return None, None
if "." in db:
c, s = db.split(".", 1)
elif "/" in db:
c, s = db.split("/", 1)
else:
c, s = db, "default"
return c, s
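For reference, the parsing above accepts `catalog.schema`, `catalog/schema`, or a bare catalog; illustrative checks of its behaviour:

```python
# Matches the logic of _parse_catalog_schema above.
assert _parse_catalog_schema("hive.sales") == ("hive", "sales")
assert _parse_catalog_schema("hive/sales") == ("hive", "sales")
assert _parse_catalog_schema("hive") == ("hive", "default")
assert _parse_catalog_schema("") == (None, None)
```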
catalog, schema = _parse_catalog_schema(self._param.database)
if not catalog:
raise Exception("For Trino, `database` must be 'catalog.schema' or at least 'catalog'.")
http_scheme = "https" if os.environ.get("TRINO_USE_TLS", "0") == "1" else "http"
auth = None
if http_scheme == "https" and self._param.password:
auth = BasicAuthentication(self._param.username, self._param.password)
try:
db = trino.dbapi.connect(
host=self._param.host,
port=int(self._param.port or 8080),
user=self._param.username or "ragflow",
catalog=catalog,
schema=schema or "default",
http_scheme=http_scheme,
auth=auth
)
except Exception as e:
raise Exception("Database Connection Failed! \n" + str(e))
elif self._param.db_type == 'IBM DB2':
import ibm_db
conn_str = (

View File

@ -85,13 +85,7 @@ class PubMed(ToolBase, ABC):
self._retrieve_chunks(pubmedcnt.findall("PubmedArticle"),
get_title=lambda child: child.find("MedlineCitation").find("Article").find("ArticleTitle").text,
get_url=lambda child: "https://pubmed.ncbi.nlm.nih.gov/" + child.find("MedlineCitation").find("PMID").text,
get_content=lambda child: child.find("MedlineCitation") \
.find("Article") \
.find("Abstract") \
.find("AbstractText").text \
if child.find("MedlineCitation")\
.find("Article").find("Abstract") \
else "No abstract available")
get_content=lambda child: self._format_pubmed_content(child),)
return self.output("formalized_content")
except Exception as e:
last_e = e
@ -104,5 +98,50 @@ class PubMed(ToolBase, ABC):
assert False, self.output()
def _format_pubmed_content(self, child):
"""Extract structured reference info from PubMed XML"""
def safe_find(path):
node = child
for p in path.split("/"):
if node is None:
return None
node = node.find(p)
return node.text if node is not None and node.text else None
title = safe_find("MedlineCitation/Article/ArticleTitle") or "No title"
abstract = safe_find("MedlineCitation/Article/Abstract/AbstractText") or "No abstract available"
journal = safe_find("MedlineCitation/Article/Journal/Title") or "Unknown Journal"
volume = safe_find("MedlineCitation/Article/Journal/JournalIssue/Volume") or "-"
issue = safe_find("MedlineCitation/Article/Journal/JournalIssue/Issue") or "-"
pages = safe_find("MedlineCitation/Article/Pagination/MedlinePgn") or "-"
# Authors
authors = []
for author in child.findall(".//AuthorList/Author"):
lastname = author.findtext("LastName") or ""  # look up names on the author node, not via safe_find on the article root
forename = author.findtext("ForeName") or ""
fullname = f"{forename} {lastname}".strip()
if fullname:
authors.append(fullname)
authors_str = ", ".join(authors) if authors else "Unknown Authors"
# DOI
doi = None
for eid in child.findall(".//ArticleId"):
if eid.attrib.get("IdType") == "doi":
doi = eid.text
break
return (
f"Title: {title}\n"
f"Authors: {authors_str}\n"
f"Journal: {journal}\n"
f"Volume: {volume}\n"
f"Issue: {issue}\n"
f"Pages: {pages}\n"
f"DOI: {doi or '-'}\n"
f"Abstract: {abstract.strip()}"
)
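A quick self-contained check of the formatter's shape, using synthetic XML rather than a real PubMed response:

```python
import xml.etree.ElementTree as ET

sample = """
<PubmedArticle>
  <MedlineCitation>
    <PMID>12345</PMID>
    <Article>
      <ArticleTitle>Sample title</ArticleTitle>
      <Abstract><AbstractText>Sample abstract.</AbstractText></Abstract>
      <Journal>
        <Title>Sample Journal</Title>
        <JournalIssue><Volume>1</Volume><Issue>2</Issue></JournalIssue>
      </Journal>
      <AuthorList>
        <Author><LastName>Doe</LastName><ForeName>Jane</ForeName></Author>
      </AuthorList>
    </Article>
  </MedlineCitation>
</PubmedArticle>
"""
child = ET.fromstring(sample)
# Passing this element to _format_pubmed_content would yield the Title/Authors/Journal/... block above.
```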
def thoughts(self) -> str:
return "Looking for scholarly papers on `{}`,” prioritising reputable sources.".format(self.get_input().get("query", "-_-!"))

View File

@ -57,6 +57,7 @@ class RetrievalParam(ToolParamBase):
self.empty_response = ""
self.use_kg = False
self.cross_languages = []
self.toc_enhance = False
def check(self):
self.check_decimal_float(self.similarity_threshold, "[Retrieval] Similarity threshold")
@ -121,7 +122,7 @@ class Retrieval(ToolBase, ABC):
if kbs:
query = re.sub(r"^user[:\s]*", "", query, flags=re.IGNORECASE)
kbinfos = settings.retrievaler.retrieval(
kbinfos = settings.retriever.retrieval(
query,
embd_mdl,
[kb.tenant_id for kb in kbs],
@ -134,8 +135,13 @@ class Retrieval(ToolBase, ABC):
rerank_mdl=rerank_mdl,
rank_feature=label_question(query, kbs),
)
if self._param.toc_enhance:
chat_mdl = LLMBundle(self._canvas._tenant_id, LLMType.CHAT)
cks = settings.retriever.retrieval_by_toc(query, kbinfos["chunks"], [kb.tenant_id for kb in kbs], chat_mdl, self._param.top_n)
if cks:
kbinfos["chunks"] = cks
if self._param.use_kg:
ck = settings.kg_retrievaler.retrieval(query,
ck = settings.kg_retriever.retrieval(query,
[kb.tenant_id for kb in kbs],
kb_ids,
embd_mdl,
@ -146,7 +152,7 @@ class Retrieval(ToolBase, ABC):
kbinfos = {"chunks": [], "doc_aggs": []}
if self._param.use_kg and kbs:
ck = settings.kg_retrievaler.retrieval(query, [kb.tenant_id for kb in kbs], filtered_kb_ids, embd_mdl, LLMBundle(kbs[0].tenant_id, LLMType.CHAT))
ck = settings.kg_retriever.retrieval(query, [kb.tenant_id for kb in kbs], filtered_kb_ids, embd_mdl, LLMBundle(kbs[0].tenant_id, LLMType.CHAT))
if ck["content_with_weight"]:
ck["content"] = ck["content_with_weight"]
del ck["content_with_weight"]

View File

@ -85,7 +85,7 @@ class SearXNG(ToolBase, ABC):
self.set_output("formalized_content", "")
return ""
searxng_url = (kwargs.get("searxng_url") or getattr(self._param, "searxng_url", "") or "").strip()
searxng_url = (getattr(self._param, "searxng_url", "") or kwargs.get("searxng_url") or "").strip()
# In try-run, if no URL configured, just return empty instead of raising
if not searxng_url:
self.set_output("formalized_content", "")

View File

@ -536,7 +536,7 @@ def list_chunks():
)
kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
res = settings.retrievaler.chunk_list(doc_id, tenant_id, kb_ids)
res = settings.retriever.chunk_list(doc_id, tenant_id, kb_ids)
res = [
{
"content": res_item["content_with_weight"],
@ -884,7 +884,7 @@ def retrieval():
if req.get("keyword", False):
chat_mdl = LLMBundle(kbs[0].tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question)
ranks = settings.retrievaler.retrieval(question, embd_mdl, kbs[0].tenant_id, kb_ids, page, size,
ranks = settings.retriever.retrieval(question, embd_mdl, kbs[0].tenant_id, kb_ids, page, size,
similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl, highlight= highlight,
rank_feature=label_question(question, kbs))

View File

@ -51,7 +51,7 @@ from rag.utils.redis_conn import REDIS_CONN
@manager.route('/templates', methods=['GET']) # noqa: F821
@login_required
def templates():
return get_json_result(data=[c.to_dict() for c in CanvasTemplateService.query(canvas_category=CanvasCategory.Agent)])
return get_json_result(data=[c.to_dict() for c in CanvasTemplateService.get_all()])
@manager.route('/rm', methods=['POST']) # noqa: F821
@ -409,6 +409,49 @@ def test_db_connect():
ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)
return get_json_result(data="Database Connection Successful!")
elif req["db_type"] == 'trino':
def _parse_catalog_schema(db: str):
if not db:
return None, None
if "." in db:
c, s = db.split(".", 1)
elif "/" in db:
c, s = db.split("/", 1)
else:
c, s = db, "default"
return c, s
try:
import trino
import os
from trino.auth import BasicAuthentication
except Exception:
return server_error_response("Missing dependency 'trino'. Please install: pip install trino")
catalog, schema = _parse_catalog_schema(req["database"])
if not catalog:
return server_error_response("For Trino, 'database' must be 'catalog.schema' or at least 'catalog'.")
http_scheme = "https" if os.environ.get("TRINO_USE_TLS", "0") == "1" else "http"
auth = None
if http_scheme == "https" and req.get("password"):
auth = BasicAuthentication(req.get("username") or "ragflow", req["password"])
conn = trino.dbapi.connect(
host=req["host"],
port=int(req["port"] or 8080),
user=req["username"] or "ragflow",
catalog=catalog,
schema=schema or "default",
http_scheme=http_scheme,
auth=auth
)
cur = conn.cursor()
cur.execute("SELECT 1")
cur.fetchall()
cur.close()
conn.close()
return get_json_result(data="Database Connection Successful!")
else:
return server_error_response("Unsupported database type.")
if req["db_type"] != 'mssql':

View File

@ -60,7 +60,7 @@ def list_chunk():
}
if "available_int" in req:
query["available_int"] = int(req["available_int"])
sres = settings.retrievaler.search(query, search.index_name(tenant_id), kb_ids, highlight=True)
sres = settings.retriever.search(query, search.index_name(tenant_id), kb_ids, highlight=["content_ltks"])
res = {"total": sres.total, "chunks": [], "doc": doc.to_dict()}
for id in sres.ids:
d = {
@ -346,7 +346,7 @@ def retrieval_test():
question += keyword_extraction(chat_mdl, question)
labels = label_question(question, [kb])
ranks = settings.retrievaler.retrieval(question, embd_mdl, tenant_ids, kb_ids, page, size,
ranks = settings.retriever.retrieval(question, embd_mdl, tenant_ids, kb_ids, page, size,
float(req.get("similarity_threshold", 0.0)),
float(req.get("vector_similarity_weight", 0.3)),
top,
@ -354,7 +354,7 @@ def retrieval_test():
rank_feature=labels
)
if use_kg:
ck = settings.kg_retrievaler.retrieval(question,
ck = settings.kg_retriever.retrieval(question,
tenant_ids,
kb_ids,
embd_mdl,
@ -384,7 +384,7 @@ def knowledge_graph():
"doc_ids": [doc_id],
"knowledge_graph_kwd": ["graph", "mind_map"]
}
sres = settings.retrievaler.search(req, search.index_name(tenant_id), kb_ids)
sres = settings.retriever.search(req, search.index_name(tenant_id), kb_ids)
obj = {"graph": {}, "mind_map": {}}
for id in sres.ids[:2]:
ty = sres.field[id]["knowledge_graph_kwd"]

View File

@ -24,6 +24,7 @@ from flask import request
from flask_login import current_user, login_required
from api import settings
from api.common.check_team_permission import check_kb_team_permission
from api.constants import FILE_NAME_LEN_LIMIT, IMG_BASE64_PREFIX
from api.db import VALID_FILE_TYPES, VALID_TASK_STATUS, FileSource, FileType, ParserType, TaskStatus
from api.db.db_models import File, Task
@ -68,8 +69,10 @@ def upload():
e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e:
raise LookupError("Can't find this knowledgebase!")
err, files = FileService.upload_document(kb, file_objs, current_user.id)
if not check_kb_team_permission(kb, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
err, files = FileService.upload_document(kb, file_objs, current_user.id)
if err:
return get_json_result(data=files, message="\n".join(err), code=settings.RetCode.SERVER_ERROR)
@ -94,6 +97,8 @@ def web_crawl():
e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e:
raise LookupError("Can't find this knowledgebase!")
if not check_kb_team_permission(kb, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
blob = html2pdf(url)
if not blob:
@ -552,8 +557,8 @@ def get(doc_id):
@login_required
@validate_request("doc_id")
def change_parser():
req = request.json
req = request.json
if not DocumentService.accessible(req["doc_id"], current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
@ -563,7 +568,7 @@ def change_parser():
def reset_doc():
nonlocal doc
e = DocumentService.update_by_id(doc.id, {"parser_id": req["parser_id"], "progress": 0, "progress_msg": "", "run": TaskStatus.UNSTART.value})
e = DocumentService.update_by_id(doc.id, {"pipeline_id": req["pipeline_id"], "parser_id": req["parser_id"], "progress": 0, "progress_msg": "", "run": TaskStatus.UNSTART.value})
if not e:
return get_data_error_result(message="Document not found!")
if doc.token_num > 0:
@ -577,7 +582,7 @@ def change_parser():
settings.docStoreConn.delete({"doc_id": doc.id}, search.index_name(tenant_id), doc.kb_id)
try:
if "pipeline_id" in req:
if "pipeline_id" in req and req["pipeline_id"] != "":
if doc.pipeline_id == req["pipeline_id"]:
return get_json_result(data=True)
DocumentService.update_by_id(doc.id, {"pipeline_id": req["pipeline_id"]})

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License
#
import logging
import os
import pathlib
import re
@ -21,6 +22,7 @@ import flask
from flask import request
from flask_login import login_required, current_user
from api.common.check_team_permission import check_file_team_permission
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
@ -233,54 +235,63 @@ def get_all_parent_folders():
return server_error_response(e)
@manager.route('/rm', methods=['POST']) # noqa: F821
@manager.route("/rm", methods=["POST"]) # noqa: F821
@login_required
@validate_request("file_ids")
def rm():
req = request.json
file_ids = req["file_ids"]
def _delete_single_file(file):
try:
if file.location:
STORAGE_IMPL.rm(file.parent_id, file.location)
except Exception:
logging.exception(f"Fail to remove object: {file.parent_id}/{file.location}")
informs = File2DocumentService.get_by_file_id(file.id)
for inform in informs:
doc_id = inform.document_id
e, doc = DocumentService.get_by_id(doc_id)
if e and doc:
tenant_id = DocumentService.get_tenant_id(doc_id)
if tenant_id:
DocumentService.remove_document(doc, tenant_id)
File2DocumentService.delete_by_file_id(file.id)
FileService.delete(file)
def _delete_folder_recursive(folder, tenant_id):
sub_files = FileService.list_all_files_by_parent_id(folder.id)
for sub_file in sub_files:
if sub_file.type == FileType.FOLDER.value:
_delete_folder_recursive(sub_file, tenant_id)
else:
_delete_single_file(sub_file)
FileService.delete(folder)
try:
for file_id in file_ids:
e, file = FileService.get_by_id(file_id)
if not e:
if not e or not file:
return get_data_error_result(message="File or Folder not found!")
if not file.tenant_id:
return get_data_error_result(message="Tenant not found!")
if file.tenant_id != current_user.id:
return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
if not check_file_team_permission(file, current_user.id):
return get_json_result(data=False, message="No authorization.", code=settings.RetCode.AUTHENTICATION_ERROR)
if file.source_type == FileSource.KNOWLEDGEBASE:
continue
if file.type == FileType.FOLDER.value:
file_id_list = FileService.get_all_innermost_file_ids(file_id, [])
for inner_file_id in file_id_list:
e, file = FileService.get_by_id(inner_file_id)
if not e:
return get_data_error_result(message="File not found!")
STORAGE_IMPL.rm(file.parent_id, file.location)
FileService.delete_folder_by_pf_id(current_user.id, file_id)
else:
STORAGE_IMPL.rm(file.parent_id, file.location)
if not FileService.delete(file):
return get_data_error_result(
message="Database error (File removal)!")
_delete_folder_recursive(file, current_user.id)
continue
# delete file2document
informs = File2DocumentService.get_by_file_id(file_id)
for inform in informs:
doc_id = inform.document_id
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(message="Document not found!")
tenant_id = DocumentService.get_tenant_id(doc_id)
if not tenant_id:
return get_data_error_result(message="Tenant not found!")
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
message="Database error (Document removal)!")
File2DocumentService.delete_by_file_id(file_id)
_delete_single_file(file)
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@ -294,7 +305,7 @@ def rename():
e, file = FileService.get_by_id(req["file_id"])
if not e:
return get_data_error_result(message="File not found!")
if file.tenant_id != current_user.id:
if not check_file_team_permission(file, current_user.id):
return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
if file.type != FileType.FOLDER.value \
and pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
@ -332,7 +343,7 @@ def get(file_id):
e, file = FileService.get_by_id(file_id)
if not e:
return get_data_error_result(message="Document not found!")
if file.tenant_id != current_user.id:
if not check_file_team_permission(file, current_user.id):
return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
blob = STORAGE_IMPL.get(file.parent_id, file.location)
@ -354,31 +365,89 @@ def get(file_id):
return server_error_response(e)
@manager.route('/mv', methods=['POST']) # noqa: F821
@manager.route("/mv", methods=["POST"]) # noqa: F821
@login_required
@validate_request("src_file_ids", "dest_file_id")
def move():
req = request.json
try:
file_ids = req["src_file_ids"]
parent_id = req["dest_file_id"]
dest_parent_id = req["dest_file_id"]
ok, dest_folder = FileService.get_by_id(dest_parent_id)
if not ok or not dest_folder:
return get_data_error_result(message="Parent Folder not found!")
files = FileService.get_by_ids(file_ids)
files_dict = {}
for file in files:
files_dict[file.id] = file
if not files:
return get_data_error_result(message="Source files not found!")
files_dict = {f.id: f for f in files}
for file_id in file_ids:
file = files_dict[file_id]
file = files_dict.get(file_id)
if not file:
return get_data_error_result(message="File or Folder not found!")
if not file.tenant_id:
return get_data_error_result(message="Tenant not found!")
if file.tenant_id != current_user.id:
return get_json_result(data=False, message='No authorization.', code=settings.RetCode.AUTHENTICATION_ERROR)
fe, _ = FileService.get_by_id(parent_id)
if not fe:
return get_data_error_result(message="Parent Folder not found!")
FileService.move_file(file_ids, parent_id)
if not check_file_team_permission(file, current_user.id):
return get_json_result(
data=False,
message="No authorization.",
code=settings.RetCode.AUTHENTICATION_ERROR,
)
def _move_entry_recursive(source_file_entry, dest_folder):
if source_file_entry.type == FileType.FOLDER.value:
existing_folder = FileService.query(name=source_file_entry.name, parent_id=dest_folder.id)
if existing_folder:
new_folder = existing_folder[0]
else:
new_folder = FileService.insert(
{
"id": get_uuid(),
"parent_id": dest_folder.id,
"tenant_id": source_file_entry.tenant_id,
"created_by": current_user.id,
"name": source_file_entry.name,
"location": "",
"size": 0,
"type": FileType.FOLDER.value,
}
)
sub_files = FileService.list_all_files_by_parent_id(source_file_entry.id)
for sub_file in sub_files:
_move_entry_recursive(sub_file, new_folder)
FileService.delete_by_id(source_file_entry.id)
return
old_parent_id = source_file_entry.parent_id
old_location = source_file_entry.location
filename = source_file_entry.name
new_location = filename
while STORAGE_IMPL.obj_exist(dest_folder.id, new_location):
new_location += "_"
try:
STORAGE_IMPL.move(old_parent_id, old_location, dest_folder.id, new_location)
except Exception as storage_err:
raise RuntimeError(f"Move file failed at storage layer: {str(storage_err)}")
FileService.update_by_id(
source_file_entry.id,
{
"parent_id": dest_folder.id,
"location": new_location,
},
)
for file in files:
_move_entry_recursive(file, dest_folder)
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)

View File

@ -36,8 +36,10 @@ from api import settings
from rag.nlp import search
from api.constants import DATASET_NAME_LIMIT
from rag.settings import PAGERANK_FLD
from rag.utils.redis_conn import REDIS_CONN
from rag.utils.storage_factory import STORAGE_IMPL
@manager.route('/create', methods=['post']) # noqa: F821
@login_required
@validate_request("name")
@ -186,6 +188,9 @@ def detail():
return get_data_error_result(
message="Can't find this knowledgebase!")
kb["size"] = DocumentService.get_total_size_by_kb_id(kb_id=kb["id"],keywords="", run_status=[], types=[])
for key in ["graphrag_task_finish_at", "raptor_task_finish_at", "mindmap_task_finish_at"]:
if finish_at := kb.get(key):
kb[key] = finish_at.strftime("%Y-%m-%d %H:%M:%S")
return get_json_result(data=kb)
except Exception as e:
return server_error_response(e)
@ -281,7 +286,7 @@ def list_tags(kb_id):
tenants = UserTenantService.get_tenants_by_user_id(current_user.id)
tags = []
for tenant in tenants:
tags += settings.retrievaler.all_tags(tenant["tenant_id"], [kb_id])
tags += settings.retriever.all_tags(tenant["tenant_id"], [kb_id])
return get_json_result(data=tags)
@ -300,7 +305,7 @@ def list_tags_from_kbs():
tenants = UserTenantService.get_tenants_by_user_id(current_user.id)
tags = []
for tenant in tenants:
tags += settings.retrievaler.all_tags(tenant["tenant_id"], kb_ids)
tags += settings.retriever.all_tags(tenant["tenant_id"], kb_ids)
return get_json_result(data=tags)
@ -361,7 +366,7 @@ def knowledge_graph(kb_id):
obj = {"graph": {}, "mind_map": {}}
if not settings.docStoreConn.indexExist(search.index_name(kb.tenant_id), kb_id):
return get_json_result(data=obj)
sres = settings.retrievaler.search(req, search.index_name(kb.tenant_id), [kb_id])
sres = settings.retriever.search(req, search.index_name(kb.tenant_id), [kb_id])
if not len(sres.ids):
return get_json_result(data=obj)
@ -759,18 +764,25 @@ def delete_kb_task():
match pipeline_task_type:
case PipelineTaskType.GRAPH_RAG:
settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), kb_id)
kb_task_id = "graphrag_task_id"
kb_task_id_field = "graphrag_task_id"
task_id = kb.graphrag_task_id
kb_task_finish_at = "graphrag_task_finish_at"
case PipelineTaskType.RAPTOR:
kb_task_id = "raptor_task_id"
kb_task_id_field = "raptor_task_id"
task_id = kb.raptor_task_id
kb_task_finish_at = "raptor_task_finish_at"
case PipelineTaskType.MINDMAP:
kb_task_id = "mindmap_task_id"
kb_task_id_field = "mindmap_task_id"
task_id = kb.mindmap_task_id
kb_task_finish_at = "mindmap_task_finish_at"
case _:
return get_error_data_result(message="Internal Error: Invalid task type")
ok = KnowledgebaseService.update_by_id(kb_id, {kb_task_id: "", kb_task_finish_at: None})
def cancel_task(task_id):
REDIS_CONN.set(f"{task_id}-cancel", "x")
cancel_task(task_id)
ok = KnowledgebaseService.update_by_id(kb_id, {kb_task_id_field: "", kb_task_finish_at: None})
if not ok:
return server_error_response(f"Internal error: cannot delete task {pipeline_task_type}")

View File

@ -194,6 +194,9 @@ def add_llm():
elif factory == "Azure-OpenAI":
api_key = apikey_json(["api_key", "api_version"])
elif factory == "OpenRouter":
api_key = apikey_json(["api_key", "provider_order"])
llm = {
"tenant_id": current_user.id,
"llm_factory": factory,

View File

@ -1,8 +1,26 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import Response
from flask_login import login_required
from api.utils.api_utils import get_json_result
from plugin import GlobalPluginManager
@manager.route('/llm_tools', methods=['GET']) # noqa: F821
@login_required
def llm_tools() -> Response:

View File

@ -25,6 +25,7 @@ from api.utils.api_utils import get_data_error_result, get_error_data_result, ge
from api.utils.api_utils import get_result
from flask import request
@manager.route('/agents', methods=['GET']) # noqa: F821
@token_required
def list_agents(tenant_id):
@ -41,7 +42,7 @@ def list_agents(tenant_id):
desc = False
else:
desc = True
canvas = UserCanvasService.get_list(tenant_id,page_number,items_per_page,orderby,desc,id,title)
canvas = UserCanvasService.get_list(tenant_id, page_number, items_per_page, orderby, desc, id, title)
return get_result(data=canvas)
@ -93,7 +94,7 @@ def update_agent(tenant_id: str, agent_id: str):
req["dsl"] = json.dumps(req["dsl"], ensure_ascii=False)
req["dsl"] = json.loads(req["dsl"])
if req.get("title") is not None:
req["title"] = req["title"].strip()

View File

@ -215,7 +215,8 @@ def delete(tenant_id):
continue
kb_id_instance_pairs.append((kb_id, kb))
if len(error_kb_ids) > 0:
return get_error_permission_result(message=f"""User '{tenant_id}' lacks permission for datasets: '{", ".join(error_kb_ids)}'""")
return get_error_permission_result(
message=f"""User '{tenant_id}' lacks permission for datasets: '{", ".join(error_kb_ids)}'""")
errors = []
success_count = 0
@ -232,7 +233,8 @@ def delete(tenant_id):
]
)
File2DocumentService.delete_by_document_id(doc.id)
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.type == "folder", File.name == kb.name])
FileService.filter_delete(
[File.source_type == FileSource.KNOWLEDGEBASE, File.type == "folder", File.name == kb.name])
if not KnowledgebaseService.delete_by_id(kb_id):
errors.append(f"Delete dataset error for {kb_id}")
continue
@ -329,7 +331,8 @@ def update(tenant_id, dataset_id):
try:
kb = KnowledgebaseService.get_or_none(id=dataset_id, tenant_id=tenant_id)
if kb is None:
return get_error_permission_result(message=f"User '{tenant_id}' lacks permission for dataset '{dataset_id}'")
return get_error_permission_result(
message=f"User '{tenant_id}' lacks permission for dataset '{dataset_id}'")
if req.get("parser_config"):
req["parser_config"] = deep_merge(kb.parser_config, req["parser_config"])
@ -341,7 +344,8 @@ def update(tenant_id, dataset_id):
del req["parser_config"]
if "name" in req and req["name"].lower() != kb.name.lower():
exists = KnowledgebaseService.get_or_none(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value)
exists = KnowledgebaseService.get_or_none(name=req["name"], tenant_id=tenant_id,
status=StatusEnum.VALID.value)
if exists:
return get_error_data_result(message=f"Dataset name '{req['name']}' already exists")
@ -349,7 +353,8 @@ def update(tenant_id, dataset_id):
if not req["embd_id"]:
req["embd_id"] = kb.embd_id
if kb.chunk_num != 0 and req["embd_id"] != kb.embd_id:
return get_error_data_result(message=f"When chunk_num ({kb.chunk_num}) > 0, embedding_model must remain {kb.embd_id}")
return get_error_data_result(
message=f"When chunk_num ({kb.chunk_num}) > 0, embedding_model must remain {kb.embd_id}")
ok, err = verify_embedding_availability(req["embd_id"], tenant_id)
if not ok:
return err
@ -359,10 +364,12 @@ def update(tenant_id, dataset_id):
return get_error_argument_result(message="'pagerank' can only be set when doc_engine is elasticsearch")
if req["pagerank"] > 0:
settings.docStoreConn.update({"kb_id": kb.id}, {PAGERANK_FLD: req["pagerank"]}, search.index_name(kb.tenant_id), kb.id)
settings.docStoreConn.update({"kb_id": kb.id}, {PAGERANK_FLD: req["pagerank"]},
search.index_name(kb.tenant_id), kb.id)
else:
# Elasticsearch requires PAGERANK_FLD be non-zero!
settings.docStoreConn.update({"exists": PAGERANK_FLD}, {"remove": PAGERANK_FLD}, search.index_name(kb.tenant_id), kb.id)
settings.docStoreConn.update({"exists": PAGERANK_FLD}, {"remove": PAGERANK_FLD},
search.index_name(kb.tenant_id), kb.id)
if not KnowledgebaseService.update_by_id(kb.id, req):
return get_error_data_result(message="Update dataset error.(Database error)")
@ -454,7 +461,7 @@ def list_datasets(tenant_id):
return get_error_permission_result(message=f"User '{tenant_id}' lacks permission for dataset '{name}'")
tenants = TenantService.get_joined_tenants_by_user_id(tenant_id)
kbs = KnowledgebaseService.get_list(
kbs, total = KnowledgebaseService.get_list(
[m["tenant_id"] for m in tenants],
tenant_id,
args["page"],
@ -468,14 +475,15 @@ def list_datasets(tenant_id):
response_data_list = []
for kb in kbs:
response_data_list.append(remap_dictionary_keys(kb))
return get_result(data=response_data_list)
return get_result(data=response_data_list, total=total)
except OperationalError as e:
logging.exception(e)
return get_error_data_result(message="Database operation failed")
@manager.route('/datasets/<dataset_id>/knowledge_graph', methods=['GET']) # noqa: F821
@token_required
def knowledge_graph(tenant_id,dataset_id):
def knowledge_graph(tenant_id, dataset_id):
if not KnowledgebaseService.accessible(dataset_id, tenant_id):
return get_result(
data=False,
@ -491,7 +499,7 @@ def knowledge_graph(tenant_id,dataset_id):
obj = {"graph": {}, "mind_map": {}}
if not settings.docStoreConn.indexExist(search.index_name(kb.tenant_id), dataset_id):
return get_result(data=obj)
sres = settings.retrievaler.search(req, search.index_name(kb.tenant_id), [dataset_id])
sres = settings.retriever.search(req, search.index_name(kb.tenant_id), [dataset_id])
if not len(sres.ids):
return get_result(data=obj)
@ -507,14 +515,16 @@ def knowledge_graph(tenant_id,dataset_id):
if "nodes" in obj["graph"]:
obj["graph"]["nodes"] = sorted(obj["graph"]["nodes"], key=lambda x: x.get("pagerank", 0), reverse=True)[:256]
if "edges" in obj["graph"]:
node_id_set = { o["id"] for o in obj["graph"]["nodes"] }
filtered_edges = [o for o in obj["graph"]["edges"] if o["source"] != o["target"] and o["source"] in node_id_set and o["target"] in node_id_set]
node_id_set = {o["id"] for o in obj["graph"]["nodes"]}
filtered_edges = [o for o in obj["graph"]["edges"] if
o["source"] != o["target"] and o["source"] in node_id_set and o["target"] in node_id_set]
obj["graph"]["edges"] = sorted(filtered_edges, key=lambda x: x.get("weight", 0), reverse=True)[:128]
return get_result(data=obj)
@manager.route('/datasets/<dataset_id>/knowledge_graph', methods=['DELETE']) # noqa: F821
@token_required
def delete_knowledge_graph(tenant_id,dataset_id):
def delete_knowledge_graph(tenant_id, dataset_id):
if not KnowledgebaseService.accessible(dataset_id, tenant_id):
return get_result(
data=False,
@ -522,6 +532,7 @@ def delete_knowledge_graph(tenant_id,dataset_id):
code=settings.RetCode.AUTHENTICATION_ERROR
)
_, kb = KnowledgebaseService.get_by_id(dataset_id)
settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]}, search.index_name(kb.tenant_id), dataset_id)
settings.docStoreConn.delete({"knowledge_graph_kwd": ["graph", "subgraph", "entity", "relation"]},
search.index_name(kb.tenant_id), dataset_id)
return get_result(data=True)

View File

@ -1,4 +1,4 @@
#
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -31,6 +31,89 @@ from api.db.services.dialog_service import meta_filter, convert_conditions
@apikey_required
@validate_request("knowledge_id", "query")
def retrieval(tenant_id):
"""
Dify-compatible retrieval API
---
tags:
- SDK
security:
- ApiKeyAuth: []
parameters:
- in: body
name: body
required: true
schema:
type: object
required:
- knowledge_id
- query
properties:
knowledge_id:
type: string
description: Knowledge base ID
query:
type: string
description: Query text
use_kg:
type: boolean
description: Whether to use knowledge graph
default: false
retrieval_setting:
type: object
description: Retrieval configuration
properties:
score_threshold:
type: number
description: Similarity threshold
default: 0.0
top_k:
type: integer
description: Number of results to return
default: 1024
metadata_condition:
type: object
description: Metadata filter condition
properties:
conditions:
type: array
items:
type: object
properties:
name:
type: string
description: Field name
comparison_operator:
type: string
description: Comparison operator
value:
type: string
description: Field value
responses:
200:
description: Retrieval succeeded
schema:
type: object
properties:
records:
type: array
items:
type: object
properties:
content:
type: string
description: Content text
score:
type: number
description: Similarity score
title:
type: string
description: Document title
metadata:
type: object
description: Metadata info
404:
description: Knowledge base or document not found
"""
req = request.json
question = req["query"]
kb_id = req["knowledge_id"]
@ -38,9 +121,9 @@ def retrieval(tenant_id):
retrieval_setting = req.get("retrieval_setting", {})
similarity_threshold = float(retrieval_setting.get("score_threshold", 0.0))
top = int(retrieval_setting.get("top_k", 1024))
metadata_condition = req.get("metadata_condition",{})
metadata_condition = req.get("metadata_condition", {})
metas = DocumentService.get_meta_by_kbs([kb_id])
doc_ids = []
try:
@ -50,12 +133,12 @@ def retrieval(tenant_id):
embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
print(metadata_condition)
print("after",convert_conditions(metadata_condition))
# print("after", convert_conditions(metadata_condition))
doc_ids.extend(meta_filter(metas, convert_conditions(metadata_condition)))
print("doc_ids",doc_ids)
# print("doc_ids", doc_ids)
if not doc_ids and metadata_condition is not None:
doc_ids = ['-999']
ranks = settings.retrievaler.retrieval(
ranks = settings.retriever.retrieval(
question,
embd_mdl,
kb.tenant_id,
@ -70,17 +153,17 @@ def retrieval(tenant_id):
)
if use_kg:
ck = settings.kg_retrievaler.retrieval(question,
[tenant_id],
[kb_id],
embd_mdl,
LLMBundle(kb.tenant_id, LLMType.CHAT))
ck = settings.kg_retriever.retrieval(question,
[tenant_id],
[kb_id],
embd_mdl,
LLMBundle(kb.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]:
ranks["chunks"].insert(0, ck)
records = []
for c in ranks["chunks"]:
e, doc = DocumentService.get_by_id( c["doc_id"])
e, doc = DocumentService.get_by_id(c["doc_id"])
c.pop("vector", None)
meta = getattr(doc, 'meta_fields', {})
meta["doc_id"] = c["doc_id"]
@ -100,5 +183,3 @@ def retrieval(tenant_id):
)
logging.exception(e)
return build_error_result(message=str(e), code=settings.RetCode.SERVER_ERROR)
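A hedged client-side sketch of the request shape documented in the Dify-compatible endpoint above; the base URL, mount path, and credentials are placeholders rather than values taken from the diff:

```python
import requests

payload = {
    "knowledge_id": "<dataset-id>",
    "query": "What is RAGFlow?",
    "retrieval_setting": {"score_threshold": 0.2, "top_k": 8},
}
resp = requests.post(
    "http://localhost:9380/api/v1/dify/retrieval",  # assumed host/port and mount point
    headers={"Authorization": "Bearer <api-key>"},
    json=payload,
    timeout=30,
)
for record in resp.json().get("records", []):
    print(record["score"], record["title"])
```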

View File

@ -458,7 +458,7 @@ def list_docs(dataset_id, tenant_id):
required: false
default: true
description: Order in descending.
- in: query
- in: query
name: create_time_from
type: integer
required: false
@ -982,7 +982,7 @@ def list_chunks(tenant_id, dataset_id, document_id):
_ = Chunk(**final_chunk)
elif settings.docStoreConn.indexExist(search.index_name(tenant_id), dataset_id):
sres = settings.retrievaler.search(query, search.index_name(tenant_id), [dataset_id], emb_mdl=None, highlight=True)
sres = settings.retriever.search(query, search.index_name(tenant_id), [dataset_id], emb_mdl=None, highlight=True)
res["total"] = sres.total
for id in sres.ids:
d = {
@ -1446,7 +1446,7 @@ def retrieval_test(tenant_id):
chat_mdl = LLMBundle(kb.tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question)
ranks = settings.retrievaler.retrieval(
ranks = settings.retriever.retrieval(
question,
embd_mdl,
tenant_ids,
@ -1462,7 +1462,7 @@ def retrieval_test(tenant_id):
rank_feature=label_question(question, kbs),
)
if use_kg:
ck = settings.kg_retrievaler.retrieval(question, [k.tenant_id for k in kbs], kb_ids, embd_mdl, LLMBundle(kb.tenant_id, LLMType.CHAT))
ck = settings.kg_retriever.retrieval(question, [k.tenant_id for k in kbs], kb_ids, embd_mdl, LLMBundle(kb.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]:
ranks["chunks"].insert(0, ck)

View File

@ -1,3 +1,20 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import pathlib
import re
@ -17,7 +34,8 @@ from api.utils.api_utils import get_json_result
from api.utils.file_utils import filename_type
from rag.utils.storage_factory import STORAGE_IMPL
@manager.route('/file/upload', methods=['POST']) # noqa: F821
@manager.route('/file/upload', methods=['POST']) # noqa: F821
@token_required
def upload(tenant_id):
"""
@ -44,22 +62,22 @@ def upload(tenant_id):
type: object
properties:
data:
type: array
items:
type: object
properties:
id:
type: string
description: File ID
name:
type: string
description: File name
size:
type: integer
description: File size in bytes
type:
type: string
description: File type (e.g., document, folder)
type: array
items:
type: object
properties:
id:
type: string
description: File ID
name:
type: string
description: File name
size:
type: integer
description: File size in bytes
type:
type: string
description: File type (e.g., document, folder)
"""
pf_id = request.form.get("parent_id")
@ -97,12 +115,14 @@ def upload(tenant_id):
e, file = FileService.get_by_id(file_id_list[len_id_list - 1])
if not e:
return get_json_result(data=False, message="Folder not found!", code=404)
last_folder = FileService.create_folder(file, file_id_list[len_id_list - 1], file_obj_names, len_id_list)
last_folder = FileService.create_folder(file, file_id_list[len_id_list - 1], file_obj_names,
len_id_list)
else:
e, file = FileService.get_by_id(file_id_list[len_id_list - 2])
if not e:
return get_json_result(data=False, message="Folder not found!", code=404)
last_folder = FileService.create_folder(file, file_id_list[len_id_list - 2], file_obj_names, len_id_list)
last_folder = FileService.create_folder(file, file_id_list[len_id_list - 2], file_obj_names,
len_id_list)
filetype = filename_type(file_obj_names[file_len - 1])
location = file_obj_names[file_len - 1]
@ -129,7 +149,7 @@ def upload(tenant_id):
return server_error_response(e)
@manager.route('/file/create', methods=['POST']) # noqa: F821
@manager.route('/file/create', methods=['POST']) # noqa: F821
@token_required
def create(tenant_id):
"""
@ -207,7 +227,7 @@ def create(tenant_id):
return server_error_response(e)
@manager.route('/file/list', methods=['GET']) # noqa: F821
@manager.route('/file/list', methods=['GET']) # noqa: F821
@token_required
def list_files(tenant_id):
"""
@ -299,7 +319,7 @@ def list_files(tenant_id):
return server_error_response(e)
@manager.route('/file/root_folder', methods=['GET']) # noqa: F821
@manager.route('/file/root_folder', methods=['GET']) # noqa: F821
@token_required
def get_root_folder(tenant_id):
"""
@ -335,7 +355,7 @@ def get_root_folder(tenant_id):
return server_error_response(e)
@manager.route('/file/parent_folder', methods=['GET']) # noqa: F821
@manager.route('/file/parent_folder', methods=['GET']) # noqa: F821
@token_required
def get_parent_folder():
"""
@ -380,7 +400,7 @@ def get_parent_folder():
return server_error_response(e)
@manager.route('/file/all_parent_folder', methods=['GET']) # noqa: F821
@manager.route('/file/all_parent_folder', methods=['GET']) # noqa: F821
@token_required
def get_all_parent_folders(tenant_id):
"""
@ -428,7 +448,7 @@ def get_all_parent_folders(tenant_id):
return server_error_response(e)
@manager.route('/file/rm', methods=['POST']) # noqa: F821
@manager.route('/file/rm', methods=['POST']) # noqa: F821
@token_required
def rm(tenant_id):
"""
@ -502,7 +522,7 @@ def rm(tenant_id):
return server_error_response(e)
@manager.route('/file/rename', methods=['POST']) # noqa: F821
@manager.route('/file/rename', methods=['POST']) # noqa: F821
@token_required
def rename(tenant_id):
"""
@ -542,7 +562,8 @@ def rename(tenant_id):
if not e:
return get_json_result(message="File not found!", code=404)
if file.type != FileType.FOLDER.value and pathlib.Path(req["name"].lower()).suffix != pathlib.Path(file.name.lower()).suffix:
if file.type != FileType.FOLDER.value and pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
file.name.lower()).suffix:
return get_json_result(data=False, message="The extension of file can't be changed", code=400)
for existing_file in FileService.query(name=req["name"], pf_id=file.parent_id):
@ -562,9 +583,9 @@ def rename(tenant_id):
return server_error_response(e)
@manager.route('/file/get/<file_id>', methods=['GET']) # noqa: F821
@manager.route('/file/get/<file_id>', methods=['GET']) # noqa: F821
@token_required
def get(tenant_id,file_id):
def get(tenant_id, file_id):
"""
Download a file.
---
@ -610,7 +631,7 @@ def get(tenant_id,file_id):
return server_error_response(e)
@manager.route('/file/mv', methods=['POST']) # noqa: F821
@manager.route('/file/mv', methods=['POST']) # noqa: F821
@token_required
def move(tenant_id):
"""
@ -669,6 +690,7 @@ def move(tenant_id):
except Exception as e:
return server_error_response(e)
@manager.route('/file/convert', methods=['POST']) # noqa: F821
@token_required
def convert(tenant_id):
@ -735,4 +757,4 @@ def convert(tenant_id):
file2documents.append(file2document.to_json())
return get_json_result(data=file2documents)
except Exception as e:
return server_error_response(e)
return server_error_response(e)

View File

@ -36,7 +36,8 @@ from api.db.services.llm_service import LLMBundle
from api.db.services.search_service import SearchService
from api.db.services.user_service import UserTenantService
from api.utils import get_uuid
from api.utils.api_utils import check_duplicate_ids, get_data_openai, get_error_data_result, get_json_result, get_result, server_error_response, token_required, validate_request
from api.utils.api_utils import check_duplicate_ids, get_data_openai, get_error_data_result, get_json_result, \
get_result, server_error_response, token_required, validate_request
from rag.app.tag import label_question
from rag.prompts.template import load_prompt
from rag.prompts.generator import cross_languages, gen_meta_filter, keyword_extraction, chunks_format
@ -88,7 +89,8 @@ def create_agent_session(tenant_id, agent_id):
canvas.reset()
cvs.dsl = json.loads(str(canvas))
conv = {"id": session_id, "dialog_id": cvs.id, "user_id": user_id, "message": [{"role": "assistant", "content": canvas.get_prologue()}], "source": "agent", "dsl": cvs.dsl}
conv = {"id": session_id, "dialog_id": cvs.id, "user_id": user_id,
"message": [{"role": "assistant", "content": canvas.get_prologue()}], "source": "agent", "dsl": cvs.dsl}
API4ConversationService.save(**conv)
conv["agent_id"] = conv.pop("dialog_id")
return get_result(data=conv)
@ -279,7 +281,7 @@ def chat_completion_openai_like(tenant_id, chat_id):
reasoning_match = re.search(r"<think>(.*?)</think>", answer, flags=re.DOTALL)
if reasoning_match:
reasoning_part = reasoning_match.group(1)
content_part = answer[reasoning_match.end() :]
content_part = answer[reasoning_match.end():]
else:
reasoning_part = ""
content_part = answer
@ -324,7 +326,8 @@ def chat_completion_openai_like(tenant_id, chat_id):
response["choices"][0]["delta"]["content"] = None
response["choices"][0]["delta"]["reasoning_content"] = None
response["choices"][0]["finish_reason"] = "stop"
response["usage"] = {"prompt_tokens": len(prompt), "completion_tokens": token_used, "total_tokens": len(prompt) + token_used}
response["usage"] = {"prompt_tokens": len(prompt), "completion_tokens": token_used,
"total_tokens": len(prompt) + token_used}
if need_reference:
response["choices"][0]["delta"]["reference"] = chunks_format(last_ans.get("reference", []))
response["choices"][0]["delta"]["final_content"] = last_ans.get("answer", "")
@ -559,7 +562,8 @@ def list_agent_session(tenant_id, agent_id):
desc = True
# dsl defaults to True in all cases except for False and false
include_dsl = request.args.get("dsl") != "False" and request.args.get("dsl") != "false"
total, convs = API4ConversationService.get_list(agent_id, tenant_id, page_number, items_per_page, orderby, desc, id, user_id, include_dsl)
total, convs = API4ConversationService.get_list(agent_id, tenant_id, page_number, items_per_page, orderby, desc, id,
user_id, include_dsl)
if not convs:
return get_result(data=[])
for conv in convs:
@ -581,7 +585,8 @@ def list_agent_session(tenant_id, agent_id):
if message_num != 0 and messages[message_num]["role"] != "user":
chunk_list = []
# Add boundary and type checks to prevent KeyError
if chunk_num < len(conv["reference"]) and conv["reference"][chunk_num] is not None and isinstance(conv["reference"][chunk_num], dict) and "chunks" in conv["reference"][chunk_num]:
if chunk_num < len(conv["reference"]) and conv["reference"][chunk_num] is not None and isinstance(
conv["reference"][chunk_num], dict) and "chunks" in conv["reference"][chunk_num]:
chunks = conv["reference"][chunk_num]["chunks"]
for chunk in chunks:
# Ensure chunk is a dictionary before calling get method
@ -639,13 +644,16 @@ def delete(tenant_id, chat_id):
if errors:
if success_count > 0:
return get_result(data={"success_count": success_count, "errors": errors}, message=f"Partially deleted {success_count} sessions with {len(errors)} errors")
return get_result(data={"success_count": success_count, "errors": errors},
message=f"Partially deleted {success_count} sessions with {len(errors)} errors")
else:
return get_error_data_result(message="; ".join(errors))
if duplicate_messages:
if success_count > 0:
return get_result(message=f"Partially deleted {success_count} sessions with {len(duplicate_messages)} errors", data={"success_count": success_count, "errors": duplicate_messages})
return get_result(
message=f"Partially deleted {success_count} sessions with {len(duplicate_messages)} errors",
data={"success_count": success_count, "errors": duplicate_messages})
else:
return get_error_data_result(message=";".join(duplicate_messages))
@ -691,13 +699,16 @@ def delete_agent_session(tenant_id, agent_id):
if errors:
if success_count > 0:
return get_result(data={"success_count": success_count, "errors": errors}, message=f"Partially deleted {success_count} sessions with {len(errors)} errors")
return get_result(data={"success_count": success_count, "errors": errors},
message=f"Partially deleted {success_count} sessions with {len(errors)} errors")
else:
return get_error_data_result(message="; ".join(errors))
if duplicate_messages:
if success_count > 0:
return get_result(message=f"Partially deleted {success_count} sessions with {len(duplicate_messages)} errors", data={"success_count": success_count, "errors": duplicate_messages})
return get_result(
message=f"Partially deleted {success_count} sessions with {len(duplicate_messages)} errors",
data={"success_count": success_count, "errors": duplicate_messages})
else:
return get_error_data_result(message=";".join(duplicate_messages))
@ -730,7 +741,9 @@ def ask_about(tenant_id):
for ans in ask(req["question"], req["kb_ids"], uid):
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps(
{"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
resp = Response(stream(), mimetype="text/event-stream")
@ -882,7 +895,9 @@ def begin_inputs(agent_id):
return get_error_data_result(f"Can't find agent by ID: {agent_id}")
canvas = Canvas(json.dumps(cvs.dsl), objs[0].tenant_id)
return get_result(data={"title": cvs.title, "avatar": cvs.avatar, "inputs": canvas.get_component_input_form("begin"), "prologue": canvas.get_prologue(), "mode": canvas.get_mode()})
return get_result(
data={"title": cvs.title, "avatar": cvs.avatar, "inputs": canvas.get_component_input_form("begin"),
"prologue": canvas.get_prologue(), "mode": canvas.get_mode()})
@manager.route("/searchbots/ask", methods=["POST"]) # noqa: F821
@ -911,7 +926,9 @@ def ask_about_embedded():
for ans in ask(req["question"], req["kb_ids"], uid, search_config=search_config):
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps(
{"code": 500, "message": str(e), "data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
resp = Response(stream(), mimetype="text/event-stream")
@ -978,7 +995,8 @@ def retrieval_test_embedded():
tenant_ids.append(tenant.tenant_id)
break
else:
return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.", code=settings.RetCode.OPERATING_ERROR)
return get_json_result(data=False, message="Only owner of knowledgebase authorized for this operation.",
code=settings.RetCode.OPERATING_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
if not e:
@ -998,11 +1016,13 @@ def retrieval_test_embedded():
question += keyword_extraction(chat_mdl, question)
labels = label_question(question, [kb])
ranks = settings.retrievaler.retrieval(
question, embd_mdl, tenant_ids, kb_ids, page, size, similarity_threshold, vector_similarity_weight, top, doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"), rank_feature=labels
ranks = settings.retriever.retrieval(
question, embd_mdl, tenant_ids, kb_ids, page, size, similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"), rank_feature=labels
)
if use_kg:
ck = settings.kg_retrievaler.retrieval(question, tenant_ids, kb_ids, embd_mdl, LLMBundle(kb.tenant_id, LLMType.CHAT))
ck = settings.kg_retriever.retrieval(question, tenant_ids, kb_ids, embd_mdl,
LLMBundle(kb.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]:
ranks["chunks"].insert(0, ck)
@ -1013,7 +1033,8 @@ def retrieval_test_embedded():
return get_json_result(data=ranks)
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, message="No chunk found! Check the chunk status please!", code=settings.RetCode.DATA_ERROR)
return get_json_result(data=False, message="No chunk found! Check the chunk status please!",
code=settings.RetCode.DATA_ERROR)
return server_error_response(e)
@ -1082,7 +1103,8 @@ def detail_share_embedded():
if SearchService.query(tenant_id=tenant.tenant_id, id=search_id):
break
else:
return get_json_result(data=False, message="Has no permission for this operation.", code=settings.RetCode.OPERATING_ERROR)
return get_json_result(data=False, message="Has no permission for this operation.",
code=settings.RetCode.OPERATING_ERROR)
search = SearchService.get_detail(search_id)
if not search:

View File

@ -162,7 +162,7 @@ def status():
task_executors = REDIS_CONN.smembers("TASKEXE")
now = datetime.now().timestamp()
for task_executor_id in task_executors:
heartbeats = REDIS_CONN.zrangebyscore(task_executor_id, now - 60*30, now)
heartbeats = REDIS_CONN.zrangebyscore(task_executor_id, now - 60 * 30, now)
heartbeats = [json.loads(heartbeat) for heartbeat in heartbeats]
task_executor_heartbeats[task_executor_id] = heartbeats
except Exception:
@ -178,6 +178,11 @@ def healthz():
return jsonify(result), (200 if all_ok else 500)
@manager.route("/ping", methods=["GET"]) # noqa: F821
def ping():
return "pong", 200
@manager.route("/new_token", methods=["POST"]) # noqa: F821
@login_required
def new_token():
@ -269,7 +274,8 @@ def token_list():
objs = [o.to_dict() for o in objs]
for o in objs:
if not o["beta"]:
o["beta"] = generate_confirmation_token(generate_confirmation_token(tenants[0].tenant_id)).replace("ragflow-", "")[:32]
o["beta"] = generate_confirmation_token(generate_confirmation_token(tenants[0].tenant_id)).replace(
"ragflow-", "")[:32]
APITokenService.filter_update([APIToken.tenant_id == tenant_id, APIToken.token == o["token"]], o)
return get_json_result(data=objs)
except Exception as e:

View File

@ -70,7 +70,8 @@ def create(tenant_id):
return get_data_error_result(message=f"{invite_user_email} is already in the team.")
if user_tenant_role == UserTenantRole.OWNER:
return get_data_error_result(message=f"{invite_user_email} is the owner of the team.")
return get_data_error_result(message=f"{invite_user_email} is in the team, but the role: {user_tenant_role} is invalid.")
return get_data_error_result(
message=f"{invite_user_email} is in the team, but the role: {user_tenant_role} is invalid.")
UserTenantService.save(
id=get_uuid(),
@ -132,7 +133,8 @@ def tenant_list():
@login_required
def agree(tenant_id):
try:
UserTenantService.filter_update([UserTenant.tenant_id == tenant_id, UserTenant.user_id == current_user.id], {"role": UserTenantRole.NORMAL})
UserTenantService.filter_update([UserTenant.tenant_id == tenant_id, UserTenant.user_id == current_user.id],
{"role": UserTenantRole.NORMAL})
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)

View File

@ -15,11 +15,14 @@
#
import json
import logging
import string
import os
import re
import secrets
import time
from datetime import datetime
from flask import redirect, request, session
from flask import redirect, request, session, Response
from flask_login import current_user, login_required, login_user, logout_user
from werkzeug.security import check_password_hash, generate_password_hash
@ -46,6 +49,19 @@ from api.utils.api_utils import (
validate_request,
)
from api.utils.crypt import decrypt
from rag.utils.redis_conn import REDIS_CONN
from api.apps import smtp_mail_server
from api.utils.web_utils import (
send_email_html,
OTP_LENGTH,
OTP_TTL_SECONDS,
ATTEMPT_LIMIT,
ATTEMPT_LOCK_SECONDS,
RESEND_COOLDOWN_SECONDS,
otp_keys,
hash_code,
captcha_key,
)
@manager.route("/login", methods=["POST", "GET"]) # noqa: F821
@ -825,3 +841,170 @@ def set_tenant_info():
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@manager.route("/forget/captcha", methods=["GET"]) # noqa: F821
def forget_get_captcha():
"""
GET /forget/captcha?email=<email>
- Generate an image captcha and cache it in Redis under key captcha:{email} with a 60-second TTL.
- Returns the captcha as a PNG image.
"""
email = (request.args.get("email") or "")
if not email:
return get_json_result(data=False, code=settings.RetCode.ARGUMENT_ERROR, message="email is required")
users = UserService.query(email=email)
if not users:
return get_json_result(data=False, code=settings.RetCode.DATA_ERROR, message="invalid email")
# Generate captcha text
allowed = string.ascii_uppercase + string.digits
captcha_text = "".join(secrets.choice(allowed) for _ in range(OTP_LENGTH))
REDIS_CONN.set(captcha_key(email), captcha_text, 60) # Valid for 60 seconds
from captcha.image import ImageCaptcha
image = ImageCaptcha(width=300, height=120, font_sizes=[50, 60, 70])
img_bytes = image.generate(captcha_text).read()
return Response(img_bytes, mimetype="image/png")
@manager.route("/forget/otp", methods=["POST"]) # noqa: F821
def forget_send_otp():
"""
POST /forget/otp
- Verify the image captcha stored at captcha:{email} (case-insensitive).
- On success, generate an email OTP (uppercase A-Z, length = OTP_LENGTH), store its salted hash (and a sent timestamp) in Redis with a TTL, reset the attempt counter and cooldown, and send the OTP via email.
"""
req = request.get_json()
email = req.get("email") or ""
captcha = (req.get("captcha") or "").strip()
if not email or not captcha:
return get_json_result(data=False, code=settings.RetCode.ARGUMENT_ERROR, message="email and captcha required")
users = UserService.query(email=email)
if not users:
return get_json_result(data=False, code=settings.RetCode.DATA_ERROR, message="invalid email")
stored_captcha = REDIS_CONN.get(captcha_key(email))
if not stored_captcha:
return get_json_result(data=False, code=settings.RetCode.NOT_EFFECTIVE, message="invalid or expired captcha")
if (stored_captcha or "").strip().lower() != captcha.lower():
return get_json_result(data=False, code=settings.RetCode.AUTHENTICATION_ERROR, message="invalid or expired captcha")
# Delete captcha to prevent reuse
REDIS_CONN.delete(captcha_key(email))
k_code, k_attempts, k_last, k_lock = otp_keys(email)
now = int(time.time())
last_ts = REDIS_CONN.get(k_last)
if last_ts:
try:
elapsed = now - int(last_ts)
except Exception:
elapsed = RESEND_COOLDOWN_SECONDS
remaining = RESEND_COOLDOWN_SECONDS - elapsed
if remaining > 0:
return get_json_result(data=False, code=settings.RetCode.NOT_EFFECTIVE, message=f"you still have to wait {remaining} seconds")
# Generate OTP (uppercase letters only) and store hashed
otp = "".join(secrets.choice(string.ascii_uppercase) for _ in range(OTP_LENGTH))
salt = os.urandom(16)
code_hash = hash_code(otp, salt)
REDIS_CONN.set(k_code, f"{code_hash}:{salt.hex()}", OTP_TTL_SECONDS)
REDIS_CONN.set(k_attempts, 0, OTP_TTL_SECONDS)
REDIS_CONN.set(k_last, now, OTP_TTL_SECONDS)
REDIS_CONN.delete(k_lock)
ttl_min = OTP_TTL_SECONDS // 60
if not smtp_mail_server:
logging.warning("SMTP mail server not initialized; skip sending email.")
else:
try:
send_email_html(
subject="Your Password Reset Code",
to_email=email,
template_key="reset_code",
code=otp,
ttl_min=ttl_min,
)
except Exception:
return get_json_result(data=False, code=settings.RetCode.SERVER_ERROR, message="failed to send email")
return get_json_result(data=True, code=settings.RetCode.SUCCESS, message="verification passed, email sent")
@manager.route("/forget", methods=["POST"]) # noqa: F821
def forget():
"""
POST: Verify email + OTP and reset password, then log the user in.
Request JSON: { email, otp, new_password, confirm_new_password }
"""
req = request.get_json()
email = req.get("email") or ""
otp = (req.get("otp") or "").strip()
new_pwd = req.get("new_password")
new_pwd2 = req.get("confirm_new_password")
if not all([email, otp, new_pwd, new_pwd2]):
return get_json_result(data=False, code=settings.RetCode.ARGUMENT_ERROR, message="email, otp and passwords are required")
# For reset, passwords are provided as-is (no decrypt needed)
if new_pwd != new_pwd2:
return get_json_result(data=False, code=settings.RetCode.ARGUMENT_ERROR, message="passwords do not match")
users = UserService.query(email=email)
if not users:
return get_json_result(data=False, code=settings.RetCode.DATA_ERROR, message="invalid email")
user = users[0]
# Verify OTP from Redis
k_code, k_attempts, k_last, k_lock = otp_keys(email)
if REDIS_CONN.get(k_lock):
return get_json_result(data=False, code=settings.RetCode.NOT_EFFECTIVE, message="too many attempts, try later")
stored = REDIS_CONN.get(k_code)
if not stored:
return get_json_result(data=False, code=settings.RetCode.NOT_EFFECTIVE, message="expired otp")
try:
stored_hash, salt_hex = str(stored).split(":", 1)
salt = bytes.fromhex(salt_hex)
except Exception:
return get_json_result(data=False, code=settings.RetCode.EXCEPTION_ERROR, message="otp storage corrupted")
# Case-insensitive verification: OTP generated uppercase
calc = hash_code(otp.upper(), salt)
if calc != stored_hash:
# bump attempts
try:
attempts = int(REDIS_CONN.get(k_attempts) or 0) + 1
except Exception:
attempts = 1
REDIS_CONN.set(k_attempts, attempts, OTP_TTL_SECONDS)
if attempts >= ATTEMPT_LIMIT:
REDIS_CONN.set(k_lock, int(time.time()), ATTEMPT_LOCK_SECONDS)
return get_json_result(data=False, code=settings.RetCode.AUTHENTICATION_ERROR, message="invalid otp")
# Success: consume OTP and reset password
REDIS_CONN.delete(k_code)
REDIS_CONN.delete(k_attempts)
REDIS_CONN.delete(k_last)
REDIS_CONN.delete(k_lock)
try:
UserService.update_user_password(user.id, new_pwd)
except Exception as e:
logging.exception(e)
return get_json_result(data=False, code=settings.RetCode.EXCEPTION_ERROR, message="failed to reset password")
# Auto login (reuse login flow)
user.access_token = get_uuid()
login_user(user)
user.update_time = current_timestamp()
user.update_date = datetime_format(datetime.now())
user.save()
msg = "Password reset successful. Logged in."
return construct_response(data=user.to_json(), auth=user.get_id(), message=msg)
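# A minimal client-side sketch of the three-step reset flow above. The base URL and blueprint
# prefix are assumptions (not shown in this diff); the captcha is a PNG a human has to read.
import requests
BASE = "http://localhost:9380/v1/user"  # assumed mount point of this blueprint
email = "someone@example.com"
png = requests.get(f"{BASE}/forget/captcha", params={"email": email}).content  # render this image for the user
captcha = input("captcha as shown in the image: ")
requests.post(f"{BASE}/forget/otp", json={"email": email, "captcha": captcha})  # mails the OTP on success
otp = input("otp received by email: ")
requests.post(f"{BASE}/forget", json={
    "email": email, "otp": otp,
    "new_password": "NewPassw0rd!", "confirm_new_password": "NewPassw0rd!",
})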

View File

@ -0,0 +1,59 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from api.db import TenantPermission
from api.db.db_models import File, Knowledgebase
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService
def check_kb_team_permission(kb: dict | Knowledgebase, other: str) -> bool:
kb = kb.to_dict() if isinstance(kb, Knowledgebase) else kb
kb_tenant_id = kb["tenant_id"]
if kb_tenant_id == other:
return True
if kb["permission"] != TenantPermission.TEAM:
return False
joined_tenants = TenantService.get_joined_tenants_by_user_id(other)
return any(tenant["tenant_id"] == kb_tenant_id for tenant in joined_tenants)
def check_file_team_permission(file: dict | File, other: str) -> bool:
file = file.to_dict() if isinstance(file, File) else file
file_tenant_id = file["tenant_id"]
if file_tenant_id == other:
return True
file_id = file["id"]
kb_ids = [kb_info["kb_id"] for kb_info in FileService.get_kb_id_by_file_id(file_id)]
for kb_id in kb_ids:
ok, kb = KnowledgebaseService.get_by_id(kb_id)
if not ok:
continue
if check_kb_team_permission(kb, other):
return True
return False
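# Hypothetical call site (sketch): gate a knowledge-base read behind the team check above.
# KnowledgebaseService.get_by_id is the accessor already imported in this module; the surrounding
# route and the kb_id / current_user variables are assumptions.
ok, kb = KnowledgebaseService.get_by_id(kb_id)
if not ok or not check_kb_team_permission(kb, current_user.id):
    raise PermissionError("Only the owner or a member of the owning team may access this knowledge base.")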

38
api/common/exceptions.py Normal file
View File

@ -0,0 +1,38 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
class AdminException(Exception):
def __init__(self, message, code=400):
super().__init__(message)
self.type = "admin"
self.code = code
self.message = message
class UserNotFoundError(AdminException):
def __init__(self, username):
super().__init__(f"User '{username}' not found", 404)
class UserAlreadyExistsError(AdminException):
def __init__(self, username):
super().__init__(f"User '{username}' already exists", 409)
class CannotDeleteAdminError(AdminException):
def __init__(self):
super().__init__("Cannot delete admin account", 403)

View File

@ -313,9 +313,75 @@ class RetryingPooledMySQLDatabase(PooledMySQLDatabase):
raise
class RetryingPooledPostgresqlDatabase(PooledPostgresqlDatabase):
def __init__(self, *args, **kwargs):
self.max_retries = kwargs.pop("max_retries", 5)
self.retry_delay = kwargs.pop("retry_delay", 1)
super().__init__(*args, **kwargs)
def execute_sql(self, sql, params=None, commit=True):
for attempt in range(self.max_retries + 1):
try:
return super().execute_sql(sql, params, commit)
except (OperationalError, InterfaceError) as e:
# PostgreSQL specific error codes
# 57P01: admin_shutdown
# 57P02: crash_shutdown
# 57P03: cannot_connect_now
# 08006: connection_failure
# 08003: connection_does_not_exist
# 08000: connection_exception
error_messages = ['connection', 'server closed', 'connection refused',
'no connection to the server', 'terminating connection']
should_retry = any(msg in str(e).lower() for msg in error_messages)
if should_retry and attempt < self.max_retries:
logging.warning(
f"PostgreSQL connection issue (attempt {attempt+1}/{self.max_retries}): {e}"
)
self._handle_connection_loss()
time.sleep(self.retry_delay * (2 ** attempt))
else:
logging.error(f"PostgreSQL execution failure: {e}")
raise
return None
def _handle_connection_loss(self):
try:
self.close()
except Exception:
pass
try:
self.connect()
except Exception as e:
logging.error(f"Failed to reconnect to PostgreSQL: {e}")
time.sleep(0.1)
self.connect()
def begin(self):
for attempt in range(self.max_retries + 1):
try:
return super().begin()
except (OperationalError, InterfaceError) as e:
error_messages = ['connection', 'server closed', 'connection refused',
'no connection to the server', 'terminating connection']
should_retry = any(msg in str(e).lower() for msg in error_messages)
if should_retry and attempt < self.max_retries:
logging.warning(
f"PostgreSQL connection lost during transaction (attempt {attempt+1}/{self.max_retries})"
)
self._handle_connection_loss()
time.sleep(self.retry_delay * (2 ** attempt))
else:
raise
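# Construction sketch with hypothetical connection values: the two custom kwargs are popped in
# __init__ above, so only standard options ever reach psycopg2.connect().
db = RetryingPooledPostgresqlDatabase(
    "rag_flow", host="localhost", port=5432, user="postgres", password="infini_rag_flow",
    max_connections=100, stale_timeout=30,
    max_retries=5, retry_delay=1,  # consumed by this class, never passed to the driver
)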
class PooledDatabase(Enum):
MYSQL = RetryingPooledMySQLDatabase
POSTGRES = PooledPostgresqlDatabase
POSTGRES = RetryingPooledPostgresqlDatabase
class DatabaseMigrator(Enum):
@ -641,7 +707,7 @@ class TenantLLM(DataBaseModel):
llm_factory = CharField(max_length=128, null=False, help_text="LLM factory name", index=True)
model_type = CharField(max_length=128, null=True, help_text="LLM, Text Embedding, Image2Text, ASR", index=True)
llm_name = CharField(max_length=128, null=True, help_text="LLM name", default="", index=True)
api_key = CharField(max_length=2048, null=True, help_text="API KEY", index=True)
api_key = TextField(null=True, help_text="API KEY")
api_base = CharField(max_length=255, null=True, help_text="API Base")
max_tokens = IntegerField(default=8192, index=True)
used_tokens = IntegerField(default=0, index=True)
@ -1142,4 +1208,8 @@ def migrate_db():
migrate(migrator.add_column("knowledgebase", "mindmap_task_finish_at", CharField(null=True)))
except Exception:
pass
try:
migrate(migrator.alter_column_type("tenant_llm", "api_key", TextField(null=True, help_text="API KEY")))
except Exception:
pass
logging.disable(logging.NOTSET)

View File

@ -143,15 +143,12 @@ class UserCanvasService(CommonService):
]
if keywords:
agents = cls.model.select(*fields).join(User, on=(cls.model.user_id == User.id)).where(
cls.model.user_id.in_(joined_tenant_ids),
fn.LOWER(cls.model.title).contains(keywords.lower())
#(((cls.model.user_id.in_(joined_tenant_ids)) & (cls.model.permission == TenantPermission.TEAM.value)) | (cls.model.user_id == user_id)),
#(fn.LOWER(cls.model.title).contains(keywords.lower()))
(((cls.model.user_id.in_(joined_tenant_ids)) & (cls.model.permission == TenantPermission.TEAM.value)) | (cls.model.user_id == user_id)),
(fn.LOWER(cls.model.title).contains(keywords.lower()))
)
else:
agents = cls.model.select(*fields).join(User, on=(cls.model.user_id == User.id)).where(
cls.model.user_id.in_(joined_tenant_ids)
#(((cls.model.user_id.in_(joined_tenant_ids)) & (cls.model.permission == TenantPermission.TEAM.value)) | (cls.model.user_id == user_id))
(((cls.model.user_id.in_(joined_tenant_ids)) & (cls.model.permission == TenantPermission.TEAM.value)) | (cls.model.user_id == user_id))
)
if canvas_category:
agents = agents.where(cls.model.canvas_category == canvas_category)

View File

@ -370,7 +370,7 @@ def chat(dialog, messages, stream=True, **kwargs):
chat_mdl.bind_tools(toolcall_session, tools)
bind_models_ts = timer()
retriever = settings.retrievaler
retriever = settings.retriever
questions = [m["content"] for m in messages if m["role"] == "user"][-3:]
attachments = kwargs["doc_ids"].split(",") if "doc_ids" in kwargs else []
if "doc_ids" in messages[-1]:
@ -466,13 +466,17 @@ def chat(dialog, messages, stream=True, **kwargs):
rerank_mdl=rerank_mdl,
rank_feature=label_question(" ".join(questions), kbs),
)
if prompt_config.get("toc_enhance"):
cks = retriever.retrieval_by_toc(" ".join(questions), kbinfos["chunks"], tenant_ids, chat_mdl, dialog.top_n)
if cks:
kbinfos["chunks"] = cks
if prompt_config.get("tavily_api_key"):
tav = Tavily(prompt_config["tavily_api_key"])
tav_res = tav.retrieve_chunks(" ".join(questions))
kbinfos["chunks"].extend(tav_res["chunks"])
kbinfos["doc_aggs"].extend(tav_res["doc_aggs"])
if prompt_config.get("use_kg"):
ck = settings.kg_retrievaler.retrieval(" ".join(questions), tenant_ids, dialog.kb_ids, embd_mdl,
ck = settings.kg_retriever.retrieval(" ".join(questions), tenant_ids, dialog.kb_ids, embd_mdl,
LLMBundle(dialog.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]:
kbinfos["chunks"].insert(0, ck)
@ -658,7 +662,7 @@ Please write the SQL, only SQL, without any other explanations or text.
logging.debug(f"{question} get SQL(refined): {sql}")
tried_times += 1
return settings.retrievaler.sql_retrieval(sql, format="json"), sql
return settings.retriever.sql_retrieval(sql, format="json"), sql
tbl, sql = get_table()
if tbl is None:
@ -752,7 +756,7 @@ def ask(question, kb_ids, tenant_id, chat_llm_name=None, search_config={}):
embedding_list = list(set([kb.embd_id for kb in kbs]))
is_knowledge_graph = all([kb.parser_id == ParserType.KG for kb in kbs])
retriever = settings.retrievaler if not is_knowledge_graph else settings.kg_retrievaler
retriever = settings.retriever if not is_knowledge_graph else settings.kg_retriever
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embedding_list[0])
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT, chat_llm_name)
@ -848,7 +852,7 @@ def gen_mindmap(question, kb_ids, tenant_id, search_config={}):
if not doc_ids:
doc_ids = None
ranks = settings.retrievaler.retrieval(
ranks = settings.retriever.retrieval(
question=question,
embd_mdl=embd_mdl,
tenant_ids=tenant_ids,

View File

@ -476,6 +476,16 @@ class FileService(CommonService):
return err, files
@classmethod
@DB.connection_context()
def list_all_files_by_parent_id(cls, parent_id):
try:
files = cls.model.select().where((cls.model.parent_id == parent_id) & (cls.model.id != parent_id))
return list(files)
except Exception:
logging.exception("list_by_parent_id failed")
raise RuntimeError("Database error (list_by_parent_id)!")
@staticmethod
def parse_docs(file_objs, user_id):
exe = ThreadPoolExecutor(max_workers=12)

View File

@ -379,6 +379,7 @@ class KnowledgebaseService(CommonService):
# name: Optional name filter
# Returns:
# List of knowledge bases
# Total count of knowledge bases
kbs = cls.model.select()
if id:
kbs = kbs.where(cls.model.id == id)
@ -390,14 +391,16 @@ class KnowledgebaseService(CommonService):
cls.model.tenant_id == user_id))
& (cls.model.status == StatusEnum.VALID.value)
)
if desc:
kbs = kbs.order_by(cls.model.getter_by(orderby).desc())
else:
kbs = kbs.order_by(cls.model.getter_by(orderby).asc())
total = kbs.count()
kbs = kbs.paginate(page_number, items_per_page)
return list(kbs.dicts())
return list(kbs.dicts()), total
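# Hypothetical caller (sketch): the method now returns (rows, total), which pairs with the
# extended get_result(..., total=...) helper elsewhere in this changeset; the exact argument
# list of get_list is assumed from context.
kbs, total = KnowledgebaseService.get_list(
    joined_tenant_ids, user_id, page_number, items_per_page, orderby, desc, id, name)
return get_result(data=kbs, total=total)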
@classmethod
@DB.connection_context()

View File

@ -33,7 +33,8 @@ class MCPServerService(CommonService):
@classmethod
@DB.connection_context()
def get_servers(cls, tenant_id: str, id_list: list[str] | None, page_number, items_per_page, orderby, desc, keywords):
def get_servers(cls, tenant_id: str, id_list: list[str] | None, page_number, items_per_page, orderby, desc,
keywords):
"""Retrieve all MCP servers associated with a tenant.
This method fetches all MCP servers for a given tenant, ordered by creation time.

View File

@ -94,7 +94,8 @@ class SearchService(CommonService):
query = (
cls.model.select(*fields)
.join(User, on=(cls.model.tenant_id == User.id))
.where(((cls.model.tenant_id.in_(joined_tenant_ids)) | (cls.model.tenant_id == user_id)) & (cls.model.status == StatusEnum.VALID.value))
.where(((cls.model.tenant_id.in_(joined_tenant_ids)) | (cls.model.tenant_id == user_id)) & (
cls.model.status == StatusEnum.VALID.value))
)
if keywords:

View File

@ -165,7 +165,7 @@ class TaskService(CommonService):
]
tasks = (
cls.model.select(*fields).order_by(cls.model.from_page.asc(), cls.model.create_time.desc())
.where(cls.model.doc_id == doc_id)
.where(cls.model.doc_id == doc_id)
)
tasks = list(tasks.dicts())
if not tasks:
@ -205,18 +205,18 @@ class TaskService(CommonService):
cls.model.select(
*[Document.id, Document.kb_id, Document.location, File.parent_id]
)
.join(Document, on=(cls.model.doc_id == Document.id))
.join(
.join(Document, on=(cls.model.doc_id == Document.id))
.join(
File2Document,
on=(File2Document.document_id == Document.id),
join_type=JOIN.LEFT_OUTER,
)
.join(
.join(
File,
on=(File2Document.file_id == File.id),
join_type=JOIN.LEFT_OUTER,
)
.where(
.where(
Document.status == StatusEnum.VALID.value,
Document.run == TaskStatus.RUNNING.value,
~(Document.type == FileType.VIRTUAL.value),
@ -294,8 +294,8 @@ class TaskService(CommonService):
cls.model.update(progress=prog).where(
(cls.model.id == id) &
(
(cls.model.progress != -1) &
((prog == -1) | (prog > cls.model.progress))
(cls.model.progress != -1) &
((prog == -1) | (prog > cls.model.progress))
)
).execute()
else:
@ -343,6 +343,7 @@ def queue_tasks(doc: dict, bucket: str, name: str, priority: int):
- Task digests are calculated for optimization and reuse
- Previous task chunks may be reused if available
"""
def new_task():
return {
"id": get_uuid(),
@ -515,7 +516,7 @@ def queue_dataflow(tenant_id:str, flow_id:str, task_id:str, doc_id:str=CANVAS_DE
task["file"] = file
if not REDIS_CONN.queue_product(
get_svr_queue_name(priority), message=task
get_svr_queue_name(priority), message=task
):
return False, "Can't access Redis. Please check the Redis' status."

View File

@ -57,8 +57,10 @@ class TenantLLMService(CommonService):
@classmethod
@DB.connection_context()
def get_my_llms(cls, tenant_id):
fields = [cls.model.llm_factory, LLMFactories.logo, LLMFactories.tags, cls.model.model_type, cls.model.llm_name, cls.model.used_tokens]
objs = cls.model.select(*fields).join(LLMFactories, on=(cls.model.llm_factory == LLMFactories.name)).where(cls.model.tenant_id == tenant_id, ~cls.model.api_key.is_null()).dicts()
fields = [cls.model.llm_factory, LLMFactories.logo, LLMFactories.tags, cls.model.model_type, cls.model.llm_name,
cls.model.used_tokens]
objs = cls.model.select(*fields).join(LLMFactories, on=(cls.model.llm_factory == LLMFactories.name)).where(
cls.model.tenant_id == tenant_id, ~cls.model.api_key.is_null()).dicts()
return list(objs)
@ -122,7 +124,8 @@ class TenantLLMService(CommonService):
model_config = {"llm_factory": llm[0].fid, "api_key": "", "llm_name": mdlnm, "api_base": ""}
if not model_config:
if mdlnm == "flag-embedding":
model_config = {"llm_factory": "Tongyi-Qianwen", "api_key": "", "llm_name": llm_name, "api_base": ""}
model_config = {"llm_factory": "Tongyi-Qianwen", "api_key": "", "llm_name": llm_name,
"api_base": ""}
else:
if not mdlnm:
raise LookupError(f"Type of {llm_type} model is not set.")
@ -137,27 +140,33 @@ class TenantLLMService(CommonService):
if llm_type == LLMType.EMBEDDING.value:
if model_config["llm_factory"] not in EmbeddingModel:
return
return EmbeddingModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"], base_url=model_config["api_base"])
return EmbeddingModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"],
base_url=model_config["api_base"])
if llm_type == LLMType.RERANK:
if model_config["llm_factory"] not in RerankModel:
return
return RerankModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"], base_url=model_config["api_base"])
return RerankModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"],
base_url=model_config["api_base"])
if llm_type == LLMType.IMAGE2TEXT.value:
if model_config["llm_factory"] not in CvModel:
return
return CvModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"], lang, base_url=model_config["api_base"], **kwargs)
return CvModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"], lang,
base_url=model_config["api_base"], **kwargs)
if llm_type == LLMType.CHAT.value:
if model_config["llm_factory"] not in ChatModel:
return
return ChatModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"], base_url=model_config["api_base"], **kwargs)
return ChatModel[model_config["llm_factory"]](model_config["api_key"], model_config["llm_name"],
base_url=model_config["api_base"], **kwargs)
if llm_type == LLMType.SPEECH2TEXT:
if model_config["llm_factory"] not in Seq2txtModel:
return
return Seq2txtModel[model_config["llm_factory"]](key=model_config["api_key"], model_name=model_config["llm_name"], lang=lang, base_url=model_config["api_base"])
return Seq2txtModel[model_config["llm_factory"]](key=model_config["api_key"],
model_name=model_config["llm_name"], lang=lang,
base_url=model_config["api_base"])
if llm_type == LLMType.TTS:
if model_config["llm_factory"] not in TTSModel:
return
@ -194,11 +203,14 @@ class TenantLLMService(CommonService):
try:
num = (
cls.model.update(used_tokens=cls.model.used_tokens + used_tokens)
.where(cls.model.tenant_id == tenant_id, cls.model.llm_name == llm_name, cls.model.llm_factory == llm_factory if llm_factory else True)
.where(cls.model.tenant_id == tenant_id, cls.model.llm_name == llm_name,
cls.model.llm_factory == llm_factory if llm_factory else True)
.execute()
)
except Exception:
logging.exception("TenantLLMService.increase_usage got exception,Failed to update used_tokens for tenant_id=%s, llm_name=%s", tenant_id, llm_name)
logging.exception(
"TenantLLMService.increase_usage got exception; failed to update used_tokens for tenant_id=%s, llm_name=%s",
tenant_id, llm_name)
return 0
return num
@ -206,7 +218,9 @@ class TenantLLMService(CommonService):
@classmethod
@DB.connection_context()
def get_openai_models(cls):
objs = cls.model.select().where((cls.model.llm_factory == "OpenAI"), ~(cls.model.llm_name == "text-embedding-3-small"), ~(cls.model.llm_name == "text-embedding-3-large")).dicts()
objs = cls.model.select().where((cls.model.llm_factory == "OpenAI"),
~(cls.model.llm_name == "text-embedding-3-small"),
~(cls.model.llm_name == "text-embedding-3-large")).dicts()
return list(objs)
@classmethod
@ -250,8 +264,9 @@ class LLM4Tenant:
langfuse_keys = TenantLangfuseService.filter_by_tenant(tenant_id=tenant_id)
self.langfuse = None
if langfuse_keys:
langfuse = Langfuse(public_key=langfuse_keys.public_key, secret_key=langfuse_keys.secret_key, host=langfuse_keys.host)
langfuse = Langfuse(public_key=langfuse_keys.public_key, secret_key=langfuse_keys.secret_key,
host=langfuse_keys.host)
if langfuse.auth_check():
self.langfuse = langfuse
trace_id = self.langfuse.create_trace_id()
self.trace_context = {"trace_id": trace_id}
self.trace_context = {"trace_id": trace_id}

View File

@ -2,22 +2,22 @@ from api.db.db_models import UserCanvasVersion, DB
from api.db.services.common_service import CommonService
from peewee import DoesNotExist
class UserCanvasVersionService(CommonService):
model = UserCanvasVersion
@classmethod
@DB.connection_context()
def list_by_canvas_id(cls, user_canvas_id):
try:
user_canvas_version = cls.model.select(
*[cls.model.id,
cls.model.create_time,
cls.model.title,
cls.model.create_date,
cls.model.update_date,
cls.model.user_canvas_id,
cls.model.update_time]
*[cls.model.id,
cls.model.create_time,
cls.model.title,
cls.model.create_date,
cls.model.update_date,
cls.model.user_canvas_id,
cls.model.update_time]
).where(cls.model.user_canvas_id == user_canvas_id)
return user_canvas_version
except DoesNotExist:
@ -46,18 +46,16 @@ class UserCanvasVersionService(CommonService):
@DB.connection_context()
def delete_all_versions(cls, user_canvas_id):
try:
user_canvas_version = cls.model.select().where(cls.model.user_canvas_id == user_canvas_id).order_by(cls.model.create_time.desc())
user_canvas_version = cls.model.select().where(cls.model.user_canvas_id == user_canvas_id).order_by(
cls.model.create_time.desc())
if user_canvas_version.count() > 20:
delete_ids = []
for i in range(20, user_canvas_version.count()):
delete_ids.append(user_canvas_version[i].id)
cls.delete_by_ids(delete_ids)
return True
except DoesNotExist:
return None
except Exception:
return None

View File

@ -315,4 +315,4 @@ class UserTenantService(CommonService):
).first()
return user_tenant
except peewee.DoesNotExist:
return None
return None

View File

@ -65,8 +65,8 @@ OAUTH_CONFIG = None
DOC_ENGINE = None
docStoreConn = None
retrievaler = None
kg_retrievaler = None
retriever = None
kg_retriever = None
# user registration switch
REGISTER_ENABLED = 1
@ -174,7 +174,7 @@ def init_settings():
OAUTH_CONFIG = get_base_config("oauth", {})
global DOC_ENGINE, docStoreConn, retrievaler, kg_retrievaler
global DOC_ENGINE, docStoreConn, retriever, kg_retriever
DOC_ENGINE = os.environ.get("DOC_ENGINE", "elasticsearch")
# DOC_ENGINE = os.environ.get('DOC_ENGINE', "opensearch")
lower_case_doc_engine = DOC_ENGINE.lower()
@ -187,10 +187,10 @@ def init_settings():
else:
raise Exception(f"Not supported doc engine: {DOC_ENGINE}")
retrievaler = search.Dealer(docStoreConn)
retriever = search.Dealer(docStoreConn)
from graphrag import search as kg_search
kg_retrievaler = kg_search.KGSearch(docStoreConn)
kg_retriever = kg_search.KGSearch(docStoreConn)
if int(os.environ.get("SANDBOX_ENABLED", "0")):
global SANDBOX_HOST

View File

@ -51,15 +51,13 @@ from api import settings
from api.constants import REQUEST_MAX_WAIT_SEC, REQUEST_WAIT_SEC
from api.db import ActiveEnum
from api.db.db_models import APIToken
from api.db.services import UserService
from api.db.services.llm_service import LLMService
from api.db.services.tenant_llm_service import TenantLLMService
from api.utils.json import CustomJSONEncoder, json_dumps
from api.utils import get_uuid
from rag.utils.mcp_tool_call_conn import MCPToolCallSession, close_multiple_mcp_toolcall_sessions
requests.models.complexjson.dumps = functools.partial(json.dumps, cls=CustomJSONEncoder)
def serialize_for_json(obj):
"""
Recursively serialize objects to make them JSON serializable.
@ -68,8 +66,8 @@ def serialize_for_json(obj):
if hasattr(obj, '__dict__'):
# For objects with __dict__, try to serialize their attributes
try:
return {key: serialize_for_json(value) for key, value in obj.__dict__.items()
if not key.startswith('_')}
return {key: serialize_for_json(value) for key, value in obj.__dict__.items()
if not key.startswith('_')}
except (AttributeError, TypeError):
return str(obj)
elif hasattr(obj, '__name__'):
@ -85,6 +83,7 @@ def serialize_for_json(obj):
# Fallback: convert to string representation
return str(obj)
def request(**kwargs):
sess = requests.Session()
stream = kwargs.pop("stream", sess.stream)
@ -105,7 +104,8 @@ def request(**kwargs):
settings.HTTP_APP_KEY.encode("ascii"),
prepped.path_url.encode("ascii"),
prepped.body if kwargs.get("json") else b"",
urlencode(sorted(kwargs["data"].items()), quote_via=quote, safe="-._~").encode("ascii") if kwargs.get("data") and isinstance(kwargs["data"], dict) else b"",
urlencode(sorted(kwargs["data"].items()), quote_via=quote, safe="-._~").encode(
"ascii") if kwargs.get("data") and isinstance(kwargs["data"], dict) else b"",
]
),
"sha1",
@ -127,7 +127,7 @@ def request(**kwargs):
def get_exponential_backoff_interval(retries, full_jitter=False):
"""Calculate the exponential backoff wait time."""
# Will be zero if factor equals 0
countdown = min(REQUEST_MAX_WAIT_SEC, REQUEST_WAIT_SEC * (2**retries))
countdown = min(REQUEST_MAX_WAIT_SEC, REQUEST_WAIT_SEC * (2 ** retries))
# Full jitter according to
# https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/
if full_jitter:
@ -151,18 +151,21 @@ def get_data_error_result(code=settings.RetCode.DATA_ERROR, message="Sorry! Data
def server_error_response(e):
logging.exception(e)
try:
if e.code == 401:
return get_json_result(code=401, message=repr(e))
except BaseException:
pass
msg = repr(e).lower()
if getattr(e, "code", None) == 401 or ("unauthorized" in msg) or ("401" in msg):
return get_json_result(code=settings.RetCode.UNAUTHORIZED, message=repr(e))
except Exception as ex:
logging.warning(f"error checking authorization: {ex}")
if len(e.args) > 1:
try:
serialized_data = serialize_for_json(e.args[1])
return get_json_result(code= settings.RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=serialized_data)
return get_json_result(code=settings.RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=serialized_data)
except Exception:
return get_json_result(code=settings.RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=None)
if repr(e).find("index_not_found_exception") >= 0:
return get_json_result(code=settings.RetCode.EXCEPTION_ERROR, message="No chunk found, please upload file and parse it.")
return get_json_result(code=settings.RetCode.EXCEPTION_ERROR,
message="No chunk found, please upload file and parse it.")
return get_json_result(code=settings.RetCode.EXCEPTION_ERROR, message=repr(e))
@ -207,7 +210,8 @@ def validate_request(*args, **kwargs):
if no_arguments:
error_string += "required argument are missing: {}; ".format(",".join(no_arguments))
if error_arguments:
error_string += "required argument values: {}".format(",".join(["{}={}".format(a[0], a[1]) for a in error_arguments]))
error_string += "required argument values: {}".format(
",".join(["{}={}".format(a[0], a[1]) for a in error_arguments]))
return get_json_result(code=settings.RetCode.ARGUMENT_ERROR, message=error_string)
return func(*_args, **_kwargs)
@ -222,7 +226,8 @@ def not_allowed_parameters(*params):
input_arguments = flask_request.json or flask_request.form.to_dict()
for param in params:
if param in input_arguments:
return get_json_result(code=settings.RetCode.ARGUMENT_ERROR, message=f"Parameter {param} isn't allowed")
return get_json_result(code=settings.RetCode.ARGUMENT_ERROR,
message=f"Parameter {param} isn't allowed")
return f(*args, **kwargs)
return wrapper
@ -233,12 +238,14 @@ def not_allowed_parameters(*params):
def active_required(f):
@wraps(f)
def wrapper(*args, **kwargs):
from api.db.services import UserService
user_id = current_user.id
usr = UserService.filter_by_id(user_id)
# check is_active
if not usr or not usr.is_active == ActiveEnum.ACTIVE.value:
return get_json_result(code=settings.RetCode.FORBIDDEN, message="User isn't active, please activate first.")
return f(*args, **kwargs)
return wrapper
@ -259,7 +266,7 @@ def send_file_in_mem(data, filename):
return send_file(f, as_attachment=True, attachment_filename=filename)
def get_json_result(code=settings.RetCode.SUCCESS, message="success", data=None):
def get_json_result(code: settings.RetCode = settings.RetCode.SUCCESS, message="success", data=None):
response = {"code": code, "message": message, "data": data}
return jsonify(response)
@ -314,7 +321,7 @@ def construct_result(code=settings.RetCode.DATA_ERROR, message="data is missing"
return jsonify(response)
def construct_json_result(code=settings.RetCode.SUCCESS, message="success", data=None):
def construct_json_result(code: settings.RetCode = settings.RetCode.SUCCESS, message="success", data=None):
if data is None:
return jsonify({"code": code, "message": message})
else:
@ -347,27 +354,39 @@ def token_required(func):
token = authorization_list[1]
objs = APIToken.query(token=token)
if not objs:
return get_json_result(data=False, message="Authentication error: API key is invalid!", code=settings.RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message="Authentication error: API key is invalid!",
code=settings.RetCode.AUTHENTICATION_ERROR)
kwargs["tenant_id"] = objs[0].tenant_id
return func(*args, **kwargs)
return decorated_function
def get_result(code=settings.RetCode.SUCCESS, message="", data=None):
if code == 0:
def get_result(code=settings.RetCode.SUCCESS, message="", data=None, total=None):
"""
Standard API response format:
{
"code": 0,
"data": [...], # List or object, backward compatible
"total": 47, # Optional field for pagination
"message": "..." # Error or status message
}
"""
response = {"code": code}
if code == settings.RetCode.SUCCESS:
if data is not None:
response = {"code": code, "data": data}
else:
response = {"code": code}
response["data"] = data
if total is not None:
response["total_datasets"] = total
else:
response = {"code": code, "message": message}
response["message"] = message or "Error"
return jsonify(response)
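# Illustrative shapes produced by the helper above (RetCode.DATA_ERROR shown symbolically):
#   get_result(data=[{"id": "kb1"}], total=47)
#       -> {"code": 0, "data": [{"id": "kb1"}], "total_datasets": 47}
#   get_result(code=settings.RetCode.DATA_ERROR, message="Sorry! Data missing!")
#       -> {"code": <RetCode.DATA_ERROR>, "message": "Sorry! Data missing!"}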
def get_error_data_result(
message="Sorry! Data missing!",
code=settings.RetCode.DATA_ERROR,
message="Sorry! Data missing!",
code=settings.RetCode.DATA_ERROR,
):
result_dict = {"code": code, "message": message}
response = {}
@ -402,7 +421,8 @@ def get_parser_config(chunk_method, parser_config):
# Define default configurations for each chunking method
key_mapping = {
"naive": {"chunk_token_num": 512, "delimiter": r"\n", "html4excel": False, "layout_recognize": "DeepDOC", "raptor": {"use_raptor": False}, "graphrag": {"use_graphrag": False}},
"naive": {"chunk_token_num": 512, "delimiter": r"\n", "html4excel": False, "layout_recognize": "DeepDOC",
"raptor": {"use_raptor": False}, "graphrag": {"use_graphrag": False}},
"qa": {"raptor": {"use_raptor": False}, "graphrag": {"use_graphrag": False}},
"tag": None,
"resume": None,
@ -441,16 +461,16 @@ def get_parser_config(chunk_method, parser_config):
def get_data_openai(
id=None,
created=None,
model=None,
prompt_tokens=0,
completion_tokens=0,
content=None,
finish_reason=None,
object="chat.completion",
param=None,
stream=False
id=None,
created=None,
model=None,
prompt_tokens=0,
completion_tokens=0,
content=None,
finish_reason=None,
object="chat.completion",
param=None,
stream=False
):
total_tokens = prompt_tokens + completion_tokens
@ -524,6 +544,8 @@ def check_duplicate_ids(ids, id_type="item"):
def verify_embedding_availability(embd_id: str, tenant_id: str) -> tuple[bool, Response | None]:
from api.db.services.llm_service import LLMService
from api.db.services.tenant_llm_service import TenantLLMService
"""
Verifies availability of an embedding model for a specific tenant.
@ -562,7 +584,9 @@ def verify_embedding_availability(embd_id: str, tenant_id: str) -> tuple[bool, R
in_llm_service = bool(LLMService.query(llm_name=llm_name, fid=llm_factory, model_type="embedding"))
tenant_llms = TenantLLMService.get_my_llms(tenant_id=tenant_id)
is_tenant_model = any(llm["llm_name"] == llm_name and llm["llm_factory"] == llm_factory and llm["model_type"] == "embedding" for llm in tenant_llms)
is_tenant_model = any(
llm["llm_name"] == llm_name and llm["llm_factory"] == llm_factory and llm["model_type"] == "embedding" for
llm in tenant_llms)
is_builtin_model = embd_id in settings.BUILTIN_EMBEDDING_MODELS
if not (is_builtin_model or is_tenant_model or in_llm_service):
@ -793,7 +817,9 @@ async def is_strong_enough(chat_model, embedding_model):
_ = await trio.to_thread.run_sync(lambda: embedding_model.encode(["Are you strong enough!?"]))
if chat_model:
with trio.fail_after(30):
res = await trio.to_thread.run_sync(lambda: chat_model.chat("Nothing special.", [{"role": "user", "content": "Are you strong enough!?"}], {}))
res = await trio.to_thread.run_sync(lambda: chat_model.chat("Nothing special.", [{"role": "user",
"content": "Are you strong enough!?"}],
{}))
if res.find("**ERROR**") >= 0:
raise Exception(res)

View File

@ -21,3 +21,26 @@ def string_to_bytes(string):
def bytes_to_string(byte):
return byte.decode(encoding="utf-8")
def convert_bytes(size_in_bytes: int) -> str:
"""
Format size in bytes.
"""
if size_in_bytes == 0:
return "0 B"
units = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
i = 0
size = float(size_in_bytes)
while size >= 1024 and i < len(units) - 1:
size /= 1024
i += 1
if i == 0 or size >= 100:
return f"{size:.0f} {units[i]}"
elif size >= 10:
return f"{size:.1f} {units[i]}"
else:
return f"{size:.2f} {units[i]}"

View File

@ -0,0 +1,25 @@
"""
Reusable HTML email templates and registry.
"""
# Invitation email template
INVITE_EMAIL_TMPL = """
<p>Hi {{email}},</p>
<p>{{inviter}} has invited you to join their team (ID: {{tenant_id}}).</p>
<p>Click the link below to complete your registration:<br>
<a href="{{invite_url}}">{{invite_url}}</a></p>
<p>If you did not request this, please ignore this email.</p>
"""
# Password reset code template
RESET_CODE_EMAIL_TMPL = """
<p>Hello,</p>
<p>Your password reset code is: <b>{{ code }}</b></p>
<p>This code will expire in {{ ttl_min }} minutes.</p>
"""
# Template registry
EMAIL_TEMPLATES = {
"invite": INVITE_EMAIL_TMPL,
"reset_code": RESET_CODE_EMAIL_TMPL,
}

View File

@ -13,14 +13,17 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import requests
from timeit import default_timer as timer
from api import settings
from api.db.db_models import DB
from rag import settings as rag_settings
from rag.utils.redis_conn import REDIS_CONN
from rag.utils.storage_factory import STORAGE_IMPL
from rag.utils.es_conn import ESConnection
from rag.utils.infinity_conn import InfinityConnection
def _ok_nok(ok: bool) -> str:
@ -65,6 +68,96 @@ def check_storage() -> tuple[bool, dict]:
return False, {"elapsed": f"{(timer() - st) * 1000.0:.1f}", "error": str(e)}
def get_es_cluster_stats() -> dict:
doc_engine = os.getenv('DOC_ENGINE', 'elasticsearch')
if doc_engine != 'elasticsearch':
raise Exception("Elasticsearch is not in use.")
try:
return {
"status": "alive",
"message": ESConnection().get_cluster_stats()
}
except Exception as e:
return {
"status": "timeout",
"message": f"error: {str(e)}",
}
def get_infinity_status():
doc_engine = os.getenv('DOC_ENGINE', 'elasticsearch')
if doc_engine != 'infinity':
raise Exception("Infinity is not in use.")
try:
return {
"status": "alive",
"message": InfinityConnection().health()
}
except Exception as e:
return {
"status": "timeout",
"message": f"error: {str(e)}",
}
def get_mysql_status():
try:
cursor = DB.execute_sql("SHOW PROCESSLIST;")
res_rows = cursor.fetchall()
headers = ['id', 'user', 'host', 'db', 'command', 'time', 'state', 'info']
cursor.close()
return {
"status": "alive",
"message": [dict(zip(headers, r)) for r in res_rows]
}
except Exception as e:
return {
"status": "timeout",
"message": f"error: {str(e)}",
}
def check_minio_alive():
start_time = timer()
try:
response = requests.get(f'http://{rag_settings.MINIO["host"]}/minio/health/live')
if response.status_code == 200:
return {"status": "alive", "message": f"Confirm elapsed: {(timer() - start_time) * 1000.0:.1f} ms."}
else:
return {"status": "timeout", "message": f"Confirm elapsed: {(timer() - start_time) * 1000.0:.1f} ms."}
except Exception as e:
return {
"status": "timeout",
"message": f"error: {str(e)}",
}
def get_redis_info():
try:
return {
"status": "alive",
"message": REDIS_CONN.info()
}
except Exception as e:
return {
"status": "timeout",
"message": f"error: {str(e)}",
}
def check_ragflow_server_alive():
start_time = timer()
try:
response = requests.get(f'http://{settings.HOST_IP}:{settings.HOST_PORT}/v1/system/ping')
if response.status_code == 200:
return {"status": "alive", "message": f"Confirm elapsed: {(timer() - start_time) * 1000.0:.1f} ms."}
else:
return {"status": "timeout", "message": f"Confirm elapsed: {(timer() - start_time) * 1000.0:.1f} ms."}
except Exception as e:
return {
"status": "timeout",
"message": f"error: {str(e)}",
}
def run_health_checks() -> tuple[dict, bool]:
@ -99,9 +192,7 @@ def run_health_checks() -> tuple[dict, bool]:
except Exception:
result["storage"] = "nok"
all_ok = (result.get("db") == "ok") and (result.get("redis") == "ok") and (result.get("doc_engine") == "ok") and (result.get("storage") == "ok")
all_ok = (result.get("db") == "ok") and (result.get("redis") == "ok") and (result.get("doc_engine") == "ok") and (
result.get("storage") == "ok")
result["status"] = "ok" if all_ok else "nok"
return result, all_ok
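# Illustrative aggregate: the healthz endpoint in system_app serializes this dict and returns
# HTTP 200 only when every component is "ok" (per-check detail fields may also be present).
#   {"db": "ok", "redis": "ok", "doc_engine": "ok", "storage": "ok", "status": "ok"}   -> 200
#   {"db": "ok", "redis": "nok", "doc_engine": "ok", "storage": "ok", "status": "nok"} -> 500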

View File

@ -24,6 +24,7 @@ from urllib.parse import urlparse
from api.apps import smtp_mail_server
from flask_mail import Message
from flask import render_template_string
from api.utils.email_templates import EMAIL_TEMPLATES
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.chrome.options import Options
@ -34,6 +35,12 @@ from selenium.webdriver.support.ui import WebDriverWait
from webdriver_manager.chrome import ChromeDriverManager
OTP_LENGTH = 8
OTP_TTL_SECONDS = 5 * 60
ATTEMPT_LIMIT = 5
ATTEMPT_LOCK_SECONDS = 30 * 60
RESEND_COOLDOWN_SECONDS = 60
CONTENT_TYPE_MAP = {
# Office
@ -178,24 +185,49 @@ def get_float(req: dict, key: str, default: float | int = 10.0) -> float:
return default
INVITE_EMAIL_TMPL = """
<p>Hi {{email}},</p>
<p>{{inviter}} has invited you to join their team (ID: {{tenant_id}}).</p>
<p>Click the link below to complete your registration:<br>
<a href="{{invite_url}}">{{invite_url}}</a></p>
<p>If you did not request this, please ignore this email.</p>
"""
def send_email_html(subject: str, to_email: str, template_key: str, **context):
"""Generic HTML email sender using shared templates.
template_key must exist in EMAIL_TEMPLATES.
"""
from api.apps import app
tmpl = EMAIL_TEMPLATES.get(template_key)
if not tmpl:
raise ValueError(f"Unknown email template: {template_key}")
with app.app_context():
msg = Message(subject=subject, recipients=[to_email])
msg.html = render_template_string(tmpl, **context)
smtp_mail_server.send(msg)
def send_invite_email(to_email, invite_url, tenant_id, inviter):
from api.apps import app
with app.app_context():
msg = Message(subject="RAGFlow Invitation",
recipients=[to_email])
msg.html = render_template_string(
INVITE_EMAIL_TMPL,
email=to_email,
invite_url=invite_url,
tenant_id=tenant_id,
inviter=inviter,
)
smtp_mail_server.send(msg)
# Reuse the generic HTML sender with 'invite' template
send_email_html(
subject="RAGFlow Invitation",
to_email=to_email,
template_key="invite",
email=to_email,
invite_url=invite_url,
tenant_id=tenant_id,
inviter=inviter,
)
def otp_keys(email: str):
email = (email or "").strip().lower()
return (
f"otp:{email}",
f"otp_attempts:{email}",
f"otp_last_sent:{email}",
f"otp_lock:{email}",
)
def hash_code(code: str, salt: bytes) -> str:
import hashlib
import hmac
return hmac.new(salt, (code or "").encode("utf-8"), hashlib.sha256).hexdigest()
def captcha_key(email: str) -> str:
return f"captcha:{email}"

View File

@ -31,7 +31,6 @@
"entities_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace-#"},
"pagerank_fea": {"type": "integer", "default": 0},
"tag_feas": {"type": "varchar", "default": "", "analyzer": "rankfeatures"},
"from_entity_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace-#"},
"to_entity_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace-#"},
"entity_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace-#"},
@ -39,6 +38,6 @@
"source_id": {"type": "varchar", "default": "", "analyzer": "whitespace-#"},
"n_hop_with_weight": {"type": "varchar", "default": ""},
"removed_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace-#"},
"doc_type_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace-#"}
"doc_type_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace-#"},
"toc_kwd": {"type": "varchar", "default": "", "analyzer": "whitespace-#"}
}

View File

@ -803,6 +803,12 @@
"tags": "TEXT EMBEDDING",
"max_tokens": 512,
"model_type": "embedding"
},
{
"llm_name": "glm-asr",
"tags": "SPEECH2TEXT",
"max_tokens": 4096,
"model_type": "speech2text"
}
]
},
@ -2816,6 +2822,13 @@
"tags": "LLM,TEXT EMBEDDING,TEXT RE-RANK,IMAGE2TEXT",
"status": "1",
"llm": [
{
"llm_name":"THUDM/GLM-4.1V-9B-Thinking",
"tags":"LLM,CHAT,IMAGE2TEXT, 64k",
"max_tokens":64000,
"model_type":"chat",
"is_tools": false
},
{
"llm_name": "Qwen/Qwen3-Embedding-8B",
"tags": "TEXT EMBEDDING,TEXT RE-RANK,32k",
@ -3145,13 +3158,6 @@
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Qwen/Qwen2-1.5B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "Pro/Qwen/Qwen2.5-Coder-7B-Instruct",
"tags": "LLM,CHAT,32k",
@ -3159,13 +3165,6 @@
"model_type": "chat",
"is_tools": false
},
{
"llm_name": "Pro/Qwen/Qwen2-VL-7B-Instruct",
"tags": "LLM,CHAT,IMAGE2TEXT,32k",
"max_tokens": 32000,
"model_type": "image2text",
"is_tools": false
},
{
"llm_name": "Pro/Qwen/Qwen2.5-7B-Instruct",
"tags": "LLM,CHAT,32k",
@ -3533,6 +3532,13 @@
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-sonnet-4-5-20250929",
"tags": "LLM,CHAT,IMAGE2TEXT,200k",
"max_tokens": 204800,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-sonnet-4-20250514",
"tags": "LLM,CHAT,IMAGE2TEXT,200k",
@ -4862,8 +4868,282 @@
"max_tokens": 8000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "LongCat-Flash-Thinking",
"tags": "LLM,CHAT,8000",
"max_tokens": 8000,
"model_type": "chat",
"is_tools": true
}
]
},
{
"name": "DeerAPI",
"logo": "",
"tags": "LLM,TEXT EMBEDDING,IMAGE2TEXT",
"status": "1",
"llm": [
{
"llm_name": "gpt-5-chat-latest",
"tags": "LLM,CHAT,400k",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "chatgpt-4o-latest",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-5-mini",
"tags": "LLM,CHAT,400k",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-5-nano",
"tags": "LLM,CHAT,400k",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-5",
"tags": "LLM,CHAT,400k",
"max_tokens": 400000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-4.1-mini",
"tags": "LLM,CHAT,1M",
"max_tokens": 1047576,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-4.1-nano",
"tags": "LLM,CHAT,1M",
"max_tokens": 1047576,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-4.1",
"tags": "LLM,CHAT,1M",
"max_tokens": 1047576,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gpt-4o-mini",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "o4-mini-2025-04-16",
"tags": "LLM,CHAT,200k",
"max_tokens": 200000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "o3-pro-2025-06-10",
"tags": "LLM,CHAT,200k",
"max_tokens": 200000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-opus-4-1-20250805",
"tags": "LLM,CHAT,200k,IMAGE2TEXT",
"max_tokens": 200000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "claude-opus-4-1-20250805-thinking",
"tags": "LLM,CHAT,200k,IMAGE2TEXT",
"max_tokens": 200000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "claude-sonnet-4-20250514",
"tags": "LLM,CHAT,200k,IMAGE2TEXT",
"max_tokens": 200000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "claude-sonnet-4-20250514-thinking",
"tags": "LLM,CHAT,200k,IMAGE2TEXT",
"max_tokens": 200000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "claude-3-7-sonnet-latest",
"tags": "LLM,CHAT,200k",
"max_tokens": 200000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "claude-3-5-haiku-latest",
"tags": "LLM,CHAT,200k",
"max_tokens": 200000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "gemini-2.5-pro",
"tags": "LLM,CHAT,1M,IMAGE2TEXT",
"max_tokens": 1000000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "gemini-2.5-flash",
"tags": "LLM,CHAT,1M,IMAGE2TEXT",
"max_tokens": 1000000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "gemini-2.5-flash-lite",
"tags": "LLM,CHAT,1M,IMAGE2TEXT",
"max_tokens": 1000000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "gemini-2.0-flash",
"tags": "LLM,CHAT,1M,IMAGE2TEXT",
"max_tokens": 1000000,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "grok-4-0709",
"tags": "LLM,CHAT,131k",
"max_tokens": 131072,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "grok-3",
"tags": "LLM,CHAT,131k",
"max_tokens": 131072,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "grok-3-mini",
"tags": "LLM,CHAT,131k",
"max_tokens": 131072,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "grok-2-image-1212",
"tags": "LLM,CHAT,32k,IMAGE2TEXT",
"max_tokens": 32768,
"model_type": "image2text",
"is_tools": true
},
{
"llm_name": "deepseek-v3.1",
"tags": "LLM,CHAT,64k",
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-v3",
"tags": "LLM,CHAT,64k",
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-r1-0528",
"tags": "LLM,CHAT,164k",
"max_tokens": 164000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-chat",
"tags": "LLM,CHAT,32k",
"max_tokens": 32000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-reasoner",
"tags": "LLM,CHAT,64k",
"max_tokens": 64000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "qwen3-30b-a3b",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "qwen3-coder-plus-2025-07-22",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "text-embedding-ada-002",
"tags": "TEXT EMBEDDING,8K",
"max_tokens": 8191,
"model_type": "embedding",
"is_tools": false
},
{
"llm_name": "text-embedding-3-small",
"tags": "TEXT EMBEDDING,8K",
"max_tokens": 8191,
"model_type": "embedding",
"is_tools": false
},
{
"llm_name": "text-embedding-3-large",
"tags": "TEXT EMBEDDING,8K",
"max_tokens": 8191,
"model_type": "embedding",
"is_tools": false
},
{
"llm_name": "whisper-1",
"tags": "SPEECH2TEXT",
"max_tokens": 26214400,
"model_type": "speech2text",
"is_tools": false
},
{
"llm_name": "tts-1",
"tags": "TTS",
"max_tokens": 2048,
"model_type": "tts",
"is_tools": false
}
]
}
]
}
}

View File

@ -200,6 +200,61 @@
}
}
},
{
"knn_vector": {
"match": "*_2048_vec",
"mapping": {
"type": "knn_vector",
"index": true,
"space_type": "cosinesimil",
"dimension": 2048
}
}
},
{
"knn_vector": {
"match": "*_4096_vec",
"mapping": {
"type": "knn_vector",
"index": true,
"space_type": "cosinesimil",
"dimension": 4096
}
}
},
{
"knn_vector": {
"match": "*_6144_vec",
"mapping": {
"type": "knn_vector",
"index": true,
"space_type": "cosinesimil",
"dimension": 6144
}
}
},
{
"knn_vector": {
"match": "*_8192_vec",
"mapping": {
"type": "knn_vector",
"index": true,
"space_type": "cosinesimil",
"dimension": 8192
}
}
},
{
"knn_vector": {
"match": "*_10240_vec",
"mapping": {
"type": "knn_vector",
"index": true,
"space_type": "cosinesimil",
"dimension": 10240
}
}
},
{
"binary": {
"match": "*_bin",

View File

@ -17,7 +17,6 @@
import re
import mistune
from markdown import markdown
@ -117,8 +116,6 @@ class MarkdownElementExtractor:
def __init__(self, markdown_content):
self.markdown_content = markdown_content
self.lines = markdown_content.split("\n")
self.ast_parser = mistune.create_markdown(renderer="ast")
self.ast_nodes = self.ast_parser(markdown_content)
def extract_elements(self):
"""Extract individual elements (headers, code blocks, lists, etc.)"""

View File

@ -15,11 +15,13 @@
#
import logging
import math
import os
import random
import re
import sys
import threading
from collections import Counter, defaultdict
from copy import deepcopy
from io import BytesIO
from timeit import default_timer as timer
@ -349,9 +351,78 @@ class RAGFlowPdfParser:
self.boxes[i]["top"] += self.page_cum_height[self.boxes[i]["page_number"] - 1]
self.boxes[i]["bottom"] += self.page_cum_height[self.boxes[i]["page_number"] - 1]
def _text_merge(self):
def _assign_column(self, boxes, zoomin=3):
if not boxes:
return boxes
if all("col_id" in b for b in boxes):
return boxes
by_page = defaultdict(list)
for b in boxes:
by_page[b["page_number"]].append(b)
page_info = {} # pg -> dict(page_w, left_edge, cand_cols)
counter = Counter()
for pg, bxs in by_page.items():
if not bxs:
page_info[pg] = {"page_w": 1.0, "left_edge": 0.0, "cand": 1}
counter[1] += 1
continue
if hasattr(self, "page_images") and self.page_images and len(self.page_images) >= pg:
page_w = self.page_images[pg - 1].size[0] / max(1, zoomin)
left_edge = 0.0
else:
xs0 = [box["x0"] for box in bxs]
xs1 = [box["x1"] for box in bxs]
left_edge = float(min(xs0))
page_w = max(1.0, float(max(xs1) - left_edge))
widths = [max(1.0, (box["x1"] - box["x0"])) for box in bxs]
median_w = float(np.median(widths)) if widths else 1.0
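# candidate column count: how many median-width text boxes fit side by side across the page width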
raw_cols = int(page_w / max(1.0, median_w))
# cand = raw_cols if (raw_cols >= 2 and median_w < page_w / raw_cols * 0.8) else 1
cand = raw_cols
page_info[pg] = {"page_w": page_w, "left_edge": left_edge, "cand": cand}
counter[cand] += 1
logging.info(f"[Page {pg}] median_w={median_w:.2f}, page_w={page_w:.2f}, raw_cols={raw_cols}, cand={cand}")
global_cols = counter.most_common(1)[0][0]
logging.info(f"Global column_num decided by majority: {global_cols}")
for pg, bxs in by_page.items():
if not bxs:
continue
page_w = page_info[pg]["page_w"]
left_edge = page_info[pg]["left_edge"]
if global_cols == 1:
for box in bxs:
box["col_id"] = 0
continue
for box in bxs:
w = box["x1"] - box["x0"]
if w >= 0.8 * page_w:
box["col_id"] = 0
continue
cx = 0.5 * (box["x0"] + box["x1"])
norm_cx = (cx - left_edge) / page_w
norm_cx = max(0.0, min(norm_cx, 0.999999))
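# e.g. page_w=600, left_edge=0, global_cols=2: a box centred at cx=400 gives norm_cx~0.67, so col_id = int(min(1, 1.33)) = 1 (the right-hand column)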
box["col_id"] = int(min(global_cols - 1, norm_cx * global_cols))
return boxes
def _text_merge(self, zoomin=3):
# merge adjusted boxes
bxs = self.boxes
bxs = self._assign_column(self.boxes, zoomin)
def end_with(b, txt):
txt = txt.strip()
@ -367,9 +438,15 @@ class RAGFlowPdfParser:
while i < len(bxs) - 1:
b = bxs[i]
b_ = bxs[i + 1]
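# boxes on different pages or in different columns are never merged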
if b["page_number"] != b_["page_number"] or b.get("col_id") != b_.get("col_id"):
i += 1
continue
if b.get("layoutno", "0") != b_.get("layoutno", "1") or b.get("layout_type", "") in ["table", "figure", "equation"]:
i += 1
continue
if abs(self._y_dis(b, b_)) < self.mean_height[bxs[i]["page_number"] - 1] / 3:
# merge
bxs[i]["x1"] = b_["x1"]
@ -379,83 +456,108 @@ class RAGFlowPdfParser:
bxs.pop(i + 1)
continue
i += 1
continue
dis_thr = 1
dis = b["x1"] - b_["x0"]
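# dis > 0 means b and b_ overlap horizontally; a gap between them makes dis negative.
# dis_thr = 1 therefore requires a slight overlap, while dis_thr = -8 (set below) tolerates a gap of up to 8 px.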
if b.get("layout_type", "") != "text" or b_.get("layout_type", "") != "text":
if end_with(b, "") or start_with(b_, ""):
dis_thr = -8
else:
i += 1
continue
if abs(self._y_dis(b, b_)) < self.mean_height[bxs[i]["page_number"] - 1] / 5 and dis >= dis_thr and b["x1"] < b_["x1"]:
# merge
bxs[i]["x1"] = b_["x1"]
bxs[i]["top"] = (b["top"] + b_["top"]) / 2
bxs[i]["bottom"] = (b["bottom"] + b_["bottom"]) / 2
bxs[i]["text"] += b_["text"]
bxs.pop(i + 1)
continue
i += 1
self.boxes = bxs
def _naive_vertical_merge(self, zoomin=3):
import math
bxs = Recognizer.sort_Y_firstly(self.boxes, np.median(self.mean_height) / 3)
bxs = self._assign_column(self.boxes, zoomin)
column_width = np.median([b["x1"] - b["x0"] for b in self.boxes])
if not column_width or math.isnan(column_width):
column_width = self.mean_width[0]
self.column_num = int(self.page_images[0].size[0] / zoomin / column_width)
if column_width < self.page_images[0].size[0] / zoomin / self.column_num:
logging.info("Multi-column................... {} {}".format(column_width, self.page_images[0].size[0] / zoomin / self.column_num))
self.boxes = self.sort_X_by_page(self.boxes, column_width / self.column_num)
grouped = defaultdict(list)
for b in bxs:
grouped[(b["page_number"], b.get("col_id", 0))].append(b)
i = 0
while i + 1 < len(bxs):
b = bxs[i]
b_ = bxs[i + 1]
if b["page_number"] < b_["page_number"] and re.match(r"[0-9 •一—-]+$", b["text"]):
bxs.pop(i)
merged_boxes = []
for (pg, col), bxs in grouped.items():
bxs = sorted(bxs, key=lambda x: (x["top"], x["x0"]))
if not bxs:
continue
if not b["text"].strip():
bxs.pop(i)
continue
concatting_feats = [
b["text"].strip()[-1] in ",;:'\",、‘“;:-",
len(b["text"].strip()) > 1 and b["text"].strip()[-2] in ",;:'\",‘“、;:",
b_["text"].strip() and b_["text"].strip()[0] in "。;?!?”)),,、:",
]
# features for not concating
feats = [
b.get("layoutno", 0) != b_.get("layoutno", 0),
b["text"].strip()[-1] in "。?!?",
self.is_english and b["text"].strip()[-1] in ".!?",
b["page_number"] == b_["page_number"] and b_["top"] - b["bottom"] > self.mean_height[b["page_number"] - 1] * 1.5,
b["page_number"] < b_["page_number"] and abs(b["x0"] - b_["x0"]) > self.mean_width[b["page_number"] - 1] * 4,
]
# split features
detach_feats = [b["x1"] < b_["x0"], b["x0"] > b_["x1"]]
if (any(feats) and not any(concatting_feats)) or any(detach_feats):
logging.debug(
"{} {} {} {}".format(
b["text"],
b_["text"],
any(feats),
any(concatting_feats),
mh = self.mean_height[pg - 1] if self.mean_height else np.median([b["bottom"] - b["top"] for b in bxs]) or 10
i = 0
while i + 1 < len(bxs):
b = bxs[i]
b_ = bxs[i + 1]
if b["page_number"] < b_["page_number"] and re.match(r"[0-9 •一—-]+$", b["text"]):
bxs.pop(i)
continue
if not b["text"].strip():
bxs.pop(i)
continue
if not b["text"].strip() or b.get("layoutno") != b_.get("layoutno"):
i += 1
continue
if b_["top"] - b["bottom"] > mh * 1.5:
i += 1
continue
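# require at least 30% horizontal overlap (relative to the narrower box) before merging vertically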
overlap = max(0, min(b["x1"], b_["x1"]) - max(b["x0"], b_["x0"]))
if overlap / max(1, min(b["x1"] - b["x0"], b_["x1"] - b_["x0"])) < 0.3:
i += 1
continue
concatting_feats = [
b["text"].strip()[-1] in ",;:'\",、‘“;:-",
len(b["text"].strip()) > 1 and b["text"].strip()[-2] in ",;:'\",‘“、;:",
b_["text"].strip() and b_["text"].strip()[0] in "。;?!?”)),,、:",
]
# features for not concating
feats = [
b.get("layoutno", 0) != b_.get("layoutno", 0),
b["text"].strip()[-1] in "。?!?",
self.is_english and b["text"].strip()[-1] in ".!?",
b["page_number"] == b_["page_number"] and b_["top"] - b["bottom"] > self.mean_height[b["page_number"] - 1] * 1.5,
b["page_number"] < b_["page_number"] and abs(b["x0"] - b_["x0"]) > self.mean_width[b["page_number"] - 1] * 4,
]
# split features
detach_feats = [b["x1"] < b_["x0"], b["x0"] > b_["x1"]]
if (any(feats) and not any(concatting_feats)) or any(detach_feats):
logging.debug(
"{} {} {} {}".format(
b["text"],
b_["text"],
any(feats),
any(concatting_feats),
)
)
)
i += 1
continue
# merge up and down
b["bottom"] = b_["bottom"]
b["text"] += b_["text"]
b["x0"] = min(b["x0"], b_["x0"])
b["x1"] = max(b["x1"], b_["x1"])
bxs.pop(i + 1)
self.boxes = bxs
i += 1
continue
b["text"] = (b["text"].rstrip() + " " + b_["text"].lstrip()).strip()
b["bottom"] = b_["bottom"]
b["x0"] = min(b["x0"], b_["x0"])
b["x1"] = max(b["x1"], b_["x1"])
bxs.pop(i + 1)
merged_boxes.extend(bxs)
self.boxes = sorted(merged_boxes, key=lambda x: (x["page_number"], x.get("col_id", 0), x["top"]))
def _final_reading_order_merge(self, zoomin=3):
if not self.boxes:
return
self.boxes = self._assign_column(self.boxes, zoomin=zoomin)
pages = defaultdict(lambda: defaultdict(list))
for b in self.boxes:
pg = b["page_number"]
col = b.get("col_id", 0)
pages[pg][col].append(b)
for pg in pages:
for col in pages[pg]:
pages[pg][col].sort(key=lambda x: (x["top"], x["x0"]))
new_boxes = []
for pg in sorted(pages.keys()):
for col in sorted(pages[pg].keys()):
new_boxes.extend(pages[pg][col])
self.boxes = new_boxes
def _concat_downward(self, concat_between_pages=True):
self.boxes = Recognizer.sort_Y_firstly(self.boxes, 0)
@ -997,7 +1099,7 @@ class RAGFlowPdfParser:
self.__ocr(i + 1, img, chars, zoomin, id)
if callback and i % 6 == 5:
callback(prog=(i + 1) * 0.6 / len(self.page_images), msg="")
callback((i + 1) * 0.6 / len(self.page_images))
async def __img_ocr_launcher():
def __ocr_preprocess():
@ -1048,7 +1150,7 @@ class RAGFlowPdfParser:
def parse_into_bboxes(self, fnm, callback=None, zoomin=3):
start = timer()
self.__images__(fnm, zoomin)
self.__images__(fnm, zoomin, callback=callback)
if callback:
callback(0.40, "OCR finished ({:.2f}s)".format(timer() - start))
@ -1074,7 +1176,6 @@ class RAGFlowPdfParser:
def insert_table_figures(tbls_or_figs, layout_type):
def min_rectangle_distance(rect1, rect2):
import math
pn1, left1, right1, top1, bottom1 = rect1
pn2, left2, right2, top2, bottom2 = rect2
if right1 >= left2 and right2 >= left1 and bottom1 >= top2 and bottom2 >= top1:
@ -1091,27 +1192,39 @@ class RAGFlowPdfParser:
dy = top1 - bottom2
else:
dy = 0
return math.sqrt(dx*dx + dy*dy)# + (pn2-pn1)*10000
return math.sqrt(dx * dx + dy * dy) # + (pn2-pn1)*10000
for (img, txt), poss in tbls_or_figs:
bboxes = [(i, (b["page_number"], b["x0"], b["x1"], b["top"], b["bottom"])) for i, b in enumerate(self.boxes)]
dists = [(min_rectangle_distance((pn, left, right, top+self.page_cum_height[pn], bott+self.page_cum_height[pn]), rect),i) for i, rect in bboxes for pn, left, right, top, bott in poss]
dists = [
(min_rectangle_distance((pn, left, right, top + self.page_cum_height[pn], bott + self.page_cum_height[pn]), rect), i) for i, rect in bboxes for pn, left, right, top, bott in poss
]
min_i = np.argmin(dists, axis=0)[0]
min_i, rect = bboxes[dists[min_i][-1]]
if isinstance(txt, list):
txt = "\n".join(txt)
pn, left, right, top, bott = poss[0]
if self.boxes[min_i]["bottom"] < top+self.page_cum_height[pn]:
if self.boxes[min_i]["bottom"] < top + self.page_cum_height[pn]:
min_i += 1
self.boxes.insert(min_i, {
"page_number": pn+1, "x0": left, "x1": right, "top": top+self.page_cum_height[pn], "bottom": bott+self.page_cum_height[pn], "layout_type": layout_type, "text": txt, "image": img,
"positions": [[pn+1, int(left), int(right), int(top), int(bott)]]
})
self.boxes.insert(
min_i,
{
"page_number": pn + 1,
"x0": left,
"x1": right,
"top": top + self.page_cum_height[pn],
"bottom": bott + self.page_cum_height[pn],
"layout_type": layout_type,
"text": txt,
"image": img,
"positions": [[pn + 1, int(left), int(right), int(top), int(bott)]],
},
)
for b in self.boxes:
b["position_tag"] = self._line_tag(b, zoomin)
b["image"] = self.crop(b["position_tag"], zoomin)
b["positions"] = [[pos[0][-1]+1, *pos[1:]] for pos in RAGFlowPdfParser.extract_positions(b["position_tag"])]
b["positions"] = [[pos[0][-1] + 1, *pos[1:]] for pos in RAGFlowPdfParser.extract_positions(b["position_tag"])]
insert_table_figures(tbls, "table")
insert_table_figures(figs, "figure")
@ -1129,7 +1242,7 @@ class RAGFlowPdfParser:
for tag in re.findall(r"@@[0-9-]+\t[0-9.\t]+##", txt):
pn, left, right, top, bottom = tag.strip("#").strip("@").split("\t")
left, right, top, bottom = float(left), float(right), float(top), float(bottom)
poss.append(([int(p) - 1 for p in pn.split("-")], int(left), int(right), int(top), int(bottom)))
poss.append(([int(p) - 1 for p in pn.split("-")], left, right, top, bottom))
return poss
def crop(self, text, ZM=3, need_position=False):
@ -1274,12 +1387,16 @@ class VisionParser(RAGFlowPdfParser):
prompt=vision_llm_describe_prompt(page=pdf_page_num + 1),
callback=callback,
)
if kwargs.get("callback"):
kwargs["callback"](idx * 1.0 / len(self.page_images), f"Processed: {idx + 1}/{len(self.page_images)}")
if text:
width, height = self.page_images[idx].size
all_docs.append((text, f"{pdf_page_num + 1} 0 {width / zoomin} 0 {height / zoomin}"))
all_docs.append((
text,
f"@@{pdf_page_num + 1}\t{0.0:.1f}\t{width / zoomin:.1f}\t{0.0:.1f}\t{height / zoomin:.1f}##"
))
return all_docs, []

View File

@ -84,7 +84,8 @@ def load_model(model_dir, nm, device_id: int | None = None):
def cuda_is_available():
try:
import torch
if torch.cuda.is_available() and torch.cuda.device_count() > device_id:
target_id = 0 if device_id is None else device_id
if torch.cuda.is_available() and torch.cuda.device_count() > target_id:
return True
except Exception:
return False
@ -100,10 +101,13 @@ def load_model(model_dir, nm, device_id: int | None = None):
# Shrink GPU memory after execution
run_options = ort.RunOptions()
if cuda_is_available():
gpu_mem_limit_mb = int(os.environ.get("OCR_GPU_MEM_LIMIT_MB", "2048"))
arena_strategy = os.environ.get("OCR_ARENA_EXTEND_STRATEGY", "kNextPowerOfTwo")
provider_device_id = 0 if device_id is None else device_id
cuda_provider_options = {
"device_id": device_id, # Use specific GPU
"gpu_mem_limit": 512 * 1024 * 1024, # Limit gpu memory
"arena_extend_strategy": "kNextPowerOfTwo", # gpu memory allocation strategy
"device_id": provider_device_id, # Use specific GPU
"gpu_mem_limit": max(gpu_mem_limit_mb, 0) * 1024 * 1024,
"arena_extend_strategy": arena_strategy, # gpu memory allocation strategy
}
sess = ort.InferenceSession(
model_file_path,
@ -111,8 +115,8 @@ def load_model(model_dir, nm, device_id: int | None = None):
providers=['CUDAExecutionProvider'],
provider_options=[cuda_provider_options]
)
run_options.add_run_config_entry("memory.enable_memory_arena_shrinkage", "gpu:" + str(device_id))
logging.info(f"load_model {model_file_path} uses GPU")
run_options.add_run_config_entry("memory.enable_memory_arena_shrinkage", "gpu:" + str(provider_device_id))
logging.info(f"load_model {model_file_path} uses GPU (device {provider_device_id}, gpu_mem_limit={cuda_provider_options['gpu_mem_limit']}, arena_strategy={arena_strategy})")
else:
sess = ort.InferenceSession(
model_file_path,

View File

@ -37,9 +37,12 @@ OPENSEARCH_PASSWORD=infini_rag_flow_OS_01
# The port used to expose the Kibana service to the host machine,
# allowing EXTERNAL access to the service running inside the Docker container.
# To enable kibana, you need to:
# 1. Ensure that COMPOSE_PROFILES includes kibana, for example: COMPOSE_PROFILES=${DOC_ENGINE},kibana
# 2. Comment out or delete the following configurations of the es service in docker-compose-base.yml: xpack.security.enabled, xpack.security.http.ssl.enabled, xpack.security.transport.ssl.enabled (for details: https://www.elastic.co/docs/deploy-manage/security/self-auto-setup#stack-existing-settings-detected)
# 3. Adjust the es.hosts in conf/service_config.yaml or docker/service_conf.yaml.template to 'https://localhost:1200'
# 4. After the startup is successful, in the es container, execute the command to generate the kibana token: `bin/elasticsearch-create-enrollment-token -s kibana`, then you can use kibana normally
KIBANA_PORT=6601
KIBANA_USER=rag_flow
KIBANA_PASSWORD=infini_rag_flow
# The maximum amount of the memory, in bytes, that a specific Docker container can use while running.
# Update it according to the available memory in the host machine.
@ -91,15 +94,16 @@ REDIS_PASSWORD=infini_rag_flow
# The port used to expose RAGFlow's HTTP API service to the host machine,
# allowing EXTERNAL access to the service running inside the Docker container.
SVR_HTTP_PORT=9380
ADMIN_SVR_HTTP_PORT=9381
# The RAGFlow Docker image to download.
# Defaults to the v0.20.5-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5-slim
# Defaults to the v0.21.0-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0-slim
#
# To download the RAGFlow Docker image with embedding models, uncomment the following line instead:
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.20.5
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.0
#
# The Docker image of the v0.20.5 edition includes built-in embedding models:
# The Docker image of the v0.21.0 edition includes built-in embedding models:
# - BAAI/bge-large-zh-v1.5
# - maidalun1020/bce-embedding-base_v1
#

View File

@ -79,8 +79,8 @@ The [.env](./.env) file contains important environment variables for Docker.
- `RAGFLOW-IMAGE`
The Docker image edition. Available editions:
- `infiniflow/ragflow:v0.20.5-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.20.5`: The RAGFlow Docker image with embedding models including:
- `infiniflow/ragflow:v0.21.0-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.21.0`: The RAGFlow Docker image with embedding models including:
- Built-in embedding models:
- `BAAI/bge-large-zh-v1.5`
- `maidalun1020/bce-embedding-base_v1`

View File

@ -77,7 +77,7 @@ services:
container_name: ragflow-infinity
profiles:
- infinity
image: infiniflow/infinity:v0.6.0-dev5
image: infiniflow/infinity:v0.6.0
volumes:
- infinity_data:/var/infinity
- ./infinity_conf.toml:/infinity_conf.toml
@ -207,6 +207,30 @@ services:
start_period: 10s
kibana:
container_name: ragflow-kibana
profiles:
- kibana
image: kibana:${STACK_VERSION}
ports:
- ${KIBANA_PORT-5601}:5601
env_file: .env
environment:
- TZ=${TIMEZONE}
volumes:
- kibana_data:/usr/share/kibana/data
depends_on:
es01:
condition: service_started
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5601/api/status"]
interval: 10s
timeout: 10s
retries: 120
networks:
- ragflow
restart: on-failure
volumes:
esdata01:
@ -221,6 +245,8 @@ volumes:
driver: local
redis_data:
driver: local
kibana_data:
driver: local
networks:
ragflow:

View File

@ -22,9 +22,14 @@ services:
# - --no-transport-sse-enabled # Disable legacy SSE endpoints (/sse and /messages/)
# - --no-transport-streamable-http-enabled # Disable Streamable HTTP transport (/mcp endpoint)
# - --no-json-response # Disable JSON response mode in Streamable HTTP transport (instead of SSE over HTTP)
# Example configuration to start the Admin server:
# command:
# - --enable-adminserver
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380
- ${ADMIN_SVR_HTTP_PORT}:9381
- 80:80
- 443:443
- 5678:5678

View File

@ -11,6 +11,7 @@ function usage() {
echo " --disable-webserver Disables the web server (nginx + ragflow_server)."
echo " --disable-taskexecutor Disables task executor workers."
echo " --enable-mcpserver Enables the MCP server."
echo " --enable-adminserver Enables the Admin server."
echo " --consumer-no-beg=<num> Start range for consumers (if using range-based)."
echo " --consumer-no-end=<num> End range for consumers (if using range-based)."
echo " --workers=<num> Number of task executors to run (if range is not used)."
@ -21,12 +22,14 @@ function usage() {
echo " $0 --disable-webserver --consumer-no-beg=0 --consumer-no-end=5"
echo " $0 --disable-webserver --workers=2 --host-id=myhost123"
echo " $0 --enable-mcpserver"
echo " $0 --enable-adminserver"
exit 1
}
ENABLE_WEBSERVER=1 # Default to enable web server
ENABLE_TASKEXECUTOR=1 # Default to enable task executor
ENABLE_MCP_SERVER=0
ENABLE_ADMIN_SERVER=0 # Default to disable admin server
CONSUMER_NO_BEG=0
CONSUMER_NO_END=0
WORKERS=1
@ -70,6 +73,10 @@ for arg in "$@"; do
ENABLE_MCP_SERVER=1
shift
;;
--enable-adminserver)
ENABLE_ADMIN_SERVER=1
shift
;;
--mcp-host=*)
MCP_HOST="${arg#*=}"
shift
@ -185,6 +192,12 @@ if [[ "${ENABLE_WEBSERVER}" -eq 1 ]]; then
done &
fi
if [[ "${ENABLE_ADMIN_SERVER}" -eq 1 ]]; then
echo "Starting admin_server..."
while true; do
"$PY" admin/server/admin_server.py
done &
fi
if [[ "${ENABLE_MCP_SERVER}" -eq 1 ]]; then
start_mcp_server

View File

@ -99,8 +99,8 @@ RAGFlow utilizes MinIO as its object storage solution, leveraging its scalabilit
- `RAGFLOW-IMAGE`
The Docker image edition. Available editions:
- `infiniflow/ragflow:v0.20.5-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.20.5`: The RAGFlow Docker image with embedding models including:
- `infiniflow/ragflow:v0.21.0-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.21.0`: The RAGFlow Docker image with embedding models including:
- Built-in embedding models:
- `BAAI/bge-large-zh-v1.5`
- `maidalun1020/bce-embedding-base_v1`

View File

@ -77,7 +77,7 @@ After building the infiniflow/ragflow:nightly-slim image, you are ready to launc
1. Edit Docker Compose Configuration
Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.20.5-slim` to `infiniflow/ragflow:nightly-slim` to use the pre-built image.
Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.21.0-slim` to `infiniflow/ragflow:nightly-slim` to use the pre-built image.
2. Launch the Service

View File

@ -98,7 +98,7 @@ Where:
- `mcp-host`: The MCP server's host address.
- `mcp-port`: The MCP server's listening port.
- `mcp-base_url`: The address of the running RAGFlow server.
- `mcp-base-url`: The address of the running RAGFlow server.
- `mcp-script-path`: The file path to the MCP server's main script.
- `mcp-mode`: The launch mode.
- `self-host`: (default) self-host mode.

View File

@ -30,29 +30,19 @@ The "garbage in garbage out" status quo remains unchanged despite the fact that
Each RAGFlow release is available in two editions:
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.20.5-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.20.5`
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.21.0-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.21.0`
---
### Which embedding models can be deployed locally?
RAGFlow offers two Docker image editions, `v0.20.5-slim` and `v0.20.5`:
RAGFlow offers two Docker image editions, `v0.21.0-slim` and `v0.21.0`:
- `infiniflow/ragflow:v0.20.5-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.20.5`: The RAGFlow Docker image with embedding models including:
- Built-in embedding models:
- `BAAI/bge-large-zh-v1.5`
- `maidalun1020/bce-embedding-base_v1`
- Embedding models that will be downloaded once you select them in the RAGFlow UI:
- `BAAI/bge-base-en-v1.5`
- `BAAI/bge-large-en-v1.5`
- `BAAI/bge-small-en-v1.5`
- `BAAI/bge-small-zh-v1.5`
- `jinaai/jina-embeddings-v2-base-en`
- `jinaai/jina-embeddings-v2-small-en`
- `nomic-ai/nomic-embed-text-v1.5`
- `sentence-transformers/all-MiniLM-L6-v2`
- `infiniflow/ragflow:v0.21.0-slim` (default): The RAGFlow Docker image without embedding models.
- `infiniflow/ragflow:v0.21.0`: The RAGFlow Docker image with the following built-in embedding models:
- `BAAI/bge-large-zh-v1.5`
- `maidalun1020/bce-embedding-base_v1`
---

View File

@ -9,7 +9,7 @@ The component equipped with reasoning, tool usage, and multi-agent collaboration
---
An **Agent** component fine-tunes the LLM and sets its prompt. From v0.20.5 onwards, an **Agent** component is able to work independently and with the following capabilities:
An **Agent** component fine-tunes the LLM and sets its prompt. From v0.21.0 onwards, an **Agent** component is able to work independently and with the following capabilities:
- Autonomous reasoning with reflection and adjustment based on environmental feedback.
- Use of tools or subagents to complete tasks.
@ -147,7 +147,7 @@ An **Agent** component relies on keys (variables) to specify its data inputs. It
#### Advanced usage
From v0.20.5 onwards, four framework-level prompt blocks are available in the **System prompt** field, enabling you to customize and *override* prompts at the framework level. Type `/` or click **(x)** to view them; they appear under the **Framework** entry in the dropdown menu.
From v0.21.0 onwards, four framework-level prompt blocks are available in the **System prompt** field, enabling you to customize and *override* prompts at the framework level. Type `/` or click **(x)** to view them; they appear under the **Framework** entry in the dropdown menu.
- `task_analysis` prompt block
- This block is responsible for analyzing tasks — either a user task or a task assigned by the lead Agent when the **Agent** component is acting as a Sub-Agent.

View File

@ -48,7 +48,7 @@ You start an AI conversation by creating an assistant.
- If no target language is selected, the system will search only in the language of your query, which may cause relevant information in other languages to be missed.
- **Variable** refers to the variables (keys) to be used in the system prompt. `{knowledge}` is a reserved variable. Click **Add** to add more variables for the system prompt.
- If you are uncertain about the logic behind **Variable**, leave it *as-is*.
- As of v0.20.5, if you add custom variables here, the only way you can pass in their values is to call:
- As of v0.21.0, if you add custom variables here, the only way you can pass in their values is to call:
- HTTP method [Converse with chat assistant](../../references/http_api_reference.md#converse-with-chat-assistant), or
- Python method [Converse with chat assistant](../../references/python_api_reference.md#converse-with-chat-assistant).

View File

@ -1,5 +1,5 @@
---
sidebar_position: -1
sidebar_position: -10
slug: /configure_knowledge_base
---
@ -37,7 +37,7 @@ This section covers the following topics:
### Select chunking method
RAGFlow offers multiple chunking templates to facilitate chunking files of different layouts and ensure semantic integrity. In **Chunking method**, you can choose the default template that suits the layouts and formats of your files. The following table shows the descriptions and the compatible file formats of each supported chunk template:
RAGFlow offers multiple built-in chunking templates to facilitate chunking files of different layouts and ensure semantic integrity. From the **Built-in** chunking method dropdown under **Parse type**, you can choose the default template that suits the layouts and formats of your files. The following table shows the descriptions and the compatible file formats of each supported chunk template:
| **Template** | Description | File format |
|--------------|-----------------------------------------------------------------------|-----------------------------------------------------------------------------------------------|
@ -54,9 +54,23 @@ RAGFlow offers multiple chunking template to facilitate chunking files of differ
| One | Each document is chunked in its entirety (as one). | DOCX, XLSX, XLS (Excel 97-2003), PDF, TXT |
| Tag | The dataset functions as a tag set for the others. | XLSX, CSV/TXT |
You can also change a file's chunking method on the **Datasets** page.
You can also change a file's chunking method on the **Files** page.
![change chunking method](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/embedded_chat_app.jpg)
![change chunking method](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/change_chunking_method.jpg)
<details>
<summary>From v0.21.0 onward, RAGFlow supports ingestion pipelines for customized data ingestion and cleansing workflows.</summary>
To use a customized data pipeline:
1. On the **Agent** page, click **+ Create agent** > **Create from blank**.
2. Select **Ingestion pipeline** and name your data pipeline in the popup, then click **Save** to show the data pipeline canvas.
3. After updating your data pipeline, click **Save** on the top right of the canvas.
4. Navigate to the **Configuration** page of your dataset, select **Choose pipeline** in **Ingestion pipeline**.
*Your saved data pipeline will appear in the dropdown menu below.*
</details>
### Select embedding model
@ -124,7 +138,7 @@ See [Run retrieval test](./run_retrieval_test.md) for details.
## Search for dataset
As of RAGFlow v0.20.5, the search feature is still in a rudimentary form, supporting only dataset search by name.
As of RAGFlow v0.21.0, the search feature is still in a rudimentary form, supporting only dataset search by name.
![search dataset](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/search_datasets.jpg)

View File

@ -53,25 +53,31 @@ Whether to enable entity resolution. You can think of this as an entity deduplic
- (Default) Disable entity resolution.
- Enable entity resolution. This option consumes more tokens.
### Community report generation
### Community reports
In a knowledge graph, a community is a cluster of entities linked by relationships. You can have the LLM generate an abstract for each community, known as a community report. See [here](https://www.microsoft.com/en-us/research/blog/graphrag-improving-global-search-via-dynamic-community-selection/) for more information. This indicates whether to generate community reports:
- Generate community reports. This option consumes more tokens.
- (Default) Do not generate community reports.
## Procedure
## Quickstart
1. On the **Configuration** page of your dataset, switch on **Extract knowledge graph** or adjust its settings as needed, and click **Save** to confirm your changes.
1. Navigate to the **Configuration** page of your dataset and update:
- Entity types: *Required* - Specifies the entity types to extract into the knowledge graph. You don't have to stick with the defaults; customize them to match your documents.
- Method: *Optional*
- Entity resolution: *Optional*
- Community reports: *Optional*
*The default knowledge graph configurations for your dataset are now set.*
- *The default knowledge graph configurations for your dataset are now set and files uploaded from this point onward will automatically use these settings during parsing.*
- *Files parsed before this update will retain their original knowledge graph settings.*
2. Navigate to the **Files** page of your dataset, click the **Generate** button on the top right corner of the page, then select **Knowledge graph** from the dropdown to initiate the knowledge graph generation process.
2. The knowledge graph of your dataset does *not* automatically update *until* a newly uploaded file is parsed.
*You can click the pause button in the dropdown to halt the build process when necessary.*
_A **Knowledge graph** entry appears under **Configuration** once a knowledge graph is created._
3. Go back to the **Configuration** page:
*Once a knowledge graph is generated, the **Knowledge graph** field changes from `Not generated` to `Generated at a specific timestamp`. You can delete it by clicking the recycle bin button to the right of the field.*
3. Click **Knowledge graph** to view the details of the generated graph.
4. To use the created knowledge graph, do either of the following:
- In the **Chat setting** panel of your chat app, switch on the **Use knowledge graph** toggle.
@ -79,17 +85,13 @@ In a knowledge graph, a community is a cluster of entities linked by relationshi
## Frequently asked questions
### Can I have different knowledge graph settings for different files in my dataset?
Yes, you can. Just one graph is generated per dataset. The smaller graphs of your files will be *combined* into one big, unified graph at the end of the graph extraction process.
### Does the knowledge graph automatically update when I remove a related file?
Nope. The knowledge graph does *not* automatically update *until* a newly uploaded document is parsed.
Nope. The knowledge graph does *not* update *until* you regenerate a knowledge graph for your dataset.
### How to remove a generated knowledge graph?
To remove the generated knowledge graph, delete all related files in your dataset. Although the **Knowledge graph** entry will still be visible, the graph has actually been deleted.
On the **Configuration** page of your dataset, find the **Knowledge graph** field and click the recycle bin button to the right of the field.
### Where is the created knowledge graph stored?

View File

@ -72,3 +72,22 @@ The maximum number of clusters to create. Defaults to 64, with a maximum limit o
### Random seed
A random seed. Click **+** to change the seed value.
## Quickstart
1. Navigate to the **Configuration** page of your dataset and update:
- Prompt: *Optional* - We recommend that you keep it as-is until you understand the mechanism behind it.
- Max token: *Optional*
- Threshold: *Optional*
- Max cluster: *Optional*
2. Navigate to the **Files** page of your dataset, click the **Generate** button on the top right corner of the page, then select **RAPTOR** from the dropdown to initiate the RAPTOR build process.
*You can click the pause button in the dropdown to halt the build process when necessary.*
3. Go back to the **Configuration** page:
*The **RAPTOR** field changes from `Not generated` to `Generated at a specific timestamp` when a RAPTOR hierarchical tree structure is generated. You can delete it by clicking the recycle bin button to the right of the field.*
4. Once a RAPTOR hierarchical tree structure is generated, your chat assistant and **Retrieval** agent component will use it for retrieval by default.

View File

@ -0,0 +1,39 @@
---
sidebar_position: 4
slug: /enable_table_of_contents
---
# Extract table of contents
Extract a table of contents (TOC) from documents to provide long-context RAG and improve retrieval.
---
During indexing, this technique uses an LLM to extract and generate chapter information, which is added to each chunk to provide sufficient global context. At the retrieval stage, it first takes the chunks matched by search, then supplements missing chunks based on the table-of-contents structure. This addresses issues caused by chunk fragmentation and insufficient context, improving answer quality.
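For intuition only, here is a minimal sketch of the idea (not RAGFlow's implementation; `toc_path_of`, `text_with_toc`, and the chunk fields are hypothetical names):

```python
# Illustrative sketch of TOC enhancement, assuming a toc_path_of(chunk) helper
# that returns the chunk's heading path, e.g. "3 Results > 3.2 Ablation".

def index_with_toc(chunks, toc_path_of):
    # Indexing: prepend the heading path so each chunk carries global context.
    for chunk in chunks:
        chunk["text_with_toc"] = f"[{toc_path_of(chunk)}]\n{chunk['text']}"
    return chunks

def supplement_by_toc(hits, all_chunks, toc_path_of, extra_per_section=2):
    # Retrieval: top up search hits with unmatched chunks from the same TOC sections.
    wanted_sections = {toc_path_of(c) for c in hits}
    seen_ids = {c["id"] for c in hits}
    extra = [c for c in all_chunks
             if toc_path_of(c) in wanted_sections and c["id"] not in seen_ids]
    return hits + extra[: extra_per_section * len(wanted_sections)]
```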
:::danger WARNING
Enabling TOC extraction requires significant memory, computational resources, and tokens.
:::
## Prerequisites
The system's default chat model is used to summarize clustered content. Before proceeding, ensure that you have a chat model properly configured:
![Set default models](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/set_default_models.jpg)
## Quickstart
1. Navigate to the **Configuration** page.
2. Enable **TOC Enhance**.
3. To use this technique during retrieval, do either of the following:
- In the **Chat setting** panel of your chat app, switch on the **TOC Enhance** toggle.
- If you are using an agent, click the **Retrieval** agent component to specify the dataset(s) and switch on the **TOC Enhance** toggle.
## Frequently asked questions
### Will previously parsed files be searched using the TOC enhancement feature once I enable `TOC Enhance`?
No. Only files parsed after you enable **TOC Enhance** will be searched using the TOC enhancement feature. To apply this feature to files parsed before enabling **TOC Enhance**, you must reparse them.

View File

@ -1,5 +1,5 @@
---
sidebar_position: 1
sidebar_position: -4
slug: /select_pdf_parser
---
@ -25,7 +25,7 @@ RAGFlow isn't one-size-fits-all. It is built for flexibility and supports deeper
- **One**
- To use a third-party visual model for parsing PDFs, ensure you have set a default img2txt model under **Set default models** on the **Model providers** page.
## Procedure
## Quickstart
1. On your dataset's **Configuration** page, select a chunking method, say **General**.

View File

@ -1,5 +1,5 @@
---
sidebar_position: 0
sidebar_position: -7
slug: /set_metada
---
@ -21,6 +21,10 @@ Ensure that your metadata is in JSON format; otherwise, your updates will not be
![Input metadata](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/input_metadata.jpg)
## Related APIs
[Retrieve chunks](../../references/http_api_reference.md#retrieve-chunks)
## Frequently asked questions
### Can I set metadata for multiple documents at once?

View File

@ -1,5 +1,5 @@
---
sidebar_position: 2
sidebar_position: -2
slug: /set_page_rank
---

View File

@ -42,8 +42,8 @@ A tag set is *not* involved in document indexing or retrieval. Do not specify a
:::
1. Click **+ Create dataset** to create a dataset.
2. Navigate to the **Configuration** page of the created dataset and choose **Tag** as the default chunking method.
3. Navigate to the **Dataset** page and upload and parse your table file in XLSX, CSV, or TXT formats.
2. Navigate to the **Configuration** page of the created dataset, select **Built-in** in **Ingestion pipeline**, then choose **Tag** as the default chunking method from the **Built-in** drop-down menu.
3. Go back to the **Files** page and upload and parse your table file in XLSX, CSV, or TXT formats.
_A tag cloud appears under the **Tag view** section, indicating the tag set is created:_
![Image](https://github.com/user-attachments/assets/abefbcbf-c130-4abe-95e1-267b0d2a0505)
4. Click the **Table** tab to view the tag frequency table:

Some files were not shown because too many files have changed in this diff.