Compare commits


70 Commits

Author SHA1 Message Date
ff2365b146 Replaced twine with uv 2025-10-30 21:08:00 +08:00
ac75bcdf95 Feat: Modify the style of the query variable dropdown list. #10866 (#10903)
### What problem does this PR solve?

Feat: Modify the style of the query variable dropdown list. #10866

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-30 20:14:15 +08:00
a62a1a5012 Fix(ci): Add error handling to Docker image removal in tests workflow (#10904)
### What problem does this PR solve?

Add '|| true' to docker rmi command to prevent workflow failure when
image removal fails. This ensures the CI pipeline continues even if the
Docker image cannot be removed for any reason.
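A minimal sketch of the resulting cleanup step (the image variable follows the tests workflow further down in this diff):

```
# best-effort cleanup: `|| true` swallows a non-zero exit so the step cannot fail the job
sudo docker rmi -f ${RAGFLOW_IMAGE:-NO_IMAGE} || true
```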

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-30 20:14:02 +08:00
361c74ab42 Fix several admin UI issues (#10869)
### What problem does this PR solve?

- Fix the login card overlapping the title on the admin login page.
- Disable the unnecessary `listRoles()` query on the user management page and
the create user form
- Disable the retry mechanism for admin UI API queries and mutations
- Fix the page not redirecting to the login page automatically when the API
reports unauthorized (401)
- Fix the change password form not resetting when the change password modal
closes
- Resolve an issue where admin UI content (mostly long text) could break the
main layout box

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-30 20:13:39 +08:00
5059d3db18 Feat: The query variables of the subsequent operators can reference the structured variables defined in the agent operator. #10866 (#10902)
### What problem does this PR solve?

Feat: The query variables of the subsequent operators can reference the
structured variables defined in the agent operator. #10866

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-30 19:06:44 +08:00
5674d762f7 Feat: check embedding model api (#10854)
### What problem does this PR solve?
change:
Randomly sample `check_num` chunks from knowledge base `kb_id`, re-embed
them using `embd_id`, and compare with stored vectors via cosine
similarity. If `avg_cos_sim > 0.99`, return success (`code=0`);
otherwise return business failure (`code=10`).

url:
`/v1/kb/check_embedding`

Request Body:
```
{
  "kb_id": "<dataset_id>",
  "embd_id": "BAAI/bge-m3@SILICONFLOW",
  "check_num": 5
}

```
Success Response:
```
{
  "code": 0,
  "message": "success",
  "data": {
    "summary": { "avg_cos_sim": 0.999999, "sampled": 5, "valid": 5, "max_cos_sim":0.999999,"min_cos_sim":0.999999,"model":"BAAI/bge-m3@SILICONFLOW" },
    "results": [ ... ]
  }
}
```
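For reference, a minimal curl invocation of this endpoint; the host, port, and authentication header here are assumptions for a default local deployment, so adjust them to your setup:

```
# assumes the default API port 9380 and a valid login token;
# the exact auth header format depends on your deployment
curl -X POST "http://127.0.0.1:9380/v1/kb/check_embedding" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_TOKEN>" \
  -d '{"kb_id": "<dataset_id>", "embd_id": "BAAI/bge-m3@SILICONFLOW", "check_num": 5}'
```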

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-30 19:06:16 +08:00
fa38aed01b Fix: the input length exceeds the context length (#10895)
### What problem does this PR solve?

Fix: the input length exceeds the context length #10750

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-30 19:00:53 +08:00
ab52ffc9c0 Fix: law parser (#10897)
### What problem does this PR solve?

Fix: law parser  #10888

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-30 19:00:11 +08:00
5f65c7f48e Fix: video parser should follow selected VLM in pipeline (#10900)
### What problem does this PR solve?

The video parser should follow the selected VLM rather than the default one.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-30 17:59:50 +08:00
bb9504d1cc Fix: enhance delimiters in markdown parser (#10896)
### What problem does this PR solve?
issue:
[#10890](https://github.com/infiniflow/ragflow/issues/10890)
change:
enhance delimiters in markdown parser
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-30 17:36:51 +08:00
5d79912274 Feat: location rule for admin UI (#10894)
### What problem does this PR solve?

Location rule for admin UI.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-30 17:32:32 +08:00
b52f09adfe Mineru api support (#10874)
### What problem does this PR solve?

Support a local MinerU API from the Docker instance, for setups such as WSL on
Windows where the container has no GPU but a GPU-enabled MinerU API is available.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-10-30 17:31:46 +08:00
27f0d82102 Fix: opensearch retrieval error (#10891)
### What problem does this PR solve?

Fix: opensearch retrieval error #10828

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-30 17:30:54 +08:00
4be3754340 Bump infinity to 0.6.2 (#10887)
### What problem does this PR solve?

Bump infinity to 0.6.2
https://github.com/infiniflow/infinity/issues/3052

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-30 11:34:42 +08:00
52ceac62ab Feat: add German translations for all agent templates and optimized line breaks for template titles (#10643)
### What problem does this PR solve?
German translations for all agent templates, and optimized line breaks in
the titles for the new translations.

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-10-30 10:56:28 +08:00
871b1d7f9b Feat: Allow other operators to reference the structured output defined by the agent operator. #10866 (#10886)
### What problem does this PR solve?

Feat: Allow other operators to reference the structured output defined
by the agent operator. #10866
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-30 10:36:07 +08:00
bfdf02c6ce Chore(docker): add standard HTTP/HTTPS and MCP ports to .env configuration (#10881)
### What problem does this PR solve?

Added SVR_WEB_HTTP_PORT=80, SVR_WEB_HTTPS_PORT=443, and
SVR_MCP_PORT=9382 to the Docker environment configuration to support
standard web ports and Model Context Protocol access.
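The corresponding additions to docker/.env, with the values given above:

```
SVR_WEB_HTTP_PORT=80
SVR_WEB_HTTPS_PORT=443
SVR_MCP_PORT=9382
```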

### Type of change

- [x] Update config
2025-10-30 09:32:08 +08:00
a3bb4aadcc Fix: predictable token generation (#10868)
### What problem does this PR solve?

Fix predictable token generation.
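As an illustration of the general pattern behind such a fix (not necessarily the code used here), tokens should be derived from a cryptographically secure source rather than a predictable seed:

```
# illustrative only: a CSPRNG-backed 64-hex-character token
openssl rand -hex 32
```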

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-30 09:31:36 +08:00
40b2c48957 Chore(config): remove Youdao and BAAI embedding model providers (#10873)
### What problem does this PR solve?

This commit removes the Youdao and BAAI entries from the LLM factories
configuration as they are no longer needed or supported.

### Type of change

- [x] Config update
2025-10-29 19:38:57 +08:00
55eb525fdc Feat: rename file to avoid package name conflict (#10863)
### What problem does this PR solve?

Feat: rename file to avoid package name conflict

### Type of change

- [x] Refactoring
2025-10-29 12:19:57 +08:00
4e69100ca7 Feat: Configure structured data output for agent forms #10866 (#10867)
### What problem does this PR solve?

Feat: Configure structured data output for agent forms #10866

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-29 12:19:24 +08:00
415de50419 Update web/README.md (#10864)
### What problem does this PR solve?

Add 'how to access login UI and admin UI'.

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-29 10:48:34 +08:00
4332948cf9 Update admin client default port to 9381 (#10862)
### What problem does this PR solve?

The admin client default port was '8080'; update it to '9381'.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-29 10:44:41 +08:00
c0c2a10680 Feat: allow initialize Redis without password (#10856)
### What problem does this PR solve?

Allow initializing Redis without a password.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-29 09:45:28 +08:00
95fad5d523 Fix: Chat session completion (#10851)
### What problem does this PR solve?
Fix: Chat session completion #10791
### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-29 09:44:02 +08:00
119713153c Test: update test cases for chunk retrieval pagination (#10839)
### What problem does this PR solve?

Updated test cases in test_retrieval_chunks.py to:
- Remove skip mark from page pagination test case (#6646 resolved)
- Add skip marks for page_size=1 tests due to new issue (#10692)

### Type of change

- [x] Test update
2025-10-29 09:41:36 +08:00
d86d7061ea Refactor: Improve how to get total token count for AnthropicCV (#10658)
### What problem does this PR solve?

 Improve how to get total token count for AnthropicCV

### Type of change

- [x] Refactoring
2025-10-29 09:41:15 +08:00
e86bd723d1 Update Octoverse to README (#10859)
### Type of change

- [x] Documentation Update
2025-10-29 00:34:39 +08:00
2c0035dcea Feat: Admin UI (#10857)
### What problem does this PR solve?

Add admin UI for RAGFlow

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-28 22:25:43 +08:00
c3b0ab43e7 Fix release.yml 2025-10-28 21:29:48 +08:00
f93be47f51 Remove 'DID YOU KNOW' when starting the front-end (#10853)
### What problem does this PR solve?

Remove 'DID YOU KNOW' when starting the front-end

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-28 19:40:58 +08:00
bb4cc365c1 Add readme in web (#10855)
### What problem does this PR solve?

As title

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-28 19:38:06 +08:00
c5d1139f7b Fix: Refactor the similarity slider component and modify the style of the dataset-test page #10703 (#10846)
### What problem does this PR solve?

Fix: Refactor the similarity slider component and modify the style of
the dataset-test page

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-28 19:17:05 +08:00
11247d1a9d Feat: Adjust the style of the agent operator form tool #10703 (#10841)
### What problem does this PR solve?

Feat: Adjust the style of the agent operator form tool #10703

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-28 19:16:49 +08:00
5a200f7652 Add time utils (#10849)
### What problem does this PR solve?

- Add time utilities and unit tests

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-28 19:09:14 +08:00
057ae646f2 Fix: logging issues (#10836)
### What problem does this PR solve?

Fix: logging issues #10835 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-28 14:10:47 +08:00
6d7b2337bd Fix release.yml 2025-10-28 13:54:57 +08:00
755989e330 Fix release.yml 2025-10-28 13:29:00 +08:00
5b10daa72a Fix runner label 2025-10-28 13:17:57 +08:00
1bf974b592 Fix ragflow image (#10838)
### What problem does this PR solve?

Fix ragflow image

### Type of change

- [x] Other (please describe): CI
2025-10-28 13:03:45 +08:00
c9b08b7560 Customize service ports in tests.yml (#10834)
### What problem does this PR solve?

Customize service ports in tests.yml

### Type of change

- [x] Other (please describe): CI
2025-10-28 12:07:42 +08:00
60a6cf7c7a Fix: remove unexpected keyword argument in table_structure_recognizer logging (#10831)
### What problem does this PR solve?
issue:
#10825
change:
remove unexpected keyword argument in table_structure_recognizer logging

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-28 11:02:43 +08:00
8572e1f3db Fix: Add video icon in knowledge base #10703 (#10832)
### What problem does this PR solve?

Fix: Add video icon in knowledge base

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-28 11:02:01 +08:00
84d1ffe44c Feature/add new models for token pony and bug fix for use llm (#10823)
New models for token pony, plus a bug fix for using the LLM

Co-authored-by: huangzl <huangzl@shinemo.com>
2025-10-28 10:04:41 +08:00
766d900a41 Refactor: rename rmSpace to remove_redundant_spaces (#10796)
### What problem does this PR solve?

- rename rmSpace to remove_redundant_spaces
- move clean_markdown_block to common module
- add unit tests for remove_redundant_spaces and clean_markdown_block

### Type of change

- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-28 09:46:32 +08:00
e59458c36b Fix: parsing excel with chartsheet & Clamp begin to a minimum of 0 to prevent negative indexing (#10819)
### What problem does this PR solve?

Fix: parsing excel with chartsheet #10815

Fix: Clamp begin to a minimum of 0 to prevent negative indexing #10804
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-28 09:40:37 +08:00
850e119a81 Doc: Updated How to update MinerU's settings (#10818)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2025-10-27 19:38:42 +08:00
0a78920bff Feat: Modify the style of the agent operator form #10703 (#10821)
### What problem does this PR solve?

Feat: Modify the style of the agent operator form #10703

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-27 19:37:52 +08:00
0089e2b30c Fix: bug fixes and icon replacement #10703 (#10814)
### What problem does this PR solve?

Fix: bug fixes and icon replacement #10703

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-27 19:02:18 +08:00
b7cb4d3e35 Feat: All-in-one MinerU and Docling (#10813)
### What problem does this PR solve?

issue:
[#10789](https://github.com/infiniflow/ragflow/issues/10789)
change:
All-in-one MinerU and Docling

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-27 19:01:15 +08:00
fd1ad18489 Feat: Adjust the style of the toolbar at the bottom of the agent canvas #10703 (#10807)
### What problem does this PR solve?

Feat: Adjust the style of the toolbar at the bottom of the agent canvas
#10703
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-27 17:04:32 +08:00
5acc407240 Feat: MinerU supports VLM-Transformers backend (#10809)
### What problem does this PR solve?

MinerU supports the VLM-Transformers backend.

Set `MINERU_BACKEND` to choose the backend (options: `pipeline` |
`vlm-transformers`; default is `pipeline`).
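For example, to switch to the new backend before starting the service (where you set the variable depends on your deployment):

```
# default is "pipeline"; set to "vlm-transformers" to use the new backend
export MINERU_BACKEND="vlm-transformers"
```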

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-10-27 17:04:13 +08:00
16ec6ad346 Fix: Pass kwargs in python api #10699 (#10808)
### What problem does this PR solve?

Fix: Pass kwargs in python api #10699

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-27 15:18:56 +08:00
5312b75362 Fix: Home and team page style adjustment, and some bug fixes #10703 (#10805)
### What problem does this PR solve?

Fix: Home and team page style adjustment, and some bug fixes #10703

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-27 15:15:12 +08:00
33a189f620 Feat: add TCADP Parser (#10775)
### What problem does this PR solve?

This PR adds a new TCADP (Tencent Cloud Advanced Document Processing)
parser to RAGFlow, enabling users to leverage Tencent Cloud's document
parsing capabilities for more accurate and structured document
processing. The implementation includes:
- New TCADP Parser: a complete implementation of Tencent Cloud's document parsing API without an SDK dependency
- Configuration Support: added configuration options in service_conf.yaml for Tencent Cloud API credentials
- Frontend Integration: updated UI components to support the new TCADP parser option
- Error Handling: comprehensive error handling and retry mechanisms for API calls
- Result Processing: support for both SSE streaming and JSON response formats from the Tencent Cloud API

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-10-27 15:14:58 +08:00
56def59c2b Fix: Error retrieving DOCX image (docx.image.exceptions.UnrecognizedImageError) (#10794)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/10776

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-10-27 13:23:16 +08:00
7fbab750af Doc: readme updates. (#10801)
### Type of change

- [x] Documentation Update
2025-10-27 12:20:23 +08:00
3bd0b99495 Fix: gemini cv model chat issue. (#10799)
### What problem does this PR solve?

#10787
#10781

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-27 11:43:56 +08:00
ff34c4232e More doc on RAGFlow image (#10800)
### What problem does this PR solve?

More doc on RAGFlow image

### Type of change

- [x] Documentation Update
2025-10-27 11:31:56 +08:00
c5ac571676 Fixed a bug where passing an empty array would not update (#10755)
### What problem does this PR solve?
Fixed a bug where the "dataset_ids" field would not be updated if an
empty array was passed when updating the assistant

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-27 11:29:20 +08:00
97401c1e33 Removing value={field} from EditTag (#10767)
### What problem does this PR solve?

Could not delete Entity Types from the Knowledge Graph settings. The
list was not updated on pressing the X on a tag.

What I think happened:
- value={field} was passing ['parser_config','entity_types'] to EditTag
instead of the real tags.
- That blocked AntD Form from injecting the right value/onChange.
- Clicking X filtered the wrong “value,” so no visible change.

Fix:
- Remove value={field} and let Form.Item control EditTag.
- EditTag now gets the real tags array and emits onChange(tags), Form
captures it.


Now it works.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-27 11:09:20 +08:00
24ab857471 Feat: Adjust the style of the canvas node #10703 (#10795)
### What problem does this PR solve?

Feat: Adjust the style of the canvas node #10703


### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-10-27 10:36:36 +08:00
50e93d1528 Fix: Opendal miss tenant id (#10774)
### What problem does this PR solve?

as https://github.com/infiniflow/ragflow/pull/10712

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-27 10:28:08 +08:00
42fbeb285a Docs/supplement incomplete params (#10758)
### What problem does this PR solve?
Supplement incomplete parameters of the "Update document" interface
### Type of change
- [x] Documentation Update
2025-10-27 09:34:05 +08:00
51fb08be98 Fix: Fixed the issue where dataset log avatars were displayed incorrectly #9869 (#10764)
### What problem does this PR solve?

Fix: Fixed the issue where dataset log avatars were displayed
incorrectly #9869

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-27 09:33:14 +08:00
501b7d4d01 Fix: prioritize synonym match over WordNet for English (#10762)
### What problem does this PR solve?

Fix: prioritize synonym match over WordNet for English

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-27 09:32:55 +08:00
1d57801c0c Fix: ERROR 20 Method rag.nlp.search.Dealer.search() parameter highlight="None" violates type hint bool | list, as <class "builtins.NoneType"> "None" not list or bool. (#10743)
### What problem does this PR solve?

https://github.com/infiniflow/ragflow/issues/10733

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-27 09:29:39 +08:00
73144e278b Don't release full image (#10654)
### What problem does this PR solve?

- Introduced a gpu profile in .env (see the sketch below)
- Added Dockerfile_tei
- Fixed datrie
- Removed the LIGHTEN flag
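A sketch of enabling the GPU profile, based on the README changes later in this diff (paths assume the repository root):

```
# enable the GPU profile introduced in docker/.env, then start as usual
sed -i '1i DEVICE=gpu' docker/.env
docker compose -f docker/docker-compose.yml up -d
```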

### Type of change

- [x] Documentation Update
- [x] Refactoring
2025-10-23 23:02:27 +08:00
92739ea804 Move test files (#10765)
### What problem does this PR solve?

Move some test files to test/testcases

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-10-23 22:31:55 +08:00
0ff2042fc1 Feat: add Docling parser (#10759)
### What problem does this PR solve?
issue:
#3945
change:
add Docling parser

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-23 19:44:25 +08:00
440 changed files with 19829 additions and 12226 deletions

release.yml

@ -16,7 +16,7 @@ concurrency:
jobs:
release:
runs-on: [ "self-hosted", "overseas" ]
runs-on: [ "self-hosted", "ragflow-test" ]
steps:
- name: Ensure workspace ownership
run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
@ -75,62 +75,20 @@ jobs:
# The body field does not support environment variable substitution directly.
body_path: release_body.md
# https://github.com/marketplace/actions/docker-login
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: infiniflow
password: ${{ secrets.DOCKERHUB_TOKEN }}
# https://github.com/marketplace/actions/build-and-push-docker-images
- name: Build and push full image
uses: docker/build-push-action@v6
with:
context: .
push: true
tags: |
infiniflow/ragflow:${{ env.RELEASE_TAG }}
infiniflow/ragflow:latest-full
file: Dockerfile
platforms: linux/amd64
# https://github.com/marketplace/actions/build-and-push-docker-images
- name: Build and push slim image
uses: docker/build-push-action@v6
with:
context: .
push: true
tags: |
infiniflow/ragflow:${{ env.RELEASE_TAG }}-slim
infiniflow/ragflow:latest-slim
file: Dockerfile
build-args: LIGHTEN=1
platforms: linux/amd64
- name: Build ragflow-sdk
- name: Build and push ragflow-sdk
if: startsWith(github.ref, 'refs/tags/v')
run: |
cd sdk/python && \
uv build
cd sdk/python && uv build && uv publish --token ${{ secrets.PYPI_API_TOKEN }}
- name: Publish package distributions to PyPI
if: startsWith(github.ref, 'refs/tags/v')
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: sdk/python/dist/
password: ${{ secrets.PYPI_API_TOKEN }}
verbose: true
- name: Build ragflow-cli
- name: Build and push ragflow-cli
if: startsWith(github.ref, 'refs/tags/v')
run: |
cd admin/client && \
uv build
cd admin/client && uv build && uv publish --token ${{ secrets.PYPI_API_TOKEN }}
- name: Publish client package distributions to PyPI
if: startsWith(github.ref, 'refs/tags/v')
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: admin/client/dist/
password: ${{ secrets.PYPI_API_TOKEN }}
verbose: true
- name: Build and push image
run: |
echo ${{ secrets.DOCKERHUB_TOKEN }} | sudo docker login --username infiniflow --password-stdin
sudo docker build --build-arg NEED_MIRROR=1 -t infiniflow/ragflow:${RELEASE_TAG} -f Dockerfile .
sudo docker tag infiniflow/ragflow:${RELEASE_TAG} infiniflow/ragflow:latest
sudo docker push infiniflow/ragflow:${RELEASE_TAG}
sudo docker push infiniflow/ragflow:latest
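The separate build and PyPI-publish steps above are collapsed into single uv invocations. A local equivalent of the consolidated publish steps, assuming a PyPI token in the environment:

```
# mirrors the workflow's ragflow-sdk and ragflow-cli publish steps
cd sdk/python && uv build && uv publish --token "$PYPI_API_TOKEN"
cd ../../admin/client && uv build && uv publish --token "$PYPI_API_TOKEN"
```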

tests.yml

@ -10,7 +10,7 @@ on:
- '*.md'
- '*.mdx'
pull_request:
types: [ opened, synchronize, reopened, labeled ]
types: [ labeled, synchronize, reopened ]
paths-ignore:
- 'docs/**'
- '*.md'
@ -29,7 +29,7 @@ jobs:
# https://docs.github.com/en/actions/using-jobs/using-conditions-to-control-job-execution
# https://github.com/orgs/community/discussions/26261
if: ${{ github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci') }}
runs-on: [ "self-hosted", "debug" ]
runs-on: [ "self-hosted", "ragflow-test" ]
steps:
# https://github.com/hmarr/debug-action
#- uses: hmarr/debug-action@v2
@ -49,20 +49,20 @@ jobs:
- name: Check workflow duplication
if: ${{ !cancelled() && !failure() && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci')) }}
run: |
if [[ ${{ github.event_name }} != 'pull_request' ]]; then
if [[ "$GITHUB_EVENT_NAME" != "pull_request" && "$GITHUB_EVENT_NAME" != "schedule" ]]; then
HEAD=$(git rev-parse HEAD)
# Find a PR that introduced a given commit
gh auth login --with-token <<< "${{ secrets.GITHUB_TOKEN }}"
PR_NUMBER=$(gh pr list --search ${HEAD} --state merged --json number --jq .[0].number)
echo "HEAD=${HEAD}"
echo "PR_NUMBER=${PR_NUMBER}"
if [[ -n ${PR_NUMBER} ]]; then
if [[ -n "${PR_NUMBER}" ]]; then
PR_SHA_FP=${RUNNER_WORKSPACE_PREFIX}/artifacts/${GITHUB_REPOSITORY}/PR_${PR_NUMBER}
if [[ -f ${PR_SHA_FP} ]]; then
if [[ -f "${PR_SHA_FP}" ]]; then
read -r PR_SHA PR_RUN_ID < "${PR_SHA_FP}"
# Calculate the hash of the current workspace content
HEAD_SHA=$(git rev-parse HEAD^{tree})
if [[ ${HEAD_SHA} == ${PR_SHA} ]]; then
if [[ "${HEAD_SHA}" == "${PR_SHA}" ]]; then
echo "Cancel myself since the workspace content hash is the same with PR #${PR_NUMBER} merged. See ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${PR_RUN_ID} for details."
gh run cancel ${GITHUB_RUN_ID}
while true; do
@ -91,122 +91,140 @@ jobs:
version: ">=0.11.x"
args: "check"
- name: Build ragflow:nightly-slim
run: |
RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-$HOME}
sudo docker pull ubuntu:22.04
sudo DOCKER_BUILDKIT=1 docker build --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
- name: Build ragflow:nightly
run: |
sudo DOCKER_BUILDKIT=1 docker build --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
- name: Start ragflow:nightly-slim
run: |
sudo docker compose -f docker/docker-compose.yml down --volumes --remove-orphans
echo -e "\nRAGFLOW_IMAGE=infiniflow/ragflow:nightly-slim" >> docker/.env
sudo docker compose -f docker/docker-compose.yml up -d
- name: Stop ragflow:nightly-slim
if: always() # always run this step even if previous steps failed
run: |
sudo docker compose -f docker/docker-compose.yml down -v
RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-$HOME}
RAGFLOW_IMAGE=infiniflow/ragflow:${GITHUB_RUN_ID}
echo "RAGFLOW_IMAGE=${RAGFLOW_IMAGE}" >> $GITHUB_ENV
sudo docker pull ubuntu:22.04
sudo DOCKER_BUILDKIT=1 docker build --build-arg NEED_MIRROR=1 -f Dockerfile -t ${RAGFLOW_IMAGE} .
if [[ "$GITHUB_EVENT_NAME" == "schedule" ]]; then
export HTTP_API_TEST_LEVEL=p3
else
export HTTP_API_TEST_LEVEL=p2
fi
echo "HTTP_API_TEST_LEVEL=${HTTP_API_TEST_LEVEL}" >> $GITHUB_ENV
echo "RAGFLOW_CONTAINER=${GITHUB_RUN_ID}-ragflow-cpu-1" >> $GITHUB_ENV
- name: Start ragflow:nightly
run: |
echo -e "\nRAGFLOW_IMAGE=infiniflow/ragflow:nightly" >> docker/.env
sudo docker compose -f docker/docker-compose.yml up -d
# Determine runner number (default to 1 if not found)
RUNNER_NUM=$(sudo docker inspect $(hostname) --format '{{index .Config.Labels "com.docker.compose.container-number"}}' 2>/dev/null || true)
RUNNER_NUM=${RUNNER_NUM:-1}
# Compute port numbers using bash arithmetic
ES_PORT=$((1200 + RUNNER_NUM * 10))
OS_PORT=$((1201 + RUNNER_NUM * 10))
INFINITY_THRIFT_PORT=$((23817 + RUNNER_NUM * 10))
INFINITY_HTTP_PORT=$((23820 + RUNNER_NUM * 10))
INFINITY_PSQL_PORT=$((5432 + RUNNER_NUM * 10))
MYSQL_PORT=$((5455 + RUNNER_NUM * 10))
MINIO_PORT=$((9000 + RUNNER_NUM * 10))
MINIO_CONSOLE_PORT=$((9001 + RUNNER_NUM * 10))
REDIS_PORT=$((6379 + RUNNER_NUM * 10))
TEI_PORT=$((6380 + RUNNER_NUM * 10))
KIBANA_PORT=$((6601 + RUNNER_NUM * 10))
SVR_HTTP_PORT=$((9380 + RUNNER_NUM * 10))
ADMIN_SVR_HTTP_PORT=$((9381 + RUNNER_NUM * 10))
SVR_MCP_PORT=$((9382 + RUNNER_NUM * 10))
SANDBOX_EXECUTOR_MANAGER_PORT=$((9385 + RUNNER_NUM * 10))
SVR_WEB_HTTP_PORT=$((80 + RUNNER_NUM * 10))
SVR_WEB_HTTPS_PORT=$((443 + RUNNER_NUM * 10))
# Persist computed ports into docker/.env so docker-compose uses the correct host bindings
echo "" >> docker/.env
echo -e "ES_PORT=${ES_PORT}" >> docker/.env
echo -e "OS_PORT=${OS_PORT}" >> docker/.env
echo -e "INFINITY_THRIFT_PORT=${INFINITY_THRIFT_PORT}" >> docker/.env
echo -e "INFINITY_HTTP_PORT=${INFINITY_HTTP_PORT}" >> docker/.env
echo -e "INFINITY_PSQL_PORT=${INFINITY_PSQL_PORT}" >> docker/.env
echo -e "MYSQL_PORT=${MYSQL_PORT}" >> docker/.env
echo -e "MINIO_PORT=${MINIO_PORT}" >> docker/.env
echo -e "MINIO_CONSOLE_PORT=${MINIO_CONSOLE_PORT}" >> docker/.env
echo -e "REDIS_PORT=${REDIS_PORT}" >> docker/.env
echo -e "TEI_PORT=${TEI_PORT}" >> docker/.env
echo -e "KIBANA_PORT=${KIBANA_PORT}" >> docker/.env
echo -e "SVR_HTTP_PORT=${SVR_HTTP_PORT}" >> docker/.env
echo -e "ADMIN_SVR_HTTP_PORT=${ADMIN_SVR_HTTP_PORT}" >> docker/.env
echo -e "SVR_MCP_PORT=${SVR_MCP_PORT}" >> docker/.env
echo -e "SANDBOX_EXECUTOR_MANAGER_PORT=${SANDBOX_EXECUTOR_MANAGER_PORT}" >> docker/.env
echo -e "SVR_WEB_HTTP_PORT=${SVR_WEB_HTTP_PORT}" >> docker/.env
echo -e "SVR_WEB_HTTPS_PORT=${SVR_WEB_HTTPS_PORT}" >> docker/.env
echo -e "COMPOSE_PROFILES=\${COMPOSE_PROFILES},tei-cpu" >> docker/.env
echo -e "TEI_MODEL=BAAI/bge-small-en-v1.5" >> docker/.env
echo -e "RAGFLOW_IMAGE=${RAGFLOW_IMAGE}" >> docker/.env
echo "HOST_ADDRESS=http://host.docker.internal:${SVR_HTTP_PORT}" >> $GITHUB_ENV
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} up -d
uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv pip install sdk/python
- name: Run sdk tests against Elasticsearch
run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
export HTTP_API_TEST_LEVEL=p3
else
export HTTP_API_TEST_LEVEL=p2
fi
UV_LINK_MODE=copy uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv pip install sdk/python && uv run --only-group test --no-default-groups pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_sdk_api
source .venv/bin/activate && pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_sdk_api
- name: Run frontend api tests against Elasticsearch
run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
cd sdk/python && UV_LINK_MODE=copy uv sync --python 3.10 --group test --frozen && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
source .venv/bin/activate && pytest -s --tb=short sdk/python/test/test_frontend_api/get_email.py sdk/python/test/test_frontend_api/test_dataset.py
- name: Run http api tests against Elasticsearch
run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
export HTTP_API_TEST_LEVEL=p3
else
export HTTP_API_TEST_LEVEL=p2
fi
UV_LINK_MODE=copy uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv run --only-group test --no-default-groups pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_http_api
source .venv/bin/activate && pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_http_api
- name: Stop ragflow:nightly
if: always() # always run this step even if previous steps failed
run: |
sudo docker compose -f docker/docker-compose.yml down -v
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} down -v
- name: Start ragflow:nightly
run: |
sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml up -d
sed -i '1i DOC_ENGINE=infinity' docker/.env
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} up -d
- name: Run sdk tests against Infinity
run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
export HTTP_API_TEST_LEVEL=p3
else
export HTTP_API_TEST_LEVEL=p2
fi
UV_LINK_MODE=copy uv sync --python 3.10 --only-group test --no-default-groups --frozen && uv pip install sdk/python && DOC_ENGINE=infinity uv run --only-group test --no-default-groups pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_sdk_api
source .venv/bin/activate && DOC_ENGINE=infinity pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_sdk_api
- name: Run frontend api tests against Infinity
run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
cd sdk/python && UV_LINK_MODE=copy uv sync --python 3.10 --group test --frozen && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
source .venv/bin/activate && DOC_ENGINE=infinity pytest -s --tb=short sdk/python/test/test_frontend_api/get_email.py sdk/python/test/test_frontend_api/test_dataset.py
- name: Run http api tests against Infinity
run: |
export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
export HOST_ADDRESS=http://host.docker.internal:9380
until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
until sudo docker exec ${RAGFLOW_CONTAINER} curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
echo "Waiting for service to be available..."
sleep 5
done
if [[ $GITHUB_EVENT_NAME == 'schedule' ]]; then
export HTTP_API_TEST_LEVEL=p3
else
export HTTP_API_TEST_LEVEL=p2
fi
UV_LINK_MODE=copy uv sync --python 3.10 --only-group test --no-default-groups --frozen && DOC_ENGINE=infinity uv run --only-group test --no-default-groups pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_http_api
source .venv/bin/activate && DOC_ENGINE=infinity pytest -s --tb=short --level=${HTTP_API_TEST_LEVEL} test/testcases/test_http_api
- name: Stop ragflow:nightly
if: always() # always run this step even if previous steps failed
run: |
sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml down -v
sudo docker compose -f docker/docker-compose.yml -p ${GITHUB_RUN_ID} down -v
sudo docker rmi -f ${RAGFLOW_IMAGE:-NO_IMAGE} || true
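Two isolation mechanisms work together in this workflow: a per-run Compose project name (`-p ${GITHUB_RUN_ID}`) and host ports offset by ten times the runner number. A small sketch of the effect:

```
# e.g. on runner 2, host bindings shift by 20 while container-side ports stay fixed
RUNNER_NUM=2
echo $((1200 + RUNNER_NUM * 10))    # ES_PORT -> 1220
echo $((9380 + RUNNER_NUM * 10))    # SVR_HTTP_PORT -> 9400
sudo docker compose -f docker/docker-compose.yml -p "${GITHUB_RUN_ID}" up -d
```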

Dockerfile

@ -4,8 +4,6 @@ USER root
SHELL ["/bin/bash", "-c"]
ARG NEED_MIRROR=0
ARG LIGHTEN=0
ENV LIGHTEN=${LIGHTEN}
WORKDIR /ragflow
@ -17,13 +15,6 @@ RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co
/huggingface.co/InfiniFlow/text_concat_xgb_v1.0 \
/huggingface.co/InfiniFlow/deepdoc \
| tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co,target=/huggingface.co \
if [ "$LIGHTEN" != "1" ]; then \
(tar -cf - \
/huggingface.co/BAAI/bge-large-zh-v1.5 \
/huggingface.co/maidalun1020/bce-embedding-base_v1 \
| tar -xf - --strip-components=2 -C /root/.ragflow) \
fi
# https://github.com/chrismattmann/tika-python
# This is the only way to run python-tika without internet access. Without this set, the default is to check the tika version and pull latest every time from Apache.
@ -63,11 +54,11 @@ RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \
apt install -y ghostscript
RUN if [ "$NEED_MIRROR" == "1" ]; then \
pip3 config set global.index-url https://mirrors.aliyun.com/pypi/simple && \
pip3 config set global.trusted-host mirrors.aliyun.com; \
pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple && \
pip3 config set global.trusted-host pypi.tuna.tsinghua.edu.cn; \
mkdir -p /etc/uv && \
echo "[[index]]" > /etc/uv/uv.toml && \
echo 'url = "https://mirrors.aliyun.com/pypi/simple"' >> /etc/uv/uv.toml && \
echo 'url = "https://pypi.tuna.tsinghua.edu.cn/simple"' >> /etc/uv/uv.toml && \
echo "default = true" >> /etc/uv/uv.toml; \
fi; \
pipx install uv
@ -151,15 +142,11 @@ COPY pyproject.toml uv.lock ./
# uv records index url into uv.lock but doesn't failover among multiple indexes
RUN --mount=type=cache,id=ragflow_uv,target=/root/.cache/uv,sharing=locked \
if [ "$NEED_MIRROR" == "1" ]; then \
sed -i 's|pypi.org|mirrors.aliyun.com/pypi|g' uv.lock; \
sed -i 's|pypi.org|pypi.tuna.tsinghua.edu.cn|g' uv.lock; \
else \
sed -i 's|mirrors.aliyun.com/pypi|pypi.org|g' uv.lock; \
sed -i 's|pypi.tuna.tsinghua.edu.cn|pypi.org|g' uv.lock; \
fi; \
if [ "$LIGHTEN" == "1" ]; then \
uv sync --python 3.10 --frozen; \
else \
uv sync --python 3.10 --frozen --all-extras; \
fi
uv sync --python 3.10 --frozen
COPY web web
COPY docs docs
@ -169,11 +156,7 @@ RUN --mount=type=cache,id=ragflow_npm,target=/root/.npm,sharing=locked \
COPY .git /ragflow/.git
RUN version_info=$(git describe --tags --match=v* --first-parent --always); \
if [ "$LIGHTEN" == "1" ]; then \
version_info="$version_info slim"; \
else \
version_info="$version_info full"; \
fi; \
version_info="$version_info"; \
echo "RAGFlow version: $version_info"; \
echo $version_info > /ragflow/VERSION
@ -202,6 +185,7 @@ COPY agentic_reasoning agentic_reasoning
COPY pyproject.toml uv.lock ./
COPY mcp mcp
COPY plugin plugin
COPY common common
COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template
COPY docker/entrypoint.sh ./
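With the LIGHTEN flag removed, a single image variant remains and only the NEED_MIRROR build arg changes the build, as in the workflows above:

```
# build with the Tsinghua PyPI mirror enabled
sudo DOCKER_BUILDKIT=1 docker build --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
```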

Dockerfile_tei (new file, 14 additions)

@ -0,0 +1,14 @@
FROM ghcr.io/huggingface/text-embeddings-inference:cpu-1.8
# uv tool install huggingface_hub
# hf download --local-dir tei_data/BAAI/bge-small-en-v1.5 BAAI/bge-small-en-v1.5
# hf download --local-dir tei_data/BAAI/bge-m3 BAAI/bge-m3
# hf download --local-dir tei_data/Qwen/Qwen3-Embedding-0.6B Qwen/Qwen3-Embedding-0.6B
COPY tei_data /data
# curl -X POST http://localhost:6380/embed -H "Content-Type: application/json" -d '{"inputs": "Hello, world! This is a test sentence."}'
# curl -X POST http://tei:80/embed -H "Content-Type: application/json" -d '{"inputs": "Hello, world! This is a test sentence."}'
# [[-0.058816575,0.019564206,0.026697718,...]]
# curl -X POST http://localhost:6380/v1/embeddings -H "Content-Type: application/json" -d '{"input": "Hello, world! This is a test sentence."}'
# {"object":"list","data":[{"object":"embedding","embedding":[-0.058816575,0.019564206,...],"index":0}],"model":"BAAI/bge-small-en-v1.5","usage":{"prompt_tokens":12,"total_tokens":12}}

README.md

@ -43,7 +43,9 @@
<a href="https://demo.ragflow.io">Demo</a>
</h4>
#
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@ -84,6 +86,7 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Latest Updates
- 2025-10-23 Supports MinerU & Docling as document parsing methods.
- 2025-10-15 Supports orchestrable ingestion pipeline.
- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
- 2025-08-01 Supports agentic workflow and MCP.
@ -174,41 +177,42 @@ releases! 🌟
> ```bash
> vm.max_map_count=262144
> ```
>
2. Clone the repo:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
```
3. Start up the server using the pre-built Docker images:
> [!CAUTION]
> All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64.
> If you are on an ARM64 platform, follow [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
> The command below downloads the `v0.21.1-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.21.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1` for the full edition `v0.21.1`.
> The command below downloads the `v0.21.1-slim` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.21.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server.
```bash
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# docker compose -f docker-compose-gpu.yml up -d
```
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
|-------------------|-----------------|-----------------------|--------------------------|
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | | _Unstable_ nightly build |
> Note: Starting with `v0.22.0`, we ship only the slim edition and no longer append the **-slim** suffix to the image tag.
4. Check the server status after having the server up and running:
```bash
$ docker logs -f ragflow-server
$ docker logs -f docker-ragflow-cpu-1
```
_The following output confirms a successful launch of the system:_
@ -226,14 +230,17 @@ releases! 🌟
> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anormal`
> error because, at that moment, your RAGFlow may not be fully initialized.
>
5. In your web browser, enter the IP address of your server and log in to RAGFlow.
> With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number) as the default
> HTTP serving port `80` can be omitted when using the default configurations.
>
6. In [service_conf.yaml.template](./docker/service_conf.yaml.template), select the desired LLM factory in `user_default_llm` and update
the `API_KEY` field with the corresponding API key.
> See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.
>
_The show is on!_
@ -272,7 +279,6 @@ RAGFlow uses Elasticsearch by default for storing full text and vectors. To swit
> `-v` will delete the docker container volumes, and the existing data will be cleared.
2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.
3. Start the containers:
```bash
@ -286,16 +292,6 @@ RAGFlow uses Elasticsearch by default for storing full text and vectors. To swit
This image is approximately 2 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Build a Docker image including embedding models
This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
@ -309,17 +305,15 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```bash
pipx install uv pre-commit
```
2. Clone the source code and install Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv sync --python 3.10 # install RAGFlow dependent python modules
uv run download_deps.py
pre-commit install
```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
@ -331,13 +325,11 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
```
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. If your operating system does not have jemalloc, please install it as follows:
```bash
@ -350,7 +342,6 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
# macOS
sudo brew install jemalloc
```
6. Launch backend service:
```bash
@ -358,14 +349,12 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
7. Install frontend dependencies:
```bash
cd web
npm install
```
8. Launch frontend service:
```bash
@ -375,14 +364,12 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
_The following output confirms a successful launch of the system:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
9. Stop RAGFlow front-end and back-end service after development is complete:
```bash
pkill -f "ragflow_server.py|task_executor.py"
```
## 📚 Documentation
- [Quickstart](https://ragflow.io/docs/dev/)

README_id.md

@ -43,7 +43,13 @@
<a href="https://demo.ragflow.io">Demo</a>
</h4>
#
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
<details open>
<summary><b>📕 Daftar Isi </b> </summary>
@ -80,6 +86,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Pembaruan Terbaru
- 2025-10-23 Mendukung MinerU & Docling sebagai metode penguraian dokumen.
- 2025-10-15 Dukungan untuk jalur data yang terorkestrasi.
- 2025-08-08 Mendukung model seri GPT-5 terbaru dari OpenAI.
- 2025-08-01 Mendukung alur kerja agen dan MCP.
@ -168,41 +175,42 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
> ```bash
> vm.max_map_count=262144
> ```
>
2. Clone repositori:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
```
3. Bangun image Docker pre-built dan jalankan server:
> [!CAUTION]
> Semua gambar Docker dibangun untuk platform x86. Saat ini, kami tidak menawarkan gambar Docker untuk ARM64.
> Jika Anda menggunakan platform ARM64, [silakan gunakan panduan ini untuk membangun gambar Docker yang kompatibel dengan sistem Anda](https://ragflow.io/docs/dev/build_docker_image).
> Perintah di bawah ini mengunduh edisi v0.21.1-slim dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.21.1-slim, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server. Misalnya, atur RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1 untuk edisi lengkap v0.21.1.
> Perintah di bawah ini mengunduh edisi v0.21.1 dari gambar Docker RAGFlow. Silakan merujuk ke tabel berikut untuk deskripsi berbagai edisi RAGFlow. Untuk mengunduh edisi RAGFlow yang berbeda dari v0.21.1, perbarui variabel RAGFLOW_IMAGE di docker/.env sebelum menggunakan docker compose untuk memulai server.
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# docker compose -f docker-compose-gpu.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | | _Unstable_ nightly build |
> Catatan: Mulai dari `v0.22.0`, kami hanya menyediakan edisi slim dan tidak lagi menambahkan akhiran **-slim** pada tag image.
1. Periksa status server setelah server aktif dan berjalan:
```bash
$ docker logs -f ragflow-server
$ docker logs -f docker-ragflow-cpu-1
```
_Output berikut menandakan bahwa sistem berhasil diluncurkan:_
@ -220,14 +228,17 @@ $ docker compose -f docker-compose.yml up -d
> Jika Anda melewatkan langkah ini dan langsung login ke RAGFlow, browser Anda mungkin menampilkan error `network anormal`
> karena RAGFlow mungkin belum sepenuhnya siap.
>
2. Buka browser web Anda, masukkan alamat IP server Anda, dan login ke RAGFlow.
> Dengan pengaturan default, Anda hanya perlu memasukkan `http://IP_DEVICE_ANDA` (**tanpa** nomor port) karena
> port HTTP default `80` bisa dihilangkan saat menggunakan konfigurasi default.
>
3. Dalam [service_conf.yaml.template](./docker/service_conf.yaml.template), pilih LLM factory yang diinginkan di `user_default_llm` dan perbarui
bidang `API_KEY` dengan kunci API yang sesuai.
> Lihat [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) untuk informasi lebih lanjut.
>
_Sistem telah siap digunakan!_
@ -253,16 +264,6 @@ Pembaruan konfigurasi ini memerlukan reboot semua kontainer agar efektif:
Image ini berukuran sekitar 2 GB dan bergantung pada aplikasi LLM eksternal dan embedding.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Membangun Docker Image Termasuk Model Embedding
Image ini berukuran sekitar 9 GB. Karena sudah termasuk model embedding, ia hanya bergantung pada aplikasi LLM eksternal.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
@ -276,17 +277,15 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```bash
pipx install uv pre-commit
```
2. Clone kode sumber dan instal dependensi Python:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv sync --python 3.10 # install RAGFlow dependent python modules
uv run download_deps.py
pre-commit install
```
3. Jalankan aplikasi yang diperlukan (MinIO, Elasticsearch, Redis, dan MySQL) menggunakan Docker Compose:
```bash
@ -298,13 +297,11 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
```
4. Jika Anda tidak dapat mengakses HuggingFace, atur variabel lingkungan `HF_ENDPOINT` untuk menggunakan situs mirror:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. Jika sistem operasi Anda tidak memiliki jemalloc, instal sebagai berikut:
```bash
@ -315,7 +312,6 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
# mac
sudo brew install jemalloc
```
6. Jalankan aplikasi backend:
```bash
@ -323,14 +319,12 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
7. Instal dependensi frontend:
```bash
cd web
npm install
```
8. Jalankan aplikasi frontend:
```bash
@ -340,15 +334,12 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
_Output berikut menandakan bahwa sistem berhasil diluncurkan:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
9. Hentikan layanan front-end dan back-end RAGFlow setelah pengembangan selesai:
```bash
pkill -f "ragflow_server.py|task_executor.py"
```
## 📚 Dokumentasi
- [Quickstart](https://ragflow.io/docs/dev/)

README_ja.md

@ -43,7 +43,13 @@
<a href="https://demo.ragflow.io">Demo</a>
</h4>
#
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
## 💡 RAGFlow とは?
@ -60,6 +66,7 @@
## 🔥 最新情報
- 2025-10-23 ドキュメント解析方法として MinerU と Docling をサポートします。
- 2025-10-15 オーケストレーションされたデータパイプラインのサポート。
- 2025-08-08 OpenAI の最新 GPT-5 シリーズモデルをサポートします。
- 2025-08-01 エージェントワークフローとMCPをサポート。
@ -147,41 +154,42 @@
> ```bash
> vm.max_map_count=262144
> ```
>
2. リポジトリをクローンする:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
```
3. ビルド済みの Docker イメージをビルドし、サーバーを起動する:
> [!CAUTION]
> 現在、公式に提供されているすべての Docker イメージは x86 アーキテクチャ向けにビルドされており、ARM64 用の Docker イメージは提供されていません。
> ARM64 アーキテクチャのオペレーティングシステムを使用している場合は、[このドキュメント](https://ragflow.io/docs/dev/build_docker_image)を参照して Docker イメージを自分でビルドしてください。
> 以下のコマンドは、RAGFlow Docker イメージの v0.21.1-slim エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.21.1-slim とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。例えば、完全版 v0.21.1 をダウンロードするには、RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1 と設定します。
> 以下のコマンドは、RAGFlow Docker イメージの v0.21.1 エディションをダウンロードします。異なる RAGFlow エディションの説明については、以下の表を参照してください。v0.21.1 とは異なるエディションをダウンロードするには、docker/.env ファイルの RAGFLOW_IMAGE 変数を適宜更新し、docker compose を使用してサーバーを起動してください。
```bash
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# docker compose -f docker-compose-gpu.yml up -d
```
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;2 | | _Unstable_ nightly build |
> 注意:`v0.22.0` 以降、当プロジェクトでは slim エディションのみを提供し、イメージタグに **-slim** サフィックスを付けなくなりました。
1. サーバーを立ち上げた後、サーバーの状態を確認する:
```bash
$ docker logs -f ragflow-server
$ docker logs -f docker-ragflow-cpu-1
```
_以下の出力は、システムが正常に起動したことを確認するものです:_
@ -197,12 +205,15 @@
```
> もし確認ステップをスキップして直接 RAGFlow にログインした場合、その時点で RAGFlow が完全に初期化されていない可能性があるため、ブラウザーがネットワーク異常エラーを表示するかもしれません。
>
2. ウェブブラウザで、プロンプトに従ってサーバーの IP アドレスを入力し、RAGFlow にログインします。
> デフォルトの設定を使用する場合、デフォルトの HTTP サービングポート `80` は省略できるので、与えられたシナリオでは、`http://IP_OF_YOUR_MACHINE`(ポート番号は省略)だけを入力すればよい。
>
3. In [service_conf.yaml.template](./docker/service_conf.yaml.template), select the desired LLM factory under `user_default_llm` and update the `API_KEY` field with the corresponding API key.
> See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for details.
>
_Initial setup complete; the show is on!_
@ -231,33 +242,27 @@
By default, RAGFlow uses Elasticsearch to store full text and vectors. To switch to [Infinity](https://github.com/infiniflow/infinity/), follow these steps:
1. Stop all running containers:
```bash
$ docker compose -f docker/docker-compose.yml down -v
```
Note: `-v` deletes the Docker container volumes, clearing any existing data.
2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.
3. Start the containers:
```bash
$ docker compose -f docker-compose.yml up -d
```
> [!WARNING]
> Switching to Infinity on Linux/arm64 machines is not officially supported.
>
## 🔧 Build a Docker Image from Source (Without Embedding Models)
This Docker image is about 1 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Build a Docker Image from Source (Including Embedding Models)
This Docker image is about 9 GB in size; since it includes embedding models, it only requires an external LLM service.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
@ -271,17 +276,15 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```bash
pipx install uv pre-commit
```
2. Clone the source code and install Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv sync --python 3.10 # install RAGFlow dependent python modules
uv run download_deps.py
pre-commit install
```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) with Docker Compose:
```bash
@ -293,13 +296,11 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
```
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. If your operating system does not have jemalloc, install it as follows:
```bash
@ -310,7 +311,6 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
# mac
sudo brew install jemalloc
```
6. Launch the backend service:
```bash
@ -318,14 +318,12 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
7. Install frontend dependencies:
```bash
cd web
npm install
```
8. Launch the frontend service:
```bash
@ -335,14 +333,12 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
_The following screen indicates that the system started successfully:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
9. Stop the RAGFlow frontend and backend services once development is complete:
```bash
pkill -f "ragflow_server.py|task_executor.py"
```
## 📚 Documentation
- [Quickstart](https://ragflow.io/docs/dev/)

View File

@ -43,7 +43,14 @@
<a href="https://demo.ragflow.io">Demo</a>
</h4>
#
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
## 💡 What is RAGFlow?
@ -60,6 +67,7 @@
## 🔥 Latest Updates
- 2025-10-23 Supports MinerU and Docling as document parsing methods.
- 2025-10-15 Support for orchestrated data pipelines.
- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
- 2025-08-01 Supports agentic workflows and MCP.
@ -160,7 +168,7 @@
> All Docker images are built for x86 platforms. We currently do not provide Docker images for ARM64.
> If you are on an ARM64 platform, please use [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
> The command below downloads the v0.21.1-slim edition of the RAGFlow Docker image. See the table below for descriptions of the different RAGFlow editions. To download an edition other than v0.21.1-slim, update the RAGFLOW_IMAGE variable in docker/.env accordingly, then start the server with docker compose. For example, set RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1 to download the full v0.21.1 edition.
> The command below downloads the v0.21.1 edition of the RAGFlow Docker image. See the table below for descriptions of the different RAGFlow editions. To download an edition other than v0.21.1, update the RAGFLOW_IMAGE variable in docker/.env accordingly, then start the server with docker compose.
```bash
$ cd ragflow/docker
@ -168,20 +176,22 @@
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# docker compose -f docker-compose-gpu.yml up -d
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
| nightly           | &approx;2       | ❌                    | _Unstable_ nightly build   |
> Note: Starting from `v0.22.0`, we distribute only the slim edition and no longer append the **-slim** suffix to image tags.
1. Check the server status after the server starts:
```bash
$ docker logs -f ragflow-server
$ docker logs -f docker-ragflow-cpu-1
```
_The following output confirms that the system started successfully:_
@ -247,16 +257,6 @@ RAGFlow 는 기본적으로 Elasticsearch 를 사용하여 전체 텍스트 및
This Docker image is about 1 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Build a Docker Image from Source (Including Embedding Models)
This Docker image is about 9 GB in size; since it already includes embedding models, it only depends on an external LLM service.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
@ -276,7 +276,7 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv sync --python 3.10 # install RAGFlow dependent python modules
uv run download_deps.py
pre-commit install
```

View File

@ -43,7 +43,13 @@
<a href="https://demo.ragflow.io">Demo</a>
</h4>
#
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
<details open>
<summary><b>📕 Table of Contents</b></summary>
@ -80,7 +86,8 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
## 🔥 Latest Updates
- 10-15-2025 Support for orchestrated data pipelines.
- 23-10-2025 Supports MinerU and Docling as document parsing methods.
- 15-10-2025 Support for orchestrated data pipelines.
- 08-08-2025 Supports OpenAI's latest GPT-5 series.
- 01-08-2025 Supports agentic workflow and MCP.
- 23-05-2025 Adds a Python/JS code executor component to the Agent.
@ -147,84 +154,86 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
### 🚀 Start the server
1. Ensure `vm.max_map_count` >= 262144:
> To check the value of `vm.max_map_count`:
>
> ```bash
> $ sysctl vm.max_map_count
> ```
>
> If necessary, reset `vm.max_map_count` to a value of at least 262144:
>
> ```bash
> # In this case, set it to 262144:
> $ sudo sysctl -w vm.max_map_count=262144
> ```
>
> This change is reset after a system reboot. To make it permanent, add or update the `vm.max_map_count` value in **/etc/sysctl.conf**:
>
> ```bash
> vm.max_map_count=262144
> ```
>
2. Clone the repository:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
```
3. Start the server using the pre-built Docker images:
> [!CAUTION]
> All Docker images are built for x86 platforms. We currently do not offer Docker images for ARM64.
> If you are on an ARM64 platform, please use [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image compatible with your system.
> The command below downloads the `v0.21.1-slim` edition of the RAGFlow Docker image. See the following table for descriptions of the different RAGFlow editions. To download a RAGFlow edition other than `v0.21.1-slim`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server. For example: set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1` for the full `v0.21.1` edition.
> The command below downloads the `v0.21.1` edition of the RAGFlow Docker image. See the following table for descriptions of the different RAGFlow editions. To download a RAGFlow edition other than `v0.21.1`, update the `RAGFLOW_IMAGE` variable accordingly in **docker/.env** before using `docker compose` to start the server.
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# docker compose -f docker-compose-gpu.yml up -d
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable?                  |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1           | &approx;9       | :heavy_check_mark:    | Stable release           |
| v0.21.1-slim      | &approx;2       | ❌                    | Stable release           |
| nightly           | &approx;9       | :heavy_check_mark:    | _Unstable_ nightly build |
| nightly-slim      | &approx;2       | ❌                    | _Unstable_ nightly build |
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable?                  |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1           | &approx;9       | ✔️                    | Stable release           |
| v0.21.1-slim      | &approx;2       | ❌                    | Stable release           |
| nightly           | &approx;2       | ❌                    | _Unstable_ nightly build |
> Note: Starting from `v0.22.0`, we distribute only the slim edition and no longer append the **-slim** suffix to image tags.
4. Check the server status after it has started:
```bash
$ docker logs -f ragflow-server
$ docker logs -f docker-ragflow-cpu-1
```
_The following output confirms the successful launch of the system:_
```bash
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
```
> If you skip this confirmation step and go straight to RAGFlow, your browser may show a `network anormal` error because, at that point, your RAGFlow may not be fully initialized.
>
5. In your browser, enter the IP address of your server and log in to RAGFlow.
> With the default settings, you only need to type `http://IP_OF_YOUR_MACHINE` (**without** the port number), as the default HTTP port `80` can be omitted when using the default configuration.
>
6. In [service_conf.yaml.template](./docker/service_conf.yaml.template), select the desired LLM factory under `user_default_llm` and update the `API_KEY` field with the corresponding API key.
> See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.
>
_The show is on!_
@ -255,9 +264,9 @@ O RAGFlow usa o Elasticsearch por padrão para armazenar texto completo e vetore
```bash
$ docker compose -f docker/docker-compose.yml down -v
```
Note: `-v` will delete the container volumes, and the existing data will be erased.
2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.
3. Start the containers:
```bash
@ -271,16 +280,6 @@ O RAGFlow usa o Elasticsearch por padrão para armazenar texto completo e vetore
This image is about 2 GB in size and depends on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Build a Docker Image Including Embedding Models
This image is about 9 GB in size. Since it includes embedding models, it depends only on external LLM services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
@ -294,17 +293,15 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```bash
pipx install uv pre-commit
```
2. Clone the source code and install the Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv sync --python 3.10 # install RAGFlow dependent python modules
uv run download_deps.py
pre-commit install
```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
@ -316,24 +313,21 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
```
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
```
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. If your operating system does not have jemalloc, install it as follows:
```bash
# ubuntu
sudo apt-get install libjemalloc-dev
# centos
sudo yum install jemalloc
# mac
sudo brew install jemalloc
```
6. Launch the back-end service:
```bash
@ -341,14 +335,12 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
7. Install the front-end dependencies:
```bash
cd web
npm install
```
8. Launch the front-end service:
```bash
@ -358,13 +350,11 @@ docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly
_The following output confirms the successful launch of the system:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
9. Stop the RAGFlow front-end and back-end services after development is complete:
```bash
pkill -f "ragflow_server.py|task_executor.py"
```
## 📚 Documentation

View File

@ -43,7 +43,9 @@
<a href="https://demo.ragflow.io">Demo</a>
</h4>
#
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@ -83,6 +85,7 @@
## 🔥 Latest Updates
- 2025-10-23 Supports MinerU and Docling as document parsing methods.
- 2025-10-15 Supports orchestrated data pipelines.
- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
- 2025-08-01 Supports agentic workflow and MCP.
@ -170,47 +173,48 @@
> ```bash
> vm.max_map_count=262144
> ```
>
2. Clone the repository:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
```
3. Enter the **docker** folder and start the server using the pre-built Docker images:
> [!CAUTION]
> All Docker images are built for x86 platforms. We currently do not provide Docker images for ARM64.
> If you are on an ARM64 platform, please use [this guide](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image suited to your system.
> Running the command below automatically downloads the RAGFlow slim Docker image `v0.21.1-slim`. See the table below for descriptions of the different Docker releases. To download a Docker image other than `v0.21.1-slim`, update the `RAGFLOW_IMAGE` variable in **docker/.env** before starting the service with `docker compose`. For example, set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1` to download the full `v0.21.1` release of the RAGFlow image.
> Running the command below automatically downloads the RAGFlow slim Docker image `v0.21.1`. See the table below for descriptions of the different Docker releases. To download a Docker image other than `v0.21.1`, update the `RAGFLOW_IMAGE` variable in **docker/.env** before starting the service with `docker compose`.
```bash
$ cd ragflow/docker
# Use CPU for embedding and DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# docker compose -f docker-compose-gpu.yml up -d
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | -------------------------- |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly           | &approx;2       | ❌                    | _Unstable_ nightly build   |
> Note: Starting from `v0.22.0`, we release only the slim edition and no longer append the **-slim** suffix to image tags.
> [!TIP]
> If you have trouble pulling the Docker image, you can choose the corresponding Huawei Cloud or Alibaba Cloud image in **docker/.env**, following the comments for the `RAGFLOW_IMAGE` variable.
>
> - Huawei Cloud image name: `swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow`
> - Alibaba Cloud image name: `registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow`
4. Confirm the server status again after the server starts:
```bash
$ docker logs -f ragflow-server
$ docker logs -f docker-ragflow-cpu-1
```
_The following output indicates that the server started successfully:_
@ -226,12 +230,15 @@
```
> If you log in to RAGFlow without completing this confirmation step, your browser may report a `network anormal` error because RAGFlow may not have fully started.
>
5. Enter the IP address of your server in your browser and log in to RAGFlow.
> In the example above, you only need to enter http://IP_OF_YOUR_MACHINE: if the settings are unchanged, there is no need to enter the port (the default HTTP serving port is 80).
>
6. In [service_conf.yaml.template](./docker/service_conf.yaml.template), set the LLM factory in the `user_default_llm` field and fill in the `API_KEY` field with the API key for your chosen model.
> See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for details.
>
_The show is on!_
@ -249,7 +256,7 @@
> [./docker/README](./docker/README.md) explains the environment variables and service configurations used by [service_conf.yaml.template](./docker/service_conf.yaml.template).
To update the default HTTP serving port (80), change the `80:80` setting in [docker-compose.yml](./docker/docker-compose.yml) to `<YOUR_SERVING_PORT>:80`.
> All system configuration changes take effect only after a system restart:
>
@ -266,10 +273,9 @@ RAGFlow 預設使用 Elasticsearch 儲存文字和向量資料. 如果要切換
```bash
$ docker compose -f docker/docker-compose.yml down -v
```
Note: `-v` will delete the Docker container volumes, and existing data will be cleared.
2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.
3. Start the containers:
```bash
@ -286,17 +292,7 @@ RAGFlow 預設使用 Elasticsearch 儲存文字和向量資料. 如果要切換
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Build a Docker Image from Source (Including Embedding Models)
This Docker image is about 9 GB. Since it already includes embedding models, it only needs to rely on external LLM services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 Launch the Service from Source
@ -307,17 +303,15 @@ docker build --platform linux/amd64 --build-arg NEED_MIRROR=1 -f Dockerfile -t i
pipx install uv pre-commit
export UV_INDEX=https://mirrors.aliyun.com/pypi/simple
```
2. Download the source code and install Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv sync --python 3.10 # install RAGFlow dependent python modules
uv run download_deps.py
pre-commit install
```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) via Docker Compose:
```bash
@ -329,13 +323,11 @@ docker build --platform linux/amd64 --build-arg NEED_MIRROR=1 -f Dockerfile -t i
```
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
```
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to the corresponding mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. If your operating system does not have jemalloc, install it as follows:
```bash
@ -346,7 +338,6 @@ docker build --platform linux/amd64 --build-arg NEED_MIRROR=1 -f Dockerfile -t i
# mac
sudo brew install jemalloc
```
6. Launch the backend service:
```bash
@ -354,14 +345,12 @@ docker build --platform linux/amd64 --build-arg NEED_MIRROR=1 -f Dockerfile -t i
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
7. Install frontend dependencies:
```bash
cd web
npm install
```
8. Launch the frontend service:
```bash
@ -371,15 +360,16 @@ docker build --platform linux/amd64 --build-arg NEED_MIRROR=1 -f Dockerfile -t i
_The following screen indicates that the system started successfully:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
9. Stop the RAGFlow frontend and backend services after development is complete:
```bash
pkill -f "ragflow_server.py|task_executor.py"
```
## 📚 Documentation
- [Quickstart](https://ragflow.io/docs/dev/)

View File

@ -43,7 +43,9 @@
<a href="https://demo.ragflow.io">Demo</a>
</h4>
#
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png" width="1200"/>
</div>
<div align="center">
<a href="https://trendshift.io/repositories/9064" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9064" alt="infiniflow%2Fragflow | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@ -83,6 +85,7 @@
## 🔥 Latest Updates
- 2025-10-23 Supports MinerU and Docling as document parsing methods.
- 2025-10-15 Supports orchestrated data pipelines.
- 2025-08-08 Supports OpenAI's latest GPT-5 series models.
- 2025-08-01 Supports agentic workflow and MCP.
@ -183,7 +186,7 @@
> Note that all officially provided Docker images are currently built for the x86 architecture; no ARM64-based Docker images are provided.
> If your operating system uses the ARM64 architecture, please refer to [this document](https://ragflow.io/docs/dev/build_docker_image) to build a Docker image yourself.
> Running the command below automatically downloads the RAGFlow slim Docker image `v0.21.1-slim`. See the table below for descriptions of the different Docker releases. To download a Docker image other than `v0.21.1-slim`, update the `RAGFLOW_IMAGE` variable in **docker/.env** before starting the service with `docker compose`. For example, set `RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1` to download the full `v0.21.1` release of the RAGFlow image.
> Running the command below automatically downloads the RAGFlow slim Docker image `v0.21.1`. See the table below for descriptions of the different Docker releases. To download a Docker image other than `v0.21.1`, update the `RAGFLOW_IMAGE` variable in **docker/.env** before starting the service with `docker compose`.
```bash
$ cd ragflow/docker
@ -191,15 +194,17 @@
$ docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# docker compose -f docker-compose-gpu.yml up -d
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
| ----------------- | --------------- | --------------------- | ------------------------ |
| v0.21.1 | &approx;9 | :heavy_check_mark: | Stable release |
| v0.21.1 | &approx;9 | ✔️ | Stable release |
| v0.21.1-slim | &approx;2 | ❌ | Stable release |
| nightly | &approx;9 | :heavy_check_mark: | _Unstable_ nightly build |
| nightly-slim | &approx;2 | ❌ | _Unstable_ nightly build |
| nightly           | &approx;2       | ❌                    | _Unstable_ nightly build |
> Note: Starting from `v0.22.0`, we release only the slim edition and no longer append the **-slim** suffix to image tags.
> [!TIP]
> If you have trouble pulling the Docker image, you can choose the corresponding Huawei Cloud or Alibaba Cloud image in **docker/.env**, following the comments for the `RAGFLOW_IMAGE` variable.
@ -210,7 +215,7 @@
4. Confirm the server status again after the server starts:
```bash
$ docker logs -f ragflow-server
$ docker logs -f docker-ragflow-cpu-1
```
_The following output indicates that the server started successfully:_
@ -286,17 +291,7 @@ RAGFlow 默认使用 Elasticsearch 存储文本和向量数据. 如果要切换
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg LIGHTEN=1 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
## 🔧 Build a Docker Image from Source (Including Embedding Models)
This Docker image is about 9 GB. Since it already includes embedding models, it only needs to rely on external LLM services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 --build-arg NEED_MIRROR=1 -f Dockerfile -t infiniflow/ragflow:nightly .
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```
## 🔨 Launch the Service from Source
@ -313,7 +308,7 @@ docker build --platform linux/amd64 --build-arg NEED_MIRROR=1 -f Dockerfile -t i
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv sync --python 3.10 # install RAGFlow dependent python modules
uv run download_deps.py
pre-commit install
```

View File

@ -473,7 +473,7 @@ class AdminCLI(Cmd):
def parse_connection_args(self, args: List[str]) -> Dict[str, Any]:
parser = argparse.ArgumentParser(description='Admin CLI Client', add_help=False)
parser.add_argument('-h', '--host', default='localhost', help='Admin service host')
parser.add_argument('-p', '--port', type=int, default=8080, help='Admin service port')
parser.add_argument('-p', '--port', type=int, default=9381, help='Admin service port')
parser.add_argument('-w', '--password', default='admin', type=str, help='Superuser password')
parser.add_argument('command', nargs='?', help='Single command')
try:
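The hunk above only changes the default admin-service port from 8080 to 9381. As a minimal standalone sketch (not the full AdminCLI class), this is how those argparse defaults resolve; note `add_help=False` is required because `-h` is repurposed for the host flag:

```python
# Minimal sketch of the connection-argument defaults after this change.
# Only the parser wiring is shown; the surrounding AdminCLI class is omitted.
import argparse

parser = argparse.ArgumentParser(description='Admin CLI Client', add_help=False)
parser.add_argument('-h', '--host', default='localhost', help='Admin service host')
parser.add_argument('-p', '--port', type=int, default=9381, help='Admin service port')
parser.add_argument('-w', '--password', default='admin', type=str, help='Superuser password')
parser.add_argument('command', nargs='?', help='Single command')

args, _ = parser.parse_known_args([])  # no CLI flags supplied
print(args.host, args.port)            # -> localhost 9381
```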

View File

@ -29,12 +29,8 @@ from api.db.init_data import encode_to_base64
from api.db.services import UserService
from api.db import ActiveEnum, StatusEnum
from api.utils.crypt import decrypt
from api.utils import (
current_timestamp,
datetime_format,
get_format_time,
get_uuid,
)
from api.utils import get_uuid
from common.time_utils import current_timestamp, datetime_format, get_format_time
from api.utils.api_utils import (
construct_response,
)

View File

@ -2,10 +2,12 @@
"id": 23,
"title": {
"en": "Advanced Ingestion Pipeline",
"de": "Erweiterte Ingestion Pipeline",
"zh": "编排复杂的 Ingestion Pipeline"
},
"description": {
"en": "This template demonstrates how to use an LLM to generate summaries, keywords, Q&A, and metadata for each chunk to support diverse retrieval needs.",
"de": "Diese Vorlage demonstriert, wie ein LLM verwendet wird, um Zusammenfassungen, Schlüsselwörter, Fragen & Antworten und Metadaten für jedes Segment zu generieren, um vielfältige Abrufanforderungen zu unterstützen.",
"zh": "此模板演示如何利用大模型为切片生成摘要、关键词、问答及元数据,以满足多样化的召回需求。"
},
"canvas_type": "Ingestion Pipeline",

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -2,10 +2,12 @@
"id": 24,
"title": {
"en": "Chunk Summary",
"de": "Segmentzusammenfassung",
"zh": "总结切片"
},
"description": {
"en": "This template uses an LLM to generate chunk summaries for building text and vector indexes. During retrieval, summaries enhance matching, and the original chunks are returned as results.",
"de": "Diese Vorlage nutzt ein LLM zur Generierung von Segmentzusammenfassungen für den Aufbau von Text- und Vektorindizes. Bei der Abfrage verbessern die Zusammenfassungen die Übereinstimmung, während die ursprünglichen Segmente als Ergebnisse zurückgegeben werden.",
"zh": "此模板利用大模型生成切片摘要,并据此建立全文索引与向量。检索时以摘要提升匹配效果,最终召回对应的原文切片。"
},
"canvas_type": "Ingestion Pipeline",

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -2,9 +2,11 @@
"id": 8,
"title": {
"en": "Generate SEO Blog",
"de": "SEO Blog generieren",
"zh": "生成SEO博客"},
"description": {
"en": "This is a multi-agent version of the SEO blog generation workflow. It simulates a small team of AI “writers”, where each agent plays a specialized role — just like a real editorial team.",
"de": "Dies ist eine Multi-Agenten-Version des Workflows zur Erstellung von SEO-Blogs. Sie simuliert ein kleines Team von KI-„Autoren“, in dem jeder Agent eine spezielle Rolle übernimmt genau wie in einem echten Redaktionsteam.",
"zh": "多智能体架构可根据简单的用户输入自动生成完整的SEO博客文章。模拟小型“作家”团队其中每个智能体扮演一个专业角色——就像真正的编辑团队。"},
"canvas_type": "Agent",
"dsl": {

File diff suppressed because one or more lines are too long

View File

@ -2,9 +2,11 @@
"id": 20,
"title": {
"en": "Report Agent Using Knowledge Base",
"de": "Berichtsagent mit Wissensdatenbank",
"zh": "知识库检索智能体"},
"description": {
"en": "A report generation assistant using local knowledge base, with advanced capabilities in task planning, reasoning, and reflective analysis. Recommended for academic research paper Q&A",
"de": "Ein Berichtsgenerierungsassistent, der eine lokale Wissensdatenbank nutzt, mit erweiterten Fähigkeiten in Aufgabenplanung, Schlussfolgerung und reflektierender Analyse. Empfohlen für akademische Forschungspapier-Fragen und -Antworten.",
"zh": "一个使用本地知识库的报告生成助手,具备高级能力,包括任务规划、推理和反思性分析。推荐用于学术研究论文问答。"},
"canvas_type": "Agent",
"dsl": {

View File

@ -1,10 +1,12 @@
{
"id": 21,
"title": {
"en": "Report Agent Using Knowledge Base",
"en": "Report Agent Using Knowledge Base",
"de": "Berichtsagent mit Wissensdatenbank",
"zh": "知识库检索智能体"},
"description": {
"en": "A report generation assistant using local knowledge base, with advanced capabilities in task planning, reasoning, and reflective analysis. Recommended for academic research paper Q&A",
"de": "Ein Berichtsgenerierungsassistent, der eine lokale Wissensdatenbank nutzt, mit erweiterten Fähigkeiten in Aufgabenplanung, Schlussfolgerung und reflektierender Analyse. Empfohlen für akademische Forschungspapier-Fragen und -Antworten.",
"zh": "一个使用本地知识库的报告生成助手,具备高级能力,包括任务规划、推理和反思性分析。推荐用于学术研究论文问答。"},
"canvas_type": "Recommended",
"dsl": {

View File

@ -2,9 +2,11 @@
"id": 12,
"title": {
"en": "Generate SEO Blog",
"de": "SEO Blog generieren",
"zh": "生成SEO博客"},
"description": {
"en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You dont need any writing experience. Just provide a topic or short request — the system will handle the rest.",
"en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You don't need any writing experience. Just provide a topic or short request — the system will handle the rest.",
"de": "Dieser Workflow generiert automatisch einen vollständigen SEO-optimierten Blogartikel basierend auf einer einfachen Benutzereingabe. Sie benötigen keine Schreiberfahrung. Geben Sie einfach ein Thema oder eine kurze Anfrage ein das System übernimmt den Rest.",
"zh": "此工作流根据简单的用户输入自动生成完整的SEO博客文章。你无需任何写作经验只需提供一个主题或简短请求系统将处理其余部分。"},
"canvas_type": "Marketing",
"dsl": {
@ -916,4 +918,4 @@
"retrieval": []
},
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAwADADASIAAhEBAxEB/8QAGQAAAwEBAQAAAAAAAAAAAAAABgkKBwUI/8QAMBAAAAYCAQIEBQQCAwAAAAAAAQIDBAUGBxEhCAkAEjFBFFFhcaETFiKRFyOx8PH/xAAaAQACAwEBAAAAAAAAAAAAAAACAwABBgQF/8QALBEAAgIBAgUCBAcAAAAAAAAAAQIDBBEFEgATITFRIkEGIzJhFBUWgaGx8P/aAAwDAQACEQMRAD8AfF2hez9089t7pvxgQMa1Gb6qZ6oQE9m/NEvCIStyPfJSOF/M1epzMugo/qtMqbiRc1mJjoJKCLMNIxKcsLJedfO1Ct9cI63x9fx6CA/19t+oh4LFA5HfuAgP/A8eOIsnsTBrkBHXA7+v53+Q+ficTgJft9gIgA+/P9/1r342O/YA8A8k3/if+IbAN7+2/f8AAiI6H19PGoPyESTMZQPKUAHkQEN+3r9dh78/YPGUTk2wb/qAZZIugH1OHH5DjkdfbnWw2DsOxPj+xjrnx2H39unBopJGBn9s+PHv1HXjPJtH+J+B40O9a16h/wB/92j/ALrPa/wR104UyAobHlXhuo2HrEtK4qy3CwjKOuJLRHJLSkXWrFKs/gVrJVrE8TUiH8bPrP20UEu8m4hNpMJJuTOfnbUw/kUqyZgMHGjAO9+mtDsQ53sdcB6eMhnpEjhNQxRKICAgHy5+/roOdjr7c+J6O4x07dx484/n7nzw1gexBGfIPkZ/3t39uGpqc6+fP5/Ht8vGFZCzJjWpWuBxvO2yPjrtclUUK7BqmUI4fuASeyhG5FzFI0Bw4aQ0iZNoDgzvRW4qtyFkI4XmwyEk2YNnDp0sVBu3IUyy5iqH8gqKERSIRNIii67hddRJs1at01Xbx2sgzZoLu10UFJR+4V1A5cxF3FqNcLvjwcno43uuLrOxZYjujaClcb4QQfxEizpFiQyM9olcueRnjC2ZMt9iY06zL0qytrMSqSOVGsfHMaGhZ3l4lSRI2MqE74zJvRTveNFWWIh3RWw+XCAM5icKQLrCH57T17FhErSlRXnWvyZXKQwWJ3eraD14p5YuZCFgacskK2oGkVuKO5GYTHzf7DaD12cBD3DgPOIDrWw9PnrXPgDkpVsUDGMG+DD6E9gHXIjrYjwUPQTCXYgHPhIV974+F6E1hpC14Yzmzj56YaQEeZhXsayD1zLPW7pygxaMf81Nzu1iJsnIuDIKnaJAkPldqrHaoORZ73tMVEbFdSXT9nVgRQgnBq6j8e/HCIEATpAnH5KlmRVkFRFJwks/bqImSXJ5VFyA3N6Ikh3bCW3YHp5cowOmCfTgA+xJCnrjtwHKcLvJj2ZGcTRFj19kEhckdzgEjKnABGSSzdc1Fe5byXXGNjKdvRcw5NxvLidNZFFCxUa62KrzMaChw8hhYScFJtROAgmuLByq1MsgkZYPaVVuDe0wraRaqAdJwgRQo+YR8xTlAQNx6b49w41vXiJpCalLh1jZhyrTqRM4+jstdRmYryNkydLQRWg1LNGcWd5jIFFvCythlIySa0mNu74sKRQtaWsTmupqPItw0lE52ufpyYzrSkx6cw5bLmBEpkTsz+dt8P5QFuCRtAIkBH9MuwKHICIaDQhnojMs9mKaeGcrMxXlQtAYkdVljimRrE5MqI4zL8oSqQ6wxjodBqK05qdK3Vo3aCSVkBW7bjuC1NFJJBPaqyx6fp6pWkliYLXK2XrukkRu2CCVoSWMgsdMyySKwoLFcIGWSTUMg4IBgTcICoBhRcplMcpFkhIqQp1ClMBTmA0Zfe1zpjvHfXff65bZlzXpB3jjGTgiirmPjAfs16PHqHeQ75Wbj3xxZpOEkV3LRJJSPdomUBZISJLncV2k+8D07dxXp7xsYuTapA9UkJUYWIzNhadnWEZeCXGLQQiJi1ViHfhHL2unWh+mlORsrW0JFpEFnGVfm1mU4kq0FY3eD6corJncv6dr5NLSMNXVaTUksjTiMnaq8uFfSVuDyiJ1iZpy0LOJtpa3YfkcQ5fdozyxI2m5qqcrHN61YYmHsh6v3o9ParYmYJEtlhIx6+gUbjgD23M6oqg92YL0JyF6Bps+qDValVA9h9Lj5SZI3SHXdEQlj1wiQtLLIe6pGzjO3BlBkK1hxpblLVH5wdW0BcFKf/JwRtjsot2z8omaSdxbzzk1iEjsE0AM9rrRZNRIrVyo7dGO6E+oh8axLlJ5H5VaJKx7ePRGFbW6vUeFfHQIWPTI9Tm7HHfuhqY7E6C7JFqUzM6iZXIoncNxX7+bIVdJnTT48x3OQU1krIDW3UeixVhyISzYz6cadY5Xph6TseRNTRsTElzzBn9Vlly0TAERsdgnMYyLROjyFbg5R4ZlsGaMT4yNi2Zlq1GwjZB3jq0PsaJfA3t0jL0W0Y9xf1V41lpWckXMLaZiwxuKYPqc6LlHdkeRF+Qxswx5ASDqBVrsL+2A/N6SiCbYymV2BywJiMZj3GRRMTnL+lVyHCll3R7Szv0vqXMtQ74T+HijljIScLaEpkKCB3rqMBIi0jPs5JeOKTZMZEi5VVnouzy0k3jXjWSMlY6UcVGDxlKMVDqx91SILWSi3D2KdgYy3kP8E9X/AE1SnRXBNdNRMlefT6g7aY6giK+cPLGNg0bY68rcnpsNh9PqIBve/EcPQ3WIq2dR9
3xpSgk5SAZ9R6MLAOZFUkpLSUDXp6/KPpGUkmTdswlnKnwbl5ITMdGwcXJi7LKsqzUmT5tWYmkXuF9wjBvb76b7dHheazJ9RElUJOCxViuMlUJC0Gtz6PKyjLBY4qMWUe12r1xZ6lOyT6XPEBKN2CkTDOlZd02TBdTMt7Upx2knrkdCv1UKjDKn1A7XBYH6SCOOrWn5Oi/DtRiu+GleRthDL8rXdVjZlcfWrSIxVlGGGCOnH//Z"
}
}

View File

@ -2,9 +2,11 @@
"id": 4,
"title": {
"en": "Generate SEO Blog",
"de": "SEO Blog generieren",
"zh": "生成SEO博客"},
"description": {
"en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You dont need any writing experience. Just provide a topic or short request — the system will handle the rest.",
"en": "This workflow automatically generates a complete SEO-optimized blog article based on a simple user input. You don't need any writing experience. Just provide a topic or short request — the system will handle the rest.",
"de": "Dieser Workflow generiert automatisch einen vollständigen SEO-optimierten Blogartikel basierend auf einer einfachen Benutzereingabe. Sie benötigen keine Schreiberfahrung. Geben Sie einfach ein Thema oder eine kurze Anfrage ein das System übernimmt den Rest.",
"zh": "此工作流根据简单的用户输入自动生成完整的SEO博客文章。你无需任何写作经验只需提供一个主题或简短请求系统将处理其余部分。"},
"canvas_type": "Recommended",
"dsl": {
@ -916,4 +918,4 @@
"retrieval": []
},
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAARCAAwADADASIAAhEBAxEB/8QAGQAAAwEBAQAAAAAAAAAAAAAABgkKBwUI/8QAMBAAAAYCAQIEBQQCAwAAAAAAAQIDBAUGBxEhCAkAEjFBFFFhcaETFiKRFyOx8PH/xAAaAQACAwEBAAAAAAAAAAAAAAACAwABBgQF/8QALBEAAgIBAgUCBAcAAAAAAAAAAQIDBBEFEgATITFRIkEGIzJhFBUWgaGx8P/aAAwDAQACEQMRAD8AfF2hez9089t7pvxgQMa1Gb6qZ6oQE9m/NEvCIStyPfJSOF/M1epzMugo/qtMqbiRc1mJjoJKCLMNIxKcsLJedfO1Ct9cI63x9fx6CA/19t+oh4LFA5HfuAgP/A8eOIsnsTBrkBHXA7+v53+Q+ficTgJft9gIgA+/P9/1r342O/YA8A8k3/if+IbAN7+2/f8AAiI6H19PGoPyESTMZQPKUAHkQEN+3r9dh78/YPGUTk2wb/qAZZIugH1OHH5DjkdfbnWw2DsOxPj+xjrnx2H39unBopJGBn9s+PHv1HXjPJtH+J+B40O9a16h/wB/92j/ALrPa/wR104UyAobHlXhuo2HrEtK4qy3CwjKOuJLRHJLSkXWrFKs/gVrJVrE8TUiH8bPrP20UEu8m4hNpMJJuTOfnbUw/kUqyZgMHGjAO9+mtDsQ53sdcB6eMhnpEjhNQxRKICAgHy5+/roOdjr7c+J6O4x07dx484/n7nzw1gexBGfIPkZ/3t39uGpqc6+fP5/Ht8vGFZCzJjWpWuBxvO2yPjrtclUUK7BqmUI4fuASeyhG5FzFI0Bw4aQ0iZNoDgzvRW4qtyFkI4XmwyEk2YNnDp0sVBu3IUyy5iqH8gqKERSIRNIii67hddRJs1at01Xbx2sgzZoLu10UFJR+4V1A5cxF3FqNcLvjwcno43uuLrOxZYjujaClcb4QQfxEizpFiQyM9olcueRnjC2ZMt9iY06zL0qytrMSqSOVGsfHMaGhZ3l4lSRI2MqE74zJvRTveNFWWIh3RWw+XCAM5icKQLrCH57T17FhErSlRXnWvyZXKQwWJ3eraD14p5YuZCFgacskK2oGkVuKO5GYTHzf7DaD12cBD3DgPOIDrWw9PnrXPgDkpVsUDGMG+DD6E9gHXIjrYjwUPQTCXYgHPhIV974+F6E1hpC14Yzmzj56YaQEeZhXsayD1zLPW7pygxaMf81Nzu1iJsnIuDIKnaJAkPldqrHaoORZ73tMVEbFdSXT9nVgRQgnBq6j8e/HCIEATpAnH5KlmRVkFRFJwks/bqImSXJ5VFyA3N6Ikh3bCW3YHp5cowOmCfTgA+xJCnrjtwHKcLvJj2ZGcTRFj19kEhckdzgEjKnABGSSzdc1Fe5byXXGNjKdvRcw5NxvLidNZFFCxUa62KrzMaChw8hhYScFJtROAgmuLByq1MsgkZYPaVVuDe0wraRaqAdJwgRQo+YR8xTlAQNx6b49w41vXiJpCalLh1jZhyrTqRM4+jstdRmYryNkydLQRWg1LNGcWd5jIFFvCythlIySa0mNu74sKRQtaWsTmupqPItw0lE52ufpyYzrSkx6cw5bLmBEpkTsz+dt8P5QFuCRtAIkBH9MuwKHICIaDQhnojMs9mKaeGcrMxXlQtAYkdVljimRrE5MqI4zL8oSqQ6wxjodBqK05qdK3Vo3aCSVkBW7bjuC1NFJJBPaqyx6fp6pWkliYLXK2XrukkRu2CCVoSWMgsdMyySKwoLFcIGWSTUMg4IBgTcICoBhRcplMcpFkhIqQp1ClMBTmA0Zfe1zpjvHfXff65bZlzXpB3jjGTgiirmPjAfs16PHqHeQ75Wbj3xxZpOEkV3LRJJSPdomUBZISJLncV2k+8D07dxXp7xsYuTapA9UkJUYWIzNhadnWEZeCXGLQQiJi1ViHfhHL2unWh+mlORsrW0JFpEFnGVfm1mU4kq0FY3eD6corJncv6dr5NLSMNXVaTUksjTiMnaq8uFfSVuDyiJ1iZpy0LOJtpa3YfkcQ5fdozyxI2m5qqcrHN61YYmHsh6v3o9ParYmYJEtlhIx6+gUbjgD23M6oqg92YL0JyF6Bps+qDValVA9h9Lj5SZI3SHXdEQlj1wiQtLLIe6pGzjO3BlBkK1hxpblLVH5wdW0BcFKf/JwRtjsot2z8omaSdxbzzk1iEjsE0AM9rrRZNRIrVyo7dGO6E+oh8axLlJ5H5VaJKx7ePRGFbW6vUeFfHQIWPTI9Tm7HHfuhqY7E6C7JFqUzM6iZXIoncNxX7+bIVdJnTT48x3OQU1krIDW3UeixVhyISzYz6cadY5Xph6TseRNTRsTElzzBn9Vlly0TAERsdgnMYyLROjyFbg5R4ZlsGaMT4yNi2Zlq1GwjZB3jq0PsaJfA3t0jL0W0Y9xf1V41lpWckXMLaZiwxuKYPqc6LlHdkeRF+Qxswx5ASDqBVrsL+2A/N6SiCbYymV2BywJiMZj3GRRMTnL+lVyHCll3R7Szv0vqXMtQ74T+HijljIScLaEpkKCB3rqMBIi0jPs5JeOKTZMZEi5VVnouzy0k3jXjWSMlY6UcVGDxlKMVDqx91SILWSi3D2KdgYy3kP8E9X/AE1SnRXBNdNRMlefT6g7aY6giK+cPLGNg0bY68rcnpsNh9PqIBve/EcPQ3WIq2dR9
3xpSgk5SAZ9R6MLAOZFUkpLSUDXp6/KPpGUkmTdswlnKnwbl5ITMdGwcXJi7LKsqzUmT5tWYmkXuF9wjBvb76b7dHheazJ9RElUJOCxViuMlUJC0Gtz6PKyjLBY4qMWUe12r1xZ6lOyT6XPEBKN2CkTDOlZd02TBdTMt7Upx2knrkdCv1UKjDKn1A7XBYH6SCOOrWn5Oi/DtRiu+GleRthDL8rXdVjZlcfWrSIxVlGGGCOnH//Z"
}
}

View File

@ -2,10 +2,12 @@
"id": 17,
"title": {
"en": "SQL Assistant",
"de": "SQL Assistent",
"zh": "SQL助理"},
"description": {
"en": "SQL Assistant is an AI-powered tool that lets business users turn plain-English questions into fully formed SQL queries. Simply type your question (e.g., Show me last quarters top 10 products by revenue) and SQL Assistant generates the exact SQL, runs it against your database, and returns the results in seconds. ",
"zh": "用户能够将简单文本问题转化为完整的SQL查询并输出结果。只需输入您的问题例如“展示上个季度前十名按收入排序的产品”SQL助理就会生成精确的SQL语句对其运行您的数据库并几秒钟内返回结果。"},
"en": "SQL Assistant is an AI-powered tool that lets business users turn plain-English questions into fully formed SQL queries. Simply type your question (e.g., 'Show me last quarter's top 10 products by revenue') and SQL Assistant generates the exact SQL, runs it against your database, and returns the results in seconds. ",
"de": "SQL-Assistent ist ein KI-gestütztes Tool, mit dem Geschäftsanwender einfache englische Fragen in vollständige SQL-Abfragen umwandeln können. Geben Sie einfach Ihre Frage ein (z.B. 'Zeige mir die Top 10 Produkte des letzten Quartals nach Umsatz') und der SQL-Assistent generiert das exakte SQL, führt es gegen Ihre Datenbank aus und liefert die Ergebnisse in Sekunden.",
"zh": "用户能够将简单文本问题转化为完整的SQL查询并输出结果。只需输入您的问题例如展示上个季度前十名按收入排序的产品SQL助理就会生成精确的SQL语句对其运行您的数据库并几秒钟内返回结果。"},
"canvas_type": "Marketing",
"dsl": {
"components": {
@ -713,4 +715,4 @@
"retrieval": []
},
"avatar": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAcFBQYFBAcGBQYIBwcIChELCgkJChUPEAwRGBUaGRgVGBcbHichGx0lHRcYIi4iJSgpKywrGiAvMy8qMicqKyr/2wBDAQcICAoJChQLCxQqHBgcKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKir/wAARCAAwADADAREAAhEBAxEB/8QAGgAAAwEBAQEAAAAAAAAAAAAABQYHBAMAAf/EADIQAAEDAwMCBAMHBQAAAAAAAAECAwQFESEABjESEyJBUYEUYXEHFSNSkaGxMjNictH/xAAZAQADAQEBAAAAAAAAAAAAAAACAwQBAAX/xAAlEQACAgICAgEEAwAAAAAAAAABAgARAyESMQRBEyIycYFCkbH/2gAMAwEAAhEDEQA/AKHt2DGpNHXDLrZdWtSrIub39tZ5GbGwPA+pmDFkX7x7idvra85xqQaFNkxUTVIVJQzf8QpBFjbgEenNs681MnA9WJ6fEOKJoxVpSpFLTCo6KEZlTlLcQBIJS20hAv1D1ve+qPk52b0IsYuIGtyt7ZkVVNP+H3A5GdlN2u7GQUBSfmkk8cXH10tmLD6Yl0CG5qmTXBMZiQEMuvupUoKdc6UeEi4FsqOeBxrsKnv1AY+hJ2l5yfu6qQ6/UZtPDRHZ+Eldpsqz1hSrXJGLXwRxqxUQizFs7galPYUFDKT+h15oMuImspQpFiL+2i1A3A1bgxmixUgwlT8ZfgJ/y8P8HXdRuPZoxaqtfkQKbKqF03jtEoDeFKV1lNgfK4H764XfccVUgipvdiwKpFaXMLklFg4juuqV0m3Izg/MaEZCDYMScYqiJOd6xmqfUVfBJcWwtHV1Elfi87k51ViyhrsxL4ivQj1KrFZjTGjTJ8aShdyph5SUqFhwPzX9jpC0dXUqZK3ViHNq7oNaVJjz2Vw5LCrdKknpULZyfMf801MfI1e5NmpAGHUL12EZNFWWlhXSUuWHKgk3xomwEDuDhzLysySU9EndEVyIz3GmxJR+KpBIdCLlRHn/AFEjjIF9AMJlZ8gLZ/qUiJSg1Tu0HO4plFj4FC1h9NYfHIU7kwzgnqCJlKLiCO2s6hKytWiPJoFdfnLW7HS0or6bqXbjg2AI99XjAa3NPlL6jFTduOR5sd1+oyfjQMONqI7QOMA4V7/pqjHjC9SLNn56I1HiqrqTUKM0hbq2lpst5CQSST54xjSPJbICOHUhawISiRQ02T2Uq6AAkqFj/GquJQks1iEr/INLU82bploKSFXusG9xfjHofXQuQUNRoQqQT0ZwVEST5687iZWGgpDsebNbaTDfKVL/ALnbQU/UkKNhjXpFt0BJBVXe/wAGGG6YMlvvNkjlBGmKeJimHIVc0TY89akCKspT28C5BKgDyR7fvrCFI+q/1DQsvVfudYcVyKw49KU6tZyQbmwHFhrOKr9s0uz0CAIpbr3RKo1Rbh02C4HJISp2ZIz0pJ8IQk5Nr/QXznSX6NSnGAwHI/gD/TM+3vtAj1arJpcpgtPdPSH0kFt5wDxAWOOLgamIAFwijCfD927N2tGXuNxlK2W0occUhJWpR+QzzrPjc+pvyqT3Ftf2zbObf7YYecb6CrrDAGfy20wYMkA5Vjbtev7b3nEcXRela27d1ogoWi/rnQsjrqZzHdwzKoKUsqWz3mOnJUlZJt8uokD621w+RdzgynUkUpoUafPZXMnSHlrKluyX1Eug8XF7GwxbgWxrubMO5WmNRsCKtLfcY3rAU0nIltkBP+w0X8Jjdz//2Q=="
}
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -2,10 +2,12 @@
"id": 25,
"title": {
"en": "Title Chunker",
"de": "Titel basierte Segmentierung",
"zh": "标题切片"
},
"description": {
"en": "This template slices the parsed file based on its title structure. It is ideal for documents with well-defined headings, such as product manuals, legal contracts, research reports, and academic papers.",
"de": "Diese Vorlage segmentiert die geparste Datei basierend auf ihrer Titelstruktur. Sie eignet sich ideal für Dokumente mit klar definierten Überschriften, wie Produkthandbücher, Verträge, Forschungsberichte und wissenschaftliche Arbeiten.",
"zh": "此模板将解析后的文件按标题结构进行切片,适用于具有清晰标题层级的文档类型,如产品手册、合同法规、研究报告和学术论文等。"
},
"canvas_type": "Ingestion Pipeline",

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -27,7 +27,7 @@ from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer
from api.db import StatusEnum
from api.db.db_models import close_connection
from api.db.services import UserService
from api.utils.json import CustomJSONEncoder
from api.utils.json_encode import CustomJSONEncoder
from api.utils import commands
from flask_mail import Mail

View File

@ -33,7 +33,7 @@ from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.task_service import queue_tasks, TaskService
from api.db.services.user_service import UserTenantService
from api import settings
from api.utils import get_uuid, current_timestamp, datetime_format
from api.utils import get_uuid
from api.utils.api_utils import server_error_response, get_data_error_result, get_json_result, validate_request, \
generate_confirmation_token
@ -41,6 +41,7 @@ from api.utils.file_utils import filename_type, thumbnail
from rag.app.tag import label_question
from rag.prompts.generator import keyword_extraction
from rag.utils.storage_factory import STORAGE_IMPL
from common.time_utils import current_timestamp, datetime_format
from api.db.services.canvas_service import UserCanvasService
from agent.canvas import Canvas
@ -58,7 +59,7 @@ def new_token():
return get_data_error_result(message="Tenant not found!")
tenant_id = tenants[0].tenant_id
obj = {"tenant_id": tenant_id, "token": generate_confirmation_token(tenant_id),
obj = {"tenant_id": tenant_id, "token": generate_confirmation_token(),
"create_time": current_timestamp(),
"create_date": datetime_format(datetime.now()),
"update_time": None,
@ -867,7 +868,7 @@ def retrieval():
similarity_threshold = float(req.get("similarity_threshold", 0.2))
vector_similarity_weight = float(req.get("vector_similarity_weight", 0.3))
top = int(req.get("top_k", 1024))
highlight = bool(req.get("highlight", False))
highlight = bool(req.get("highlight", False))
try:
kbs = KnowledgebaseService.get_by_ids(kb_ids)

View File

@ -35,7 +35,7 @@ from rag.app.tag import label_question
from rag.nlp import rag_tokenizer, search
from rag.prompts.generator import gen_meta_filter, cross_languages, keyword_extraction
from rag.settings import PAGERANK_FLD
from rag.utils import rmSpace
from common.string_utils import remove_redundant_spaces
@manager.route('/list', methods=['POST']) # noqa: F821
@ -65,7 +65,7 @@ def list_chunk():
for id in sres.ids:
d = {
"chunk_id": id,
"content_with_weight": rmSpace(sres.highlight[id]) if question and id in sres.highlight else sres.field[
"content_with_weight": remove_redundant_spaces(sres.highlight[id]) if question and id in sres.highlight else sres.field[
id].get(
"content_with_weight", ""),
"doc_id": sres.field[id]["doc_id"],

View File

@ -15,11 +15,15 @@
#
import json
import logging
import random
from flask import request
from flask_login import login_required, current_user
import numpy as np
from api.db import LLMType
from api.db.services import duplicate_name
from api.db.services.llm_service import LLMBundle
from api.db.services.document_service import DocumentService, queue_raptor_o_graphrag_tasks
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
@ -38,6 +42,7 @@ from api.constants import DATASET_NAME_LIMIT
from rag.settings import PAGERANK_FLD
from rag.utils.redis_conn import REDIS_CONN
from rag.utils.storage_factory import STORAGE_IMPL
from rag.utils.doc_store_conn import OrderByExpr
@manager.route('/create', methods=['post']) # noqa: F821
@ -788,3 +793,141 @@ def delete_kb_task():
return server_error_response(f"Internal error: cannot delete task {pipeline_task_type}")
return get_json_result(data=True)
@manager.route("/check_embedding", methods=["post"]) # noqa: F821
@login_required
def check_embedding():
def _guess_vec_field(src: dict) -> str | None:
for k in src or {}:
if k.endswith("_vec"):
return k
return None
def _as_float_vec(v):
if v is None:
return []
if isinstance(v, str):
return [float(x) for x in v.split("\t") if x != ""]
if isinstance(v, (list, tuple, np.ndarray)):
return [float(x) for x in v]
return []
def _to_1d(x):
a = np.asarray(x, dtype=np.float32)
return a.reshape(-1)
def _cos_sim(a, b, eps=1e-12):
a = _to_1d(a)
b = _to_1d(b)
na = np.linalg.norm(a)
nb = np.linalg.norm(b)
if na < eps or nb < eps:
return 0.0
return float(np.dot(a, b) / (na * nb))
def sample_random_chunks_with_vectors(
docStoreConn,
tenant_id: str,
kb_id: str,
n: int = 5,
base_fields=("docnm_kwd","doc_id","content_with_weight","page_num_int","position_int","top_int"),
):
index_nm = search.index_name(tenant_id)
res0 = docStoreConn.search(
selectFields=[], highlightFields=[],
condition={"kb_id": kb_id, "available_int": 1},
matchExprs=[], orderBy=OrderByExpr(),
offset=0, limit=1,
indexNames=index_nm, knowledgebaseIds=[kb_id]
)
total = docStoreConn.getTotal(res0)
if total <= 0:
return []
n = min(n, total)
offsets = sorted(random.sample(range(total), n))
out = []
for off in offsets:
res1 = docStoreConn.search(
selectFields=list(base_fields),
highlightFields=[],
condition={"kb_id": kb_id, "available_int": 1},
matchExprs=[], orderBy=OrderByExpr(),
offset=off, limit=1,
indexNames=index_nm, knowledgebaseIds=[kb_id]
)
ids = docStoreConn.getChunkIds(res1)
if not ids:
continue
cid = ids[0]
full_doc = docStoreConn.get(cid, index_nm, [kb_id]) or {}
vec_field = _guess_vec_field(full_doc)
vec = _as_float_vec(full_doc.get(vec_field))
out.append({
"chunk_id": cid,
"kb_id": kb_id,
"doc_id": full_doc.get("doc_id"),
"doc_name": full_doc.get("docnm_kwd"),
"vector_field": vec_field,
"vector_dim": len(vec),
"vector": vec,
"page_num_int": full_doc.get("page_num_int"),
"position_int": full_doc.get("position_int"),
"top_int": full_doc.get("top_int"),
"content_with_weight": full_doc.get("content_with_weight") or "",
})
return out
req = request.json
kb_id = req.get("kb_id", "")
embd_id = req.get("embd_id", "")
n = int(req.get("check_num", 5))
_, kb = KnowledgebaseService.get_by_id(kb_id)
tenant_id = kb.tenant_id
emb_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embd_id)
samples = sample_random_chunks_with_vectors(settings.docStoreConn, tenant_id=tenant_id, kb_id=kb_id, n=n)
results, eff_sims = [], []
for ck in samples:
txt = (ck.get("content_with_weight") or "").strip()
if not txt:
results.append({"chunk_id": ck["chunk_id"], "reason": "no_text"})
continue
if not ck.get("vector"):
results.append({"chunk_id": ck["chunk_id"], "reason": "no_stored_vector"})
continue
try:
qv, _ = emb_mdl.encode_queries(txt)
sim = _cos_sim(qv, ck["vector"])
except Exception:
return get_error_data_result(message="embedding failure")
eff_sims.append(sim)
results.append({
"chunk_id": ck["chunk_id"],
"doc_id": ck["doc_id"],
"doc_name": ck["doc_name"],
"vector_field": ck["vector_field"],
"vector_dim": ck["vector_dim"],
"cos_sim": round(sim, 6),
})
summary = {
"kb_id": kb_id,
"model": embd_id,
"sampled": len(samples),
"valid": len(eff_sims),
"avg_cos_sim": round(float(np.mean(eff_sims)) if eff_sims else 0.0, 6),
"min_cos_sim": round(float(np.min(eff_sims)) if eff_sims else 0.0, 6),
"max_cos_sim": round(float(np.max(eff_sims)) if eff_sims else 0.0, 6),
}
if summary["avg_cos_sim"] > 0.99:
return get_json_result(data={"summary": summary, "results": results})
return get_json_result(code=settings.RetCode.NOT_EFFECTIVE, message="failed", data={"summary": summary, "results": results})
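A quick way to exercise this new endpoint from a script is sketched below. The host, dataset id, and session cookie are placeholders, and the endpoint is `@login_required`, so an authenticated session is assumed:

```python
# Hedged usage sketch for the new /v1/kb/check_embedding endpoint.
# <ragflow-host>, <dataset_id>, and the session cookie are placeholders.
import requests

resp = requests.post(
    "http://<ragflow-host>/v1/kb/check_embedding",
    json={
        "kb_id": "<dataset_id>",
        "embd_id": "BAAI/bge-m3@SILICONFLOW",
        "check_num": 5,
    },
    cookies={"session": "<your-session-cookie>"},  # endpoint is @login_required
    timeout=60,
)
payload = resp.json()
summary = payload["data"]["summary"]
# code is 0 when avg_cos_sim > 0.99; otherwise RetCode.NOT_EFFECTIVE is returned
print(payload["code"], summary["avg_cos_sim"], summary["sampled"])
```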

View File

@ -15,11 +15,11 @@
#
import logging
import json
import os
from flask import request
from flask_login import login_required, current_user
from api.db.services.tenant_llm_service import LLMFactoriesService, TenantLLMService
from api.db.services.llm_service import LLMService
from api import settings
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.db import StatusEnum, LLMType
from api.db.db_models import TenantLLM
@ -215,7 +215,7 @@ def add_llm():
mdl = EmbeddingModel[factory](
key=llm['api_key'],
model_name=mdl_nm,
base_url=llm["api_base"])
base_url=llm["api_base"])
try:
arr, tc = mdl.encode(["Test if the api key is available"])
if len(arr[0]) == 0:
@ -264,7 +264,7 @@ def add_llm():
try:
image_data = test_image
m, tc = mdl.describe(image_data)
if not m and not tc:
if not tc and m.find("**ERROR**:") >= 0:
raise Exception(m)
except Exception as e:
msg += f"\nFail to access model({factory}/{mdl_nm})." + str(e)
@ -368,8 +368,8 @@ def my_llms():
@manager.route('/list', methods=['GET']) # noqa: F821
@login_required
def list_app():
self_deployed = ["Youdao", "FastEmbed", "BAAI", "Ollama", "Xinference", "LocalAI", "LM-Studio", "GPUStack"]
weighted = ["Youdao", "FastEmbed", "BAAI"] if settings.LIGHTEN != 0 else []
self_deployed = ["FastEmbed", "Ollama", "Xinference", "LocalAI", "LM-Studio", "GPUStack"]
weighted = []
model_type = request.args.get("model_type")
try:
objs = TenantLLMService.query(tenant_id=current_user.id)
@ -379,6 +379,8 @@ def list_app():
for m in llms if m.status == StatusEnum.VALID.value and m.fid not in weighted]
for m in llms:
m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in self_deployed
if "tei-" in os.getenv("COMPOSE_PROFILES", "") and m["model_type"]==LLMType.EMBEDDING and m["fid"]=="Builtin" and m["llm_name"]==os.getenv('TEI_MODEL', ''):
m["available"] = True
llm_set = set([m["llm_name"] + "@" + m["fid"] for m in llms])
for o in objs:

View File

@ -169,6 +169,8 @@ def update(tenant_id, chat_id):
if len(embd_count) > 1:
return get_result(message='Datasets use different embedding models.', code=settings.RetCode.AUTHENTICATION_ERROR)
req["kb_ids"] = ids
else:
req["kb_ids"] = []
llm = req.get("llm")
if llm:
if "model_name" in llm:

View File

@ -41,8 +41,8 @@ from rag.app.qa import beAdoc, rmPrefix
from rag.app.tag import label_question
from rag.nlp import rag_tokenizer, search
from rag.prompts.generator import cross_languages, keyword_extraction
from rag.utils import rmSpace
from rag.utils.storage_factory import STORAGE_IMPL
from common.string_utils import remove_redundant_spaces
MAXIMUM_OF_UPLOADING_FILES = 256
@ -1000,7 +1000,7 @@ def list_chunks(tenant_id, dataset_id, document_id):
for id in sres.ids:
d = {
"id": id,
"content": (rmSpace(sres.highlight[id]) if question and id in sres.highlight else sres.field[id].get("content_with_weight", "")),
"content": (remove_redundant_spaces(sres.highlight[id]) if question and id in sres.highlight else sres.field[id].get("content_with_weight", "")),
"document_id": sres.field[id]["doc_id"],
"docnm_kwd": sres.field[id]["docnm_kwd"],
"important_keywords": sres.field[id].get("important_kwd", []),

View File

@ -24,7 +24,6 @@ from api.db.services.api_service import APITokenService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import UserTenantService
from api import settings
from api.utils import current_timestamp, datetime_format
from api.utils.api_utils import (
get_json_result,
get_data_error_result,
@ -32,6 +31,7 @@ from api.utils.api_utils import (
generate_confirmation_token,
)
from api.versions import get_ragflow_version
from common.time_utils import current_timestamp, datetime_format
from rag.utils.storage_factory import STORAGE_IMPL, STORAGE_IMPL_TYPE
from timeit import default_timer as timer
@ -217,8 +217,8 @@ def new_token():
tenant_id = [tenant for tenant in tenants if tenant.role == 'owner'][0].tenant_id
obj = {
"tenant_id": tenant_id,
"token": generate_confirmation_token(tenant_id),
"beta": generate_confirmation_token(generate_confirmation_token(tenant_id)).replace("ragflow-", "")[:32],
"token": generate_confirmation_token(),
"beta": generate_confirmation_token().replace("ragflow-", "")[:32],
"create_time": current_timestamp(),
"create_date": datetime_format(datetime.now()),
"update_time": None,
@ -274,7 +274,7 @@ def token_list():
objs = [o.to_dict() for o in objs]
for o in objs:
if not o["beta"]:
o["beta"] = generate_confirmation_token(generate_confirmation_token(tenants[0].tenant_id)).replace(
o["beta"] = generate_confirmation_token().replace(
"ragflow-", "")[:32]
APITokenService.filter_update([APIToken.tenant_id == tenant_id, APIToken.token == o["token"]], o)
return get_json_result(data=objs)

View File

@ -23,7 +23,8 @@ from api.db import UserTenantRole, StatusEnum
from api.db.db_models import UserTenant
from api.db.services.user_service import UserTenantService, UserService
from api.utils import get_uuid, delta_seconds
from api.utils import get_uuid
from common.time_utils import delta_seconds
from api.utils.api_utils import get_json_result, validate_request, server_error_response, get_data_error_result
from api.utils.web_utils import send_invite_email

View File

@ -34,13 +34,8 @@ from api.db.services.file_service import FileService
from api.db.services.llm_service import get_init_tenant_llm
from api.db.services.tenant_llm_service import TenantLLMService
from api.db.services.user_service import TenantService, UserService, UserTenantService
from api.utils import (
current_timestamp,
datetime_format,
download_img,
get_format_time,
get_uuid,
)
from common.time_utils import current_timestamp, datetime_format, get_format_time
from api.utils import download_img, get_uuid
from api.utils.api_utils import (
construct_response,
get_data_error_result,

View File

@ -32,9 +32,11 @@ from playhouse.pool import PooledMySQLDatabase, PooledPostgresqlDatabase
from api import settings, utils
from api.db import ParserType, SerializedType
from api.utils.json import json_dumps, json_loads
from api.utils.json_encode import json_dumps, json_loads
from api.utils.configs import deserialize_b64, serialize_b64
from common.time_utils import current_timestamp, timestamp_to_date, date_string_to_timestamp
def singleton(cls, *args, **kw):
instances = {}
@ -189,7 +191,7 @@ class BaseModel(Model):
for i, v in enumerate(f_v):
if isinstance(v, str) and f_n in auto_date_timestamp_field():
# time type: %Y-%m-%d %H:%M:%S
f_v[i] = utils.date_string_to_timestamp(v)
f_v[i] = date_string_to_timestamp(v)
lt_value = f_v[0]
gt_value = f_v[1]
if lt_value is not None and gt_value is not None:
@ -218,9 +220,9 @@ class BaseModel(Model):
@classmethod
def insert(cls, __data=None, **insert):
if isinstance(__data, dict) and __data:
__data[cls._meta.combined["create_time"]] = utils.current_timestamp()
__data[cls._meta.combined["create_time"]] = current_timestamp()
if insert:
insert["create_time"] = utils.current_timestamp()
insert["create_time"] = current_timestamp()
return super().insert(__data, **insert)
@ -231,11 +233,11 @@ class BaseModel(Model):
if not normalized:
return {}
normalized[cls._meta.combined["update_time"]] = utils.current_timestamp()
normalized[cls._meta.combined["update_time"]] = current_timestamp()
for f_n in AUTO_DATE_TIMESTAMP_FIELD_PREFIX:
if {f"{f_n}_time", f"{f_n}_date"}.issubset(cls._meta.combined.keys()) and cls._meta.combined[f"{f_n}_time"] in normalized and normalized[cls._meta.combined[f"{f_n}_time"]] is not None:
normalized[cls._meta.combined[f"{f_n}_date"]] = utils.timestamp_to_date(normalized[cls._meta.combined[f"{f_n}_time"]])
normalized[cls._meta.combined[f"{f_n}_date"]] = timestamp_to_date(normalized[cls._meta.combined[f"{f_n}_time"]])
return normalized
@ -331,9 +333,9 @@ class RetryingPooledPostgresqlDatabase(PooledPostgresqlDatabase):
# 08006: connection_failure
# 08003: connection_does_not_exist
# 08000: connection_exception
error_messages = ['connection', 'server closed', 'connection refused',
error_messages = ['connection', 'server closed', 'connection refused',
'no connection to the server', 'terminating connection']
should_retry = any(msg in str(e).lower() for msg in error_messages)
if should_retry and attempt < self.max_retries:
@ -366,7 +368,7 @@ class RetryingPooledPostgresqlDatabase(PooledPostgresqlDatabase):
except (OperationalError, InterfaceError) as e:
error_messages = ['connection', 'server closed', 'connection refused',
'no connection to the server', 'terminating connection']
should_retry = any(msg in str(e).lower() for msg in error_messages)
if should_retry and attempt < self.max_retries:
@ -394,7 +396,7 @@ class BaseDataBase:
def __init__(self):
database_config = settings.DATABASE.copy()
db_name = database_config.pop("name")
pool_config = {
'max_retries': 5,
'retry_delay': 1,

View File

@ -18,7 +18,7 @@ from functools import reduce
from playhouse.pool import PooledMySQLDatabase
from api.utils import current_timestamp, timestamp_to_date
from common.time_utils import current_timestamp, timestamp_to_date
from api.db.db_models import DB, DataBaseModel

View File

@ -19,7 +19,7 @@ import peewee
from api.db.db_models import DB, API4Conversation, APIToken, Dialog
from api.db.services.common_service import CommonService
from api.utils import current_timestamp, datetime_format
from common.time_utils import current_timestamp, datetime_format
class APITokenService(CommonService):

View File

@ -19,7 +19,8 @@ import peewee
from peewee import InterfaceError, OperationalError
from api.db.db_models import DB
from api.utils import current_timestamp, datetime_format, get_uuid
from api.utils import get_uuid
from common.time_utils import current_timestamp, datetime_format
def retry_db_operation(func):
@retry(

View File

@ -34,15 +34,16 @@ from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.langfuse_service import TenantLangfuseService
from api.db.services.llm_service import LLMBundle
from api.db.services.tenant_llm_service import TenantLLMService
from api.utils import current_timestamp, datetime_format
from common.time_utils import current_timestamp, datetime_format
from graphrag.general.mind_map_extractor import MindMapExtractor
from rag.app.resume import forbidden_select_fields4resume
from rag.app.tag import label_question
from rag.nlp.search import index_name
from rag.prompts.generator import chunks_format, citation_prompt, cross_languages, full_question, kb_prompt, keyword_extraction, message_fit_in, \
gen_meta_filter, PROMPT_JINJA_ENV, ASK_SUMMARY
from rag.utils import num_tokens_from_string, rmSpace
from rag.utils import num_tokens_from_string
from rag.utils.tavily_conn import Tavily
from common.string_utils import remove_redundant_spaces
class DialogService(CommonService):
@ -706,7 +707,7 @@ Please write the SQL, only SQL, without any other explanations or text.
line = "|" + "|".join(["------" for _ in range(len(column_idx))]) + ("|------|" if docid_idx and docid_idx else "")
rows = ["|" + "|".join([rmSpace(str(r[i])) for i in column_idx]).replace("None", " ") + "|" for r in tbl["rows"]]
rows = ["|" + "|".join([remove_redundant_spaces(str(r[i])) for i in column_idx]).replace("None", " ") + "|" for r in tbl["rows"]]
rows = [r for r in rows if re.sub(r"[ |]+", "", r)]
if quota:
rows = "\n".join([r + f" ##{ii}$$ |" for ii, r in enumerate(rows)])

View File

@ -34,7 +34,8 @@ from api.db.db_models import DB, Document, Knowledgebase, Task, Tenant, UserTena
from api.db.db_utils import bulk_insert_into_db
from api.db.services.common_service import CommonService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils import current_timestamp, get_format_time, get_uuid
from api.utils import get_uuid
from common.time_utils import current_timestamp, get_format_time
from rag.nlp import rag_tokenizer, search
from rag.settings import get_svr_queue_name, SVR_CONSUMER_GROUP_NAME
from rag.utils.redis_conn import REDIS_CONN

View File

@ -20,7 +20,7 @@ from api.db.db_models import DB
from api.db.db_models import File, File2Document
from api.db.services.common_service import CommonService
from api.db.services.document_service import DocumentService
from api.utils import current_timestamp, datetime_format
from common.time_utils import current_timestamp, datetime_format
class File2DocumentService(CommonService):

View File

@ -20,7 +20,7 @@ from peewee import fn, JOIN
from api.db import StatusEnum, TenantPermission
from api.db.db_models import DB, Document, Knowledgebase, User, UserTenant, UserCanvas
from api.db.services.common_service import CommonService
from api.utils import current_timestamp, datetime_format
from common.time_utils import current_timestamp, datetime_format
class KnowledgebaseService(CommonService):

View File

@ -20,7 +20,7 @@ import peewee
from api.db.db_models import DB, TenantLangfuse
from api.db.services.common_service import CommonService
from api.utils import current_timestamp, datetime_format
from common.time_utils import current_timestamp, datetime_format
class TenantLangfuseService(CommonService):

View File

@ -16,6 +16,7 @@
import inspect
import logging
import re
from rag.utils import num_tokens_from_string
from functools import partial
from typing import Generator
from api.db.db_models import LLM
@ -59,21 +60,6 @@ def get_init_tenant_llm(user_id):
}
)
if settings.LIGHTEN != 1:
for buildin_embedding_model in settings.BUILTIN_EMBEDDING_MODELS:
mdlnm, fid = TenantLLMService.split_model_name_and_factory(buildin_embedding_model)
tenant_llm.append(
{
"tenant_id": user_id,
"llm_factory": fid,
"llm_name": mdlnm,
"model_type": "embedding",
"api_key": "",
"api_base": "",
"max_tokens": 1024 if buildin_embedding_model == "BAAI/bge-large-zh-v1.5@BAAI" else 512,
}
)
unique = {}
for item in tenant_llm:
key = (item["tenant_id"], item["llm_factory"], item["llm_name"])
@ -94,9 +80,19 @@ class LLMBundle(LLM4Tenant):
def encode(self, texts: list):
if self.langfuse:
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="encode", model=self.llm_name, input={"texts": texts})
generation = self.langfuse.start_generation(trace_context=self.trace_context, name="encode", model=self.llm_name, input={"texts": texts})
safe_texts = []
for text in texts:
token_size = num_tokens_from_string(text)
if token_size > self.max_length:
target_len = int(self.max_length * 0.95)
safe_texts.append(text[:target_len])
else:
safe_texts.append(text)
embeddings, used_tokens = self.mdl.encode(safe_texts)
embeddings, used_tokens = self.mdl.encode(texts)
llm_name = getattr(self, "llm_name", None)
if not TenantLLMService.increase_usage(self.tenant_id, self.llm_type, used_tokens, llm_name):
logging.error("LLMBundle.encode can't update token usage for {}/EMBEDDING used_tokens: {}".format(self.tenant_id, used_tokens))

View File

@ -27,7 +27,8 @@ from api.db.services.common_service import CommonService
from api.db.services.document_service import DocumentService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.task_service import GRAPH_RAPTOR_FAKE_DOC_ID
from api.utils import current_timestamp, datetime_format, get_uuid
from api.utils import get_uuid
from common.time_utils import current_timestamp, datetime_format
class PipelineOperationLogService(CommonService):

View File

@ -20,7 +20,7 @@ from peewee import fn
from api.db import StatusEnum
from api.db.db_models import DB, Search, User
from api.db.services.common_service import CommonService
from api.utils import current_timestamp, datetime_format
from common.time_utils import current_timestamp, datetime_format
class SearchService(CommonService):

View File

@ -27,7 +27,8 @@ from api.db import StatusEnum, FileType, TaskStatus
from api.db.db_models import Task, Document, Knowledgebase, Tenant
from api.db.services.common_service import CommonService
from api.db.services.document_service import DocumentService
from api.utils import current_timestamp, get_uuid
from api.utils import get_uuid
from common.time_utils import current_timestamp
from deepdoc.parser.excel_parser import RAGFlowExcelParser
from rag.settings import get_svr_queue_name
from rag.utils.storage_factory import STORAGE_IMPL

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import logging
from langfuse import Langfuse
from api import settings
@ -112,24 +113,17 @@ class TenantLLMService(CommonService):
model_config = cls.get_api_key(tenant_id, mdlnm)
if model_config:
model_config = model_config.to_dict()
llm = LLMService.query(llm_name=mdlnm) if not fid else LLMService.query(llm_name=mdlnm, fid=fid)
if not llm and fid: # in some cases the fid seems to mismatch
llm = LLMService.query(llm_name=mdlnm)
if llm:
model_config["is_tools"] = llm[0].is_tools
if not model_config:
if llm_type in [LLMType.EMBEDDING, LLMType.RERANK]:
llm = LLMService.query(llm_name=mdlnm) if not fid else LLMService.query(llm_name=mdlnm, fid=fid)
if llm and llm[0].fid in ["Youdao", "FastEmbed", "BAAI"]:
model_config = {"llm_factory": llm[0].fid, "api_key": "", "llm_name": mdlnm, "api_base": ""}
if not model_config:
if mdlnm == "flag-embedding":
model_config = {"llm_factory": "Tongyi-Qianwen", "api_key": "", "llm_name": llm_name,
"api_base": ""}
else:
if not mdlnm:
raise LookupError(f"Type of {llm_type} model is not set.")
raise LookupError("Model({}) not authorized".format(mdlnm))
elif llm_type == LLMType.EMBEDDING and fid == 'Builtin' and "tei-" in os.getenv("COMPOSE_PROFILES", "") and mdlnm == os.getenv('TEI_MODEL', ''):
embedding_cfg = settings.EMBEDDING_CFG
model_config = {"llm_factory": 'Builtin', "api_key": embedding_cfg["api_key"], "llm_name": mdlnm, "api_base": embedding_cfg["base_url"]}
else:
raise LookupError(f"Model({mdlnm}@{fid}) not authorized")
llm = LLMService.query(llm_name=mdlnm) if not fid else LLMService.query(llm_name=mdlnm, fid=fid)
if not llm and fid: # in some cases the fid seems to mismatch
llm = LLMService.query(llm_name=mdlnm)
if llm:
model_config["is_tools"] = llm[0].is_tools
return model_config
@classmethod
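
Both TEI branches above (in `list_app` and here in the model-config lookup) are gated on the same two environment variables. A sketch of the expected shape; the profile name `tei-cpu` is an assumption, not taken from the diff:

```
import os

# Hypothetical environment mirroring the checks above.
os.environ["COMPOSE_PROFILES"] = "tei-cpu"          # any profile containing "tei-"
os.environ["TEI_MODEL"] = "BAAI/bge-small-en-v1.5"  # must equal the model's llm_name

def is_tei_builtin(fid: str, llm_name: str) -> bool:
    return (
        fid == "Builtin"
        and "tei-" in os.getenv("COMPOSE_PROFILES", "")
        and llm_name == os.getenv("TEI_MODEL", "")
    )

print(is_tei_builtin("Builtin", "BAAI/bge-small-en-v1.5"))  # -> True
```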

View File

@ -24,7 +24,8 @@ from api.db import UserTenantRole
from api.db.db_models import DB, UserTenant
from api.db.db_models import User, Tenant
from api.db.services.common_service import CommonService
from api.utils import get_uuid, current_timestamp, datetime_format
from api.utils import get_uuid
from common.time_utils import current_timestamp, datetime_format
from api.db import StatusEnum
from rag.settings import MINIO

View File

@ -28,8 +28,6 @@ from api.utils.configs import decrypt_database_config, get_base_config
from api.utils.file_utils import get_project_base_directory
from rag.nlp import search
LIGHTEN = int(os.environ.get("LIGHTEN", "0"))
LLM = None
LLM_FACTORY = None
LLM_BASE_URL = None
@ -77,8 +75,6 @@ SANDBOX_ENABLED = 0
SANDBOX_HOST = None
STRONG_TEST_COUNT = int(os.environ.get("STRONG_TEST_COUNT", "8"))
BUILTIN_EMBEDDING_MODELS = ["BAAI/bge-large-zh-v1.5@BAAI", "maidalun1020/bce-embedding-base_v1@Youdao"]
SMTP_CONF = None
MAIL_SERVER = ""
MAIL_PORT = 000
@ -109,8 +105,7 @@ def get_or_create_secret_key():
def init_settings():
global LLM, LLM_FACTORY, LLM_BASE_URL, LIGHTEN, DATABASE_TYPE, DATABASE, FACTORY_LLM_INFOS, REGISTER_ENABLED
LIGHTEN = int(os.environ.get("LIGHTEN", "0"))
global LLM, LLM_FACTORY, LLM_BASE_URL, DATABASE_TYPE, DATABASE, FACTORY_LLM_INFOS, REGISTER_ENABLED
DATABASE_TYPE = os.getenv("DB_TYPE", "mysql")
DATABASE = decrypt_database_config(name=DATABASE_TYPE)
LLM = get_base_config("user_default_llm", {}) or {}
@ -130,8 +125,6 @@ def init_settings():
global CHAT_MDL, EMBEDDING_MDL, RERANK_MDL, ASR_MDL, IMAGE2TEXT_MDL
global CHAT_CFG, EMBEDDING_CFG, RERANK_CFG, ASR_CFG, IMAGE2TEXT_CFG
if not LIGHTEN:
EMBEDDING_MDL = BUILTIN_EMBEDDING_MODELS[0]
global API_KEY, PARSERS, HOST_IP, HOST_PORT, SECRET_KEY
API_KEY = LLM.get("api_key")
@ -152,7 +145,7 @@ def init_settings():
IMAGE2TEXT_CFG = _resolve_per_model_config(image2text_entry, LLM_FACTORY, API_KEY, LLM_BASE_URL)
CHAT_MDL = CHAT_CFG.get("model", "") or ""
EMBEDDING_MDL = EMBEDDING_CFG.get("model", "") or ""
EMBEDDING_MDL = os.getenv("TEI_MODEL", "BAAI/bge-small-en-v1.5") if "tei-" in os.getenv("COMPOSE_PROFILES", "") else ""
RERANK_MDL = RERANK_CFG.get("model", "") or ""
ASR_MDL = ASR_CFG.get("model", "") or ""
IMAGE2TEXT_MDL = IMAGE2TEXT_CFG.get("model", "") or ""

View File

@ -14,11 +14,9 @@
# limitations under the License.
#
import base64
import datetime
import hashlib
import os
import socket
import time
import uuid
import requests
@ -26,26 +24,6 @@ import importlib
from .common import string_to_bytes
def current_timestamp():
return int(time.time() * 1000)
def timestamp_to_date(timestamp, format_string="%Y-%m-%d %H:%M:%S"):
if not timestamp:
timestamp = time.time()
timestamp = int(timestamp) / 1000
time_array = time.localtime(timestamp)
str_date = time.strftime(format_string, time_array)
return str_date
def date_string_to_timestamp(time_str, format_string="%Y-%m-%d %H:%M:%S"):
time_array = time.strptime(time_str, format_string)
time_stamp = int(time.mktime(time_array) * 1000)
return time_stamp
def get_lan_ip():
if os.name != "nt":
import fcntl
@ -94,26 +72,6 @@ def get_uuid():
return uuid.uuid1().hex
def datetime_format(date_time: datetime.datetime) -> datetime.datetime:
return datetime.datetime(date_time.year, date_time.month, date_time.day,
date_time.hour, date_time.minute, date_time.second)
def get_format_time() -> datetime.datetime:
return datetime_format(datetime.datetime.now())
def str2date(date_time: str):
return datetime.datetime.strptime(date_time, '%Y-%m-%d')
def elapsed2time(elapsed):
seconds = elapsed / 1000
minuter, second = divmod(seconds, 60)
hour, minuter = divmod(minuter, 60)
return '%02d:%02d:%02d' % (hour, minuter, second)
def download_img(url):
if not url:
return ""
@ -123,10 +81,5 @@ def download_img(url):
"base64," + base64.b64encode(response.content).decode("utf-8")
def delta_seconds(date_string: str):
dt = datetime.datetime.strptime(date_string, "%Y-%m-%d %H:%M:%S")
return (datetime.datetime.now() - dt).total_seconds()
def hash_str2int(line: str, mod: int = 10 ** 8) -> int:
return int(hashlib.sha1(line.encode("utf-8")).hexdigest(), 16) % mod

View File

@ -43,7 +43,6 @@ from flask_login import current_user
from flask import (
request as flask_request,
)
from itsdangerous import URLSafeTimedSerializer
from peewee import OperationalError
from werkzeug.http import HTTP_STATUS_CODES
@ -51,8 +50,7 @@ from api import settings
from api.constants import REQUEST_MAX_WAIT_SEC, REQUEST_WAIT_SEC
from api.db import ActiveEnum
from api.db.db_models import APIToken
from api.utils.json import CustomJSONEncoder, json_dumps
from api.utils import get_uuid
from api.utils.json_encode import CustomJSONEncoder, json_dumps
from rag.utils.mcp_tool_call_conn import MCPToolCallSession, close_multiple_mcp_toolcall_sessions
requests.models.complexjson.dumps = functools.partial(json.dumps, cls=CustomJSONEncoder)
@ -410,9 +408,9 @@ def get_error_operating_result(message="Operating error"):
return get_result(code=settings.RetCode.OPERATING_ERROR, message=message)
def generate_confirmation_token(tenant_id):
serializer = URLSafeTimedSerializer(tenant_id)
return "ragflow-" + serializer.dumps(get_uuid(), salt=tenant_id)[2:34]
def generate_confirmation_token():
import secrets
return "ragflow-" + secrets.token_urlsafe(32)
def get_parser_config(chunk_method, parser_config):
@ -588,7 +586,7 @@ def verify_embedding_availability(embd_id: str, tenant_id: str) -> tuple[bool, R
llm["llm_name"] == llm_name and llm["llm_factory"] == llm_factory and llm["model_type"] == "embedding" for
llm in tenant_llms)
is_builtin_model = embd_id in settings.BUILTIN_EMBEDDING_MODELS
is_builtin_model = llm_factory == 'Builtin'
if not (is_builtin_model or is_tenant_model or in_llm_service):
return False, get_error_argument_result(f"Unsupported model: <{embd_id}>")
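
For illustration, the tenant-independent token introduced above has this shape (a sketch; `secrets.token_urlsafe(32)` encodes 32 random bytes as 43 URL-safe characters):

```
import secrets

def generate_confirmation_token() -> str:
    # Purely random API token, no longer derived from the tenant id.
    return "ragflow-" + secrets.token_urlsafe(32)

token = generate_confirmation_token()
print(token)       # e.g. ragflow-Xk3... (prefix + 43 URL-safe characters)
print(len(token))  # 8 + 43 = 51
```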

View File

@ -14,6 +14,12 @@
# limitations under the License.
#
import threading
import subprocess
import sys
import os
import logging
def string_to_bytes(string):
return string if isinstance(
string, bytes) else string.encode(encoding="utf-8")
@ -44,3 +50,48 @@ def convert_bytes(size_in_bytes: int) -> str:
return f"{size:.1f} {units[i]}"
else:
return f"{size:.2f} {units[i]}"
def once(func):
"""
A thread-safe decorator that ensures the decorated function runs exactly once,
caching and returning its result for all subsequent calls. This prevents
race conditions in multi-threaded environments by using a lock to protect
the execution state.
Args:
func (callable): The function to be executed only once.
Returns:
callable: A wrapper function that executes `func` on the first call
and returns the cached result thereafter.
Example:
@once
def compute_expensive_value():
print("Computing...")
return 42
# First call: executes and prints
# Subsequent calls: return 42 without executing
"""
executed = False
result = None
lock = threading.Lock()
def wrapper(*args, **kwargs):
nonlocal executed, result
with lock:
if not executed:
executed = True
result = func(*args, **kwargs)
return result
return wrapper
@once
def pip_install_torch():
device = os.getenv("DEVICE", "cpu")
if device=="cpu":
return
logging.info("Installing pytorch")
pkg_names = ["torch>=2.5.0,<3.0.0"]
subprocess.check_call([sys.executable, "-m", "pip", "install", *pkg_names])

View File

@ -34,8 +34,6 @@ def get_ragflow_version() -> str:
RAGFLOW_VERSION_INFO = f.read().strip()
else:
RAGFLOW_VERSION_INFO = get_closest_tag_and_count()
LIGHTEN = int(os.environ.get("LIGHTEN", "0"))
RAGFLOW_VERSION_INFO += " slim" if LIGHTEN == 1 else " full"
return RAGFLOW_VERSION_INFO

15
common/__init__.py Normal file
View File

@ -0,0 +1,15 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

46
common/float_utils.py Normal file
View File

@ -0,0 +1,46 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
def get_float(v):
"""
Convert a value to float, handling None and exceptions gracefully.
Attempts to convert the input value to a float. If the value is None or
cannot be converted to float, returns negative infinity as a default value.
Args:
v: The value to convert to float. Can be any type that float() accepts,
or None.
Returns:
float: The converted float value if successful, otherwise float('-inf').
Examples:
>>> get_float("3.14")
3.14
>>> get_float(None)
-inf
>>> get_float("invalid")
-inf
>>> get_float(42)
42.0
"""
if v is None:
return float('-inf')
try:
return float(v)
except Exception:
return float('-inf')

73
common/string_utils.py Normal file
View File

@ -0,0 +1,73 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import re
def remove_redundant_spaces(txt: str):
"""
Remove redundant spaces around punctuation marks while preserving meaningful spaces.
This function performs two main operations:
1. Remove spaces after left-boundary characters (opening brackets, etc.)
2. Remove spaces before right-boundary characters (closing brackets, punctuation, etc.)
Args:
txt (str): Input text to process
Returns:
str: Text with redundant spaces removed
"""
# First pass: Remove spaces after left-boundary characters
# Matches: [non-alphanumeric-and-specific-right-punctuation] + [non-space]
# Removes spaces after characters like '(', '<', and other non-alphanumeric chars
# Examples:
# "( test" → "(test"
txt = re.sub(r"([^a-z0-9.,\)>]) +([^ ])", r"\1\2", txt, flags=re.IGNORECASE)
# Second pass: Remove spaces before right-boundary characters
# Matches: [non-space] + [non-alphanumeric-and-specific-left-punctuation]
# Removes spaces before characters like non-')', non-',', non-'.', and non-alphanumeric chars
# Examples:
# "world !" → "world!"
return re.sub(r"([^ ]) +([^a-z0-9.,\(<])", r"\1\2", txt, flags=re.IGNORECASE)
def clean_markdown_block(text):
"""
Remove Markdown code block syntax from the beginning and end of text.
This function cleans Markdown code blocks by removing:
- Opening ```Markdown tags (with optional whitespace and newlines)
- Closing ``` tags (with optional whitespace and newlines)
Args:
text (str): Input text that may be wrapped in Markdown code blocks
Returns:
str: Cleaned text with Markdown code block syntax removed, and stripped of surrounding whitespace
"""
# Remove opening ```markdown tag with optional whitespace and newlines
# Matches: optional whitespace + ```markdown + optional whitespace + optional newline
text = re.sub(r'^\s*```markdown\s*\n?', '', text)
# Remove closing ``` tag with optional whitespace and newlines
# Matches: optional newline + optional whitespace + ``` + optional whitespace at end
text = re.sub(r'\n?\s*```\s*$', '', text)
# Return text with surrounding whitespace removed
return text.strip()
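
A quick usage sketch of the two helpers above, reusing the examples from their comments (outputs follow from the regexes):

```
from common.string_utils import remove_redundant_spaces, clean_markdown_block

print(remove_redundant_spaces("( test"))   # -> "(test"
print(remove_redundant_spaces("world !"))  # -> "world!"

fence = "`" * 3  # build the fence dynamically to keep this example self-contained
text = f"{fence}markdown\n# Title\n{fence}"
print(clean_markdown_block(text))          # -> "# Title"
```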

126
common/time_utils.py Normal file
View File

@ -0,0 +1,126 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import time
def current_timestamp():
"""
Get the current timestamp in milliseconds.
Returns:
int: Current Unix timestamp in milliseconds (13 digits)
Example:
>>> current_timestamp()
1704067200000
"""
return int(time.time() * 1000)
def timestamp_to_date(timestamp, format_string="%Y-%m-%d %H:%M:%S"):
"""
Convert a timestamp to formatted date string.
Args:
timestamp: Unix timestamp in milliseconds. If None or empty, uses current time.
format_string: Format string for the output date (default: "%Y-%m-%d %H:%M:%S")
Returns:
str: Formatted date string
Example:
>>> timestamp_to_date(1704067200000)
'2024-01-01 08:00:00'
"""
if not timestamp:
timestamp = time.time()
timestamp = int(timestamp) / 1000
time_array = time.localtime(timestamp)
str_date = time.strftime(format_string, time_array)
return str_date
def date_string_to_timestamp(time_str, format_string="%Y-%m-%d %H:%M:%S"):
"""
Convert a date string to timestamp in milliseconds.
Args:
time_str: Date string to convert
format_string: Format of the input date string (default: "%Y-%m-%d %H:%M:%S")
Returns:
int: Unix timestamp in milliseconds
Example:
>>> date_string_to_timestamp("2024-01-01 00:00:00")
1704067200000
"""
time_array = time.strptime(time_str, format_string)
time_stamp = int(time.mktime(time_array) * 1000)
return time_stamp
def datetime_format(date_time: datetime.datetime) -> datetime.datetime:
"""
Normalize a datetime object by removing microsecond component.
Creates a new datetime object with only year, month, day, hour, minute, second.
Microseconds are set to 0.
Args:
date_time: datetime object to normalize
Returns:
datetime.datetime: New datetime object without microseconds
Example:
>>> dt = datetime.datetime(2024, 1, 1, 12, 30, 45, 123456)
>>> datetime_format(dt)
datetime.datetime(2024, 1, 1, 12, 30, 45)
"""
return datetime.datetime(date_time.year, date_time.month, date_time.day,
date_time.hour, date_time.minute, date_time.second)
def get_format_time() -> datetime.datetime:
"""
Get current datetime normalized without microseconds.
Returns:
datetime.datetime: Current datetime with microseconds set to 0
Example:
>>> get_format_time()
datetime.datetime(2024, 1, 1, 12, 30, 45)
"""
return datetime_format(datetime.datetime.now())
def delta_seconds(date_string: str):
"""
Calculate seconds elapsed from a given date string to now.
Args:
date_string: Date string in "YYYY-MM-DD HH:MM:SS" format
Returns:
float: Number of seconds between the given date and current time
Example:
>>> delta_seconds("2024-01-01 12:00:00")
3600.0 # If current time is 2024-01-01 13:00:00
"""
dt = datetime.datetime.strptime(date_string, "%Y-%m-%d %H:%M:%S")
return (datetime.datetime.now() - dt).total_seconds()

View File

@ -227,8 +227,8 @@
"llm": [
{
"llm_name": "qwen3-8b",
"tags": "LLM,CHAT,131k",
"max_tokens": 131000,
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
@ -241,15 +241,15 @@
},
{
"llm_name": "qwen3-32b",
"tags": "LLM,CHAT,131k",
"max_tokens": 131000,
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "kimi-k2-instruct",
"tags": "LLM,CHAT,128K",
"max_tokens": 128000,
"llm_name": "kimi-k2-instruct-0905",
"tags": "LLM,CHAT,256K",
"max_tokens": 256000,
"model_type": "chat",
"is_tools": true
},
@ -280,6 +280,48 @@
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "hunyuan-a13b-instruct",
"tags": "LLM,CHAT,256k",
"max_tokens": 256000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "qwen3-next-80b-a3b-instruct",
"tags": "LLM,CHAT,1024k",
"max_tokens": 1024000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-v3.2-exp",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "deepseek-v3.1-terminus",
"tags": "LLM,CHAT,128k",
"max_tokens": 128000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "qwen3-vl-235b-a22b-instruct",
"tags": "LLM,CHAT,262k",
"max_tokens": 262000,
"model_type": "chat",
"is_tools": true
},
{
"llm_name": "qwen3-vl-30b-a3b-instruct",
"tags": "LLM,CHAT,262k",
"max_tokens": 262000,
"model_type": "chat",
"is_tools": true
}
]
},
@ -932,20 +974,6 @@
"status": "1",
"llm": []
},
{
"name": "Youdao",
"logo": "",
"tags": "TEXT EMBEDDING",
"status": "1",
"llm": [
{
"llm_name": "maidalun1020/bce-embedding-base_v1",
"tags": "TEXT EMBEDDING,",
"max_tokens": 512,
"model_type": "embedding"
}
]
},
{
"name": "DeepSeek",
"logo": "",
@ -1099,15 +1127,27 @@
]
},
{
"name": "BAAI",
"name": "Builtin",
"logo": "",
"tags": "TEXT EMBEDDING",
"status": "1",
"llm": [
{
"llm_name": "BAAI/bge-large-zh-v1.5",
"tags": "TEXT EMBEDDING,",
"max_tokens": 1024,
"llm_name": "BAAI/bge-small-en-v1.5",
"tags": "TEXT EMBEDDING,512",
"max_tokens": 512,
"model_type": "embedding"
},
{
"llm_name": "BAAI/bge-m3",
"tags": "TEXT EMBEDDING,8k",
"max_tokens": 8192,
"model_type": "embedding"
},
{
"llm_name": "Qwen/Qwen3-Embedding-0.6B",
"tags": "TEXT EMBEDDING,32k",
"max_tokens": 32768,
"model_type": "embedding"
}
]

View File

@ -32,6 +32,11 @@ redis:
db: 1
password: 'infini_rag_flow'
host: 'localhost:6379'
user_default_llm:
default_models:
embedding_model:
api_key: 'xxx'
base_url: 'http://localhost:6380'
# postgres:
# name: 'rag_flow'
# user: 'rag_flow'
@ -77,7 +82,8 @@ redis:
# api_key: 'xxxx'
# base_url: 'https://api.xx.com'
# embedding_model:
# name: 'bge-m3'
# api_key: 'xxx'
# base_url: 'http://localhost:6380'
# rerank_model: 'bge-reranker-v2'
# asr_model:
# model: 'whisper-large-v3' # alias of name
@ -127,3 +133,9 @@ redis:
# - "RAGFlow" # display name
# - "" # sender email address
# mail_frontend_url: "https://your-frontend.example.com"
# tcadp_config:
# secret_id: 'tencent_secret_id'
# secret_key: 'tencent_secret_key'
# region: 'tencent_region'
# table_result_type: '1'
# markdown_image_response_type: '1'

View File

@ -0,0 +1,344 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import annotations
import logging
import re
from dataclasses import dataclass
from enum import Enum
from io import BytesIO
from os import PathLike
from pathlib import Path
from typing import Any, Callable, Iterable, Optional
import pdfplumber
from PIL import Image
try:
from docling.document_converter import DocumentConverter
except Exception:
DocumentConverter = None
try:
from deepdoc.parser.pdf_parser import RAGFlowPdfParser
except Exception:
class RAGFlowPdfParser:
pass
class DoclingContentType(str, Enum):
IMAGE = "image"
TABLE = "table"
TEXT = "text"
EQUATION = "equation"
@dataclass
class _BBox:
page_no: int
x0: float
y0: float
x1: float
y1: float
class DoclingParser(RAGFlowPdfParser):
def __init__(self):
self.logger = logging.getLogger(self.__class__.__name__)
self.page_images: list[Image.Image] = []
self.page_from = 0
self.page_to = 10_000
def check_installation(self) -> bool:
if DocumentConverter is None:
self.logger.warning("[Docling] 'docling' is not importable, please: pip install docling")
return False
try:
_ = DocumentConverter()
return True
except Exception as e:
self.logger.error(f"[Docling] init DocumentConverter failed: {e}")
return False
def __images__(self, fnm, zoomin: int = 1, page_from=0, page_to=600, callback=None):
self.page_from = page_from
self.page_to = page_to
try:
opener = pdfplumber.open(fnm) if isinstance(fnm, (str, PathLike)) else pdfplumber.open(BytesIO(fnm))
with opener as pdf:
pages = pdf.pages[page_from:page_to]
self.page_images = [p.to_image(resolution=72 * zoomin, antialias=True).original for p in pages]
except Exception as e:
self.page_images = []
self.logger.exception(e)
def _make_line_tag(self, bbox: _BBox) -> str:
if bbox is None:
return ""
x0, x1, top, bott = bbox.x0, bbox.x1, bbox.y0, bbox.y1
if hasattr(self, "page_images") and self.page_images and len(self.page_images) >= bbox.page_no:
_, page_height = self.page_images[bbox.page_no - 1].size
top, bott = page_height - top, page_height - bott
return "@@{}\t{:.1f}\t{:.1f}\t{:.1f}\t{:.1f}##".format(
bbox.page_no, x0, x1, top, bott
)
@staticmethod
def extract_positions(txt: str) -> list[tuple[list[int], float, float, float, float]]:
poss = []
for tag in re.findall(r"@@[0-9-]+\t[0-9.\t]+##", txt):
pn, left, right, top, bottom = tag.strip("#").strip("@").split("\t")
left, right, top, bottom = float(left), float(right), float(top), float(bottom)
poss.append(([int(p) - 1 for p in pn.split("-")], left, right, top, bottom))
return poss
def crop(self, text: str, ZM: int = 1, need_position: bool = False):
imgs = []
poss = self.extract_positions(text)
if not poss:
return (None, None) if need_position else None
GAP = 6
pos = poss[0]
poss.insert(0, ([pos[0][0]], pos[1], pos[2], max(0, pos[3] - 120), max(pos[3] - GAP, 0)))
pos = poss[-1]
poss.append(([pos[0][-1]], pos[1], pos[2], min(self.page_images[pos[0][-1]].size[1], pos[4] + GAP), min(self.page_images[pos[0][-1]].size[1], pos[4] + 120)))
positions = []
for ii, (pns, left, right, top, bottom) in enumerate(poss):
if bottom <= top:
bottom = top + 4
img0 = self.page_images[pns[0]]
x0, y0, x1, y1 = int(left), int(top), int(right), int(min(bottom, img0.size[1]))
crop0 = img0.crop((x0, y0, x1, y1))
imgs.append(crop0)
if 0 < ii < len(poss)-1:
positions.append((pns[0] + self.page_from, x0, x1, y0, y1))
remain_bottom = bottom - img0.size[1]
for pn in pns[1:]:
if remain_bottom <= 0:
break
page = self.page_images[pn]
x0, y0, x1, y1 = int(left), 0, int(right), int(min(remain_bottom, page.size[1]))
cimgp = page.crop((x0, y0, x1, y1))
imgs.append(cimgp)
if 0 < ii < len(poss) - 1:
positions.append((pn + self.page_from, x0, x1, y0, y1))
remain_bottom -= page.size[1]
if not imgs:
return (None, None) if need_position else None
height = sum(i.size[1] + GAP for i in imgs)
width = max(i.size[0] for i in imgs)
pic = Image.new("RGB", (width, int(height)), (245, 245, 245))
h = 0
for ii, img in enumerate(imgs):
if ii == 0 or ii + 1 == len(imgs):
img = img.convert("RGBA")
overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
overlay.putalpha(128)
img = Image.alpha_composite(img, overlay).convert("RGB")
pic.paste(img, (0, int(h)))
h += img.size[1] + GAP
return (pic, positions) if need_position else pic
def _iter_doc_items(self, doc) -> Iterable[tuple[str, Any, Optional[_BBox]]]:
for t in getattr(doc, "texts", []):
parent=getattr(t, "parent", "")
ref=getattr(parent,"cref","")
label=getattr(t, "label", "")
if (label in ("section_header","text",) and ref in ("#/body",)) or label in ("list_item",):
text = getattr(t, "text", "") or ""
bbox = None
if getattr(t, "prov", None):
pn = getattr(t.prov[0], "page_no", None)
bb = getattr(t.prov[0], "bbox", None)
bb = [getattr(bb, "l", None),getattr(bb, "t", None),getattr(bb, "r", None),getattr(bb, "b", None)]
if pn and bb and len(bb) == 4:
bbox = _BBox(page_no=int(pn), x0=bb[0], y0=bb[1], x1=bb[2], y1=bb[3])
yield (DoclingContentType.TEXT.value, text, bbox)
for item in getattr(doc, "texts", []):
if getattr(item, "label", "") in ("FORMULA",):
text = getattr(item, "text", "") or ""
bbox = None
if getattr(item, "prov", None):
pn = getattr(item.prov, "page_no", None)
bb = getattr(item.prov, "bbox", None)
bb = [getattr(bb, "l", None),getattr(bb, "t", None),getattr(bb, "r", None),getattr(bb, "b", None)]
if pn and bb and len(bb) == 4:
bbox = _BBox(int(pn), bb[0], bb[1], bb[2], bb[3])
yield (DoclingContentType.EQUATION.value, text, bbox)
def _transfer_to_sections(self, doc) -> list[tuple[str, str]]:
"""
和 MinerUParser 保持一致:返回 [(section_text, line_tag), ...]
"""
sections: list[tuple[str, str]] = []
for typ, payload, bbox in self._iter_doc_items(doc):
if typ == DoclingContentType.TEXT.value:
section = payload.strip()
if not section:
continue
elif typ == DoclingContentType.EQUATION.value:
section = payload.strip()
else:
continue
tag = self._make_line_tag(bbox) if isinstance(bbox, _BBox) else ""
sections.append((section, tag))
return sections
def cropout_docling_table(self, page_no: int, bbox: tuple[float, float, float, float], zoomin: int = 1):
if not getattr(self, "page_images", None):
return None, ""
idx = (page_no - 1) - getattr(self, "page_from", 0)
if idx < 0 or idx >= len(self.page_images):
return None, ""
page_img = self.page_images[idx]
W, H = page_img.size
left, top, right, bott = bbox
x0 = float(left)
y0 = float(H-top)
x1 = float(right)
y1 = float(H-bott)
x0, y0 = max(0.0, min(x0, W - 1)), max(0.0, min(y0, H - 1))
x1, y1 = max(x0 + 1.0, min(x1, W)), max(y0 + 1.0, min(y1, H))
try:
crop = page_img.crop((int(x0), int(y0), int(x1), int(y1))).convert("RGB")
except Exception:
return None, ""
pos = (page_no-1 if page_no>0 else 0, x0, x1, y0, y1)
return crop, [pos]
def _transfer_to_tables(self, doc):
tables = []
for tab in getattr(doc, "tables", []):
img = None
positions = ""
if getattr(tab, "prov", None):
pn = getattr(tab.prov[0], "page_no", None)
bb = getattr(tab.prov[0], "bbox", None)
if pn is not None and bb is not None:
left = getattr(bb, "l", None)
top = getattr(bb, "t", None)
right = getattr(bb, "r", None)
bott = getattr(bb, "b", None)
if None not in (left, top, right, bott):
img, positions = self.cropout_docling_table(int(pn), (float(left), float(top), float(right), float(bott)))
html = ""
try:
html = tab.export_to_html(doc=doc)
except Exception:
pass
tables.append(((img, html), positions if positions else ""))
for pic in getattr(doc, "pictures", []):
img = None
positions = ""
if getattr(pic, "prov", None):
pn = getattr(pic.prov[0], "page_no", None)
bb = getattr(pic.prov[0], "bbox", None)
if pn is not None and bb is not None:
left = getattr(bb, "l", None)
top = getattr(bb, "t", None)
right = getattr(bb, "r", None)
bott = getattr(bb, "b", None)
if None not in (left, top, right, bott):
img, positions = self.cropout_docling_table(int(pn), (float(left), float(top), float(right), float(bott)))
captions = ""
try:
captions = pic.caption_text(doc=doc)
except Exception:
pass
tables.append(((img, [captions]), positions if positions else ""))
return tables
def parse_pdf(
self,
filepath: str | PathLike[str],
binary: BytesIO | bytes | None = None,
callback: Optional[Callable] = None,
*,
output_dir: Optional[str] = None,
lang: Optional[str] = None,
method: str = "auto",
delete_output: bool = True,
):
if not self.check_installation():
raise RuntimeError("Docling not available, please install `docling`")
if binary is not None:
tmpdir = Path(output_dir) if output_dir else Path.cwd() / ".docling_tmp"
tmpdir.mkdir(parents=True, exist_ok=True)
name = Path(filepath).name or "input.pdf"
tmp_pdf = tmpdir / name
with open(tmp_pdf, "wb") as f:
if isinstance(binary, (bytes, bytearray)):
f.write(binary)
else:
f.write(binary.getbuffer())
src_path = tmp_pdf
else:
src_path = Path(filepath)
if not src_path.exists():
raise FileNotFoundError(f"PDF not found: {src_path}")
if callback:
callback(0.1, f"[Docling] Converting: {src_path}")
try:
self.__images__(str(src_path), zoomin=1)
except Exception as e:
self.logger.warning(f"[Docling] render pages failed: {e}")
conv = DocumentConverter()
conv_res = conv.convert(str(src_path))
doc = conv_res.document
if callback:
callback(0.7, f"[Docling] Parsed doc: {getattr(doc, 'num_pages', 'n/a')} pages")
sections = self._transfer_to_sections(doc)
tables = self._transfer_to_tables(doc)
if callback:
callback(0.95, f"[Docling] Sections: {len(sections)}, Tables: {len(tables)}")
if binary is not None and delete_output:
try:
Path(src_path).unlink(missing_ok=True)
except Exception:
pass
if callback:
callback(1.0, "[Docling] Done.")
return sections, tables
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
parser = DoclingParser()
print("Docling available:", parser.check_installation())
sections, tables = parser.parse_pdf(filepath="test_docling/toc.pdf", binary=None)
print(len(sections), len(tables))
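
The tags emitted by `_make_line_tag` and parsed by `extract_positions` follow an `@@page\tx0\tx1\ttop\tbottom##` convention. A round-trip sketch with made-up coordinates:

```
import re

def make_line_tag(page_no: int, x0: float, x1: float, top: float, bott: float) -> str:
    # Same format string as DoclingParser._make_line_tag above.
    return "@@{}\t{:.1f}\t{:.1f}\t{:.1f}\t{:.1f}##".format(page_no, x0, x1, top, bott)

def extract_positions(txt: str):
    poss = []
    for tag in re.findall(r"@@[0-9-]+\t[0-9.\t]+##", txt):
        pn, left, right, top, bottom = tag.strip("#").strip("@").split("\t")
        poss.append(([int(p) - 1 for p in pn.split("-")],
                     float(left), float(right), float(top), float(bottom)))
    return poss

tag = make_line_tag(3, 10.0, 200.0, 30.0, 60.0)
print(tag)                     # -> "@@3\t10.0\t200.0\t30.0\t60.0##"
print(extract_positions(tag))  # -> [([2], 10.0, 200.0, 30.0, 60.0)]
```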

View File

@ -123,7 +123,12 @@ class RAGFlowExcelParser:
for sheetname in wb.sheetnames:
ws = wb[sheetname]
rows = list(ws.rows)
try:
rows = list(ws.rows)
except Exception as e:
logging.warning(f"Skip sheet '{sheetname}' due to rows access error: {e}")
continue
if not rows:
continue
@ -170,7 +175,11 @@ class RAGFlowExcelParser:
res = []
for sheetname in wb.sheetnames:
ws = wb[sheetname]
rows = list(ws.rows)
try:
rows = list(ws.rows)
except Exception as e:
logging.warning(f"Skip sheet '{sheetname}' due to rows access error: {e}")
continue
if not rows:
continue
ti = list(rows[0])
@ -193,9 +202,14 @@ class RAGFlowExcelParser:
if fnm.split(".")[-1].lower().find("xls") >= 0:
wb = RAGFlowExcelParser._load_excel_to_workbook(BytesIO(binary))
total = 0
for sheetname in wb.sheetnames:
ws = wb[sheetname]
total += len(list(ws.rows))
try:
ws = wb[sheetname]
total += len(list(ws.rows))
except Exception as e:
logging.warning(f"Skip sheet '{sheetname}' due to rows access error: {e}")
continue
return total
if fnm.split(".")[-1].lower() in ["csv", "txt"]:

View File

@ -117,11 +117,24 @@ class MarkdownElementExtractor:
self.markdown_content = markdown_content
self.lines = markdown_content.split("\n")
def extract_elements(self):
def get_delimiters(self, delimiters):
toks = re.findall(r"`([^`]+)`", delimiters)
toks = sorted(set(toks), key=lambda x: -len(x))
return "|".join(re.escape(t) for t in toks if t)
def extract_elements(self, delimiter=None):
"""Extract individual elements (headers, code blocks, lists, etc.)"""
sections = []
i = 0
dels = ""
if delimiter:
dels = self.get_delimiters(delimiter)
if len(dels) > 0:
text = "\n".join(self.lines)
parts = re.split(dels, text)
sections = [p.strip() for p in parts if p and p.strip()]
return sections
while i < len(self.lines):
line = self.lines[i]
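
A sketch of what `get_delimiters` produces and how `extract_elements` then splits, assuming delimiters arrive as backtick-quoted tokens per the regex above:

```
import re

def get_delimiters(delimiters: str) -> str:
    toks = re.findall(r"`([^`]+)`", delimiters)
    toks = sorted(set(toks), key=lambda x: -len(x))  # longest first, so "##" wins over "#"
    return "|".join(re.escape(t) for t in toks if t)

pattern = get_delimiters("`##` `;`")
print(pattern)  # -> "\#\#|;" (escaped alternation)
parts = [p.strip() for p in re.split(pattern, "a ## b ; c") if p.strip()]
print(parts)    # -> ['a', 'b', 'c']
```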

View File

@ -27,6 +27,9 @@ from os import PathLike
from pathlib import Path
from queue import Empty, Queue
from typing import Any, Callable, Optional
import requests
import os
import zipfile
import numpy as np
import pdfplumber
@ -45,13 +48,58 @@ class MinerUContentType(StrEnum):
TABLE = "table"
TEXT = "text"
EQUATION = "equation"
CODE = "code"
LIST = "list"
DISCARDED = "discarded"
class MinerUParser(RAGFlowPdfParser):
def __init__(self, mineru_path: str = "mineru"):
def __init__(self, mineru_path: str = "mineru", mineru_api: str = "http://host.docker.internal:9987"):
self.mineru_path = Path(mineru_path)
self.mineru_api = mineru_api.rstrip('/')
self.using_api = False
self.logger = logging.getLogger(self.__class__.__name__)
def _extract_zip_no_root(self, zip_path, extract_to, root_dir):
with zipfile.ZipFile(zip_path, 'r') as zip_ref:
if not root_dir:
files = zip_ref.namelist()
if files and files[0].endswith('/'):
root_dir = files[0]
else:
root_dir = None
if not root_dir or not root_dir.endswith('/'):
self.logger.info(f"[MinerU] No root directory found, extracting all...fff{root_dir}")
zip_ref.extractall(extract_to)
return
root_len = len(root_dir)
for member in zip_ref.infolist():
filename = member.filename
if filename == root_dir:
self.logger.info("[MinerU] Ignore root folder...")
continue
path = filename
if path.startswith(root_dir):
path = path[root_len:]
full_path = os.path.join(extract_to, path)
if member.is_dir():
os.makedirs(full_path, exist_ok=True)
else:
os.makedirs(os.path.dirname(full_path), exist_ok=True)
with open(full_path, 'wb') as f:
f.write(zip_ref.read(filename))
def _is_http_endpoint_valid(self, url, timeout=5):
try:
response = requests.head(url, timeout=timeout, allow_redirects=True)
return response.status_code in [200, 301, 302, 307, 308]
except Exception:
return False
def check_installation(self) -> bool:
subprocess_kwargs = {
"capture_output": True,
@ -78,10 +126,100 @@ class MinerUParser(RAGFlowPdfParser):
logging.warning("[MinerU] MinerU not found. Please install it via: pip install -U 'mineru[core]'")
except Exception as e:
logging.error(f"[MinerU] Unexpected error during installation check: {e}")
try:
if self.mineru_api:
# check openapi.json
openapi_exists = self._is_http_endpoint_valid(self.mineru_api + "/openapi.json")
logging.info(f"[MinerU] Detected {self.mineru_api}/openapi.json: {openapi_exists}")
self.using_api = openapi_exists
return openapi_exists
else:
logging.info("[MinerU] api not exists.")
except Exception as e:
logging.error(f"[MinerU] Unexpected error during api check: {e}")
return False
def _run_mineru(self, input_path: Path, output_dir: Path, method: str = "auto", lang: Optional[str] = None):
def _run_mineru(self, input_path: Path, output_dir: Path, method: str = "auto", backend: str = "pipeline", lang: Optional[str] = None, callback: Optional[Callable] = None):
if self.using_api:
self._run_mineru_api(input_path, output_dir, method, backend, lang, callback)
else:
self._run_mineru_executable(input_path, output_dir, method, backend, lang, callback)
def _run_mineru_api(self, input_path: Path, output_dir: Path, method: str = "auto", backend: str = "pipeline", lang: Optional[str] = None, callback: Optional[Callable] = None):
OUTPUT_ZIP_PATH = os.path.join(str(output_dir), "output.zip")
pdf_file_path = str(input_path)
if not os.path.exists(pdf_file_path):
raise RuntimeError(f"[MinerU] PDF file not exists: {pdf_file_path}")
pdf_file_name = Path(pdf_file_path).stem.strip()
output_path = os.path.join(str(output_dir), pdf_file_name, method)
os.makedirs(output_path, exist_ok=True)
files = {
"files": (pdf_file_name + ".pdf", open(pdf_file_path, "rb"), "application/pdf")
}
data = {
"output_dir": "./output",
"lang_list": lang,
"backend": backend,
"parse_method": method,
"formula_enable": True,
"table_enable": True,
"server_url": None,
"return_md": True,
"return_middle_json": True,
"return_model_output": True,
"return_content_list": True,
"return_images": True,
"response_format_zip": True,
"start_page_id": 0,
"end_page_id": 99999
}
headers = {
"Accept": "application/json"
}
try:
self.logger.info(f"[MinerU] invoke api: {self.mineru_api}/file_parse")
if callback:
callback(0.20, f"[MinerU] invoke api: {self.mineru_api}/file_parse")
response = requests.post(
url=f"{self.mineru_api}/file_parse",
files=files,
data=data,
headers=headers,
timeout=1800
)
response.raise_for_status()
if response.headers.get("Content-Type") == "application/zip":
self.logger.info(f"[MinerU] zip file returned, saving to {OUTPUT_ZIP_PATH}...")
if callback:
callback(0.30, f"[MinerU] zip file returned, saving to {OUTPUT_ZIP_PATH}...")
with open(OUTPUT_ZIP_PATH, "wb") as f:
f.write(response.content)
self.logger.info(f"[MinerU] Unzip to {output_path}...")
self._extract_zip_no_root(OUTPUT_ZIP_PATH, output_path, pdf_file_name + "/")
if callback:
callback(0.40, f"[MinerU] Unzip to {output_path}...")
else:
self.logger.warning("[MinerU] not zip returned from api%s " % response.headers.get("Content-Type"))
except Exception as e:
raise RuntimeError(f"[MinerU] api failed with exception {e}")
self.logger.info("[MinerU] Api completed successfully.")
def _run_mineru_executable(self, input_path: Path, output_dir: Path, method: str = "auto", backend: str = "pipeline", lang: Optional[str] = None, callback: Optional[Callable] = None):
cmd = [str(self.mineru_path), "-p", str(input_path), "-o", str(output_dir), "-m", method]
if backend:
cmd.extend(["-b", backend])
if lang:
cmd.extend(["-l", lang])
@ -231,8 +369,10 @@ class MinerUParser(RAGFlowPdfParser):
poss.append(([int(p) - 1 for p in pn.split("-")], left, right, top, bottom))
return poss
def _read_output(self, output_dir: Path, file_stem: str, method: str = "auto") -> list[dict[str, Any]]:
def _read_output(self, output_dir: Path, file_stem: str, method: str = "auto", backend: str = "pipeline") -> list[dict[str, Any]]:
subdir = output_dir / file_stem / method
if backend.startswith("vlm-"):
subdir = output_dir / file_stem / "vlm"
json_file = subdir / f"{file_stem}_content_list.json"
if not json_file.exists():
@ -254,11 +394,17 @@ class MinerUParser(RAGFlowPdfParser):
case MinerUContentType.TEXT:
section = output["text"]
case MinerUContentType.TABLE:
section = output["table_body"] + "\n".join(output["table_caption"]) + "\n".join(output["table_footnote"])
section = output["table_body"] if "table_body" in output else "" + "\n".join(output["table_caption"]) + "\n".join(output["table_footnote"])
case MinerUContentType.IMAGE:
section = "".join(output["image_caption"]) + "\n" + "".join(output["image_footnote"])
case MinerUContentType.EQUATION:
section = output["text"]
case MinerUContentType.CODE:
section = output["code_body"] + "\n".join(output.get("code_caption", []))
case MinerUContentType.LIST:
section = "\n".join(output.get("list_items", []))
case MinerUContentType.DISCARDED:
pass
if section:
sections.append((section, self._line_tag(output)))
@ -274,6 +420,7 @@ class MinerUParser(RAGFlowPdfParser):
callback: Optional[Callable] = None,
*,
output_dir: Optional[str] = None,
backend: str = "pipeline",
lang: Optional[str] = None,
method: str = "auto",
delete_output: bool = True,
@ -283,9 +430,14 @@ class MinerUParser(RAGFlowPdfParser):
temp_pdf = None
created_tmp_dir = False
# Remove spaces from the file name, otherwise mineru crashes and _read_output fails too
file_path = Path(filepath)
pdf_file_name = file_path.stem.replace(" ", "") + ".pdf"
pdf_file_path_valid = os.path.join(file_path.parent, pdf_file_name)
if binary:
temp_dir = Path(tempfile.mkdtemp(prefix="mineru_bin_pdf_"))
temp_pdf = temp_dir / Path(filepath).name
temp_pdf = temp_dir / pdf_file_name
with open(temp_pdf, "wb") as f:
f.write(binary)
pdf = temp_pdf
@ -293,7 +445,10 @@ class MinerUParser(RAGFlowPdfParser):
if callback:
callback(0.15, f"[MinerU] Received binary PDF -> {temp_pdf}")
else:
pdf = Path(filepath)
if pdf_file_path_valid != filepath:
self.logger.info(f"[MinerU] Remove all space in file name: {pdf_file_path_valid}")
shutil.move(filepath, pdf_file_path_valid)
pdf = Path(pdf_file_path_valid)
if not pdf.exists():
if callback:
callback(-1, f"[MinerU] PDF not found: {pdf}")
@ -313,8 +468,8 @@ class MinerUParser(RAGFlowPdfParser):
self.__images__(pdf, zoomin=1)
try:
self._run_mineru(pdf, out_dir, method=method, lang=lang)
outputs = self._read_output(out_dir, pdf.stem, method=method)
self._run_mineru(pdf, out_dir, method=method, backend=backend, lang=lang, callback=callback)
outputs = self._read_output(out_dir, pdf.stem, method=method, backend=backend)
self.logger.info(f"[MinerU] Parsed {len(outputs)} blocks from PDF.")
if callback:
callback(0.75, f"[MinerU] Parsed {len(outputs)} blocks from PDF.")

View File

@ -34,8 +34,8 @@ from huggingface_hub import snapshot_download
from PIL import Image
from pypdf import PdfReader as pdf2_read
from api import settings
from api.utils.file_utils import get_project_base_directory
from api.utils.common import pip_install_torch
from deepdoc.vision import OCR, AscendLayoutRecognizer, LayoutRecognizer, Recognizer, TableStructureRecognizer
from rag.app.picture import vision_llm_chunk as picture_vision_llm_chunk
from rag.nlp import rag_tokenizer
@ -84,14 +84,13 @@ class RAGFlowPdfParser:
self.tbl_det = TableStructureRecognizer()
self.updown_cnt_mdl = xgb.Booster()
if not settings.LIGHTEN:
try:
import torch.cuda
if torch.cuda.is_available():
self.updown_cnt_mdl.set_param({"device": "cuda"})
except Exception:
logging.exception("RAGFlowPdfParser __init__")
try:
pip_install_torch()
import torch.cuda
if torch.cuda.is_available():
self.updown_cnt_mdl.set_param({"device": "cuda"})
except Exception:
logging.exception("RAGFlowPdfParser __init__")
try:
model_dir = os.path.join(get_project_base_directory(), "rag/res/deepdoc")
self.updown_cnt_mdl.load_model(os.path.join(model_dir, "updown_concat_xgb.model"))
@ -1131,7 +1130,7 @@ class RAGFlowPdfParser:
bxes = [b for bxs in self.boxes for b in bxs]
self.is_english = re.search(r"[\na-zA-Z0-9,/¸;:'\[\]\(\)!@#$%^&*\"?<>._-]{30,}", "".join([b["text"] for b in random.choices(bxes, k=min(30, len(bxes)))]))
logging.debug("Is it English:", self.is_english)
logging.debug(f"Is it English: {self.is_english}")
self.page_cum_height = np.cumsum(self.page_cum_height)
assert len(self.page_cum_height) == len(self.page_images) + 1

View File

@ -0,0 +1,504 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import base64
import json
import logging
import os
import shutil
import tempfile
import time
import traceback
import types
import zipfile
from datetime import datetime
from io import BytesIO
from os import PathLike
from pathlib import Path
from typing import Any, Callable, Optional
import requests
from tencentcloud.common import credential
from tencentcloud.common.profile.client_profile import ClientProfile
from tencentcloud.common.profile.http_profile import HttpProfile
from tencentcloud.common.exception.tencent_cloud_sdk_exception import TencentCloudSDKException
from tencentcloud.lkeap.v20240522 import lkeap_client, models
from api.utils.configs import get_base_config
from deepdoc.parser.pdf_parser import RAGFlowPdfParser
class TencentCloudAPIClient:
"""Tencent Cloud API client using official SDK"""
def __init__(self, secret_id, secret_key, region):
self.secret_id = secret_id
self.secret_key = secret_key
self.region = region
# Create credentials
self.cred = credential.Credential(secret_id, secret_key)
        # Instantiate an HTTP profile (optional; skip if there are no special requirements)
        self.httpProfile = HttpProfile()
        self.httpProfile.endpoint = "lkeap.tencentcloudapi.com"
        # Instantiate a client profile (optional; skip if there are no special requirements)
        self.clientProfile = ClientProfile()
        self.clientProfile.httpProfile = self.httpProfile
        # Instantiate the client object for the target product; clientProfile is optional
self.client = lkeap_client.LkeapClient(self.cred, region, self.clientProfile)
def reconstruct_document_sse(self, file_type, file_url=None, file_base64=None, file_start_page=1, file_end_page=1000, config=None):
"""Call document parsing API using official SDK"""
try:
            # Instantiate a request object; each API corresponds to its own request object
req = models.ReconstructDocumentSSERequest()
# Build request parameters
params = {
"FileType": file_type,
"FileStartPageNumber": file_start_page,
"FileEndPageNumber": file_end_page,
}
            # Per the Tencent Cloud API documentation, either FileUrl or FileBase64 must be
            # provided; if both are given, only FileUrl is used
if file_url:
params["FileUrl"] = file_url
logging.info(f"[TCADP] Using file URL: {file_url}")
elif file_base64:
params["FileBase64"] = file_base64
logging.info(f"[TCADP] Using Base64 data, length: {len(file_base64)} characters")
else:
raise ValueError("Must provide either FileUrl or FileBase64 parameter")
if config:
params["Config"] = config
req.from_json_string(json.dumps(params))
# The returned resp is an instance of ReconstructDocumentSSEResponse, corresponding to the request object
resp = self.client.ReconstructDocumentSSE(req)
parser_result = {}
            # The response body is a JSON-format string
if isinstance(resp, types.GeneratorType): # Streaming response
logging.info("[TCADP] Detected streaming response")
for event in resp:
logging.info(f"[TCADP] Received event: {event}")
if event.get('data'):
try:
data_dict = json.loads(event['data'])
logging.info(f"[TCADP] Parsed data: {data_dict}")
if data_dict.get('Progress') == "100":
parser_result = data_dict
logging.info("[TCADP] Document parsing completed!")
logging.info(f"[TCADP] Task ID: {data_dict.get('TaskId')}")
logging.info(f"[TCADP] Success pages: {data_dict.get('SuccessPageNum')}")
logging.info(f"[TCADP] Failed pages: {data_dict.get('FailPageNum')}")
# Print failed page information
failed_pages = data_dict.get("FailedPages", [])
if failed_pages:
logging.warning("[TCADP] Failed parsing pages:")
for page in failed_pages:
logging.warning(f"[TCADP] Page number: {page.get('PageNumber')}, Error: {page.get('ErrorMsg')}")
# Check if there is a download link
download_url = data_dict.get("DocumentRecognizeResultUrl")
if download_url:
logging.info(f"[TCADP] Got download link: {download_url}")
else:
logging.warning("[TCADP] No download link obtained")
break # Found final result, exit loop
else:
# Print progress information
progress = data_dict.get("Progress", "0")
logging.info(f"[TCADP] Progress: {progress}%")
except json.JSONDecodeError as e:
logging.error(f"[TCADP] Failed to parse JSON data: {e}")
logging.error(f"[TCADP] Raw data: {event.get('data')}")
continue
else:
logging.info(f"[TCADP] Event without data: {event}")
else: # Non-streaming response
logging.info("[TCADP] Detected non-streaming response")
if hasattr(resp, 'data') and resp.data:
try:
data_dict = json.loads(resp.data)
parser_result = data_dict
logging.info(f"[TCADP] JSON parsing successful: {parser_result}")
except json.JSONDecodeError as e:
logging.error(f"[TCADP] JSON parsing failed: {e}")
return None
else:
logging.error("[TCADP] No data in response")
return None
return parser_result
except TencentCloudSDKException as err:
logging.error(f"[TCADP] Tencent Cloud SDK error: {err}")
return None
except Exception as e:
logging.error(f"[TCADP] Unknown error: {e}")
logging.error(f"[TCADP] Error stack trace: {traceback.format_exc()}")
return None
def download_result_file(self, download_url, output_dir):
"""Download parsing result file"""
if not download_url:
logging.warning("[TCADP] No downloadable result file")
return None
try:
response = requests.get(download_url)
response.raise_for_status()
# Ensure output directory exists
os.makedirs(output_dir, exist_ok=True)
# Generate filename
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"tcadp_result_{timestamp}.zip"
file_path = os.path.join(output_dir, filename)
# Save file
with open(file_path, "wb") as f:
f.write(response.content)
logging.info(f"[TCADP] Document parsing result downloaded to: {os.path.basename(file_path)}")
return file_path
except requests.exceptions.RequestException as e:
logging.error(f"[TCADP] Failed to download file: {e}")
return None
class TCADPParser(RAGFlowPdfParser):
def __init__(self, secret_id: str = None, secret_key: str = None, region: str = "ap-guangzhou"):
super().__init__()
# First initialize logger
self.logger = logging.getLogger(self.__class__.__name__)
        # Default to the constructor arguments so these attributes always exist,
        # then prefer values from the RAGFlow configuration system (service_conf.yaml)
        self.secret_id = secret_id
        self.secret_key = secret_key
        self.region = region
        self.table_result_type = "1"
        self.markdown_image_response_type = "1"
        try:
            tcadp_conf = get_base_config("tcadp_config", {})
            if isinstance(tcadp_conf, dict) and tcadp_conf:
                self.secret_id = secret_id or tcadp_conf.get("secret_id")
                self.secret_key = secret_key or tcadp_conf.get("secret_key")
                self.region = region or tcadp_conf.get("region", "ap-guangzhou")
                self.table_result_type = tcadp_conf.get("table_result_type", "1")
                self.markdown_image_response_type = tcadp_conf.get("markdown_image_response_type", "1")
                self.logger.info("[TCADP] Configuration read from service_conf.yaml")
            else:
                self.logger.error("[TCADP] Please configure tcadp_config in service_conf.yaml first")
        except ImportError:
            self.logger.warning("[TCADP] Configuration module import failed")
if not self.secret_id or not self.secret_key:
raise ValueError("[TCADP] Please set Tencent Cloud API keys, configure tcadp_config in service_conf.yaml")
def check_installation(self) -> bool:
"""Check if Tencent Cloud API configuration is correct"""
try:
# Check necessary configuration parameters
if not self.secret_id or not self.secret_key:
self.logger.error("[TCADP] Tencent Cloud API configuration incomplete")
return False
# Try to create client to verify configuration
TencentCloudAPIClient(self.secret_id, self.secret_key, self.region)
self.logger.info("[TCADP] Tencent Cloud API configuration check passed")
return True
except Exception as e:
self.logger.error(f"[TCADP] Tencent Cloud API configuration check failed: {e}")
return False
def _file_to_base64(self, file_path: str, binary: bytes = None) -> str:
"""Convert file to Base64 format"""
if binary:
# If binary data is directly available, convert directly
return base64.b64encode(binary).decode('utf-8')
else:
# Read from file path and convert
with open(file_path, 'rb') as f:
file_data = f.read()
return base64.b64encode(file_data).decode('utf-8')
def _extract_content_from_zip(self, zip_path: str) -> list[dict[str, Any]]:
"""Extract parsing results from downloaded ZIP file"""
results = []
try:
with zipfile.ZipFile(zip_path, "r") as zip_file:
# Find JSON result files
json_files = [f for f in zip_file.namelist() if f.endswith(".json")]
for json_file in json_files:
with zip_file.open(json_file) as f:
data = json.load(f)
if isinstance(data, list):
results.extend(data)
else:
results.append(data)
# Find Markdown files
md_files = [f for f in zip_file.namelist() if f.endswith(".md")]
for md_file in md_files:
with zip_file.open(md_file) as f:
content = f.read().decode("utf-8")
results.append({"type": "text", "content": content, "file": md_file})
except Exception as e:
self.logger.error(f"[TCADP] Failed to extract ZIP file content: {e}")
return results
def _parse_content_to_sections(self, content_data: list[dict[str, Any]]) -> list[tuple[str, str]]:
"""Convert parsing results to sections format"""
sections = []
for item in content_data:
content_type = item.get("type", "text")
content = item.get("content", "")
if not content:
continue
# Process based on content type
if content_type == "text" or content_type == "paragraph":
section_text = content
elif content_type == "table":
# Handle table content
table_data = item.get("table_data", {})
if isinstance(table_data, dict):
# Convert table data to text
rows = table_data.get("rows", [])
section_text = "\n".join([" | ".join(row) for row in rows])
else:
section_text = str(table_data)
elif content_type == "image":
# Handle image content
caption = item.get("caption", "")
section_text = f"[Image] {caption}" if caption else "[Image]"
elif content_type == "equation":
# Handle equation content
section_text = f"$${content}$$"
else:
section_text = content
if section_text.strip():
                # Generate a simplified position tag in RAGFlow's
                # @@page\tx0\tx1\ttop\tbottom## format (here: page 1, a full-page box)
position_tag = "@@1\t0.0\t1000.0\t0.0\t100.0##"
sections.append((section_text, position_tag))
return sections
def _parse_content_to_tables(self, content_data: list[dict[str, Any]]) -> list:
"""Convert parsing results to tables format"""
tables = []
for item in content_data:
if item.get("type") == "table":
table_data = item.get("table_data", {})
if isinstance(table_data, dict):
rows = table_data.get("rows", [])
if rows:
# Convert to table format
table_html = "<table>\n"
for i, row in enumerate(rows):
table_html += " <tr>\n"
for cell in row:
tag = "th" if i == 0 else "td"
table_html += f" <{tag}>{cell}</{tag}>\n"
table_html += " </tr>\n"
table_html += "</table>"
tables.append(table_html)
return tables
def parse_pdf(
self,
filepath: str | PathLike[str],
binary: BytesIO | bytes,
callback: Optional[Callable] = None,
*,
output_dir: Optional[str] = None,
file_type: str = "PDF",
file_start_page: Optional[int] = 1,
file_end_page: Optional[int] = 1000,
delete_output: Optional[bool] = True,
max_retries: Optional[int] = 1,
) -> tuple:
"""Parse PDF document"""
temp_file = None
created_tmp_dir = False
try:
# Handle input file
if binary:
temp_file = tempfile.NamedTemporaryFile(delete=False, suffix=".pdf")
temp_file.write(binary)
temp_file.close()
file_path = temp_file.name
self.logger.info(f"[TCADP] Received binary PDF -> {os.path.basename(file_path)}")
if callback:
callback(0.1, f"[TCADP] Received binary PDF -> {os.path.basename(file_path)}")
else:
file_path = str(filepath)
if not os.path.exists(file_path):
if callback:
callback(-1, f"[TCADP] PDF file does not exist: {file_path}")
raise FileNotFoundError(f"[TCADP] PDF file does not exist: {file_path}")
# Convert file to Base64 format
if callback:
callback(0.2, "[TCADP] Converting file to Base64 format")
file_base64 = self._file_to_base64(file_path, binary)
if callback:
callback(0.25, f"[TCADP] File converted to Base64, size: {len(file_base64)} characters")
# Create Tencent Cloud API client
client = TencentCloudAPIClient(self.secret_id, self.secret_key, self.region)
# Call document parsing API (with retry mechanism)
if callback:
callback(0.3, "[TCADP] Starting to call Tencent Cloud document parsing API")
result = None
for attempt in range(max_retries):
try:
if attempt > 0:
self.logger.info(f"[TCADP] Retry attempt {attempt + 1}")
if callback:
callback(0.3 + attempt * 0.1, f"[TCADP] Retry attempt {attempt + 1}")
time.sleep(2 ** attempt) # Exponential backoff
config = {
"TableResultType": self.table_result_type,
"MarkdownImageResponseType": self.markdown_image_response_type
}
result = client.reconstruct_document_sse(
file_type=file_type,
file_base64=file_base64,
file_start_page=file_start_page,
file_end_page=file_end_page,
config=config
)
if result:
self.logger.info(f"[TCADP] Attempt {attempt + 1} successful")
break
else:
self.logger.warning(f"[TCADP] Attempt {attempt + 1} failed, result is None")
except Exception as e:
self.logger.error(f"[TCADP] Attempt {attempt + 1} exception: {e}")
if attempt == max_retries - 1:
raise
if not result:
error_msg = f"[TCADP] Document parsing failed, retried {max_retries} times"
self.logger.error(error_msg)
if callback:
callback(-1, error_msg)
raise RuntimeError(error_msg)
# Get download link
download_url = result.get("DocumentRecognizeResultUrl")
if not download_url:
if callback:
callback(-1, "[TCADP] No parsing result download link obtained")
raise RuntimeError("[TCADP] No parsing result download link obtained")
if callback:
callback(0.6, f"[TCADP] Parsing result download link: {download_url}")
# Set output directory
if output_dir:
out_dir = Path(output_dir)
out_dir.mkdir(parents=True, exist_ok=True)
else:
out_dir = Path(tempfile.mkdtemp(prefix="adp_pdf_"))
created_tmp_dir = True
# Download result file
zip_path = client.download_result_file(download_url, str(out_dir))
if not zip_path:
if callback:
callback(-1, "[TCADP] Failed to download parsing result")
raise RuntimeError("[TCADP] Failed to download parsing result")
if callback:
# Shorten file path display, only show filename
zip_filename = os.path.basename(zip_path)
callback(0.8, f"[TCADP] Parsing result downloaded: {zip_filename}")
# Extract ZIP file content
content_data = self._extract_content_from_zip(zip_path)
self.logger.info(f"[TCADP] Extracted {len(content_data)} content blocks")
if callback:
callback(0.9, f"[TCADP] Extracted {len(content_data)} content blocks")
# Convert to sections and tables format
sections = self._parse_content_to_sections(content_data)
tables = self._parse_content_to_tables(content_data)
self.logger.info(f"[TCADP] Parsing completed: {len(sections)} sections, {len(tables)} tables")
if callback:
callback(1.0, f"[TCADP] Parsing completed: {len(sections)} sections, {len(tables)} tables")
return sections, tables
finally:
# Clean up temporary files
if temp_file and os.path.exists(temp_file.name):
try:
os.unlink(temp_file.name)
except Exception:
pass
if delete_output and created_tmp_dir and out_dir.exists():
try:
shutil.rmtree(out_dir)
except Exception:
pass
if __name__ == "__main__":
# Test ADP parser
parser = TCADPParser()
print("ADP available:", parser.check_installation())
# Test parsing
filepath = ""
if filepath and os.path.exists(filepath):
with open(filepath, "rb") as file:
sections, tables = parser.parse_pdf(filepath=filepath, binary=file.read())
print(f"Parsing result: {len(sections)} sections, {len(tables)} tables")
for i, (section, tag) in enumerate(sections[:3]): # Only print first 3
print(f"Section {i + 1}: {section[:100]}...")

View File

@ -22,6 +22,7 @@ import os
from huggingface_hub import snapshot_download
from api.utils.file_utils import get_project_base_directory
from api.utils.common import pip_install_torch
from rag.settings import PARALLEL_DEVICES
from .operators import * # noqa: F403
from . import operators
@ -83,6 +84,7 @@ def load_model(model_dir, nm, device_id: int | None = None):
def cuda_is_available():
try:
pip_install_torch()
import torch
target_id = 0 if device_id is None else device_id
if torch.cuda.is_available() and torch.cuda.device_count() > target_id:

View File

@ -57,10 +57,10 @@ class TableStructureRecognizer(Recognizer):
raise RuntimeError("Unsupported table structure recognizer type.")
if table_structure_recognizer_type == "onnx":
logging.debug("Using Onnx table structure recognizer", flush=True)
logging.debug("Using Onnx table structure recognizer")
tbls = super().__call__(images, thr)
else: # ascend
logging.debug("Using Ascend table structure recognizer", flush=True)
logging.debug("Using Ascend table structure recognizer")
tbls = self._run_ascend_tsr(images, thr)
res = []

View File

@ -1,3 +1,8 @@
# ------------------------------
# docker env var for specifying vector db type at startup
# (based on the vector db type, the corresponding docker
# compose profile will be used)
# ------------------------------
# The type of doc engine to use.
# Available options:
# - `elasticsearch` (default)
@ -5,12 +10,13 @@
# - `opensearch` (https://github.com/opensearch-project/OpenSearch)
DOC_ENGINE=${DOC_ENGINE:-elasticsearch}
# ------------------------------
# docker env var for specifying vector db type at startup
# (based on the vector db type, the corresponding docker
# compose profile will be used)
# ------------------------------
COMPOSE_PROFILES=${DOC_ENGINE}
# The device on which deepdoc inference runs.
# Available options:
# - `cpu` (default)
# - `gpu`
DEVICE=${DEVICE:-cpu}
COMPOSE_PROFILES=${DOC_ENGINE},${DEVICE}
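# Illustrative combination of the two settings above (not part of the original file):
#   DOC_ENGINE=infinity
#   DEVICE=gpu
# expands the line above to COMPOSE_PROFILES=infinity,gpu, so `docker compose up`
# starts the Infinity doc engine together with the GPU-enabled ragflow service.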
# The version of Elasticsearch.
STACK_VERSION=8.11.3
@ -38,7 +44,7 @@ OPENSEARCH_PASSWORD=infini_rag_flow_OS_01
# The port used to expose the Kibana service to the host machine,
# allowing EXTERNAL access to the service running inside the Docker container.
# To enable kibana, you need to:
# 1. Ensure that COMPOSE_PROFILES includes kibana, for example: COMPOSE_PROFILES=${DOC_ENGINE},kibana
# 1. Ensure that COMPOSE_PROFILES includes kibana, for example: COMPOSE_PROFILES=${COMPOSE_PROFILES},kibana
# 2. Comment out or delete the following configurations of the es service in docker-compose-base.yml: xpack.security.enabled、xpack.security.http.ssl.enabled、xpack.security.transport.ssl.enabled (for details: https://www.elastic.co/docs/deploy-manage/security/self-auto-setup#stack-existing-settings-detected)
# 3. Adjust the es.hosts in conf/service_config.yaml or docker/service_conf.yaml.template to 'https://localhost:1200'
# 4. After the startup is successful, in the es container, execute the command to generate the kibana token: `bin/elasticsearch-create-enrollment-token -s kibana`, then you can use kibana normally
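# Illustrative example for steps 1 and 4 above (not part of the original file):
#   COMPOSE_PROFILES=${COMPOSE_PROFILES},kibana
#   docker compose exec es01 bin/elasticsearch-create-enrollment-token -s kibana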
@ -93,33 +99,53 @@ REDIS_PASSWORD=infini_rag_flow
# The port used to expose RAGFlow's HTTP API service to the host machine,
# allowing EXTERNAL access to the service running inside the Docker container.
SVR_WEB_HTTP_PORT=80
SVR_WEB_HTTPS_PORT=443
SVR_HTTP_PORT=9380
ADMIN_SVR_HTTP_PORT=9381
SVR_MCP_PORT=9382
# The RAGFlow Docker image to download.
# The RAGFlow Docker image to download. v0.22+ doesn't include embedding models.
# Defaults to the v0.21.1-slim edition, which is the RAGFlow Docker image without embedding models.
RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1-slim
#
# To download the RAGFlow Docker image with embedding models, uncomment the following line instead:
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.21.1
#
# The Docker image of the v0.21.1 edition includes built-in embedding models:
# - BAAI/bge-large-zh-v1.5
# - maidalun1020/bce-embedding-base_v1
#
# If you cannot download the RAGFlow Docker image:
#
# - For the `nightly-slim` edition, uncomment either of the following:
# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly-slim
# RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly-slim
# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:v0.21.1
# RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:v0.21.1
#
# - For the `nightly` edition, uncomment either of the following:
# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly
# RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly
# The embedding service image, model and port.
# Important: To enable the embedding service, you need to uncomment one of the following two lines:
# COMPOSE_PROFILES=${COMPOSE_PROFILES},tei-cpu
# COMPOSE_PROFILES=${COMPOSE_PROFILES},tei-gpu
# The embedding service image:
TEI_IMAGE_CPU=infiniflow/text-embeddings-inference:cpu-1.8
TEI_IMAGE_GPU=infiniflow/text-embeddings-inference:1.8
# The embedding service model:
# Available options:
# - `Qwen/Qwen3-Embedding-0.6B` (default, requires 25GB RAM/vRAM to load)
# - `BAAI/bge-m3` (requires 21GB RAM/vRAM to load)
# - `BAAI/bge-small-en-v1.5` (requires 1.2GB RAM/vRAM to load)
TEI_MODEL=${TEI_MODEL:-Qwen/Qwen3-Embedding-0.6B}
# The embedding service host:
TEI_HOST=tei
# The port used to expose the TEI service to the host machine,
# allowing EXTERNAL access to the service running inside the Docker container.
TEI_PORT=6380
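# Illustrative example (not part of the original file): to serve the smallest
# listed model on CPU, re-declare the profiles and pick the model:
#   COMPOSE_PROFILES=${COMPOSE_PROFILES},tei-cpu
#   TEI_MODEL=BAAI/bge-small-en-v1.5
# The service is then reachable from the host at localhost:${TEI_PORT}.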
# The local time zone.
TIMEZONE=Asia/Shanghai
TZ=Asia/Shanghai
# Uncomment the following line if you have limited access to huggingface.co:
# HF_ENDPOINT=https://hf-mirror.com
@ -165,8 +191,11 @@ EMBEDDING_BATCH_SIZE=${EMBEDDING_BATCH_SIZE:-16}
# - Disable registration: 0
REGISTER_ENABLED=1
# Important: To enable sandbox, you need to uncomment following two lines:
# SANDBOX_ENABLED=1
# COMPOSE_PROFILES=${COMPOSE_PROFILES},sandbox
# Sandbox settings
# Important: To enable the sandbox, you must re-declare the compose profiles. See the hints at the end of this file.
# Double-check that you have added `sandbox-executor-manager` to your `/etc/hosts`.
# Pull the required base images before running:
# docker pull infiniflow/sandbox-base-nodejs:latest
@ -175,7 +204,6 @@ REGISTER_ENABLED=1
# - Node.js base image: includes axios
# - Python base image: includes requests, numpy, and pandas
# Specify custom executor images below if you're using non-default environments.
# SANDBOX_ENABLED=1
# SANDBOX_HOST=sandbox-executor-manager
# SANDBOX_EXECUTOR_MANAGER_IMAGE=infiniflow/sandbox-executor-manager:latest
# SANDBOX_EXECUTOR_MANAGER_POOL_SIZE=3
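# Illustrative sandbox-enabled setup (not part of the original file):
#   SANDBOX_ENABLED=1
#   COMPOSE_PROFILES=${DOC_ENGINE},${DEVICE},sandbox
# after pulling infiniflow/sandbox-base-nodejs:latest and
# infiniflow/sandbox-base-python:latest as noted above.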
@ -195,3 +223,5 @@ REGISTER_ENABLED=1
# COMPOSE_PROFILES=infinity,sandbox
# - For OpenSearch:
# COMPOSE_PROFILES=opensearch,sandbox
USE_DOCLING=false
USE_MINERU=false

View File

@ -18,7 +18,7 @@
Sets up environment for RAGFlow's dependencies: Elasticsearch/[Infinity](https://github.com/infiniflow/infinity), MySQL, MinIO, and Redis.
> [!CAUTION]
> We do not actively maintain **docker-compose-CN-oc9.yml**, **docker-compose-gpu-CN-oc9.yml**, or **docker-compose-gpu.yml**, so use them at your own risk. However, you are welcome to file a pull request to improve any of them.
> We do not actively maintain **docker-compose-CN-oc9.yml** or **docker-compose-macos.yml**, so use them at your own risk. However, you are welcome to file a pull request to improve any of them.
## 🐬 Docker environment variables
@ -89,16 +89,13 @@ The [.env](./.env) file contains important environment variables for Docker.
> [!TIP]
> If you cannot download the RAGFlow Docker image, try the following mirrors.
>
> - For the `nightly-slim` edition:
> - `RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly-slim` or,
> - `RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly-slim`.
> - For the `nightly` edition:
> - `RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly` or,
> - `RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly`.
### Timezone
- `TIMEZONE`
- `TZ`
The local time zone. Defaults to `'Asia/Shanghai'`.
### Hugging Face mirror site

View File

@ -4,12 +4,13 @@ include:
- ./docker-compose-base.yml
services:
ragflow:
ragflow-cpu:
depends_on:
mysql:
condition: service_healthy
profiles:
- cpu
image: edwardelric233/ragflow:oc9
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380
- 80:80
@ -20,10 +21,6 @@ services:
- ./nginx/proxy.conf:/etc/nginx/proxy.conf
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
env_file: .env
environment:
- TZ=${TIMEZONE}
- HF_ENDPOINT=${HF_ENDPOINT}
- MACOS=${MACOS}
networks:
- ragflow
restart: on-failure
@ -31,3 +28,35 @@ services:
# If you're using Docker Desktop, the --add-host flag is optional. This flag makes sure that the host's internal IP gets exposed to the Prometheus container.
extra_hosts:
- "host.docker.internal:host-gateway"
ragflow-gpu:
depends_on:
mysql:
condition: service_healthy
profiles:
- gpu
image: edwardelric233/ragflow:oc9
ports:
- ${SVR_HTTP_PORT}:9380
- 80:80
- 443:443
volumes:
- ./ragflow-logs:/ragflow/logs
- ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
- ./nginx/proxy.conf:/etc/nginx/proxy.conf
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
env_file: .env
networks:
- ragflow
restart: on-failure
# https://docs.docker.com/engine/daemon/prometheus/#create-a-prometheus-configuration
# If you're using Docker Desktop, the --add-host flag is optional. This flag makes sure that the host's internal IP gets exposed to the Prometheus container.
extra_hosts:
- "host.docker.internal:host-gateway"
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]

View File

@ -1,6 +1,5 @@
services:
es01:
container_name: ragflow-es-01
profiles:
- elasticsearch
image: elasticsearch:${STACK_VERSION}
@ -20,7 +19,6 @@ services:
- cluster.routing.allocation.disk.watermark.low=5gb
- cluster.routing.allocation.disk.watermark.high=3gb
- cluster.routing.allocation.disk.watermark.flood_stage=2gb
- TZ=${TIMEZONE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
@ -36,7 +34,6 @@ services:
restart: on-failure
opensearch01:
container_name: ragflow-opensearch-01
profiles:
- opensearch
image: hub.icert.top/opensearchproject/opensearch:2.19.1
@ -57,7 +54,6 @@ services:
- cluster.routing.allocation.disk.watermark.low=5gb
- cluster.routing.allocation.disk.watermark.high=3gb
- cluster.routing.allocation.disk.watermark.flood_stage=2gb
- TZ=${TIMEZONE}
- http.port=9201
mem_limit: ${MEM_LIMIT}
ulimits:
@ -74,10 +70,9 @@ services:
restart: on-failure
infinity:
container_name: ragflow-infinity
profiles:
- infinity
image: infiniflow/infinity:v0.6.1
image: infiniflow/infinity:v0.6.2
volumes:
- infinity_data:/var/infinity
- ./infinity_conf.toml:/infinity_conf.toml
@ -87,8 +82,6 @@ services:
- ${INFINITY_HTTP_PORT}:23820
- ${INFINITY_PSQL_PORT}:5432
env_file: .env
environment:
- TZ=${TIMEZONE}
mem_limit: ${MEM_LIMIT}
ulimits:
nofile:
@ -104,7 +97,6 @@ services:
restart: on-failure
sandbox-executor-manager:
container_name: ragflow-sandbox-executor-manager
profiles:
- sandbox
image: ${SANDBOX_EXECUTOR_MANAGER_IMAGE-infiniflow/sandbox-executor-manager:latest}
@ -119,7 +111,6 @@ services:
security_opt:
- no-new-privileges:true
environment:
- TZ=${TIMEZONE}
- SANDBOX_EXECUTOR_MANAGER_POOL_SIZE=${SANDBOX_EXECUTOR_MANAGER_POOL_SIZE:-3}
- SANDBOX_BASE_PYTHON_IMAGE=${SANDBOX_BASE_PYTHON_IMAGE:-infiniflow/sandbox-base-python:latest}
- SANDBOX_BASE_NODEJS_IMAGE=${SANDBOX_BASE_NODEJS_IMAGE:-infiniflow/sandbox-base-nodejs:latest}
@ -136,11 +127,9 @@ services:
mysql:
# mysql:5.7 linux/arm64 image is unavailable.
image: mysql:8.0.39
container_name: ragflow-mysql
env_file: .env
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
- TZ=${TIMEZONE}
command:
--max_connections=1000
--character-set-server=utf8mb4
@ -165,7 +154,6 @@ services:
minio:
image: quay.io/minio/minio:RELEASE.2025-06-13T11-33-47Z
container_name: ragflow-minio
command: server --console-address ":9001" /data
ports:
- ${MINIO_PORT}:9000
@ -174,7 +162,6 @@ services:
environment:
- MINIO_ROOT_USER=${MINIO_USER}
- MINIO_ROOT_PASSWORD=${MINIO_PASSWORD}
- TZ=${TIMEZONE}
volumes:
- minio_data:/data
networks:
@ -189,7 +176,6 @@ services:
redis:
# swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/valkey/valkey:8
image: valkey/valkey:8
container_name: ragflow-redis
command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 128mb --maxmemory-policy allkeys-lru
env_file: .env
ports:
@ -207,16 +193,48 @@ services:
start_period: 10s
tei-cpu:
profiles:
- tei-cpu
image: ${TEI_IMAGE_CPU}
hostname: tei
ports:
- ${TEI_PORT-6380}:80
env_file: .env
networks:
- ragflow
command: ["--model-id", "/data/${TEI_MODEL}"]
restart: on-failure
tei-gpu:
profiles:
- tei-gpu
image: ${TEI_IMAGE_GPU}
hostname: tei
ports:
- ${TEI_PORT-6380}:80
env_file: .env
networks:
- ragflow
command: ["--model-id", "/data/${TEI_MODEL}"]
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
restart: on-failure
kibana:
container_name: ragflow-kibana
profiles:
- kibana
image: kibana:${STACK_VERSION}
ports:
- ${KIBANA_PORT-5601}:5601
env_file: .env
environment:
- TZ=${TIMEZONE}
volumes:
- kibana_data:/usr/share/kibana/data
depends_on:
@ -245,6 +263,8 @@ volumes:
driver: local
redis_data:
driver: local
tei_data:
driver: local
kibana_data:
driver: local

View File

@ -1,40 +0,0 @@
# The RAGFlow team do not actively maintain docker-compose-gpu-CN-oc9.yml, so use them at your own risk.
# However, you are welcome to file a pull request to improve it.
include:
- ./docker-compose-base.yml
services:
ragflow:
depends_on:
mysql:
condition: service_healthy
image: edwardelric233/ragflow:oc9
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380
- 80:80
- 443:443
volumes:
- ./ragflow-logs:/ragflow/logs
- ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
- ./nginx/proxy.conf:/etc/nginx/proxy.conf
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
env_file: .env
environment:
- TZ=${TIMEZONE}
- HF_ENDPOINT=${HF_ENDPOINT}
- MACOS=${MACOS}
networks:
- ragflow
restart: on-failure
# https://docs.docker.com/engine/daemon/prometheus/#create-a-prometheus-configuration
# If you're using Docker Desktop, the --add-host flag is optional. This flag makes sure that the host's internal IP gets exposed to the Prometheus container.
extra_hosts:
- "host.docker.internal:host-gateway"
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]

View File

@ -1,40 +0,0 @@
# The RAGFlow team do not actively maintain docker-compose-gpu.yml, so use them at your own risk.
# Pull requests to improve it are welcome.
include:
- ./docker-compose-base.yml
services:
ragflow:
depends_on:
mysql:
condition: service_healthy
image: ${RAGFLOW_IMAGE}
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380
- 80:80
- 443:443
volumes:
- ./ragflow-logs:/ragflow/logs
- ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
- ./nginx/proxy.conf:/etc/nginx/proxy.conf
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
env_file: .env
environment:
- TZ=${TIMEZONE}
- HF_ENDPOINT=${HF_ENDPOINT}
- MACOS=${MACOS}
networks:
- ragflow
restart: on-failure
# https://docs.docker.com/engine/daemon/prometheus/#create-a-prometheus-configuration
# If you're using Docker Desktop, the --add-host flag is optional. This flag makes sure that the host's internal IP gets exposed to the Prometheus container.
extra_hosts:
- "host.docker.internal:host-gateway"
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]

View File

@ -10,7 +10,6 @@ services:
build:
context: ../
dockerfile: Dockerfile
container_name: ragflow-server
ports:
- ${SVR_HTTP_PORT}:9380
- 80:80
@ -21,11 +20,6 @@ services:
- ./nginx/proxy.conf:/etc/nginx/proxy.conf
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
env_file: .env
environment:
- TZ=${TIMEZONE}
- HF_ENDPOINT=${HF_ENDPOINT}
- MACOS=${MACOS:-1}
- LIGHTEN=${LIGHTEN:-1}
networks:
- ragflow
restart: on-failure
@ -38,15 +32,10 @@ services:
# mysql:
# condition: service_healthy
# image: ${RAGFLOW_IMAGE}
# container_name: ragflow-executor
# volumes:
# - ./ragflow-logs:/ragflow/logs
# - ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
# env_file: .env
# environment:
# - TZ=${TIMEZONE}
# - HF_ENDPOINT=${HF_ENDPOINT}
# - MACOS=${MACOS}
# entrypoint: "/ragflow/entrypoint_task_executor.sh 1 3"
# networks:
# - ragflow

View File

@ -2,10 +2,12 @@ include:
- ./docker-compose-base.yml
# To ensure that the container processes the locally modified `service_conf.yaml.template` instead of the one included in its image, you need to mount the local `service_conf.yaml.template` to the container.
services:
ragflow:
ragflow-cpu:
depends_on:
mysql:
condition: service_healthy
profiles:
- cpu
image: ${RAGFLOW_IMAGE}
# Example configuration to set up an MCP server:
# command:
@ -26,15 +28,12 @@ services:
    # Example configuration to start the Admin server:
# command:
# - --enable-adminserver
container_name: ragflow-server
ports:
- ${SVR_WEB_HTTP_PORT}:80
- ${SVR_WEB_HTTPS_PORT}:443
- ${SVR_HTTP_PORT}:9380
- ${ADMIN_SVR_HTTP_PORT}:9381
- 80:80
- 443:443
- 5678:5678
- 5679:5679
- 9382:9382 # entry for MCP (host_port:docker_port). The docker_port must match the value you set for `mcp-port` above.
- ${SVR_MCP_PORT}:9382 # entry for MCP (host_port:docker_port). The docker_port must match the value you set for `mcp-port` above.
volumes:
- ./ragflow-logs:/ragflow/logs
- ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
@ -44,10 +43,6 @@ services:
- ./service_conf.yaml.template:/ragflow/conf/service_conf.yaml.template
- ./entrypoint.sh:/ragflow/entrypoint.sh
env_file: .env
environment:
- TZ=${TIMEZONE}
- HF_ENDPOINT=${HF_ENDPOINT-}
- MACOS=${MACOS-}
networks:
- ragflow
restart: on-failure
@ -55,20 +50,73 @@ services:
# If you use Docker Desktop, the --add-host flag is optional. This flag ensures that the host's internal IP is exposed to the Prometheus container.
extra_hosts:
- "host.docker.internal:host-gateway"
ragflow-gpu:
depends_on:
mysql:
condition: service_healthy
profiles:
- gpu
image: ${RAGFLOW_IMAGE}
# Example configuration to set up an MCP server:
# command:
# - --enable-mcpserver
# - --mcp-host=0.0.0.0
# - --mcp-port=9382
# - --mcp-base-url=http://127.0.0.1:9380
# - --mcp-script-path=/ragflow/mcp/server/server.py
# - --mcp-mode=self-host
# - --mcp-host-api-key=ragflow-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Optional transport flags for MCP (customize if needed).
    # Host mode needs to be combined with the --no-transport-streamable-http-enabled flag; that is, host + streamable-http is not supported yet.
# The following are enabled by default unless explicitly disabled with --no-<flag>.
# - --no-transport-sse-enabled # Disable legacy SSE endpoints (/sse and /messages/)
# - --no-transport-streamable-http-enabled # Disable Streamable HTTP transport (/mcp endpoint)
# - --no-json-response # Disable JSON response mode in Streamable HTTP transport (instead of SSE over HTTP)
    # Example configuration to start the Admin server:
# command:
# - --enable-adminserver
ports:
- ${SVR_WEB_HTTP_PORT}:80
- ${SVR_WEB_HTTPS_PORT}:443
- ${SVR_HTTP_PORT}:9380
- ${ADMIN_SVR_HTTP_PORT}:9381
- ${SVR_MCP_PORT}:9382 # entry for MCP (host_port:docker_port). The docker_port must match the value you set for `mcp-port` above.
volumes:
- ./ragflow-logs:/ragflow/logs
- ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
- ./nginx/proxy.conf:/etc/nginx/proxy.conf
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ../history_data_agent:/ragflow/history_data_agent
- ./service_conf.yaml.template:/ragflow/conf/service_conf.yaml.template
- ./entrypoint.sh:/ragflow/entrypoint.sh
env_file: .env
networks:
- ragflow
restart: on-failure
# https://docs.docker.com/engine/daemon/prometheus/#create-a-prometheus-configuration
# If you use Docker Desktop, the --add-host flag is optional. This flag ensures that the host's internal IP is exposed to the Prometheus container.
extra_hosts:
- "host.docker.internal:host-gateway"
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
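    # Illustrative note (not part of the original file): with DEVICE=gpu in docker/.env,
    # COMPOSE_PROFILES includes `gpu`, so `docker compose -f docker/docker-compose.yml up -d`
    # starts this ragflow-gpu service in place of ragflow-cpu.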
# executor:
# depends_on:
# mysql:
# condition: service_healthy
# image: ${RAGFLOW_IMAGE}
# container_name: ragflow-executor
# volumes:
# - ./ragflow-logs:/ragflow/logs
# - ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
# env_file: .env
# environment:
# - TZ=${TIMEZONE}
# - HF_ENDPOINT=${HF_ENDPOINT}
# - MACOS=${MACOS}
# entrypoint: "/ragflow/entrypoint_task_executor.sh 1 3"
# networks:
# - ragflow
@ -77,3 +125,10 @@ services:
# # If you're using Docker Desktop, the --add-host flag is optional. This flag makes sure that the host's internal IP gets exposed to the Prometheus container.
# extra_hosts:
# - "host.docker.internal:host-gateway"
# deploy:
# resources:
# reservations:
# devices:
# - driver: nvidia
# count: all
# capabilities: [gpu]

View File

@ -178,9 +178,53 @@ function start_mcp_server() {
"${MCP_JSON_RESPONSE_FLAG}" &
}
function ensure_docling() {
[[ "${USE_DOCLING}" == "true" ]] || return 0
python3 -c 'import pip' >/dev/null 2>&1 || python3 -m ensurepip --upgrade || true
DOCLING_PIN="${DOCLING_VERSION:-==2.58.0}"
python3 -c "import importlib.util,sys; sys.exit(0 if importlib.util.find_spec('docling') else 1)" \
|| python3 -m pip install -i https://pypi.tuna.tsinghua.edu.cn/simple --extra-index-url https://pypi.org/simple --no-cache-dir "docling${DOCLING_PIN}"
}
function ensure_mineru() {
[[ "${USE_MINERU}" == "true" ]] || { echo "[mineru] disabled by USE_MINERU"; return 0; }
export HUGGINGFACE_HUB_ENDPOINT="${HF_ENDPOINT:-https://hf-mirror.com}"
local default_prefix="/ragflow/uv_tools"
local venv_dir="${default_prefix}/.venv"
local exe="${MINERU_EXECUTABLE:-${venv_dir}/bin/mineru}"
if [[ -x "${exe}" ]]; then
echo "[mineru] found: ${exe}"
export MINERU_EXECUTABLE="${exe}"
return 0
fi
echo "[mineru] not found, bootstrapping with uv ..."
(
set -e
mkdir -p "${default_prefix}"
cd "${default_prefix}"
[[ -d "${venv_dir}" ]] || uv venv "${venv_dir}"
source "${venv_dir}/bin/activate"
uv pip install -U "mineru[core]" -i https://mirrors.aliyun.com/pypi/simple --extra-index-url https://pypi.org/simple
deactivate
)
export MINERU_EXECUTABLE="${exe}"
if ! "${MINERU_EXECUTABLE}" --help >/dev/null 2>&1; then
echo "[mineru] installation failed: ${MINERU_EXECUTABLE} not working" >&2
return 1
fi
echo "[mineru] installed: ${MINERU_EXECUTABLE}"
}
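# Illustrative note (not part of the original script): both installers above run at
# container start and are gated by USE_DOCLING / USE_MINERU, which this diff adds to
# docker/.env as `false` by default, e.g.:
#   USE_DOCLING=true USE_MINERU=true DOCLING_VERSION="==2.58.0" ./entrypoint.sh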
# -----------------------------------------------------------------------------
# Start components based on flags
# -----------------------------------------------------------------------------
ensure_docling
ensure_mineru
if [[ "${ENABLE_WEBSERVER}" -eq 1 ]]; then
echo "Starting nginx..."
@ -203,6 +247,7 @@ if [[ "${ENABLE_MCP_SERVER}" -eq 1 ]]; then
start_mcp_server
fi
if [[ "${ENABLE_TASKEXECUTOR}" -eq 1 ]]; then
if [[ "${CONSUMER_NO_END}" -gt "${CONSUMER_NO_BEG}" ]]; then
echo "Starting task executors on host '${HOST_ID}' for IDs in [${CONSUMER_NO_BEG}, ${CONSUMER_NO_END})..."

View File

@ -1,5 +1,5 @@
[general]
version = "0.6.1"
version = "0.6.2"
time_zone = "utc-8"
[network]

View File

@ -11,7 +11,7 @@ server {
gzip_disable "MSIE [1-6]\.";
location ~ ^/(v1|api) {
proxy_pass http://ragflow:9380;
proxy_pass http://localhost:9380;
include proxy.conf;
}

View File

@ -22,6 +22,11 @@ server {
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
location ~ ^/api/v1/admin {
proxy_pass http://ragflow:9381;
include proxy.conf;
}
location ~ ^/(v1|api) {
proxy_pass http://ragflow:9380;
include proxy.conf;

View File

@ -32,6 +32,11 @@ redis:
db: 1
password: '${REDIS_PASSWORD:-infini_rag_flow}'
host: '${REDIS_HOST:-redis}:6379'
user_default_llm:
default_models:
embedding_model:
api_key: 'xxx'
base_url: 'http://${TEI_HOST}:80'
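# Note (illustrative, not part of the template): ${TEI_HOST} resolves to the `tei`
# hostname set on the tei-cpu/tei-gpu services in docker-compose-base.yml, so RAGFlow
# reaches the embedding service at tei:80 on the internal Docker network.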
# postgres:
# name: '${POSTGRES_DBNAME:-rag_flow}'
# user: '${POSTGRES_USER:-rag_flow}'
@ -133,3 +138,9 @@ redis:
# - "RAGFlow" # display name
# - "" # sender email address
# mail_frontend_url: "https://your-frontend.example.com"
# tcadp_config:
# secret_id: '${TENCENT_SECRET_ID}'
# secret_key: '${TENCENT_SECRET_KEY}'
# region: '${TENCENT_REGION}'
# table_result_type: '1'
# markdown_image_response_type: '1'
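# Illustrative (not part of the template): to enable the Tencent Cloud (TCADP) PDF
# parser added in this diff, uncomment the tcadp_config block above and export
# TENCENT_SECRET_ID, TENCENT_SECRET_KEY, and TENCENT_REGION in the container environment.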

View File

@ -35,7 +35,7 @@ docker compose -f docker/docker-compose.yml up -d
Sets up environment for RAGFlow's dependencies: Elasticsearch/[Infinity](https://github.com/infiniflow/infinity), MySQL, MinIO, and Redis.
:::danger IMPORTANT
We do not actively maintain **docker-compose-CN-oc9.yml**, **docker-compose-gpu-CN-oc9.yml**, or **docker-compose-gpu.yml**, so use them at your own risk. However, you are welcome to file a pull request to improve any of them.
We do not actively maintain **docker-compose-CN-oc9.yml** or **docker-compose-macos.yml**, so use them at your own risk. However, you are welcome to file a pull request to improve them.
:::
## Docker environment variables
@ -109,18 +109,23 @@ RAGFlow utilizes MinIO as its object storage solution, leveraging its scalabilit
:::tip NOTE
If you cannot download the RAGFlow Docker image, try the following mirrors.
- For the `nightly-slim` edition:
- `RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly-slim` or,
- `RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly-slim`.
- For the `nightly` edition:
- `RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly` or,
- `RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly`.
:::
### Embedding service
- `TEI_MODEL`
The embedding model served by text-embeddings-inference. Allowed values: `Qwen/Qwen3-Embedding-0.6B` (default), `BAAI/bge-m3`, or `BAAI/bge-small-en-v1.5`.
- `TEI_PORT`
The port used to expose the text-embeddings-inference service to the host machine, allowing **external** access to the text-embeddings-inference service running inside the Docker container. Defaults to `6380`.
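A hedged example of the corresponding **.env** entries (profile names as introduced elsewhere in this diff):
```bash
# docker/.env (illustrative)
COMPOSE_PROFILES=${COMPOSE_PROFILES},tei-cpu   # or tei-gpu for GPU inference
TEI_MODEL=BAAI/bge-m3
TEI_PORT=6380
```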
### Timezone
- `TIMEZONE`
The local time zone. Defaults to `'Asia/Shanghai'`.
- `TZ`
The local time zone. Defaults to `Asia/Shanghai`.
### Hugging Face mirror site

View File

@ -39,25 +39,6 @@ This image is approximately 2 GB in size and relies on external LLM and embeddin
- For ARM64 platforms, please upgrade the `xgboost` version in **pyproject.toml** to `1.6.0` and ensure **unixODBC** is properly installed.
:::
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv run download_deps.py
docker build -f Dockerfile.deps -t infiniflow/ragflow_deps .
docker build --build-arg LIGHTEN=1 -f Dockerfile -t infiniflow/ragflow:nightly-slim .
```
</TabItem>
<TabItem value="including">
This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.
:::danger IMPORTANT
- While we also test RAGFlow on ARM64 platforms, we do not maintain RAGFlow Docker images for ARM. However, you can build an image yourself on a `linux/arm64` or `darwin/arm64` host machine as well.
- For ARM64 platforms, please upgrade the `xgboost` version in **pyproject.toml** to `1.6.0` and ensure **unixODBC** is properly installed.
:::
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
@ -66,18 +47,19 @@ docker build -f Dockerfile.deps -t infiniflow/ragflow_deps .
docker build -f Dockerfile -t infiniflow/ragflow:nightly .
```
</TabItem>
</Tabs>
## Launch a RAGFlow Service from Docker for MacOS
After building the infiniflow/ragflow:nightly-slim image, you are ready to launch a fully-functional RAGFlow service with all the required components, such as Elasticsearch, MySQL, MinIO, Redis, and more.
After building the infiniflow/ragflow:nightly image, you are ready to launch a fully-functional RAGFlow service with all the required components, such as Elasticsearch, MySQL, MinIO, Redis, and more.
## Example: Apple M2 Pro (Sequoia)
1. Edit Docker Compose Configuration
Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.21.1-slim` to `infiniflow/ragflow:nightly-slim` to use the pre-built image.
Open the `docker/.env` file. Find the `RAGFLOW_IMAGE` setting and change the image reference from `infiniflow/ragflow:v0.21.1` to `infiniflow/ragflow:nightly` to use the pre-built image.
2. Launch the Service

View File

@ -48,7 +48,7 @@ cd ragflow/
```
- full:
```bash
uv sync --python 3.10 --all-extras # install RAGFlow dependent python modules
uv sync --python 3.10 # install RAGFlow dependent python modules
```
*A virtual environment named `.venv` is created, and all Python dependencies are installed into the new environment.*

View File

@ -116,51 +116,51 @@ Run `docker compose -f docker-compose.yml up` to launch the RAGFlow server toget
*The following ASCII art confirms a successful launch:*
```bash
ragflow-server | Starting MCP Server on 0.0.0.0:9382 with base URL http://127.0.0.1:9380...
ragflow-server | Starting 1 task executor(s) on host 'dd0b5e07e76f'...
ragflow-server | 2025-04-18 15:41:18,816 INFO 27 ragflow_server log path: /ragflow/logs/ragflow_server.log, log levels: {'peewee': 'WARNING', 'pdfminer': 'WARNING', 'root': 'INFO'}
ragflow-server |
ragflow-server | __ __ ____ ____ ____ _____ ______ _______ ____
ragflow-server | | \/ |/ ___| _ \ / ___|| ____| _ \ \ / / ____| _ \
ragflow-server | | |\/| | | | |_) | \___ \| _| | |_) \ \ / /| _| | |_) |
ragflow-server | | | | | |___| __/ ___) | |___| _ < \ V / | |___| _ <
ragflow-server | |_| |_|\____|_| |____/|_____|_| \_\ \_/ |_____|_| \_\
ragflow-server |
ragflow-server | MCP launch mode: self-host
ragflow-server | MCP host: 0.0.0.0
ragflow-server | MCP port: 9382
ragflow-server | MCP base_url: http://127.0.0.1:9380
ragflow-server | INFO: Started server process [26]
ragflow-server | INFO: Waiting for application startup.
ragflow-server | INFO: Application startup complete.
ragflow-server | INFO: Uvicorn running on http://0.0.0.0:9382 (Press CTRL+C to quit)
ragflow-server | 2025-04-18 15:41:20,469 INFO 27 found 0 gpus
ragflow-server | 2025-04-18 15:41:23,263 INFO 27 init database on cluster mode successfully
ragflow-server | 2025-04-18 15:41:25,318 INFO 27 load_model /ragflow/rag/res/deepdoc/det.onnx uses CPU
ragflow-server | 2025-04-18 15:41:25,367 INFO 27 load_model /ragflow/rag/res/deepdoc/rec.onnx uses CPU
ragflow-server | ____ ___ ______ ______ __
ragflow-server | / __ \ / | / ____// ____// /____ _ __
ragflow-server | / /_/ // /| | / / __ / /_ / // __ \| | /| / /
ragflow-server | / _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
ragflow-server | /_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
ragflow-server |
ragflow-server |
ragflow-server | 2025-04-18 15:41:29,088 INFO 27 RAGFlow version: v0.18.0-285-gb2c299fa full
ragflow-server | 2025-04-18 15:41:29,088 INFO 27 project base: /ragflow
ragflow-server | 2025-04-18 15:41:29,088 INFO 27 Current configs, from /ragflow/conf/service_conf.yaml:
ragflow-server | ragflow: {'host': '0.0.0.0', 'http_port': 9380}
docker-ragflow-cpu-1 | Starting MCP Server on 0.0.0.0:9382 with base URL http://127.0.0.1:9380...
docker-ragflow-cpu-1 | Starting 1 task executor(s) on host 'dd0b5e07e76f'...
docker-ragflow-cpu-1 | 2025-04-18 15:41:18,816 INFO 27 ragflow_server log path: /ragflow/logs/ragflow_server.log, log levels: {'peewee': 'WARNING', 'pdfminer': 'WARNING', 'root': 'INFO'}
docker-ragflow-cpu-1 |
docker-ragflow-cpu-1 | __ __ ____ ____ ____ _____ ______ _______ ____
docker-ragflow-cpu-1 | | \/ |/ ___| _ \ / ___|| ____| _ \ \ / / ____| _ \
docker-ragflow-cpu-1 | | |\/| | | | |_) | \___ \| _| | |_) \ \ / /| _| | |_) |
docker-ragflow-cpu-1 | | | | | |___| __/ ___) | |___| _ < \ V / | |___| _ <
docker-ragflow-cpu-1 | |_| |_|\____|_| |____/|_____|_| \_\ \_/ |_____|_| \_\
docker-ragflow-cpu-1 |
docker-ragflow-cpu-1 | MCP launch mode: self-host
docker-ragflow-cpu-1 | MCP host: 0.0.0.0
docker-ragflow-cpu-1 | MCP port: 9382
docker-ragflow-cpu-1 | MCP base_url: http://127.0.0.1:9380
docker-ragflow-cpu-1 | INFO: Started server process [26]
docker-ragflow-cpu-1 | INFO: Waiting for application startup.
docker-ragflow-cpu-1 | INFO: Application startup complete.
docker-ragflow-cpu-1 | INFO: Uvicorn running on http://0.0.0.0:9382 (Press CTRL+C to quit)
docker-ragflow-cpu-1 | 2025-04-18 15:41:20,469 INFO 27 found 0 gpus
docker-ragflow-cpu-1 | 2025-04-18 15:41:23,263 INFO 27 init database on cluster mode successfully
docker-ragflow-cpu-1 | 2025-04-18 15:41:25,318 INFO 27 load_model /ragflow/rag/res/deepdoc/det.onnx uses CPU
docker-ragflow-cpu-1 | 2025-04-18 15:41:25,367 INFO 27 load_model /ragflow/rag/res/deepdoc/rec.onnx uses CPU
docker-ragflow-cpu-1 | ____ ___ ______ ______ __
docker-ragflow-cpu-1 | / __ \ / | / ____// ____// /____ _ __
docker-ragflow-cpu-1 | / /_/ // /| | / / __ / /_ / // __ \| | /| / /
docker-ragflow-cpu-1 | / _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
docker-ragflow-cpu-1 | /_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
docker-ragflow-cpu-1 |
docker-ragflow-cpu-1 |
docker-ragflow-cpu-1 | 2025-04-18 15:41:29,088 INFO 27 RAGFlow version: v0.18.0-285-gb2c299fa full
docker-ragflow-cpu-1 | 2025-04-18 15:41:29,088 INFO 27 project base: /ragflow
docker-ragflow-cpu-1 | 2025-04-18 15:41:29,088 INFO 27 Current configs, from /ragflow/conf/service_conf.yaml:
docker-ragflow-cpu-1 | ragflow: {'host': '0.0.0.0', 'http_port': 9380}
...
ragflow-server | * Running on all addresses (0.0.0.0)
ragflow-server | * Running on http://127.0.0.1:9380
ragflow-server | * Running on http://172.19.0.6:9380
ragflow-server | ______ __ ______ __
ragflow-server | /_ __/___ ______/ /__ / ____/ _____ _______ __/ /_____ _____
ragflow-server | / / / __ `/ ___/ //_/ / __/ | |/_/ _ \/ ___/ / / / __/ __ \/ ___/
ragflow-server | / / / /_/ (__ ) ,< / /____> </ __/ /__/ /_/ / /_/ /_/ / /
ragflow-server | /_/ \__,_/____/_/|_| /_____/_/|_|\___/\___/\__,_/\__/\____/_/
ragflow-server |
ragflow-server | 2025-04-18 15:41:34,501 INFO 32 TaskExecutor: RAGFlow version: v0.18.0-285-gb2c299fa full
ragflow-server | 2025-04-18 15:41:34,501 INFO 32 Use Elasticsearch http://es01:9200 as the doc engine.
docker-ragflow-cpu-1 | * Running on all addresses (0.0.0.0)
docker-ragflow-cpu-1 | * Running on http://127.0.0.1:9380
docker-ragflow-cpu-1 | * Running on http://172.19.0.6:9380
docker-ragflow-cpu-1 | ______ __ ______ __
docker-ragflow-cpu-1 | /_ __/___ ______/ /__ / ____/ _____ _______ __/ /_____ _____
docker-ragflow-cpu-1 | / / / __ `/ ___/ //_/ / __/ | |/_/ _ \/ ___/ / / / __/ __ \/ ___/
docker-ragflow-cpu-1 | / / / /_/ (__ ) ,< / /____> </ __/ /__/ /_/ / /_/ /_/ / /
docker-ragflow-cpu-1 | /_/ \__,_/____/_/|_| /_____/_/|_|\___/\___/\__,_/\__/\____/_/
docker-ragflow-cpu-1 |
docker-ragflow-cpu-1 | 2025-04-18 15:41:34,501 INFO 32 TaskExecutor: RAGFlow version: v0.18.0-285-gb2c299fa full
docker-ragflow-cpu-1 | 2025-04-18 15:41:34,501 INFO 32 Use Elasticsearch http://es01:9200 as the doc engine.
...
```
@ -176,7 +176,7 @@ This section is contributed by our community contributor [yiminghub2024](https:/
iii. Copy [docker/entrypoint.sh](https://github.com/infiniflow/ragflow/blob/main/docker/entrypoint.sh) locally.
iv. Install the required dependencies using `uv`:
- Run `uv add mcp` or
- Copy [pyproject.toml](https://github.com/infiniflow/ragflow/blob/main/pyproject.toml) locally and run `uv sync --python 3.10 --all-extras`.
- Copy [pyproject.toml](https://github.com/infiniflow/ragflow/blob/main/pyproject.toml) locally and run `uv sync --python 3.10`.
2. Edit **docker-compose.yml** to enable MCP (disabled by default).
3. Launch the MCP server:
@ -189,7 +189,7 @@ docker compose -f docker-compose.yml up -d
Run the following to check the logs of the RAGFlow server and the MCP server:
```bash
docker logs ragflow-server
docker logs docker-ragflow-cpu-1
```
## Security considerations

View File

@ -99,7 +99,7 @@ _The server replies with an `initialize` response, including the supported proto
```bash
event: message
data: {"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2025-03-26","capabilities":{"experimental":{"headers":{"host":"127.0.0.1:9382","user-agent":"curl/8.7.1","accept":"*/*","api_key":"ragflow-xxxxxxxxxxxx","accept-encoding":"gzip"}},"tools":{"listChanged":false}},"serverInfo":{"name":"ragflow-server","version":"1.9.4"}}}
data: {"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2025-03-26","capabilities":{"experimental":{"headers":{"host":"127.0.0.1:9382","user-agent":"curl/8.7.1","accept":"*/*","api_key":"ragflow-xxxxxxxxxxxx","accept-encoding":"gzip"}},"tools":{"listChanged":false}},"serverInfo":{"name":"docker-ragflow-cpu-1","version":"1.9.4"}}}
```
### 3. Acknowledge readiness

View File

@ -33,6 +33,8 @@ Each RAGFlow release is available in two editions:
- **Slim edition**: excludes built-in embedding models and is identified by a **-slim** suffix added to the version name. Example: `infiniflow/ragflow:v0.21.1-slim`
- **Full edition**: includes built-in embedding models and has no suffix added to the version name. Example: `infiniflow/ragflow:v0.21.1`
Note: Starting with `v0.22.0`, we ship only the slim edition and no longer append the **-slim** suffix to the image tag.
---
### Which embedding models can be deployed locally?
@ -44,6 +46,8 @@ RAGFlow offers two Docker image editions, `v0.21.1-slim` and `v0.21.1`:
- `BAAI/bge-large-zh-v1.5`
- `maidalun1020/bce-embedding-base_v1`
Note: Starting with `v0.22.0`, we ship only the slim edition and no longer append the **-slim** suffix to the image tag.
---
### Where to find the version of RAGFlow? How to interpret it?
@ -220,7 +224,7 @@ Ignore this warning and continue. All system warnings can be ignored.
![anomaly](https://github.com/infiniflow/ragflow/assets/93570324/beb7ad10-92e4-4a58-8886-bfb7cbd09e5d)
You will not log in to RAGFlow unless the server is fully initialized. Run `docker logs -f ragflow-server`.
You cannot log in to RAGFlow until the server is fully initialized. Run `docker logs -f docker-ragflow-cpu-1`.
*The server is successfully initialized, if your system displays the following:*
@ -256,7 +260,7 @@ Click the red cross beside the 'parsing status' bar, then restart the parsing pr
1. Check the log of your RAGFlow server to see if it is running properly:
```bash
docker logs -f ragflow-server
docker logs -f docker-ragflow-cpu-1
```
2. Check if the **task_executor.py** process exists.
@ -310,7 +314,7 @@ tail -f ragflow/docker/ragflow-logs/*.log
*The following is an example result:*
```bash
5bc45806b680 infiniflow/ragflow:latest "./entrypoint.sh" 11 hours ago Up 11 hours 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:9380->9380/tcp, :::9380->9380/tcp ragflow-server
5bc45806b680 infiniflow/ragflow:latest "./entrypoint.sh" 11 hours ago Up 11 hours 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:9380->9380/tcp, :::9380->9380/tcp docker-ragflow-cpu-1
91220e3285dd docker.elastic.co/elasticsearch/elasticsearch:8.11.3 "/bin/tini -- /usr/l…" 11 hours ago Up 11 hours (healthy) 9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp ragflow-es-01
d8c86f06c56b mysql:5.7.18 "docker-entrypoint.s…" 7 days ago Up 16 seconds (healthy) 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp ragflow-mysql
cd29bcb254bc quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z "/usr/bin/docker-ent…" 2 weeks ago Up 11 hours 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp ragflow-minio
@ -532,5 +536,15 @@ uv pip install -U "mineru[core]" -i https://mirrors.aliyun.com/pypi/simple
4. In the web UI, navigate to the **Configuration** page of your dataset. Click **Built-in** in the **Ingestion pipeline** section, select a chunking method that supports PDF parsing from the **Built-in** dropdown, and select **MinerU** as the **PDF parser**.
5. If you use a custom ingestion pipeline instead, you must also complete the first three steps before selecting **MinerU** in the **Parsing method** section of the **Parser** component.
---
### How to configure MinerU-specific settings?
1. Set `MINERU_EXECUTABLE` (default: `mineru`) to the path of the MinerU executable.
2. Set `MINERU_DELETE_OUTPUT` to `0` to keep MinerU's output. (Default: `1`, which deletes temporary output)
3. Set `MINERU_OUTPUT_DIR` to specify the output directory for MinerU.
4. Set `MINERU_BACKEND` to choose the parsing backend. (Options: `"pipeline"` (default) | `"vlm-transformers"`.) See the combined example after the note below.
:::tip NOTE
For information about other environment variables natively supported by MinerU, see [here](https://opendatalab.github.io/MinerU/usage/cli_tools/#environment-variables-description).
:::
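Putting the four settings together, a minimal sketch (values are illustrative; the executable path matches the `uv` bootstrap location used by **entrypoint.sh** elsewhere in this diff):
```bash
export MINERU_EXECUTABLE=/ragflow/uv_tools/.venv/bin/mineru
export MINERU_DELETE_OUTPUT=0                  # keep MinerU's temporary output
export MINERU_OUTPUT_DIR=/ragflow/mineru_out   # illustrative directory
export MINERU_BACKEND=vlm-transformers         # or the default: pipeline
```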

Some files were not shown because too many files have changed in this diff.