Yongteng Lei 17ea5c1dee Fix: MCP cannot handle empty Auth field properly (#11034)
### What problem does this PR solve?

Fixes the issue where MCP cannot handle an empty Auth field properly, which then results in:

```bash
2025-11-05 11:10:41,919 INFO     51209 Negotiated protocol version: 2025-06-18
2025-11-05 11:10:41,920 INFO     51209 client_session initialized successfully
2025-11-05 11:10:41,994 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:10:41] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:10:41,999 INFO     51209 Want to clean up 1 MCP sessions
2025-11-05 11:10:42,000 INFO     51209 1 MCP sessions has been cleaned up. 0 in global context.
2025-11-05 11:10:42,001 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:10:42] "POST /v1/mcp_server/test_mcp HTTP/1.1" 200 -
2025-11-05 11:11:30,441 INFO     51209 Negotiated protocol version: 2025-06-18
2025-11-05 11:11:30,442 INFO     51209 client_session initialized successfully
2025-11-05 11:11:30,520 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:11:30] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:11:30,525 INFO     51209 Want to clean up 1 MCP sessions
2025-11-05 11:11:30,526 INFO     51209 1 MCP sessions has been cleaned up. 0 in global context.
2025-11-05 11:11:30,527 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:11:30] "POST /v1/mcp_server/test_mcp HTTP/1.1" 200 -
2025-11-05 11:11:31,476 INFO     51209 Negotiated protocol version: 2025-06-18
2025-11-05 11:11:31,476 INFO     51209 client_session initialized successfully
2025-11-05 11:11:31,549 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:11:31] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:11:31,552 INFO     51209 Want to clean up 1 MCP sessions
2025-11-05 11:11:31,553 INFO     51209 1 MCP sessions has been cleaned up. 0 in global context.
2025-11-05 11:11:31,554 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:11:31] "POST /v1/mcp_server/test_mcp HTTP/1.1" 200 -
2025-11-05 11:11:51,930 ERROR    51209 unhandled errors in a TaskGroup (1 sub-exception)
  + Exception Group Traceback (most recent call last):
  |   File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 86, in _mcp_server_loop
  |     async with streamablehttp_client(url, headers) as (read_stream, write_stream, _):
  |   File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 217, in __aexit__
  |     await self.gen.athrow(typ, value, traceback)
  |   File "/home/xxxxxxxxx/workspace/ragflow/.venv/lib/python3.10/site-packages/mcp/client/streamable_http.py", line 478, in streamablehttp_client
  |     async with anyio.create_task_group() as tg:
  |   File "/home/xxxxxxxxx/workspace/ragflow/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 781, in __aexit__
  |     raise BaseExceptionGroup(
  | exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/home/xxxxxxxxx/workspace/ragflow/.venv/lib/python3.10/site-packages/mcp/client/streamable_http.py", line 409, in handle_request_async
    |     await self._handle_post_request(ctx)
    |   File "/home/xxxxxxxxx/workspace/ragflow/.venv/lib/python3.10/site-packages/mcp/client/streamable_http.py", line 278, in _handle_post_request
    |     response.raise_for_status()
    |   File "/home/xxxxxxxxx/workspace/ragflow/.venv/lib/python3.10/site-packages/httpx/_models.py", line 829, in raise_for_status
    |     raise HTTPStatusError(message, request=request, response=self)
    | httpx.HTTPStatusError: Server error '502 Bad Gateway' for url 'http://192.168.1.38:9382/mcp'
    | For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/502
    +------------------------------------
2025-11-05 11:11:51,942 ERROR    51209 Error fetching tools from MCP server: streamable-http: http://192.168.1.38:9382/mcp
Traceback (most recent call last):
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 168, in get_tools
    return future.result(timeout=timeout)
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "<@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession._get_tools_from_mcp_server) at 0x7d58f02e2c20>", line 40, in _get_tools_from_mcp_server
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 160, in _get_tools_from_mcp_server
    result: ListToolsResult = await self._call_mcp_server("list_tools", timeout=timeout)
  File "<@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession._call_mcp_server) at 0x7d58f02e2b00>", line 63, in _call_mcp_server
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 139, in _call_mcp_server
    raise result
ValueError: Connection failed (possibly due to auth error). Please check authentication settings first
2025-11-05 11:11:51,943 ERROR    51209 Test MCP error: Connection failed (possibly due to auth error). Please check authentication settings first
Traceback (most recent call last):
  File "/home/xxxxxxxxx/workspace/ragflow/api/apps/mcp_server_app.py", line 429, in test_mcp
    tools = tool_call_session.get_tools(timeout)
  File "<@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession.get_tools) at 0x7d58f02e2cb0>", line 40, in get_tools
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 168, in get_tools
    return future.result(timeout=timeout)
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "<@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession._get_tools_from_mcp_server) at 0x7d58f02e2c20>", line 40, in _get_tools_from_mcp_server
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 160, in _get_tools_from_mcp_server
    result: ListToolsResult = await self._call_mcp_server("list_tools", timeout=timeout)
  File "<@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession._call_mcp_server) at 0x7d58f02e2b00>", line 63, in _call_mcp_server
  File "/home/xxxxxxxxx/workspace/ragflow/rag/utils/mcp_tool_call_conn.py", line 139, in _call_mcp_server
    raise result
ValueError: Connection failed (possibly due to auth error). Please check authentication settings first
2025-11-05 11:11:51,944 INFO     51209 Want to clean up 1 MCP sessions
2025-11-05 11:11:51,945 INFO     51209 1 MCP sessions has been cleaned up. 0 in global context.
2025-11-05 11:11:51,946 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:11:51] "POST /v1/mcp_server/test_mcp HTTP/1.1" 200 -
2025-11-05 11:12:20,484 INFO     51209 Negotiated protocol version: 2025-06-18
2025-11-05 11:12:20,485 INFO     51209 client_session initialized successfully
2025-11-05 11:12:20,570 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:12:20] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:12:20,573 INFO     51209 Want to clean up 1 MCP sessions
2025-11-05 11:12:20,574 INFO     51209 1 MCP sessions has been cleaned up. 0 in global context.
2025-11-05 11:12:20,575 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:12:20] "POST /v1/mcp_server/test_mcp HTTP/1.1" 200 -
2025-11-05 11:15:02,119 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:15:02] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:16:24,967 INFO     51209 127.0.0.1 - - [05/Nov/2025 11:16:24] "GET /api/v1/datasets?page=1&page_size=1000&orderby=create_time&desc=True HTTP/1.1" 200 -
2025-11-05 11:30:24,284 ERROR    51209 Task was destroyed but it is pending!
task: <Task pending name='Task-58' coro=<MCPToolCallSession._mcp_server_loop() running at <@beartype(rag.utils.mcp_tool_call_conn.MCPToolCallSession._mcp_server_loop) at 0x7d58f02e29e0>:11> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[_chain_future.<locals>._call_set_state() at /home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/futures.py:392]>
2025-11-05 11:30:24,285 ERROR    51209 Task was destroyed but it is pending!
task: <Task pending name='Task-67' coro=<Queue.get() running at /home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/queues.py:159> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[_release_waiter(<Future pendi...ask_wakeup()]>)() at /home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/tasks.py:387]>
Exception ignored in: <coroutine object Queue.get at 0x7d585480ace0>
Traceback (most recent call last):
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/queues.py", line 161, in get
    getter.cancel()  # Just in case getter is not done yet.
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/base_events.py", line 753, in call_soon
    self._check_closed()
  File "/home/xxxxxxxxx/.local/share/uv/python/cpython-3.10.16-linux-x86_64-gnu/lib/python3.10/asyncio/base_events.py", line 515, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed

```
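
The symptom above is a 502 from the MCP endpoint followed by the generic "Connection failed (possibly due to auth error)" error. A plausible sketch of the kind of guard this fix implies is shown below; the helper name, signature, and call site are illustrative assumptions, not the actual patch:

```python
def build_mcp_headers(auth: str | None, base_headers: dict | None = None) -> dict:
    """Hypothetical helper: only attach an Authorization header when the Auth
    field is non-empty, so a blank field is treated as "no authentication"
    instead of producing a malformed header."""
    headers = dict(base_headers or {})
    token = (auth or "").strip()
    if token:
        # Accept either a bare token or a full "Bearer <token>" value.
        headers["Authorization"] = token if token.lower().startswith("bearer ") else f"Bearer {token}"
    return headers
```

The idea is that leaving the Auth field blank in the MCP server settings would no longer forward an empty credential and trigger the 502 / "possibly due to auth error" loop captured in the log above.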

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)


💡 What is RAGFlow?

RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs. It offers a streamlined RAG workflow adaptable to enterprises of any scale. Powered by a converged context engine and pre-built agent templates, RAGFlow enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.

🎮 Demo

Try our demo at https://demo.ragflow.io.

🔥 Latest Updates

  • 2025-10-23 Supports MinerU & Docling as document parsing methods.
  • 2025-10-15 Supports orchestrable ingestion pipeline.
  • 2025-08-08 Supports OpenAI's latest GPT-5 series models.
  • 2025-08-01 Supports agentic workflow and MCP.
  • 2025-05-23 Adds a Python/JavaScript code executor component to Agent.
  • 2025-05-05 Supports cross-language query.
  • 2025-03-19 Supports using a multi-modal model to make sense of images within PDF or DOCX files.
  • 2025-02-28 Combined with Internet search (Tavily), supports reasoning like Deep Research for any LLMs.
  • 2024-12-18 Upgrades Document Layout Analysis model in DeepDoc.
  • 2024-08-22 Supports text-to-SQL statements through RAG.

🎉 Stay Tuned

Star our repository to stay up-to-date with exciting new features and improvements! Get instant notifications for new releases! 🌟

🌟 Key Features

🍭 "Quality in, quality out"

  • Deep document understanding-based knowledge extraction from unstructured data with complicated formats.
  • Finds "needle in a data haystack" of literally unlimited tokens.

🍱 Template-based chunking

  • Intelligent and explainable.
  • Plenty of template options to choose from.

🌱 Grounded citations with reduced hallucinations

  • Visualization of text chunking to allow human intervention.
  • Quick view of the key references and traceable citations to support grounded answers.

🍔 Compatibility with heterogeneous data sources

  • Supports Word, slides, Excel, txt, images, scanned copies, structured data, web pages, and more.

🛀 Automated and effortless RAG workflow

  • Streamlined RAG orchestration catered to both personal and large businesses.
  • Configurable LLMs as well as embedding models.
  • Multiple recall paired with fused re-ranking.
  • Intuitive APIs for seamless integration with business.

🔎 System Architecture

🎬 Get Started

📝 Prerequisites

  • CPU >= 4 cores
  • RAM >= 16 GB
  • Disk >= 50 GB
  • Docker >= 24.0.0 & Docker Compose >= v2.26.1
  • gVisor: Required only if you intend to use the code executor (sandbox) feature of RAGFlow.

Tip

If you have not installed Docker on your local machine (Windows, Mac, or Linux), see Install Docker Engine.

🚀 Start up the server

  1. Ensure vm.max_map_count >= 262144:

    To check the value of vm.max_map_count:

    $ sysctl vm.max_map_count
    

    If it is not, reset vm.max_map_count to a value of at least 262144:

    # In this case, we set it to 262144:
    $ sudo sysctl -w vm.max_map_count=262144
    

    This change will be reset after a system reboot. To ensure your change remains permanent, add or update the vm.max_map_count value in /etc/sysctl.conf accordingly:

    vm.max_map_count=262144
    
  2. Clone the repo:

    $ git clone https://github.com/infiniflow/ragflow.git
    
  3. Start up the server using the pre-built Docker images:

Caution

All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64. If you are on an ARM64 platform, follow this guide to build a Docker image compatible with your system.

The command below downloads the v0.21.1-slim edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from v0.21.1-slim, update the RAGFLOW_IMAGE variable accordingly in docker/.env before using docker compose to start the server.

   $ cd ragflow/docker
   # Use CPU for embedding and DeepDoc tasks:
   $ docker compose -f docker-compose.yml up -d

   # To use GPU to accelerate embedding and DeepDoc tasks:
   # sed -i '1i DEVICE=gpu' .env
   # docker compose -f docker-compose.yml up -d
RAGFlow image tag   Image size (GB)   Has embedding models?   Stable?
v0.21.1             ≈9                ✔️                      Stable release
v0.21.1-slim        ≈2                ❌                      Stable release
nightly             ≈2                ❌                      Unstable nightly build

Note: Starting with v0.22.0, we ship only the slim edition and no longer append the -slim suffix to the image tag.

  4. Check the server status after having the server up and running:

    $ docker logs -f docker-ragflow-cpu-1
    

    The following output confirms a successful launch of the system:

    
          ____   ___    ______ ______ __
         / __ \ /   |  / ____// ____// /____  _      __
        / /_/ // /| | / / __ / /_   / // __ \| | /| / /
       / _, _// ___ |/ /_/ // __/  / // /_/ /| |/ |/ /
      /_/ |_|/_/  |_|\____//_/    /_/ \____/ |__/|__/
    
     * Running on all addresses (0.0.0.0)
    

    If you skip this confirmation step and log in to RAGFlow directly, your browser may report a network error because, at that moment, RAGFlow may not be fully initialized.

  5. In your web browser, enter the IP address of your server and log in to RAGFlow.

    With the default settings, you only need to enter http://IP_OF_YOUR_MACHINE (without the port number), since the default HTTP serving port 80 can be omitted.

  6. In service_conf.yaml.template, select the desired LLM factory in user_default_llm and update the API_KEY field with the corresponding API key.

    See llm_api_key_setup for more information.

    The show is on!

🔧 Configurations

When it comes to system configurations, you will need to manage the following files:

  • .env: Keeps the fundamental setups for the system, such as SVR_HTTP_PORT, MYSQL_PASSWORD, and MINIO_PASSWORD.
  • service_conf.yaml.template: Configures the back-end services. The environment variables in this file will be automatically populated when the Docker container starts. Any environment variables set within the Docker container will be available for use, allowing you to customize service behavior based on the deployment environment.
  • docker-compose.yml: The system relies on docker-compose.yml to start up.

The ./docker/README file provides a detailed description of the environment settings and service configurations which can be used as ${ENV_VARS} in the service_conf.yaml.template file.
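
As a rough illustration of how such ${ENV_VARS} placeholders get filled from the environment, the expansion amounts to something like the minimal Python sketch below (this is not RAGFlow's actual startup code, and the rendered output file name is an assumption):

```python
import os

# Minimal sketch: render service_conf.yaml.template by expanding ${ENV_VARS}
# (e.g. ${MYSQL_PASSWORD}, ${MINIO_PASSWORD}) from the container's environment.
with open("service_conf.yaml.template") as src:
    template = src.read()

rendered = os.path.expandvars(template)  # unknown variables are left untouched

# Writing to "service_conf.yaml" is an assumption made for this sketch.
with open("service_conf.yaml", "w") as dst:
    dst.write(rendered)
```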

To update the default HTTP serving port (80), go to docker-compose.yml and change 80:80 to <YOUR_SERVING_PORT>:80.

Updates to the above configurations require a reboot of all containers to take effect:

$ docker compose -f docker-compose.yml up -d

Switch doc engine from Elasticsearch to Infinity

RAGFlow uses Elasticsearch by default for storing full text and vectors. To switch to Infinity, follow these steps:

  1. Stop all running containers:

    $ docker compose -f docker/docker-compose.yml down -v
    

Warning

-v will delete the docker container volumes, and the existing data will be cleared.

  2. Set DOC_ENGINE in docker/.env to infinity.

  3. Start the containers:

    $ docker compose -f docker-compose.yml up -d
    

Warning

Switching to Infinity on a Linux/arm64 machine is not yet officially supported.

🔧 Build a Docker image without embedding models

This image is approximately 2 GB in size and relies on external LLM and embedding services.

git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .

🔨 Launch service from source for development

  1. Install uv and pre-commit, or skip this step if they are already installed:

    pipx install uv pre-commit
    
  2. Clone the source code and install Python dependencies:

    git clone https://github.com/infiniflow/ragflow.git
    cd ragflow/
    uv sync --python 3.10 # install RAGFlow dependent python modules
    uv run download_deps.py
    pre-commit install
    
  3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:

    docker compose -f docker/docker-compose-base.yml up -d
    

    Add the following line to /etc/hosts to resolve all hosts specified in docker/.env to 127.0.0.1:

    127.0.0.1       es01 infinity mysql minio redis sandbox-executor-manager
    
  4. If you cannot access HuggingFace, set the HF_ENDPOINT environment variable to use a mirror site:

    export HF_ENDPOINT=https://hf-mirror.com
    
  5. If your operating system does not have jemalloc, please install it as follows:

    # Ubuntu
    sudo apt-get install libjemalloc-dev
    # CentOS
    sudo yum install jemalloc
    # OpenSUSE
    sudo zypper install jemalloc
    # macOS
    brew install jemalloc
    
  6. Launch backend service:

    source .venv/bin/activate
    export PYTHONPATH=$(pwd)
    bash docker/launch_backend_service.sh
    
  7. Install frontend dependencies:

    cd web
    npm install
    
  8. Launch frontend service:

    npm run dev
    

  9. Stop RAGFlow front-end and back-end service after development is complete:

    pkill -f "ragflow_server.py|task_executor.py"
    

📚 Documentation

📜 Roadmap

See the RAGFlow Roadmap 2025.

🏄 Community

🙌 Contributing

RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our Contribution Guidelines first.
