Commit Graph

198 Commits

Author SHA1 Message Date
3fcf2ee54c feat: add new LLM provider Jiekou.AI (#11300)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Jason <ggbbddjm@gmail.com>
2025-11-17 19:47:46 +08:00
883df22aa2 Update LLM factories ranks in llm_factories.json (#11184)
### What problem does this PR solve?

[Update LLM factory ranks in llm_factories.json]

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-12 09:38:06 +08:00
f441f8ffc2 Fix: waitForResponse component. (#11172)
### What problem does this PR solve?

#10056

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-11-11 16:58:47 +08:00
ba6470a7a5 Chore(config): Added rank values for the LLM vendors and remove deprecated LLM (#11133)
### What problem does this PR solve?

Added vendor ranking so that frequently used model providers appear
higher on the page for easier access.
Removed deprecated LLM configurations from llm_factories.json to
streamline model management.
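
As a rough illustration of how a rank value could drive this ordering, here is a minimal sketch; the `factory_llm_infos` key, the `rank` field name, and the "lower rank sorts first" convention are assumptions about the JSON layout rather than details confirmed by this PR:

```python
import json

# Illustrative only: assumes llm_factories.json holds a "factory_llm_infos" list
# and that each entry carries a numeric "rank" (smaller = listed higher).
with open("llm_factories.json", encoding="utf-8") as f:
    factories = json.load(f)["factory_llm_infos"]

ordered = sorted(factories, key=lambda fac: fac.get("rank", 10**9))
for fac in ordered[:5]:
    print(fac.get("name"), fac.get("rank"))
```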

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-10 19:17:35 +08:00
a191933f81 Fix(config): Add raptor_kwd field to infinity mapping (#11146)
### What problem does this PR solve?

fix infinity "INSERT: Column raptor_kwd not found in table" error

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 19:02:25 +08:00
d207291217 Fix: add download stats to kb logs. (#11112)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 13:28:07 +08:00
bf382e5c4d Fix: remove unsupported models in siliconflow api (#11126)
### What problem does this PR solve?

Fix: remove unsupported models in siliconflow api

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-11-10 13:27:42 +08:00
d016a06fd5 Feat/monitor task (#11116)
### What problem does this PR solve?

Show task executor.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-10 12:51:39 +08:00
9fcc4946e2 Feat: add kimi-k2-thinking and moonshot-v1-vision-preview (#11110)
### What problem does this PR solve?

Add kimi-k2-thinking and moonshot-v1-vision-preview.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-11-07 19:52:57 +08:00
40b2c48957 Chore(config): remove Youdao and BAAI embedding model providers (#10873)
### What problem does this PR solve?

This commit removes the Youdao and BAAI entries from the LLM factories
configuration as they are no longer needed or supported.

### Type of change

- [x] Config update
2025-10-29 19:38:57 +08:00
84d1ffe44c Feature/add new models for token pony and bug fix for use llm (#10823)
Add new models for TokenPony and a bug fix for LLM usage.

Co-authored-by: huangzl <huangzl@shinemo.com>
2025-10-28 10:04:41 +08:00
33a189f620 Feat: add TCADP Parser (#10775)
### What problem does this PR solve?

This PR adds a new TCADP (Tencent Cloud Advanced Document Processing)
parser to RAGFlow, enabling users to leverage Tencent Cloud's document
parsing capabilities for more accurate and structured document
processing. The implementation includes:
- New TCADP Parser: a complete implementation of Tencent Cloud's document parsing API without SDK dependency
- Configuration Support: added configuration options in service_conf.yaml for Tencent Cloud API credentials
- Frontend Integration: updated UI components to support the new TCADP parser option
- Error Handling: comprehensive error handling and retry mechanisms for API calls (see the sketch after this list)
- Result Processing: support for both SSE streaming and JSON response formats from the Tencent Cloud API
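
To make the retry behaviour concrete, here is a minimal, hypothetical sketch of calling a document-parsing endpoint with retries; the URL, payload fields, and auth scheme are placeholders, not the real Tencent Cloud API:

```python
import time
import requests

# Hypothetical sketch of the retry pattern described above; the endpoint,
# credential handling, and payload are placeholders, not Tencent Cloud's API.
TCADP_URL = "https://example.invalid/tcadp/parse"  # placeholder endpoint

def parse_document(file_url: str, secret_id: str, secret_key: str, retries: int = 3) -> dict:
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            resp = requests.post(
                TCADP_URL,
                json={"FileUrl": file_url},          # placeholder payload
                auth=(secret_id, secret_key),        # placeholder auth scheme
                timeout=60,
            )
            resp.raise_for_status()
            return resp.json()  # JSON response mode; SSE streaming handled separately
        except requests.RequestException as err:
            last_err = err
            time.sleep(2 ** attempt)  # exponential backoff before the next attempt
    raise RuntimeError(f"TCADP parsing failed after {retries} attempts") from last_err
```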

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-10-27 15:14:58 +08:00
73144e278b Don't release full image (#10654)
### What problem does this PR solve?

- Introduced a gpu profile in .env
- Added Dockerfile_tei
- Fixed datrie
- Removed the LIGHTEN flag

### Type of change

- [x] Documentation Update
- [x] Refactoring
2025-10-23 23:02:27 +08:00
d616354d66 Fix: model parameter (#10730)
### What problem does this PR solve?

Fix the model parameter. #10729

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-22 19:52:37 +08:00
5b2e5dd334 Feat: Gemini supports video parsing (#10671)
### What problem does this PR solve?

Gemini supports video parsing.


![img_v3_02r8_adbd5adc-d665-4756-9a00-3ae0f12224fg](https://github.com/user-attachments/assets/30d8d296-c336-4b55-9823-803979e705ca)


![img_v3_02r8_ab60c046-1727-4029-ad2e-66097fd3ccbg](https://github.com/user-attachments/assets/441b1487-a970-427e-98b6-6e1e002f2bad)

Close: #10617

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-20 16:49:47 +08:00
b15643bd80 Feat:VolcEngine Model type add IMAGE2TEXT (#10629)
### What problem does this PR solve?
Issue: [#9004](https://github.com/infiniflow/ragflow/issues/9004)
Change: add the IMAGE2TEXT model type for VolcEngine.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-17 11:43:22 +08:00
8af769de41 Fix: add toc_kwd field and update page_num_int type (#10596)
### What problem does this PR solve?

- Added new field 'toc_kwd' to infinity_mapping.json for table of
contents keyword support
- Changed page_num_int from integer to array type in task_executor.py to
handle multiple page numbers

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-10-16 12:47:24 +08:00
9e73f799b2 Feat: add Zhipu GLM-ASR model (#10529)
### What problem does this PR solve?

Add Zhipu GLM-ASR model

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-10-14 09:32:45 +08:00
ad56137a59 Feat: OpenSearch's support for newly embedding models (#10494)
### What problem does this PR solve?

Fix issue: https://github.com/infiniflow/ragflow/issues/10402

Newly released embedding models support vector dimensions of up to
4096, while OpenSearch's current maximum supported dimension is 1536.
In my testing, a 4096-dimension vector is treated as a float type,
which OpenSearch does not accept.

Besides, OpenSearch supports up to 16000 dimensions by default with the
Faiss vector engine. According to:
https://docs.opensearch.org/2.19/field-types/supported-field-types/knn-methods-engines/

I added support for up to 10240 dimensions in OpenSearch, which I think
will be sufficient for the future.

In my testing it worked well on my own server (treated as knn_vector),
using qwen3-embedding:8b as the embedding model:

![image](https://github.com/user-attachments/assets/a9b2d284-fcf6-4cea-859a-6aadccf36ace)
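
For reference, a minimal sketch of creating such a high-dimension `knn_vector` mapping through opensearch-py; the host, credentials, and index/field names are illustrative assumptions, not RAGFlow's actual schema:

```python
from opensearchpy import OpenSearch

# Minimal sketch of a high-dimension knn_vector mapping; connection details and
# index/field names here are assumptions for illustration only.
client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9201}],
    http_auth=("admin", "admin"),
    use_ssl=False,
)

body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "q_4096_vec": {
                "type": "knn_vector",
                "dimension": 4096,  # previously capped at 1536 in the mapping
                "method": {"name": "hnsw", "engine": "faiss", "space_type": "innerproduct"},
            }
        }
    },
}
client.indices.create(index="kb_4096_test", body=body)
```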


### Type of change

- [x] New Feature (non-breaking change which adds functionality)


By the way, I will keep focusing on Elasticsearch/OpenSearch as search
engines and vector databases.

Co-authored-by: 张雨豪 <zhangyh80@chinatelecom.cn>
2025-10-11 19:58:12 +08:00
932781ea4e Fix: incorrect agent template #10393 (#10491)
### What problem does this PR solve?

Fix: incorrect agent template #10493

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2025-10-11 19:37:42 +08:00
cbf04ee470 Feat: Use data pipeline to visualize the parsing configuration of the knowledge base (#10423)
### What problem does this PR solve?

#9869

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: jinhai <haijin.chn@gmail.com>
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: chanx <1243304602@qq.com>
Co-authored-by: balibabu <cike8899@users.noreply.github.com>
Co-authored-by: Lynn <lynn_inf@hotmail.com>
Co-authored-by: 纷繁下的无奈 <zhileihuang@126.com>
Co-authored-by: huangzl <huangzl@shinemo.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
Co-authored-by: Wilmer <33392318@qq.com>
Co-authored-by: Adrian Weidig <adrianweidig@gmx.net>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Yongteng Lei <yongtengrey@outlook.com>
Co-authored-by: Liu An <asiro@qq.com>
Co-authored-by: buua436 <66937541+buua436@users.noreply.github.com>
Co-authored-by: BadwomanCraZY <511528396@qq.com>
Co-authored-by: cucusenok <31804608+cucusenok@users.noreply.github.com>
Co-authored-by: Russell Valentine <russ@coldstonelabs.org>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Billy Bao <newyorkupperbay@gmail.com>
Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: TensorNull <129579691+TensorNull@users.noreply.github.com>
Co-authored-by: TensorNull <tensor.null@gmail.com>
Co-authored-by: TeslaZY <TeslaZY@outlook.com>
Co-authored-by: Ajay <160579663+aybanda@users.noreply.github.com>
Co-authored-by: AB <aj@Ajays-MacBook-Air.local>
Co-authored-by: 天海蒼灆 <huangaoqin@tecpie.com>
Co-authored-by: He Wang <wanghechn@qq.com>
Co-authored-by: Atsushi Hatakeyama <atu729@icloud.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Mohamed Mathari <155896313+melmathari@users.noreply.github.com>
Co-authored-by: Mohamed Mathari <nocodeventure@Mac-mini-van-Mohamed.fritz.box>
Co-authored-by: Stephen Hu <stephenhu@seismic.com>
Co-authored-by: Shaun Zhang <zhangwfjh@users.noreply.github.com>
Co-authored-by: zhimeng123 <60221886+zhimeng123@users.noreply.github.com>
Co-authored-by: mxc <mxc@example.com>
Co-authored-by: Dominik Novotný <50611433+SgtMarmite@users.noreply.github.com>
Co-authored-by: EVGENY M <168018528+rjohny55@users.noreply.github.com>
Co-authored-by: mcoder6425 <mcoder64@gmail.com>
Co-authored-by: lemsn <lemsn@msn.com>
Co-authored-by: lemsn <lemsn@126.com>
Co-authored-by: Adrian Gora <47756404+adagora@users.noreply.github.com>
Co-authored-by: Womsxd <45663319+Womsxd@users.noreply.github.com>
Co-authored-by: FatMii <39074672+FatMii@users.noreply.github.com>
2025-10-09 12:36:19 +08:00
dfc5fa1f4d Feat: add DeerAPI support (#10303)
### Related issues
#10078 

### What problem does this PR solve?
Integrate DeerAPI provider.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update

Co-authored-by: DeerAPI <tensor.null@gmail.com>
2025-10-09 11:14:49 +08:00
80f851922a Feat: add support for LongCat-Flash-Thinking and Claude Sonnet 4.5 (#10374)
### What problem does this PR solve?

Add support for LongCat-Flash-Thinking and Claude Sonnet 4.5.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-30 12:04:14 +08:00
3f595029d7 fix: Wrong Qwen models's ID (#10272)
### What problem does this PR solve?
Fix: wrong Qwen model IDs.
[Bug]: ERROR: litellm.NotFoundError: DashscopeException - The model
Qwen/Qwen3-Omni-Flash does not exist or you do not have access to it.
Change: delete the wrong Qwen model ID.

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
2025-09-25 09:43:44 +08:00
e8f5a4da56 add model: qwen3-max and qewn3-vl series (#10256)
### What problem does this PR solve?
Add the qwen3-max and qwen3-vl series.
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 20:00:53 +08:00
a9472e3652 add Qwen models (#10263)
### What problem does this PR solve?

add Qwen models

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-24 16:52:12 +08:00
da82566304 Fix: resolve hash collisions by switching to UUID &correct logic for always-true statements & Update GPT api integration & Support qianwen-deepresearch (#10208)
### What problem does this PR solve?

Fix: resolve hash collisions by switching to UUID and correct the logic
for always-true statements, solves #10165.
Feat: update the GPT API integration, solves #10204.
Feat: support qianwen-deepresearch, solves #10163.
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-09-23 09:34:30 +08:00
d11b1628a1 Feat: add admin CLI and admin service (#10186)
### What problem does this PR solve?

Introduce new feature: RAGFlow system admin service and CLI

### Introduction

Admin Service is a dedicated management component designed to monitor,
maintain, and administrate the RAGFlow system. It provides comprehensive
tools for ensuring system stability, performing operational tasks, and
managing users and permissions efficiently.

The service offers monitoring of critical components, including the
RAGFlow server, Task Executor processes, and dependent services such as
MySQL, Infinity / Elasticsearch, Redis, and MinIO. It automatically
checks their health status, resource usage, and uptime, and performs
restarts in case of failures to minimize downtime.

For user and system management, it supports listing, creating,
modifying, and deleting users and their associated resources like
knowledge bases and Agents.

Built with scalability and reliability in mind, the Admin Service
ensures smooth system operation and simplifies maintenance workflows.

It consists of a server-side Service and a command-line client (CLI),
both implemented in Python. User commands are parsed using the Lark
parsing toolkit.

- **Admin Service**: A backend service that interfaces with the RAGFlow
system to execute administrative operations and monitor its status.
- **Admin CLI**: A command-line interface that allows users to connect
to the Admin Service and issue commands for system management.
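
As a toy illustration of the Lark-based command parsing mentioned above, the following sketch parses a couple of admin commands; this grammar is a simplified stand-in, not the one actually shipped with the Admin Service:

```python
from lark import Lark

# Toy grammar for case-insensitive, semicolon-terminated admin commands;
# illustrative only, the real grammar will differ.
GRAMMAR = r"""
    start: command ";"
    command: "LIST"i "USERS"i               -> list_users
           | "LIST"i "SERVICES"i            -> list_services
           | "SHOW"i "USER"i ESCAPED_STRING -> show_user
    %import common.ESCAPED_STRING
    %import common.WS
    %ignore WS
"""

parser = Lark(GRAMMAR)
print(parser.parse('show user "jeffery";').children[0].data)  # -> show_user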

### Starting the Admin Service

1. Before starting the Admin Service, make sure the RAGFlow system is
already running.

2.  Run the service script:
    ```bash
    python admin/admin_server.py
    ```
The service will start and listen for incoming connections from the CLI
on the configured port.

### Using the Admin CLI

1.  Ensure the Admin Service is running.
2.  Launch the CLI client:
    ```bash
    python admin/admin_client.py -h 0.0.0.0 -p 9381
    ```

## Supported Commands
Commands are case-insensitive and must be terminated with a semicolon
(`;`).
### Service Management Commands
-  [x] `LIST SERVICES;`
    -   Lists all available services within the RAGFlow system.
-  [ ] `SHOW SERVICE <id>;`
    -   Shows detailed status information for the service identified by `<id>`.
-  [ ] `STARTUP SERVICE <id>;`
    -   Attempts to start the service identified by `<id>`.
-  [ ] `SHUTDOWN SERVICE <id>;`
    -   Attempts to gracefully shut down the service identified by `<id>`.
-  [ ] `RESTART SERVICE <id>;`
    -   Attempts to restart the service identified by `<id>`.
### User Management Commands
-  [x] `LIST USERS;`
    -   Lists all users known to the system.
-  [ ] `SHOW USER '<username>';`
    -   Shows details and permissions for the specified user. The username must be enclosed in single or double quotes.
-  [ ] `DROP USER '<username>';`
    -   Removes the specified user from the system. Use with caution.
-  [ ] `ALTER USER PASSWORD '<username>' '<new_password>';`
    -   Changes the password for the specified user.
### Data and Agent Commands
-  [ ] `LIST DATASETS OF '<username>';`
    -   Lists the datasets associated with the specified user.
-  [ ] `LIST AGENTS OF '<username>';`
    -   Lists the agents associated with the specified user.
### Meta-Commands
Meta-commands are prefixed with a backslash (`\`).
-   `\?` or `\help`
    -   Shows help information for the available commands.
-   `\q` or `\quit`
    -   Exits the CLI application.
## Examples
```commandline
admin> list users;
+-------------------------------+------------------------+-----------+-------------+
| create_date                   | email                  | is_active | nickname    |
+-------------------------------+------------------------+-----------+-------------+
| Fri, 22 Nov 2024 16:03:41 GMT | jeffery@infiniflow.org | 1         | Jeffery     |
| Fri, 22 Nov 2024 16:10:55 GMT | aya@infiniflow.org     | 1         | Waterdancer |
+-------------------------------+------------------------+-----------+-------------+
admin> list services;
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
| extra                                                                                     | host      | id | name          | port  | service_type   |
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
| {}                                                                                        | 0.0.0.0   | 0  | ragflow_0     | 9380  | ragflow_server |
| {'meta_type': 'mysql', 'password': 'infini_rag_flow', 'username': 'root'}                 | localhost | 1  | mysql         | 5455  | meta_data      |
| {'password': 'infini_rag_flow', 'store_type': 'minio', 'user': 'rag_flow'}                | localhost | 2  | minio         | 9000  | file_store     |
| {'password': 'infini_rag_flow', 'retrieval_type': 'elasticsearch', 'username': 'elastic'} | localhost | 3  | elasticsearch | 1200  | retrieval      |
| {'db_name': 'default_db', 'retrieval_type': 'infinity'}                                   | localhost | 4  | infinity      | 23817 | retrieval      |
| {'database': 1, 'mq_type': 'redis', 'password': 'infini_rag_flow'}                        | localhost | 5  | redis         | 6379  | message_queue  |
+-------------------------------------------------------------------------------------------+-----------+----+---------------+-------+----------------+
```

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2025-09-22 10:37:49 +08:00
f12b9fdcd4 Feat: add CometAPI to LLMFactory and update related mappings (#10119)
### Related issues
#10078

### What problem does this PR solve?
Integrate CometAPI provider.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
2025-09-18 09:51:29 +08:00
e1d86cfee3 Feat: add TokenPony model provider (#9932)
### What problem does this PR solve?

Add TokenPony as an LLM provider.

Co-authored-by: huangzl <huangzl@shinemo.com>
2025-09-11 17:25:31 +08:00
936f27e9e5 Feat: add LongCat-Flash-Chat (#9973)
### What problem does this PR solve?

Add LongCat-Flash-Chat from Meituan, deepseek v3.1 from SiliconFlow,
kimi-k2-09-05-preview and kimi-k2-turbo-preview from Moonshot.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-08 19:00:52 +08:00
994517495f add model: qwen3-max-preview (#9959)
### What problem does this PR solve?
Add the qwen3-max-preview model.
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-09-08 10:39:23 +08:00
45f52e85d7 Feat: refine dataflow and initialize dataflow app (#9952)
### What problem does this PR solve?

Refine dataflow and initialize dataflow app.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-09-05 18:50:46 +08:00
56cd576876 Refa: revise the implementation of LightRAG and enable response caching (#9828)
### What problem does this PR solve?

This revision performed a comprehensive check of LightRAG to ensure the
correctness of its implementation. It **did not involve** Entity
Resolution or Community Reports Generation. Below is an example using
default entity types and the General chunking method, which shows good
results in both time and effectiveness. Moreover, response caching is
enabled for resuming failed tasks.
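
A rough sketch of the response-caching idea, keyed by a hash of the prompt so a resumed task can reuse earlier LLM outputs; the in-memory store and the `llm.chat` signature are illustrative assumptions, not the actual LightRAG code:

```python
import hashlib
import json

# Illustrative only: cache keyed by a hash of (system prompt, history, gen_conf).
_cache: dict[str, str] = {}

def cached_chat(llm, system: str, history: list[dict], gen_conf: dict) -> str:
    key = hashlib.sha256(
        json.dumps([system, history, gen_conf], sort_keys=True).encode("utf-8")
    ).hexdigest()
    if key in _cache:
        return _cache[key]      # resume path: reuse the previously cached response
    response = llm.chat(system, history, gen_conf)  # assumed chat interface
    _cache[key] = response      # remember it for retried/resumed tasks
    return response
```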


[The-Necklace.pdf](https://github.com/user-attachments/files/22042432/The-Necklace.pdf)

After:


![img_v3_02pk_177dbc6a-e7cc-4732-b202-ad4682d171fg](https://github.com/user-attachments/assets/5ef1d93a-9109-4fe9-8a7b-a65add16f82b)


```bash
Begin at:
Fri, 29 Aug 2025 16:48:03 GMT
Duration:
222.31 s
Progress:
16:48:04 Task has been received.
16:48:06 Page(1~7): Start to parse.
16:48:06 Page(1~7): OCR started
16:48:08 Page(1~7): OCR finished (1.89s)
16:48:11 Page(1~7): Layout analysis (3.72s)
16:48:11 Page(1~7): Table analysis (0.00s)
16:48:11 Page(1~7): Text merged (0.00s)
16:48:11 Page(1~7): Finish parsing.
16:48:12 Page(1~7): Generate 7 chunks
16:48:12 Page(1~7): Embedding chunks (0.29s)
16:48:12 Page(1~7): Indexing done (0.04s). Task done (7.84s)
16:48:17 Start processing for f421fb06849e11f0bdd32724b93a52b2: She had no dresses, no je...
16:48:17 Start processing for f421fb06849e11f0bdd32724b93a52b2: Her husband, already half...
16:48:17 Start processing for f421fb06849e11f0bdd32724b93a52b2: And this life lasted ten ...
16:48:17 Start processing for f421fb06849e11f0bdd32724b93a52b2: Then she asked, hesitatin...
16:49:30 Completed processing for f421fb06849e11f0bdd32724b93a52b2: She had no dresses, no je... after 1 gleanings, 21985 tokens.
16:49:30 Entities extraction of chunk 3 1/7 done, 12 nodes, 13 edges, 21985 tokens.
16:49:40 Completed processing for f421fb06849e11f0bdd32724b93a52b2: Finally, she replied, hes... after 1 gleanings, 22584 tokens.
16:49:40 Entities extraction of chunk 5 2/7 done, 19 nodes, 19 edges, 22584 tokens.
16:50:02 Completed processing for f421fb06849e11f0bdd32724b93a52b2: Then she asked, hesitatin... after 1 gleanings, 24610 tokens.
16:50:02 Entities extraction of chunk 0 3/7 done, 16 nodes, 28 edges, 24610 tokens.
16:50:03 Completed processing for f421fb06849e11f0bdd32724b93a52b2: And this life lasted ten ... after 1 gleanings, 24031 tokens.
16:50:04 Entities extraction of chunk 1 4/7 done, 24 nodes, 22 edges, 24031 tokens.
16:50:14 Completed processing for f421fb06849e11f0bdd32724b93a52b2: So they begged the jewell... after 1 gleanings, 24635 tokens.
16:50:14 Entities extraction of chunk 6 5/7 done, 27 nodes, 26 edges, 24635 tokens.
16:50:29 Completed processing for f421fb06849e11f0bdd32724b93a52b2: Her husband, already half... after 1 gleanings, 25758 tokens.
16:50:29 Entities extraction of chunk 2 6/7 done, 25 nodes, 35 edges, 25758 tokens.
16:51:35 Completed processing for f421fb06849e11f0bdd32724b93a52b2: The Necklace By Guy de Ma... after 1 gleanings, 27491 tokens.
16:51:35 Entities extraction of chunk 4 7/7 done, 39 nodes, 37 edges, 27491 tokens.
16:51:35 Entities and relationships extraction done, 147 nodes, 177 edges, 171094 tokens, 198.58s.
16:51:35 Entities merging done, 0.01s.
16:51:35 Relationships merging done, 0.01s.
16:51:35 ignored 7 relations due to missing entities.
16:51:35 generated subgraph for doc f421fb06849e11f0bdd32724b93a52b2 in 198.68 seconds.
16:51:35 run_graphrag f421fb06849e11f0bdd32724b93a52b2 graphrag_task_lock acquired
16:51:35 set_graph removed 0 nodes and 0 edges from index in 0.00s.
16:51:35 Get embedding of nodes: 9/147
16:51:35 Get embedding of nodes: 109/147
16:51:37 Get embedding of edges: 9/170
16:51:37 Get embedding of edges: 109/170
16:51:40 set_graph converted graph change to 319 chunks in 4.21s.
16:51:40 Insert chunks: 4/319
16:51:40 Insert chunks: 104/319
16:51:40 Insert chunks: 204/319
16:51:40 Insert chunks: 304/319
16:51:40 set_graph added/updated 147 nodes and 170 edges from index in 0.53s.
16:51:40 merging subgraph for doc f421fb06849e11f0bdd32724b93a52b2 into the global graph done in 4.79 seconds.
16:51:40 Knowledge Graph done (204.29s)
```

Before:


![img_v3_02pk_63370edf-ecee-4ee8-8ac8-69c8d2c712fg](https://github.com/user-attachments/assets/1162eb0f-68c2-4de5-abe0-cdfa168f71de)

```bash
Begin at:
Fri, 29 Aug 2025 17:00:47 GMT
processDuration:
173.38 s
Progress:
17:00:49 Task has been received.
17:00:51 Page(1~7): Start to parse.
17:00:51 Page(1~7): OCR started
17:00:53 Page(1~7): OCR finished (1.82s)
17:00:57 Page(1~7): Layout analysis (3.64s)
17:00:57 Page(1~7): Table analysis (0.00s)
17:00:57 Page(1~7): Text merged (0.00s)
17:00:57 Page(1~7): Finish parsing.
17:00:57 Page(1~7): Generate 7 chunks
17:00:57 Page(1~7): Embedding chunks (0.31s)
17:00:57 Page(1~7): Indexing done (0.03s). Task done (7.88s)
17:00:57 created task graphrag
17:01:00 Task has been received.
17:02:17 Entities extraction of chunk 1 1/7 done, 9 nodes, 9 edges, 10654 tokens.
17:02:31 Entities extraction of chunk 2 2/7 done, 12 nodes, 13 edges, 11066 tokens.
17:02:33 Entities extraction of chunk 4 3/7 done, 9 nodes, 10 edges, 10433 tokens.
17:02:42 Entities extraction of chunk 5 4/7 done, 11 nodes, 14 edges, 11290 tokens.
17:02:52 Entities extraction of chunk 6 5/7 done, 13 nodes, 15 edges, 11039 tokens.
17:02:55 Entities extraction of chunk 3 6/7 done, 14 nodes, 13 edges, 11466 tokens.
17:03:32 Entities extraction of chunk 0 7/7 done, 19 nodes, 18 edges, 13107 tokens.
17:03:32 Entities and relationships extraction done, 71 nodes, 89 edges, 79055 tokens, 149.66s.
17:03:32 Entities merging done, 0.01s.
17:03:32 Relationships merging done, 0.01s.
17:03:32 ignored 1 relations due to missing entities.
17:03:32 generated subgraph for doc b1d9d3b6848711f0aacd7ddc0714c4d3 in 149.69 seconds.
17:03:32 run_graphrag b1d9d3b6848711f0aacd7ddc0714c4d3 graphrag_task_lock acquired
17:03:32 set_graph removed 0 nodes and 0 edges from index in 0.00s.
17:03:32 Get embedding of nodes: 9/71
17:03:33 Get embedding of edges: 9/88
17:03:34 set_graph converted graph change to 161 chunks in 2.27s.
17:03:34 Insert chunks: 4/161
17:03:34 Insert chunks: 104/161
17:03:34 set_graph added/updated 71 nodes and 88 edges from index in 0.28s.
17:03:34 merging subgraph for doc b1d9d3b6848711f0aacd7ddc0714c4d3 into the global graph done in 2.60 seconds.
17:03:34 Knowledge Graph done (153.18s)

```

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
- [x] Performance Improvement
2025-08-29 17:58:36 +08:00
209ef09dc3 Feat: add Zhipu GLM-4.5 model series (#9715)
### What problem does this PR solve?

Add Zhipu GLM-4.5 model series. #9708.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-26 13:48:00 +08:00
ycz 370c8bc25b Update llm_factories.json (#9714)
### What problem does this PR solve?

add ZhipuAI GLM-4.5 model series

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-26 11:49:01 +08:00
787e0c6786 Refa: OpenAI whisper-1 (#9552)
### What problem does this PR solve?

Refactor OpenAI to enable audio parsing.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Refactoring
2025-08-19 16:41:18 +08:00
fe32952825 Fix: Gemini parameters error (#9520)
### What problem does this PR solve?

Fix Gemini parameters error.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-08-18 14:51:10 +08:00
99df0766fe Feat: add SMTP support for user invitation emails (#9479)
### What problem does this PR solve?

Add SMTP support for user invitation emails
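
For illustration, a generic SMTP send of an invitation mail using the Python standard library; host, port, credentials, and addresses are placeholders, not the actual service configuration keys:

```python
import smtplib
from email.message import EmailMessage

# Generic sketch of sending an invitation mail over SMTP; all values below are
# placeholders for illustration only.
def send_invitation(to_addr: str, invite_link: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = "You are invited to join a RAGFlow team"
    msg["From"] = "no-reply@example.com"
    msg["To"] = to_addr
    msg.set_content(f"Click to accept the invitation: {invite_link}")

    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()                        # upgrade to TLS before authenticating
        server.login("smtp-user", "smtp-password")
        server.send_message(msg)
```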

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-15 18:12:20 +08:00
421657f64b Feat: allows setting multiple types of default models in service config (#9404)
### What problem does this PR solve?

Allow setting multiple types of default models in the service config.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-13 09:46:05 +08:00
79481becea Feat: supports GPT-5 (#9320)
### What problem does this PR solve?

Supports GPT-5.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-08-08 11:54:40 +08:00
46a35f44da Feat: add Claude Opus 4.1 (#9268)
### What problem does this PR solve?

Add Claude Opus 4.1.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Refactoring
2025-08-06 10:57:03 +08:00
b26088ab70 Add a series of qwen3 latest SOTA models (#9140)
### What problem does this PR solve?

Add a series of the latest Qwen3 SOTA models:
qwen3-coder-480b-a35b-instruct, qwen3-30b-a3b-instruct-2507,
qwen3-30b-a3b-thinking-2507, qwen3-235b-a22b-instruct-2507,
qwen3-235b-a22b-thinking-2507
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2025-08-01 15:19:51 +08:00
4b98119c52 Fix: kimi-latest is not authorized (#9151)
### What problem does this PR solve?

Fix kimi-latest is not authorized.

Add kimi-thinking-preview.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2025-08-01 12:40:58 +08:00
aeaeb169e4 Feat/support 302ai provider (#8742)
### What problem does this PR solve?

Support 302.AI provider.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-31 14:48:30 +08:00
46ded9d329 add Kimi-K2-Instruct from Tongyi-Qianwen API (#9125)
### What problem does this PR solve?

add Kimi-K2-Instruct from Tongyi-Qianwen API

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-31 14:42:32 +08:00
342a04ec8a Added infinity rank_feature support (#9044)
### What problem does this PR solve?

Added infinity rank_feature support

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-29 09:14:23 +08:00
7ebc1f0943 Feat: add model provider DeepInfra (#9003)
### What problem does this PR solve?

Add model provider DeepInfra. This model list comes from our community. 

NOTE: most endpoints haven't been tested, but they should work as OpenAI
does.
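
Since the endpoints are expected to behave like OpenAI's, calling them through the standard OpenAI Python client might look like the following sketch; the base URL and model name are illustrative assumptions:

```python
from openai import OpenAI

# Sketch of using an OpenAI-compatible provider; base_url and model name are
# illustrative assumptions, not values confirmed by this PR.
client = OpenAI(
    api_key="YOUR_DEEPINFRA_API_KEY",
    base_url="https://api.deepinfra.com/v1/openai",
)

resp = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello from RAGFlow"}],
)
print(resp.choices[0].message.content)
```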

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-23 18:10:35 +08:00
0d7244e4a4 Fix: Adds newest Gemini models to fit google's standard API rate limits (#8970)
### What problem does this PR solve?

Adds configurations for gemini-2.5-flash and Gemini 2.5-pro models,
including tags, maximum token limits, and model types.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-07-23 10:18:04 +08:00
ed7bea060f Feat: add Kimi model series support (#8866)
### What problem does this PR solve?

Add Kimi model series support.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-07-16 15:31:57 +08:00