Commit Graph

419 Commits

Author SHA1 Message Date
df3890827d Refa: change LLM chat output from full to delta (incremental) (#6534)
### What problem does this PR solve?

Change LLM chat output from full to delta (incremental)

### Type of change

- [x] Refactoring
2025-03-26 19:33:14 +08:00
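The full-to-delta refactor above can be sketched as a small adapter: assuming the pre-refactor stream yields the cumulative answer so far, this hypothetical helper (`to_delta` is an illustrative name, not the PR's actual function) emits only the newly generated suffix of each chunk.

```python
def to_delta(full_chunks):
    """Convert a stream of cumulative texts into incremental deltas.

    Assumes each incoming chunk contains the full answer generated so
    far; yields only the part not yet sent to the client.
    """
    sent = 0
    for full in full_chunks:
        delta = full[sent:]
        sent = len(full)
        if delta:
            yield delta

# Cumulative stream -> incremental pieces
print(list(to_delta(["Hel", "Hello", "Hello, wor", "Hello, world"])))
```

Clients consuming the stream then concatenate deltas instead of replacing the whole message on every chunk, which cuts per-chunk payload size for long answers.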
cc8029a732 Fix: uploading in chat box issue. (#6547)
### What problem does this PR solve?

#6228

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-26 15:37:48 +08:00
12ad746ee6 Fix: Bedrock model invocation error. (#6533)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-26 11:27:12 +08:00
60c3a253ad Fix: api-key issue for xinference. (#6490)
### What problem does this PR solve?

#2792

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-25 15:01:13 +08:00
095fc84cf2 Fix: claude max tokens. (#6484)
### What problem does this PR solve?

#6458

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-25 10:41:55 +08:00
b77ce4e846 Feat: support api-key for Ollama. (#6448)
### What problem does this PR solve?

#6189

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-24 14:53:17 +08:00
85eb3775d6 Refa: update Anthropic models. (#6445)
### What problem does this PR solve?

#6421

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-24 12:34:57 +08:00
a6aed0da46 Fix: rerank with YoudaoRerank issue. (#6396)
### What problem does this PR solve?

Fix rerank with YoudaoRerank issue: "'YoudaoRerank' object has no
attribute '_dynamic_batch_size'".


![17425412353825](https://github.com/user-attachments/assets/9ed304c7-317a-440e-acff-fe895fc20f07)


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-24 10:09:16 +08:00
efc4796f01 Fix ratelimit errors during document parsing (#6413)
### What problem does this PR solve?

When using an online large-model API to extract knowledge graphs for
the knowledge base, frequent rate-limit errors were triggered, causing
document parsing to fail. This commit fixes the issue by adding
exponential backoff and jitter to the API calls, which reduces the
frequency of rate-limit errors.


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-03-22 23:07:03 +08:00
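The backoff-and-jitter fix described above can be sketched as a retry wrapper. This is an illustrative version, not the PR's actual code: `RateLimitError` stands in for whatever exception the provider raises, and the retry parameters are assumptions.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the provider's rate-limit exception."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry fn on rate-limit errors with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            # Exponential backoff: 1s, 2s, 4s, ... capped at max_delay,
            # plus random "full jitter" so concurrent workers desynchronize.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay))
```

The jitter term matters as much as the exponential growth: without it, all parallel parsing tasks retry at the same instant and hit the rate limit together again.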
a2a4bfe3e3 Fix: change ollama default num_ctx. (#6395)
### What problem does this PR solve?

#6163

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-21 16:22:03 +08:00
85480f6292 Fix: the error of Ollama embeddings interface returning "500 Internal Server Error" (#6350)
### What problem does this PR solve?

Fix the error where the Ollama embeddings interface returns a “500
Internal Server Error” when using models such as xiaobu-embedding-v2 for
embedding.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-21 15:25:48 +08:00
d83911b632 Fix: huggingface rerank model issue. (#6385)
### What problem does this PR solve?

#6348

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-21 12:43:32 +08:00
5b04b7d972 Fix: rerank with vllm issue. (#6306)
### What problem does this PR solve?

#6301

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-20 11:52:42 +08:00
c6e1a2ca8a Feat: add TTS support for SILICONFLOW. (#6264)
### What problem does this PR solve?

#6244

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-03-19 12:52:12 +08:00
5cf610af40 Feat: add vision LLM PDF parser (#6173)
### What problem does this PR solve?

Add vision LLM PDF parser

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-03-18 14:52:20 +08:00
e9a6675c40 Fix: enable ollama api-key. (#6205)
### What problem does this PR solve?

#6189

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-18 13:37:34 +08:00
7e4d693054 Fix: in case response.choices[0].message.content is None. (#6190)
### What problem does this PR solve?

#6164

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-18 10:00:27 +08:00
56b228f187 Refa: remove max tokens for image2txt models. (#6078)
### What problem does this PR solve?

#6063

### Type of change


- [x] Refactoring
2025-03-14 13:51:45 +08:00
9c8060f619 0.17.1 release notes (#6021)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2025-03-13 14:43:24 +08:00
3571270191 Refa: refine the context window size warning. (#5993)
### What problem does this PR solve?


### Type of change
- [x] Refactoring
2025-03-12 19:40:54 +08:00
6e13922bdc Feat: Add qwq model support to Tongyi-Qianwen factory (#5981)
### What problem does this PR solve?

add qwq model support to Tongyi-Qianwen factory
https://github.com/infiniflow/ragflow/issues/5869

### Type of change

- [x] New Feature (non-breaking change which adds functionality)


![image](https://github.com/user-attachments/assets/49f5c6a0-ecaf-41dd-a23a-2009f854d62c)


![image](https://github.com/user-attachments/assets/93ffa303-920e-4942-8188-bcd6b7209204)


![1741774779438](https://github.com/user-attachments/assets/25f2fd1d-8640-4df0-9a08-78ee9daaa8fe)


![image](https://github.com/user-attachments/assets/4763cf6c-1f76-43c4-80ee-74dfd666a184)

Co-authored-by: zhaozhicheng <zhicheng.zhao@fastonetech.com>
2025-03-12 18:54:15 +08:00
b29539b442 Fix: CoHereRerank not respecting base_url when provided (#5784)
### What problem does this PR solve?

vLLM provider with a reranking model does not work: as vLLM uses under
the hood the [CoHereRerank
provider](https://github.com/infiniflow/ragflow/blob/v0.17.0/rag/llm/__init__.py#L250)
with a `base_url`, if this URL [is not passed to the Cohere
client](https://github.com/infiniflow/ragflow/blob/v0.17.0/rag/llm/rerank_model.py#L379-L382),
any attempt will end up on the Cohere SaaS (sending your private API key
in the process) instead of your vLLM instance.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2025-03-10 11:22:06 +08:00
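The shape of the fix described above can be illustrated with a minimal sketch. `FakeCohereClient` is a stand-in for `cohere.Client` (not the real library), and the class below is not RAGFlow's actual implementation; the point is simply that dropping `base_url` in the constructor silently routes every request to the SaaS default.

```python
class FakeCohereClient:
    """Stand-in for cohere.Client: falls back to the SaaS endpoint."""
    DEFAULT = "https://api.cohere.com"

    def __init__(self, api_key, base_url=None):
        self.endpoint = base_url or self.DEFAULT

class CoHereRerank:
    """Illustrative rerank wrapper forwarding base_url to the client.

    Before the fix, base_url was accepted here but never forwarded,
    so a vLLM deployment's URL was ignored and the API key leaked to
    the default Cohere endpoint.
    """
    def __init__(self, key, base_url=None):
        self.client = FakeCohereClient(api_key=key, base_url=base_url)
```

With the forwarding in place, a vLLM instance configured as `base_url="http://vllm:8000"` actually receives the rerank requests.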
df9b7b2fe9 Fix: rerank issue. (#5696)
### What problem does this PR solve?

#5673

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-03-06 15:05:19 +08:00
251ba7f058 Refa: remove max tokens since no one needs it. (#5690)
### What problem does this PR solve?

#5646 #5640

### Type of change

- [x] Refactoring
2025-03-06 11:29:40 +08:00
b8da2eeb69 Feat: support huggingface re-rank model. (#5684)
### What problem does this PR solve?

#5658

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-03-06 10:44:04 +08:00
4c9a3e918f Fix: add image2text issue. (#5431)
### What problem does this PR solve?

#5356

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-27 14:06:49 +08:00
0284248c93 Fix: correct wrong vLLM rerank model (#5399)
### What problem does this PR solve?

Correct wrong vLLM rerank model #4316 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-26 18:59:36 +08:00
cdcaae17c6 Feat: add VLLM (#5380)
### What problem does this PR solve?

Ready to add VLLM.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-02-26 16:04:53 +08:00
4e2afcd3b8 Fix FlagRerank max_length issue. (#5366)
### What problem does this PR solve?

#5352

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-26 11:01:13 +08:00
955801db2e Resolve superclass invocation error. (#5337)
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-25 17:42:29 +08:00
daddfc9e1b Remove dup gb2312, solve corrupt error. (#5326)
### What problem does this PR solve?

#5252 
#5325

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-25 12:22:37 +08:00
df3d0f61bd Fix base url missing for deepseek from Tongyi. (#5294)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-24 15:43:32 +08:00
ec96426c00 Tongyi adapts deepseek. (#5285)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-02-24 14:04:25 +08:00
569e40544d Refactor rerank model with dynamic batch processing and memory management (#5273)
### What problem does this PR solve?
Issue: https://github.com/infiniflow/ragflow/issues/5262
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: wenju.li <wenju.li@deepctr.cn>
2025-02-24 11:32:08 +08:00
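The dynamic-batch idea named in the title above can be sketched as follows. This is a hypothetical illustration, not the PR's code: `score_batch` stands in for whatever function scores one batch of query/passage pairs, and halving on `MemoryError` is an assumed policy.

```python
def rerank_in_batches(pairs, score_batch, batch_size=32, min_batch=1):
    """Score pairs in batches, shrinking the batch on memory pressure.

    On a MemoryError the batch size is halved and the same slice is
    retried, instead of failing the whole rerank request.
    """
    scores = []
    i = 0
    while i < len(pairs):
        batch = pairs[i:i + batch_size]
        try:
            scores.extend(score_batch(batch))
            i += batch_size  # advance only after a successful batch
        except MemoryError:
            if batch_size <= min_batch:
                raise  # cannot shrink further
            batch_size = max(min_batch, batch_size // 2)
    return scores
```

The retry keeps `i` unchanged, so the failed slice is re-scored at the smaller size and no pair is skipped or double-counted.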
4f2816c01c Add support to boto3 default connection (#5246)
### What problem does this PR solve?
 
This pull request includes changes to the initialization logic of the
`ChatModel` and `EmbeddingModel` classes to enhance the handling of AWS
credentials.

Use cases:
- Use env variables for credentials instead of managing them on the DB 
- Easy connection when deploying on an AWS machine

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2025-02-24 11:01:14 +08:00
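The fallback described above can be sketched without boto3 itself: when no key pair is stored in the database, simply omit the credential kwargs so `boto3.client(...)` resolves credentials through its default chain (environment variables, shared config, instance roles). The helper name and service name below are illustrative assumptions, not RAGFlow's actual code.

```python
def make_client_kwargs(access_key, secret_key, region):
    """Build kwargs for boto3.client, falling back to the default chain.

    If either key is empty, the credential kwargs are left out entirely,
    which lets boto3 pick up env vars or the machine's IAM role.
    """
    kwargs = {"service_name": "bedrock-runtime", "region_name": region}
    if access_key and secret_key:
        kwargs.update(
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
        )
    return kwargs
```

Passing empty strings for explicit credentials would otherwise make boto3 fail authentication rather than fall through to the default provider chain, which is why the kwargs are omitted rather than set to `None`.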
7ce675030b Support downloading models from ModelScope Community. (#5073)
This PR supports downloading models from ModelScope. The main
modifications are as follows:
- New Feature (non-breaking change which adds functionality)
- Documentation Update

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2025-02-24 10:12:20 +08:00
1a755e75c5 Remove v1 (#5220)
### What problem does this PR solve?

#5201

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-21 15:15:38 +08:00
d2929e432e Feat: add LLM provider PPIO (#5013)
### What problem does this PR solve?

Add an LLM provider: PPIO

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
2025-02-17 12:03:26 +08:00
b08bb56f6c Display thinking for deepseek r1 (#4904)
### What problem does this PR solve?
#4903
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-02-12 15:43:13 +08:00
2aa0cdde8f Fix Gemini chat issue. (#4757)
### What problem does this PR solve?

#4753

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-02-07 12:00:19 +08:00
036f37a627 fix: 'Stream' object has no attribute 'iter_lines' (#4686)
### What problem does this PR solve?

ERROR: 'Stream' object has no attribute 'iter_lines' with reference to
Claude/Anthropic chat streams

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Kyle Olmstead <k.olmstead@offensive-security.com>
2025-02-01 22:39:30 +08:00
4776fa5e4e Refactor for total_tokens. (#4652)
### What problem does this PR solve?

#4567
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-26 13:54:26 +08:00
2cb8edc42c Added GPUStack (#4649)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2025-01-26 12:25:02 +08:00
f1d9f4290e Fix TogetherAIEmbed. (#4623)
### What problem does this PR solve?

#4567

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-24 10:29:30 +08:00
dd0ebbea35 Light GraphRAG (#4585)
### What problem does this PR solve?

#4543

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-01-22 19:43:14 +08:00
3805621564 Fix xinference rerank issue. (#4499)
### What problem does this PR solve?
#4495
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-16 11:35:51 +08:00
be5f830878 Truncate text for zhipu embedding. (#4490)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-15 14:36:27 +08:00
7944aacafa Feat: add gpustack model provider (#4469)
### What problem does this PR solve?

Add GPUStack as a new model provider.
[GPUStack](https://github.com/gpustack/gpustack) is an open-source GPU
cluster manager for running LLMs. Currently, locally deployed models in
GPUStack cannot integrate well with RAGFlow. GPUStack provides both
OpenAI compatible APIs (Models / Chat Completions / Embeddings /
Speech2Text / TTS) and other APIs like Rerank. We would like to use
GPUStack as a model provider in ragflow.

[GPUStack Docs](https://docs.gpustack.ai/latest/quickstart/)

Related issue: https://github.com/infiniflow/ragflow/issues/4064.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)



### Testing Instructions
1. Install GPUStack and deploy the `llama-3.2-1b-instruct` LLM, `bge-m3`
text embedding model, `bge-reranker-v2-m3` rerank model,
`faster-whisper-medium` speech-to-text model, and `cosyvoice-300m-sft`
text-to-speech model in GPUStack.
2. Add provider in ragflow settings.
3. Testing in ragflow.
2025-01-15 14:15:58 +08:00
b93c136797 Fix gemini embedding error. (#4356)
### What problem does this PR solve?

#4314

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-01-06 14:41:29 +08:00
50f209204e Synchronize with enterprise version (#4325)
### Type of change

- [x] Refactoring
2025-01-02 13:44:44 +08:00