Added release notes for v0.15.0 (#4056)

### What problem does this PR solve?



### Type of change


- [x] Documentation Update
Author: writinwaters
Date: 2024-12-18 15:46:31 +08:00
Committed by: GitHub
Parent: a45ba3a91e
Commit: bfdc4944a3
7 changed files with 52 additions and 13 deletions

@@ -9,6 +9,8 @@ import TabItem from '@theme/TabItem';
Run models locally using Ollama, Xinference, or other frameworks.
---
RAGFlow supports deploying models locally using Ollama, Xinference, IPEX-LLM, or jina. If you have locally deployed models to leverage, or wish to enable GPU or CUDA for inference acceleration, you can bind Ollama or Xinference into RAGFlow and use either as a local "server" for interacting with your local models.
RAGFlow seamlessly integrates with Ollama and Xinference, with no further environment configuration required. You can use them to deploy two types of local models in RAGFlow: chat models and embedding models.
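
To make the local "server" idea concrete, here is a minimal sketch of how a client (RAGFlow plays this role internally) talks to a locally running Ollama instance over its REST API. It assumes Ollama's default port 11434 and that the models have already been pulled; `llama3` and `nomic-embed-text` are illustrative model choices, not requirements.

```python
# Minimal sketch: exercise a local Ollama server as both a chat backend and
# an embedding backend. Assumes Ollama listens on its default port 11434 and
# that the named models were pulled beforehand (e.g. `ollama pull llama3`).
import requests

OLLAMA_URL = "http://localhost:11434"

# Chat model: one-shot, non-streaming completion.
chat = requests.post(
    f"{OLLAMA_URL}/api/chat",
    json={
        "model": "llama3",  # illustrative chat model
        "messages": [{"role": "user", "content": "What is RAG?"}],
        "stream": False,
    },
    timeout=120,
)
chat.raise_for_status()
print(chat.json()["message"]["content"])

# Embedding model: returns a dense vector for the given text.
emb = requests.post(
    f"{OLLAMA_URL}/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "What is RAG?"},
    timeout=120,
)
emb.raise_for_status()
print(len(emb.json()["embedding"]))  # vector dimensionality
```

Xinference exposes an OpenAI-compatible endpoint instead, but the division of labor is the same: RAGFlow sends chat and embedding requests to whichever local server you have bound.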

@@ -7,6 +7,8 @@ slug: /run_health_check
Double-check the health status of RAGFlow's dependencies.
---
The operation of RAGFlow depends on four services:
- **Elasticsearch** (default) or [Infinity](https://github.com/infiniflow/infinity) as the document engine
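
For a quick manual probe of this first dependency, the sketch below queries Elasticsearch's cluster-health endpoint. It assumes the default document engine is reachable on localhost:9200; adjust the host, port, and credentials to match your deployment (for example, if security is enabled).

```python
# Minimal sketch of a manual dependency check against the default
# Elasticsearch document engine on localhost:9200.
import requests

resp = requests.get("http://localhost:9200/_cluster/health", timeout=5)
resp.raise_for_status()
health = resp.json()
# "green" or "yellow" means the document engine is usable; "red" means
# one or more primary shards are unassigned and indexing will fail.
print(health["status"])
```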