Added release notes (#3660)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
@@ -7,6 +7,8 @@ slug: /deploy_local_llm
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Run models locally using Ollama, Xinference, or other frameworks.
RAGFlow supports deploying models locally using Ollama, Xinference, IPEX-LLM, or jina. If you have locally deployed models to leverage or wish to enable GPU or CUDA for inference acceleration, you can bind Ollama or Xinference into RAGFlow and use either of them as a local "server" for interacting with your local models.
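Before binding Ollama into RAGFlow, it helps to confirm the local server is actually reachable. The snippet below is an illustrative sketch, not part of RAGFlow's codebase: it assumes Ollama's default base URL `http://localhost:11434` and uses its `/api/tags` endpoint to list the models pulled locally.

```typescript
// Reachability check for a local Ollama server before binding it in RAGFlow.
// Assumptions: Ollama's default base URL http://localhost:11434, and Node 18+
// for the built-in fetch. If RAGFlow runs in Docker, localhost inside the
// container is not the host machine — try http://host.docker.internal:11434.
const OLLAMA_BASE_URL = "http://localhost:11434";

async function listLocalModels(): Promise<string[]> {
  // GET /api/tags lists the models this Ollama instance has pulled.
  const res = await fetch(`${OLLAMA_BASE_URL}/api/tags`);
  if (!res.ok) {
    throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  }
  const body = (await res.json()) as { models: { name: string }[] };
  return body.models.map((m) => m.name);
}

listLocalModels()
  .then((names) => console.log("Local models:", names))
  .catch((err) => console.error(err));
```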
RAGFlow seamlessly integrates with Ollama and Xinference, without the need for further environment configuration. You can use them to deploy two types of local models in RAGFlow: chat models and embedding models.
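To illustrate what the two model types do, here is a hedged sketch of calling each one on a local Ollama server via its documented `/api/chat` and `/api/embeddings` endpoints. The model names (`llama3`, `nomic-embed-text`) are assumptions and must be pulled beforehand; in RAGFlow itself these calls happen server-side once the model is added in the model settings.

```typescript
// Illustrative calls for the two local model types, against an Ollama server.
// Model names are assumptions — pull them first, e.g. `ollama pull llama3`
// and `ollama pull nomic-embed-text`.
const BASE = "http://localhost:11434";

// Chat model: POST /api/chat returns a single message when stream is false.
async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
  });
  const body = (await res.json()) as { message: { content: string } };
  return body.message.content;
}

// Embedding model: POST /api/embeddings returns one vector for the prompt.
async function embed(text: string): Promise<number[]> {
  const res = await fetch(`${BASE}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  const body = (await res.json()) as { embedding: number[] };
  return body.embedding;
}
```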
@@ -27,7 +27,7 @@ _On the **Team** page, you can view the information about members of your team a
You are, by default, the owner of your own team and the only person permitted to invite users to join your team or remove team members.
## Remove team members
@@ -36,4 +36,3 @@ You are, by default, the owner of your own team and the only person permitted to
## Accept or decline team invite