Update docs (#11204)
### What problem does this PR solve?

As title.

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
@@ -107,7 +107,6 @@ Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3
Click on your logo **>** **Model providers** **>** **System Model Settings** to update your model:
- *You should now be able to find **llama3.2** from the dropdown list under **Chat model**, and **bge-m3** from the dropdown list under **Embedding model**.*
- _If your local model is an embedding model, you should find it under **Embedding model**._
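If neither model appears in the dropdowns, it is worth confirming that Ollama is actually serving them before retrying. A minimal check, assuming a default Ollama install listening on port 11434:

```bash
# List the models your local Ollama instance has pulled;
# llama3.2 and bge-m3 should both appear
ollama list

# Or query Ollama's REST API directly for the same list
curl http://localhost:11434/api/tags
```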
### 6. Update Chat Configuration
@@ -158,14 +157,10 @@ Click on your logo **>** **Model providers** **>** **System Model Settings** to
*You should now be able to find **mistral** from the dropdown list under **Chat model**.*
> If your local model is an embedding model, you should find it under **Embedding model**.
### 7. Update Chat Configuration
Update your chat model accordingly in **Chat Configuration**:
> If your local model is an embedding model, update it on the configuration page of your dataset.
## Deploy a local model using IPEX-LLM
[IPEX-LLM](https://github.com/intel-analytics/ipex-llm) is a PyTorch library for running LLMs on local Intel CPUs or GPUs (including iGPU or discrete GPUs like Arc, Flex, and Max) with low latency. It supports Ollama on Linux and Windows systems.
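The IPEX-LLM route reuses the Ollama workflow above, just with an Ollama binary accelerated by IPEX-LLM. A minimal sketch of the setup, loosely following IPEX-LLM's Ollama quickstart for Linux; treat the exact commands as an assumption and consult the IPEX-LLM repository for the current procedure:

```bash
# Install IPEX-LLM with llama.cpp/Ollama support
# (quickstart assumption; verify against the IPEX-LLM docs)
pip install --pre --upgrade "ipex-llm[cpp]"

# Set up an IPEX-LLM-accelerated Ollama in an empty directory,
# then start serving as usual
mkdir ollama-ipex && cd ollama-ipex
init-ollama
./ollama serve
```

Once the server is up, RAGFlow talks to it exactly as it would to a stock Ollama instance.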