Update docs (#11204)

### What problem does this PR solve?

As the title states: assorted documentation fixes, including a typo correction and removal of outdated notes.

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Author: Jin Hai <haijin.chn@gmail.com>
Date: 2025-11-12 14:01:47 +08:00
Committed by: GitHub
Parent: 33cc9cafa9
Commit: 20b6dafbd8
5 changed files with 3 additions and 18 deletions


@@ -38,7 +38,7 @@ By default, you can use `sys.query`, which is the user query and the default out
### 3. Select dataset(s) to query
-You can specify one or multiple datasets to retrieve data from. If selecting mutiple, ensure they use the same embedding model.
+You can specify one or multiple datasets to retrieve data from. If selecting multiple, ensure they use the same embedding model.
### 4. Expand **Advanced Settings** to configure the retrieval method
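Not in the original docs, but as a quick programmatic check of the shared-embedding-model requirement, a sketch along these lines could list your datasets via RAGFlow's HTTP API and flag a mismatch. The base URL, port, endpoint, and response fields are assumptions; verify them against the API reference for your RAGFlow version.

```python
import requests

# Illustrative sketch only: list datasets through RAGFlow's HTTP API and
# flag mismatched embedding models. The base URL, port, endpoint, and
# response fields below are assumptions -- verify them against the API
# reference for your RAGFlow version.
BASE_URL = "http://localhost:9380"  # RAGFlow's default port (assumption)
API_KEY = "YOUR_RAGFLOW_API_KEY"    # placeholder

resp = requests.get(
    f"{BASE_URL}/api/v1/datasets",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()

models = {d["name"]: d.get("embedding_model") for d in resp.json()["data"]}
print(models)
if len(set(models.values())) > 1:
    print("Warning: these datasets use different embedding models; "
          "pick datasets that share one.")
```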


@@ -12,7 +12,6 @@ A checklist to speed up document parsing and indexing.
Please note that some of your settings may consume a significant amount of time. If you often find that document parsing is time-consuming, here is a checklist to consider:
- Use GPU to reduce embedding time (a quick device check follows this list).
- On the configuration page of your dataset, switch off **Use RAPTOR to enhance retrieval**.
- Extracting knowledge graph (GraphRAG) is time-consuming.
- Disable **Auto-keyword** and **Auto-question** on the configuration page of your dataset, as both depend on the LLM.
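For the GPU item above, a minimal check (an editor's sketch, not part of the checklist) confirms that PyTorch actually sees a GPU before you expect faster embedding:

```python
import torch

# Minimal sanity check (editor's sketch, not from the original docs):
# confirm PyTorch can actually see a GPU before expecting faster embedding.
# If this prints False, embedding will run on the CPU.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```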


@@ -107,7 +107,6 @@ Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3
Click on your logo **>** **Model providers** **>** **System Model Settings** to update your model:
- *You should now be able to find **llama3.2** from the dropdown list under **Chat model**, and **bge-m3** from the dropdown list under **Embedding model**.*
- _If your local model is an embedding model, you should find it under **Embedding model**._
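Since the `NewConnectionError` above means RAGFlow cannot reach the Ollama server at all, a quick probe like the following (assuming Ollama's default port 11434) separates a connectivity problem from a model-configuration one:

```python
import requests

# Editor's sketch: probe the Ollama server before configuring RAGFlow.
# Assumes Ollama's default address; adjust host/port if yours differ,
# e.g. when Ollama runs outside the RAGFlow Docker network.
OLLAMA_URL = "http://localhost:11434"

try:
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
    resp.raise_for_status()
    names = [m["name"] for m in resp.json().get("models", [])]
    print("Ollama is reachable. Installed models:", names)
except requests.ConnectionError:
    print("Cannot reach Ollama -- the same failure mode as the "
          "'Max retries exceeded with url: /api/chat' error above.")
```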
### 6. Update Chat Configuration
@@ -158,14 +157,10 @@ Click on your logo **>** **Model providers** **>** **System Model Settings** to update your model:
*You should now be able to find **mistral** from the dropdown list under **Chat model**.*
> If your local model is an embedding model, you should find your local model under **Embedding model**.
### 7. Update Chat Configuration
Update your chat model accordingly in **Chat Configuration**:
> If your local model is an embedding model, update it on the configuration page of your dataset.
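To verify both model roles end to end before updating the chat configuration, a hedged sketch like this exercises Ollama's standard `/api/chat` and `/api/embeddings` endpoints with the model names used above (the default port is an assumption):

```python
import requests

# Editor's sketch: smoke-test a chat model and an embedding model through
# Ollama's standard REST endpoints. Model names follow the surrounding
# docs; the default port is an assumption.
OLLAMA_URL = "http://localhost:11434"

chat = requests.post(
    f"{OLLAMA_URL}/api/chat",
    json={
        "model": "mistral",
        "messages": [{"role": "user", "content": "Say hello in one word."}],
        "stream": False,
    },
    timeout=120,
)
print("chat reply:", chat.json()["message"]["content"])

emb = requests.post(
    f"{OLLAMA_URL}/api/embeddings",
    json={"model": "bge-m3", "prompt": "hello"},
    timeout=120,
)
print("embedding dimensions:", len(emb.json()["embedding"]))
```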
## Deploy a local model using IPEX-LLM
[IPEX-LLM](https://github.com/intel-analytics/ipex-llm) is a PyTorch library for running LLMs on local Intel CPUs or GPUs (including iGPUs and discrete GPUs such as Arc, Flex, and Max) with low latency. It supports Ollama on Linux and Windows systems.
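Before serving models through IPEX-LLM, it can help to confirm that PyTorch sees the Intel GPU at all. The sketch below assumes `intel_extension_for_pytorch` is installed and that your PyTorch build exposes the `xpu` device; treat it as a rough check, not part of the official setup:

```python
import torch
# intel_extension_for_pytorch registers the "xpu" device with PyTorch.
# Both the import and the torch.xpu API below are assumptions tied to a
# working IPEX install; consult the ipex-llm docs for your exact setup.
import intel_extension_for_pytorch as ipex  # noqa: F401

print("XPU available:", torch.xpu.is_available())
if torch.xpu.is_available():
    print("Device:", torch.xpu.get_device_name(0))
```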