Updated outdated descriptions and added multi-turn optimization (#4362)

### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update
This commit is contained in:
writinwaters
2025-01-06 16:54:22 +08:00
committed by GitHub
parent b93c136797
commit 45619702ff
7 changed files with 48 additions and 12 deletions


@@ -81,9 +81,13 @@ No, this feature is not supported.
---
-### Do you support multiple rounds of dialogues, i.e., referencing previous dialogues as context for the current dialogue?
+### Do you support multiple rounds of dialogues, referencing previous dialogues as context for the current query?
-This feature and the related APIs are still in development. Contributions are welcome.
+Yes, we support enhancing user queries based on existing context of an ongoing conversation:
+1. On the **Chat** page, hover over the desired assistant and select **Edit**.
+2. In the **Chat Configuration** popup, click the **Prompt Engine** tab.
+3. Toggle on **Multi-turn optimization** to enable this feature.
---
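Conceptually, multi-turn optimization lets the current query be interpreted in light of earlier turns. A minimal sketch of that idea, assuming an OpenAI-style message list; the `build_messages` helper and payload names are illustrative, not RAGFlow's actual API:

```python
# Illustrative sketch: earlier turns are carried along with the current
# query so the model can resolve references like "it" or "that one".
# build_messages is a hypothetical helper, not part of RAGFlow's API.

def build_messages(history, query):
    """Assemble an OpenAI-style message list from prior (user, assistant)
    turns plus the current user query."""
    messages = []
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    # The current query goes last, with the full conversation before it.
    messages.append({"role": "user", "content": query})
    return messages

history = [("What is RAGFlow?", "RAGFlow is an open-source RAG engine.")]
msgs = build_messages(history, "Does it support multiple rounds of dialogue?")
```

With the context attached, a follow-up such as "Does it support…" can be rewritten into a self-contained query before retrieval, which is the effect the **Multi-turn optimization** toggle aims for.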
@@ -388,6 +392,14 @@ You can use Ollama or Xinference to deploy local LLM. See [here](../guides/deplo
---
### Is it possible to add an LLM that is not supported?
If your model is not currently supported but has APIs compatible with those of OpenAI, click **OpenAI-API-Compatible** on the **Model providers** page to configure your model:
![openai-api-compatible](https://github.com/user-attachments/assets/b1e964f2-b86e-41af-8528-fd8a96dc5f6f)
---
### How to interconnect RAGFlow with Ollama?
- If RAGFlow is locally deployed, ensure that your RAGFlow and Ollama are in the same LAN.