mirror of
https://github.com/infiniflow/ragflow.git
synced 2025-12-29 16:05:35 +08:00
Updated outdated descriptions and added multi-turn optimization (#4362)
### What problem does this PR solve?

Updates outdated descriptions and adds documentation for multi-turn optimization.

### Type of change

- [x] Documentation Update
@@ -81,9 +81,13 @@ No, this feature is not supported.
---

-### Do you support multiple rounds of dialogues, i.e., referencing previous dialogues as context for the current dialogue?
+### Do you support multiple rounds of dialogues, referencing previous dialogues as context for the current query?

-This feature and the related APIs are still in development. Contributions are welcome.
+Yes, we support enhancing user queries based on existing context of an ongoing conversation:
+
+1. On the **Chat** page, hover over the desired assistant and select **Edit**.
+2. In the **Chat Configuration** popup, click the **Prompt Engine** tab.
+3. Toggle on **Multi-turn optimization** to enable this feature.

---
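Multi-turn optimization of the kind described above is commonly implemented by asking the LLM to rewrite the latest user question into a standalone one using the conversation history. The sketch below illustrates that idea only; the function name and prompt wording are assumptions for illustration, not RAGFlow's actual implementation:

```python
# Illustrative sketch of multi-turn query rewriting (NOT RAGFlow's code).
# The prompt produced here would be sent to an LLM, whose answer replaces
# the original follow-up question before retrieval.

def build_rewrite_prompt(history, current_query):
    """Assemble a prompt asking the LLM to rewrite `current_query`
    so it is understandable without the earlier turns."""
    turns = "\n".join(f"{role}: {text}" for role, text in history)
    return (
        "Given the conversation below, rewrite the final user question "
        "as a single standalone question.\n\n"
        f"{turns}\nuser: {current_query}\n\n"
        "Standalone question:"
    )

history = [
    ("user", "What is RAGFlow?"),
    ("assistant", "RAGFlow is an open-source RAG engine."),
]
prompt = build_rewrite_prompt(history, "Does it support Ollama?")
print(prompt)
```

The rewritten, self-contained question is what gets embedded and matched against the knowledge base, so earlier turns no longer need to fit in the retrieval query verbatim.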
@@ -388,6 +392,14 @@ You can use Ollama or Xinference to deploy local LLM. See [here](../guides/deplo
---

### Is it possible to add an LLM that is not supported?

If your model is not currently supported but has APIs compatible with those of OpenAI, click **OpenAI-API-Compatible** on the **Model providers** page to configure your model:


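"Compatible with those of OpenAI" means the server must accept the standard `/v1/chat/completions` request shape. The sketch below shows that shape; the base URL, model name, and API key are placeholders you would replace with your own server's values:

```python
# Minimal sketch of the request an OpenAI-API-compatible server must accept.
# base_url, model name, and API key below are placeholders, not values
# RAGFlow ships with.
import json

base_url = "http://localhost:8000/v1"  # assumed address of your model server

payload = {
    "model": "my-local-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello"}],
}

request = {
    "url": f"{base_url}/chat/completions",
    "headers": {
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    "body": json.dumps(payload),
}
print(request["url"])
```

If your server answers this request with an OpenAI-style `choices` array, it should work behind the **OpenAI-API-Compatible** provider entry.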
---

### How to interconnect RAGFlow with Ollama?

- If RAGFlow is locally deployed, ensure that your RAGFlow and Ollama are in the same LAN.
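A quick way to confirm the two machines share a route is to check that Ollama's default port (11434) is reachable from the RAGFlow host. The helper below is a small sketch using only the standard library; the commented-out IP address is a placeholder for your Ollama machine:

```python
# Sketch: verify from the RAGFlow host that Ollama's default port (11434)
# is reachable on the LAN. The IP in the usage comment is a placeholder.
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with the LAN address of the machine running `ollama serve`:
# print(port_reachable("192.168.1.50", 11434))
```

If the check fails, Ollama may be bound to `127.0.0.1` only, or a firewall may be blocking the port between the two hosts.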