UI updates (#6290)

### What problem does this PR solve?



### Type of change


- [x] Documentation Update
writinwaters
2025-03-20 10:26:16 +08:00
committed by GitHub
parent dbf2ee56c6
commit e0c436b616
8 changed files with 24 additions and 14 deletions


@ -53,23 +53,31 @@ Using a rerank model will *significantly* increase the system's response time.
### Tavily API key
If an API key is correctly set here, Tavily-based web searches will be used to supplement knowledge base retrieval.
*Optional*
Enter your Tavily API key here to enable Tavily web search during retrieval. See [here](https://app.tavily.com/home) for instructions on getting a Tavily API key.
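For a quick sanity check of a key outside RAGFlow, here is a minimal Python sketch. It assumes Tavily's public REST search endpoint and request schema (verify against Tavily's current docs), and the `TAVILY_API_KEY` placeholder is hypothetical:

```python
# Minimal sketch: verify a Tavily API key by issuing a single search request.
# Assumes Tavily's public REST endpoint and a JSON body carrying the key;
# check Tavily's documentation for the current schema.
import requests

TAVILY_API_KEY = "tvly-..."  # hypothetical placeholder; use your own key

def tavily_search(query: str) -> dict:
    resp = requests.post(
        "https://api.tavily.com/search",
        json={"api_key": TAVILY_API_KEY, "query": query, "max_results": 3},
        timeout=30,
    )
    resp.raise_for_status()  # a 401/403 here usually means the key is invalid
    return resp.json()

if __name__ == "__main__":
    print(tavily_search("What is RAGFlow?"))
```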
### Use knowledge graph
It will retrieve descriptions of relevant entities, relations, and community reports, which will enhance inference for multi-hop and complex questions.
Whether to use knowledge graph(s) in the specified knowledge base(s) during retrieval for multi-hop question answering. When enabled, retrieval involves iterative searches across entity, relationship, and community report chunks, which greatly increases retrieval time.
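A rough sketch of why this setting is slower, with a hypothetical `search` callable standing in for the actual retrieval internals (this illustrates the fan-out, not RAGFlow's own code):

```python
# Illustrative sketch only: knowledge-graph retrieval fans each query out into
# several extra searches, which is why it increases retrieval time.
from typing import Callable, List

def kg_retrieve(query: str, search: Callable[[str, str], List[str]]) -> List[str]:
    chunks: List[str] = []
    # Iterative searches across the three knowledge-graph chunk types.
    for chunk_type in ("entity", "relationship", "community_report"):
        chunks.extend(search(query, chunk_type))
    return chunks
```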
### Knowledge bases
*Optional*
Select the knowledge base(s) to retrieve data from.
:::danger IMPORTANT
If you select multiple knowledge bases, you must ensure that the knowledge bases (datasets) you select use the same embedding model; otherwise, an error message will appear.
:::
- If no knowledge base is selected, meaning conversations with the agent will not be based on any knowledge base, ensure that the **Empty response** field is left blank to avoid an error.
- If you select multiple knowledge bases, you must ensure that the knowledge bases (datasets) you select use the same embedding model; otherwise, an error message will appear.
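A minimal sketch of the constraint in the bullets above, using a hypothetical `KnowledgeBase` stand-in rather than any real RAGFlow class (vectors from different embedding models cannot be searched together, hence the check):

```python
# Sketch: all selected knowledge bases must share one embedding model.
from dataclasses import dataclass
from typing import List

@dataclass
class KnowledgeBase:          # hypothetical stand-in, not a RAGFlow class
    name: str
    embedding_model: str

def check_embedding_models(selected: List[KnowledgeBase]) -> None:
    models = {kb.embedding_model for kb in selected}
    if len(models) > 1:
        raise ValueError(f"Selected knowledge bases use different embedding models: {models}")

check_embedding_models([
    KnowledgeBase("manuals", "BAAI/bge-large-zh-v1.5"),
    KnowledgeBase("faqs", "BAAI/bge-large-zh-v1.5"),
])  # passes; mixing models would raise
```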
### Empty response
Set this as a response if no results are retrieved from the knowledge bases for your query, or leave this field blank to allow the LLM to improvise when nothing is found.
- Set this as a response if no results are retrieved from the knowledge base(s) for your query, or
- Leave this field blank to allow the chat model to improvise when nothing is found.
:::caution WARNING
If you do not specify a knowledge base, you must leave this field blank; otherwise, an error will occur.
:::
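A minimal sketch of the fallback behavior described above, with a hypothetical `llm` callable; it illustrates the described logic, not RAGFlow's actual code:

```python
# Sketch: return the configured empty response when retrieval finds nothing;
# otherwise (or when the field is blank) let the chat model answer.
from typing import Callable, List, Optional

def answer(
    query: str,
    retrieved_chunks: List[str],
    empty_response: Optional[str],
    llm: Callable[..., str],  # hypothetical chat-model callable
) -> str:
    if not retrieved_chunks:
        # Nothing retrieved: use the canned reply if configured,
        # otherwise let the chat model improvise.
        return empty_response if empty_response else llm(query)
    return llm(query, context=retrieved_chunks)
```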
## Examples


@ -37,6 +37,8 @@ You start an AI conversation by creating an assistant.
- If **Rerank model** is selected, the hybrid score system uses keyword similarity and reranker score, and the default weight assigned to the reranker score is 1-0.7=0.3.
- **Top N** determines the *maximum* number of chunks to feed to the LLM. In other words, even if more chunks are retrieved, only the top N chunks are provided as input.
- **Multi-turn optimization** enhances user queries using existing context in a multi-round conversation. It is enabled by default. When enabled, it will consume additional LLM tokens and significantly increase the time to generate answers.
- **Use knowledge graph** indicates whether to use knowledge graph(s) in the specified knowledge base(s) during retrieval for multi-hop question answering. When enabled, retrieval involves iterative searches across entity, relationship, and community report chunks, which greatly increases retrieval time.
- **Reasoning** indicates whether to generate answers through reasoning processes like those of DeepSeek-R1 or OpenAI o1. When enabled, the chat model autonomously integrates Deep Research into question answering when it encounters an unknown topic, dynamically searching external knowledge and generating the final answer through reasoning.
- **Rerank model** sets the reranker model to use. It is left empty by default.
- If **Rerank model** is left empty, the hybrid score system uses keyword similarity and vector similarity, and the default weight assigned to the vector similarity component is 1-0.7=0.3.
- If **Rerank model** is selected, the hybrid score system uses keyword similarity and reranker score, and the default weight assigned to the reranker score is 1-0.7=0.3.
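A small worked illustration of the default weighting described in the bullets above (the similarity functions themselves are RAGFlow internals, so plain floats stand in here):

```python
# Default hybrid weighting: 0.7 for keyword similarity, 1 - 0.7 = 0.3 for the
# other component (vector similarity, or reranker score if one is selected).
def hybrid_score(keyword_sim: float, other_sim: float, keyword_weight: float = 0.7) -> float:
    return keyword_weight * keyword_sim + (1 - keyword_weight) * other_sim

print(round(hybrid_score(0.8, 0.6), 2))  # 0.7*0.8 + 0.3*0.6 = 0.74
```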


@ -13,7 +13,7 @@ Retrieval accuracy is the touchstone for a production-ready RAG framework. In ad
To use this feature, ensure you have at least one properly configured tag set, specify the tag set(s) on the **Configuration** page of your knowledge base (dataset), and then re-parse your documents to initiate the auto-tag process. During this process, each chunk in your dataset is compared with every entry in the specified tag set(s), and tags are automatically applied based on similarity.
:::danger IMPORTANT
:::caution NOTE
The auto-tagging feature is *unavailable* on the [Infinity](https://github.com/infiniflow/infinity) document engine.
:::
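A rough sketch of the auto-tagging comparison described above, with a hypothetical `similarity` callable standing in for embedding-based similarity; it illustrates the idea rather than RAGFlow's actual implementation:

```python
# Sketch: compare each chunk with every tag-set entry and attach tags whose
# similarity clears a threshold.
from typing import Callable, Dict, List

def auto_tag(
    chunks: List[str],
    tag_set: Dict[str, str],                  # tag -> example/description text
    similarity: Callable[[str, str], float],  # hypothetical similarity function
    threshold: float = 0.8,
) -> Dict[str, List[str]]:
    tagged: Dict[str, List[str]] = {}
    for chunk in chunks:
        tagged[chunk] = [
            tag for tag, text in tag_set.items()
            if similarity(chunk, text) >= threshold
        ]
    return tagged

# Toy usage: a trivial keyword check stands in for embedding similarity.
tags = auto_tag(
    ["Reset your password from the login page."],
    {"account": "passwords, logins, profile settings"},
    similarity=lambda a, b: 1.0 if "password" in a and "password" in b else 0.0,
)
print(tags)
```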