Refactor UI text (#3911)
### What problem does this PR solve?

Refactor UI text

### Type of change

- [x] Documentation Update
- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
@@ -78,7 +78,7 @@ Ollama is running

### 4. Add Ollama

-In RAGFlow, click on your logo on the top right of the page **>** **Model Providers** and add Ollama to RAGFlow:
+In RAGFlow, click on your logo on the top right of the page **>** **Model providers** and add Ollama to RAGFlow:

![add ollama](https://github.com/infiniflow/ragflow/assets/93570324/10635088-028b-4b3d-add9-5c5a6e626814)
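The step above assumes the models referenced later in this guide are already available in Ollama. A minimal sanity check, assuming a stock `ollama` CLI and the default local endpoint (the model names come from this guide; adjust them to your setup):

```bash
# Pull the chat and embedding models this guide refers to later.
ollama pull llama3.2
ollama pull bge-m3

# The root endpoint replies "Ollama is running" when the server is up.
curl http://localhost:11434
```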
@@ -101,7 +101,7 @@ Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3

### 6. Update System Model Settings

-Click on your logo **>** **Model Providers** **>** **System Model Settings** to update your model:
+Click on your logo **>** **Model providers** **>** **System Model Settings** to update your model:

*You should now be able to find **llama3.2** from the dropdown list under **Chat model**, and **bge-m3** from the dropdown list under **Embedding model**.*
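If the **Max retries exceeded with url: /api/chat** error from this section's context appears, it usually means RAGFlow cannot reach Ollama. A hedged sketch of a direct check against Ollama's chat endpoint, run from wherever RAGFlow runs (`llama3.2` matches the model above):

```bash
# Call Ollama's /api/chat endpoint directly to confirm it is reachable
# and the model responds; "stream": false returns a single JSON object.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'
```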
@@ -143,7 +143,7 @@ $ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --
```
### 4. Add Xinference

-In RAGFlow, click on your logo on the top right of the page **>** **Model Providers** and add Xinference to RAGFlow:
+In RAGFlow, click on your logo on the top right of the page **>** **Model providers** and add Xinference to RAGFlow:

![add xinference](https://github.com/infiniflow/ragflow/assets/93570324/10635088-028b-4b3d-add9-5c5a6e626814)
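Before adding Xinference in the UI, it can help to confirm that the base URL you plan to enter is reachable. A minimal sketch, assuming Xinference's default port 9997 and its OpenAI-compatible API (both assumptions; substitute your actual endpoint):

```bash
# List the models the Xinference server is currently serving.
# Port 9997 is the Xinference default; adjust to your deployment.
curl http://<your-xinference-endpoint-domain>:9997/v1/models
```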
@@ -154,7 +154,7 @@ Enter an accessible base URL, such as `http://<your-xinference-endpoint-domain>:

### 6. Update System Model Settings

-Click on your logo **>** **Model Providers** **>** **System Model Settings** to update your model.
+Click on your logo **>** **Model providers** **>** **System Model Settings** to update your model.

*You should now be able to find **mistral** from the dropdown list under **Chat model**.*
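As an optional end-to-end check, the launched model can be queried directly through Xinference's OpenAI-compatible endpoint before relying on it from RAGFlow. A sketch, assuming the same endpoint as above and the `mistral` UID from the launch command:

```bash
# Send one chat request straight to Xinference; a JSON completion
# confirms the model is loaded and answering.
curl http://<your-xinference-endpoint-domain>:9997/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral", "messages": [{"role": "user", "content": "Hello"}]}'
```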
@@ -20,7 +20,7 @@ If you find your online LLM is not on the list, don't feel disheartened. The lis

You have two options for configuring your model API key:

- Configure it in **service_conf.yaml.template** before starting RAGFlow.
-- Configure it on the **Model Providers** page after logging into RAGFlow.
+- Configure it on the **Model providers** page after logging into RAGFlow.

### Configure model API key before starting up RAGFlow
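For the first option, a quick way to confirm the edit took is to inspect the template before restarting. A hypothetical sketch; the `docker/` path reflects a common RAGFlow checkout layout and may differ in yours:

```bash
# Show every api_key line in the template so a typo or stray
# placeholder is caught before rebooting in the next step.
grep -n "api_key" docker/service_conf.yaml.template
```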
@@ -32,7 +32,7 @@ You have two options for configuring your model API key:

3. Reboot your system for your changes to take effect.
4. Log into RAGFlow.

-*After logging into RAGFlow, you will find that your chosen model appears under **Added models** on the **Model Providers** page.*
+*After logging into RAGFlow, you will find that your chosen model appears under **Added models** on the **Model providers** page.*

### Configure model API key after logging into RAGFlow
@@ -40,9 +40,9 @@ You have two options for configuring your model API key:

After logging into RAGFlow, configuring your model API key through the **service_conf.yaml.template** file will no longer take effect.
:::

-After logging into RAGFlow, you can *only* configure your API key on the **Model Providers** page:
+After logging into RAGFlow, you can *only* configure your API key on the **Model providers** page:

-1. Click on your logo on the top right of the page **>** **Model Providers**.
+1. Click on your logo on the top right of the page **>** **Model providers**.
2. Find your model card under **Models to be added** and click **Add the model**:
![add model](https://github.com/user-attachments/assets/5b90b2f9-bbb6-4778-a5ef-9ebbabacb24b)
3. Paste your model API key.
@@ -21,7 +21,7 @@ You start an AI conversation by creating an assistant.

- **Empty response**:
  - If you wish to *confine* RAGFlow's answers to your knowledge bases, leave a response here. Then, when it doesn't retrieve an answer, it *uniformly* responds with what you set here.
  - If you wish RAGFlow to *improvise* when it doesn't retrieve an answer from your knowledge bases, leave it blank, which may give rise to hallucinations.
-- **Show Quote**: This is a key feature of RAGFlow and is enabled by default. RAGFlow does not work like a black box; instead, it clearly shows the sources of information that its responses are based on.
+- **Show quote**: This is a key feature of RAGFlow and is enabled by default. RAGFlow does not work like a black box; instead, it clearly shows the sources of information that its responses are based on.
- Select the corresponding knowledge bases. You can select one or multiple knowledge bases, but ensure that they use the same embedding model; otherwise an error will occur.

3. Update **Prompt Engine**:
@@ -35,7 +35,7 @@ You start an AI conversation by creating an assistant.

4. Update **Model Setting**:

- In **Model**: you select the chat model. Though you have selected the default chat model in **System Model Settings**, RAGFlow allows you to choose an alternative chat model for your dialogue.
-- **Freedom** refers to the level that the LLM improvises. From **Improvise**, **Precise**, to **Balance**, each freedom level corresponds to a unique combination of **Temperature**, **Top P**, **Presence Penalty**, and **Frequency Penalty**.
+- **Freedom** refers to the level that the LLM improvises. From **Improvise**, **Precise**, to **Balance**, each freedom level corresponds to a unique combination of **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty**.
- **Temperature**: Level of the prediction randomness of the LLM. The higher the value, the more creative the LLM is.
- **Top P** is also known as "nucleus sampling". See [here](https://en.wikipedia.org/wiki/Top-p_sampling) for more information.
- **Max Tokens**: The maximum length of the LLM's responses. Note that the responses may be curtailed if this value is set too low.
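For readers unfamiliar with these knobs, this is how they typically appear together in an OpenAI-style chat request; the values are illustrative placeholders, not RAGFlow's actual **Improvise**/**Precise**/**Balance** presets:

```bash
# Illustrative only: temperature, top_p, presence_penalty, and
# frequency_penalty travel as plain request fields; max_tokens caps
# the response length, which is why a low value curtails answers.
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
    "top_p": 0.9,
    "presence_penalty": 0.4,
    "frequency_penalty": 0.7,
    "max_tokens": 512
  }'
```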
@@ -235,7 +235,7 @@ RAGFlow also supports deploying LLMs locally using Ollama, Xinference, or LocalA

To add and configure an LLM:

-1. Click on your logo on the top right of the page **>** **Model Providers**:
+1. Click on your logo on the top right of the page **>** **Model providers**:

![add llm](https://github.com/infiniflow/ragflow/assets/93570324/07e43f63-367c-4c9c-8ed3-8a3a24703f4e)