Miscellaneous editorial updates (#6805)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
@ -31,10 +31,6 @@ An opening greeting is the agent's first message to the user. It can be a welcom
You can set global variables within the **Begin** component, which can be either required or optional. Once established, users will need to provide values for these variables when interacting or chatting with the agent. Click **+ Add variable** to add a global variable, each with the following attributes:

:::caution WARNING
If your agent's **Begin** component takes a variable, you *cannot* embed it into a webpage.
:::

- **Key**: *Required*
  The unique variable name.
- **Name**: *Required*
@ -50,8 +46,15 @@ If your agent's **Begin** component takes a variable, you *cannot* embed it into
- **boolean**: Requires the user to toggle between on and off.
- **Optional**: A toggle indicating whether the variable is optional.

:::danger IMPORTANT
If you set the key type as **file**, ensure the token count of the uploaded file does not exceed your model provider's maximum token limit; otherwise, the plain text in your file will be truncated and incomplete.

:::tip NOTE
To pass in parameters from a client, call:
- HTTP method [Converse with agent](../../references/http_api_reference.md#converse-with-agent), or
- Python method [Converse with agent](../../references/python_api_reference.md#converse-with-agent).
:::
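
For illustration, a **Begin** variable can be supplied as an extra field in the body of the HTTP call. The sketch below is not authoritative: the exact endpoint, headers, and field names are defined in the linked HTTP API reference, and `<ADDRESS>`, `<API_KEY>`, `<AGENT_ID>`, and the `lang` variable are placeholders.

```bash
# Sketch only -- verify the endpoint and payload against the HTTP API reference linked above.
# <ADDRESS>, <API_KEY>, <AGENT_ID>, and the Begin variable `lang` are placeholders.
curl --request POST \
     --url http://<ADDRESS>/api/v1/agents/<AGENT_ID>/completions \
     --header 'Content-Type: application/json' \
     --header 'Authorization: Bearer <API_KEY>' \
     --data '{
       "question": "Hi!",
       "stream": false,
       "lang": "English"
     }'
```
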
:::danger IMPORTANT
- If you set the key type as **file**, ensure the token count of the uploaded file does not exceed your model provider's maximum token limit; otherwise, the plain text in your file will be truncated and incomplete.
- If your agent's **Begin** component takes a variable, you *cannot* embed it into a webpage.
:::

## Examples
@ -44,6 +44,9 @@ You start an AI conversation by creating an assistant.
- If **Rerank model** is selected, the hybrid score system uses keyword similarity and reranker score, and the default weight assigned to the reranker score is 1-0.7=0.3.
- **Variable** refers to the variables (keys) to be used in the system prompt. `{knowledge}` is a reserved variable. Click **Add** to add more variables for the system prompt.
  - If you are uncertain about the logic behind **Variable**, leave it *as-is*.
  - As of v0.17.2, if you add custom variables here, the only way you can pass in their values is to call:
    - HTTP method [Converse with chat assistant](../../references/http_api_reference.md#converse-with-chat-assistant), or
    - Python method [Converse with chat assistant](../../references/python_api_reference.md#converse-with-chat-assistant).
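
For illustration, a custom variable referenced in the system prompt (alongside the reserved `{knowledge}`) would have its value supplied in the completion request. The sketch below is not authoritative: check the linked references for the exact endpoint and field names, and treat `<ADDRESS>`, `<API_KEY>`, `<CHAT_ID>`, and `user_name` as placeholders.

```bash
# Sketch only -- confirm against the HTTP API reference (Converse with chat assistant).
# `user_name` stands in for a custom variable referenced as {user_name} in the system prompt.
curl --request POST \
     --url http://<ADDRESS>/api/v1/chats/<CHAT_ID>/completions \
     --header 'Content-Type: application/json' \
     --header 'Authorization: Bearer <API_KEY>' \
     --data '{
       "question": "What is RAGFlow?",
       "stream": false,
       "user_name": "Alice"
     }'
```
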
4. Update **Model Setting**:
@ -28,7 +28,7 @@ This user guide does not intend to cover much of the installation or configurati
- For a complete list of supported models and variants, see the [Ollama model library](https://ollama.com/library).
:::

### 1. Deploy ollama using docker
### 1. Deploy Ollama using Docker

```bash
sudo docker run --name ollama -p 11434:11434 ollama/ollama
@ -36,14 +36,14 @@ time=2024-12-02T02:20:21.360Z level=INFO source=routes.go:1248 msg="Listening on
time=2024-12-02T02:20:21.360Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
```

Ensure ollama is listening on all IP address:
Ensure Ollama is listening on all IP addresses:

```bash
sudo ss -tunlp | grep 11434
tcp LISTEN 0 4096 0.0.0.0:11434 0.0.0.0:* users:(("docker-proxy",pid=794507,fd=4))
tcp LISTEN 0 4096 [::]:11434 [::]:* users:(("docker-proxy",pid=794513,fd=4))
```

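If Ollama was installed directly on the host (rather than with the Docker command above) and only listens on 127.0.0.1, it may need to be bound to all interfaces. A minimal sketch, assuming a native install started in the foreground:

```bash
# Assumption: native Ollama install, started in the foreground (not the Docker deployment above).
# OLLAMA_HOST sets the bind address; 0.0.0.0 listens on all interfaces.
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```
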
Pull models as you need. It's recommended to start with `llama3.2` (a 3B chat model) and `bge-m3` (a 567M embedding model):
Pull models as you need. We recommend that you start with `llama3.2` (a 3B chat model) and `bge-m3` (a 567M embedding model):

```bash
sudo docker exec ollama ollama pull llama3.2
pulling dde5aa3fc5ff... 100% ▕████████████████▏ 2.0 GB
@ -58,20 +58,20 @@ success
### 2. Ensure Ollama is accessible

If RAGFlow runs in Docker and Ollama runs on the same host machine, check if ollama is accessible from inside the RAGFlow container:
- If RAGFlow runs in Docker and Ollama runs on the same host machine, check if Ollama is accessible from inside the RAGFlow container:

```bash
sudo docker exec -it ragflow-server bash
root@8136b8c3e914:/ragflow# curl http://host.docker.internal:11434/
curl http://host.docker.internal:11434/
Ollama is running
```

If RAGFlow runs from source code and Ollama runs on the same host machine, check if ollama is accessible from RAGFlow host machine:
- If RAGFlow is launched from source code and Ollama runs on the same host machine as RAGFlow, check if Ollama is accessible from RAGFlow's host machine:

```bash
curl http://localhost:11434/
Ollama is running
```

If RAGFlow and Ollama run on different machines, check if ollama is accessible from RAGFlow host machine:
- If RAGFlow and Ollama run on different machines, check if Ollama is accessible from RAGFlow's host machine:

```bash
curl http://${IP_OF_OLLAMA_MACHINE}:11434/
Ollama is running
@ -88,8 +88,8 @@ In RAGFlow, click on your logo on the top right of the page **>** **Model provid
In the popup window, complete basic settings for Ollama:

1. Ensure model name and type match those been pulled at step 1, For example, (`llama3.2`, `chat`), (`bge-m3`, `embedding`).
2. Ensure that the base URL match which been determined at step 2.
1. Ensure that your model name and type match those pulled at step 1 (Deploy Ollama using Docker). For example, (`llama3.2` and `chat`) or (`bge-m3` and `embedding`).
2. Ensure that the base URL matches the URL determined at step 2 (Ensure Ollama is accessible).
3. OPTIONAL: Switch on the toggle under **Does it support Vision?** if your model includes an image-to-text model.
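
As a rule of thumb, the base URL is whichever address answered `Ollama is running` in step 2; for example:

```bash
# Pick the base URL that worked in step 2 (Ensure Ollama is accessible):
#   RAGFlow in Docker, Ollama on the same host:   http://host.docker.internal:11434
#   RAGFlow from source, Ollama on the same host: http://localhost:11434
#   Ollama on a different machine:                http://${IP_OF_OLLAMA_MACHINE}:11434
curl http://host.docker.internal:11434/   # expected reply: Ollama is running
```
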
@ -104,15 +104,12 @@ Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3

Click on your logo **>** **Model providers** **>** **System Model Settings** to update your model:

*You should now be able to find **llama3.2** from the dropdown list under **Chat model**, and **bge-m3** from the dropdown list under **Embedding model**.*
> If your local model is an embedding model, you should find your local model under **Embedding model**.
- *You should now be able to find **llama3.2** from the dropdown list under **Chat model**, and **bge-m3** from the dropdown list under **Embedding model**.*
- _If your local model is an embedding model, you should find it under **Embedding model**._

### 7. Update Chat Configuration

Update your chat model accordingly in **Chat Configuration**:
> If your local model is an embedding model, update it on the configuration page of your knowledge base.
Update your model(s) accordingly in **Chat Configuration**.

## Deploy a local model using Xinference
@ -23,7 +23,7 @@ You cannot invite users to a team unless you are its owner.
## Prerequisites

1. Ensure that the email address that received the team invitation is associated with a RAGFlow user account.
2. To view and update the team owner's shared knowledge base, The team owner must set a knowledge base's **Permissions** to **Team**.
2. To view and update the team owner's shared knowledge base, the team owner must set a knowledge base's **Permissions** to **Team**.

## Accept or decline team invite
@ -39,4 +39,4 @@ _After accepting the team invite, you should be able to view and update the team

## Leave a joined team




@ -3,7 +3,7 @@ sidebar_position: 1
slug: /manage_team_members
---

# Team
# Manage team members
Invite or remove team members.