docs: update docs icons (#12465)

### What problem does this PR solve?

Update icons for docs.
Trailing spaces are automatically truncated by the editor; this does not affect the real content.

### Type of change

- [x] Documentation Update
Jimmy Ben Klieve
2026-01-07 10:00:09 +08:00
committed by GitHub
parent ca9645f39b
commit 6814ace1aa
88 changed files with 922 additions and 661 deletions


@@ -1,6 +1,9 @@
---
sidebar_position: 2
slug: /agent_component
sidebar_custom_props: {
categoryIcon: RagAiAgent
}
---
# Agent component
@@ -16,7 +19,7 @@ An **Agent** component fine-tunes the LLM and sets its prompt. From v0.20.5 onwa
## Scenarios
An **Agent** component is essential when you need the LLM to assist with summarizing, translating, or controlling various tasks.
## Prerequisites
@@ -28,13 +31,13 @@ An **Agent** component is essential when you need the LLM to assist with summari
## Quickstart
### 1. Click on an **Agent** component to show its configuration panel
The corresponding configuration panel appears to the right of the canvas. Use this panel to define and fine-tune the **Agent** component's behavior.
### 2. Select your model
Click **Model**, and select a chat model from the dropdown menu.
:::tip NOTE
If no model appears, check whether you have added a chat model on the **Model providers** page.
@@ -55,7 +58,7 @@ In this quickstart, we assume your **Agent** component is used standalone (witho
### 5. Skip Tools and Agent
The **+ Add tools** and **+ Add agent** sections are used *only* when you need to configure your **Agent** component as a planner (with tools or sub-Agents beneath). In this quickstart, we assume your **Agent** component is used standalone (without tools or sub-Agents beneath).
### 6. Choose the next component
@@ -71,7 +74,7 @@ In this section, we assume your **Agent** will be configured as a planner, with
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/mcp_page.jpg)
### 2. Configure your Tavily MCP server
Update your MCP server's name, URL (including the API key), server type, and other necessary settings. When configured correctly, the available tools will be displayed.
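For orientation, the sketch below lists the kind of fields such an entry typically carries. The field names, URL shape, and server type value here are illustrative assumptions, not RAGFlow's exact schema; you fill in the real values on the MCP page rather than in code.

```python
# Hypothetical MCP server entry -- field names and values are assumptions
# made for illustration; configure the real values on the MCP page.
tavily_mcp = {
    "name": "tavily",  # display name shown on the canvas
    "url": "https://<your-tavily-mcp-endpoint>?apiKey=<YOUR_API_KEY>",  # URL including the API key
    "server_type": "sse",  # e.g. an SSE- or streamable-HTTP-style transport
}
```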
@@ -110,7 +113,7 @@ On the canvas, click the newly-populated Tavily server to view and select its av
Click the dropdown menu of **Model** to show the model configuration window.
- **Model**: The chat model to use.
- Ensure you set the chat model correctly on the **Model providers** page.
- You can use different models for different components to increase flexibility or improve overall performance.
- **Creativity**: A shortcut to the **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty** settings, indicating the model's level of freedom. Each preset, **Improvise**, **Precise**, or **Balance**, corresponds to a unique combination of **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty**.
@@ -118,21 +121,21 @@ Click the dropdown menu of **Model** to show the model configuration window.
- **Improvise**: Produces more creative responses.
- **Precise**: (Default) Produces more conservative responses.
- **Balance**: A middle ground between **Improvise** and **Precise**.
- **Temperature**: The randomness level of the model's output.
Defaults to 0.1.
- Lower values lead to more deterministic and predictable outputs.
- Higher values lead to more creative and varied outputs.
- A temperature of zero results in the same output for the same prompt.
- **Top P**: Nucleus sampling.
- Reduces the likelihood of generating repetitive or unnatural text by setting a threshold *P* and restricting sampling to the smallest set of tokens whose cumulative probability exceeds *P*.
- Defaults to 0.3.
- **Presence penalty**: Encourages the model to include a more diverse range of tokens in the response.
- A higher **presence penalty** value makes the model more likely to generate tokens that have not yet appeared in the generated text.
- Defaults to 0.4.
- **Frequency penalty**: Discourages the model from repeating the same words or phrases too frequently in the generated text.
- A higher **frequency penalty** value results in the model being more conservative in its use of repeated tokens.
- Defaults to 0.7.
- **Max tokens**:
This sets the maximum length of the model's output, measured in the number of tokens (words or pieces of words). It is disabled by default, allowing the model to determine the number of tokens in its responses.
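To make these settings concrete, here is a minimal sketch of how they map onto an OpenAI-compatible chat-completions call. The `openai` client, model name, and prompt are assumptions made for illustration; the numeric values mirror the defaults listed above, and this is not RAGFlow's internal code.

```python
from openai import OpenAI  # assumes the `openai` package and an OpenAI-compatible endpoint

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",    # stand-in for whatever chat model you selected
    messages=[{"role": "user", "content": "Summarize this paragraph in one sentence."}],
    temperature=0.1,        # randomness level (default above: 0.1)
    top_p=0.3,              # nucleus sampling threshold (default above: 0.3)
    presence_penalty=0.4,   # favor tokens not yet used (default above: 0.4)
    frequency_penalty=0.7,  # discourage frequent repetition (default above: 0.7)
    max_tokens=512,         # cap on output length; omit to let the model decide
)
print(response.choices[0].message.content)
```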
:::tip NOTE
@@ -142,7 +145,7 @@ Click the dropdown menu of **Model** to show the model configuration window.
### System prompt
Typically, you use the system prompt to describe the task for the LLM, specify how it should respond, and outline other miscellaneous requirements. We do not plan to elaborate on this topic, as it can be as extensive as prompt engineering. However, please be aware that the system prompt is often used in conjunction with keys (variables), which serve as various data inputs for the LLM.
An **Agent** component relies on keys (variables) to specify its data inputs. Its immediate upstream component is *not* necessarily its data input, and the arrows in the workflow indicate *only* the processing sequence. Keys in an **Agent** component are used in conjunction with the system prompt to specify data inputs for the LLM. Use a forward slash `/` or the **(x)** button to show the keys to use.
@@ -190,11 +193,11 @@ From v0.20.5 onwards, four framework-level prompt blocks are available in the **
The user-defined prompt. Defaults to `sys.query`, the user query. As a general rule, when using the **Agent** component as a standalone module (not as a planner), you usually need to specify the corresponding **Retrieval** component's output variable (`formalized_content`) here as part of the input to the LLM.
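As an illustration of how keys resolve at run time, the sketch below renders a system/user prompt pair. The brace syntax and `render` helper are assumptions made for this sketch, not RAGFlow's template engine; on the canvas you insert keys with `/` or **(x)** instead.

```python
# Illustration only: resolving keys (variables) into the prompt pair.
system_prompt = (
    "Answer strictly from the context below.\n"
    "Context:\n{formalized_content}"  # output variable of an upstream Retrieval component
)
user_prompt = "{sys.query}"  # the default user prompt: the raw user query

values = {
    "formalized_content": "...retrieved chunks...",
    "sys.query": "What does the Agent component do?",
}

def render(template: str, variables: dict) -> str:
    # Naive substitution for the sketch; a real engine handles escaping, defaults, etc.
    for key, value in variables.items():
        template = template.replace("{" + key + "}", value)
    return template

print(render(system_prompt, values))
print(render(user_prompt, values))
```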
### Tools
You can use an **Agent** component as a collaborator that reasons and reflects with the aid of other tools; for instance, **Retrieval** can serve as one such tool for an **Agent**.
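Conceptually, this collaborator pattern is a reason-act loop: the Agent decides whether to call a tool, observes the result, and reflects before answering. The sketch below is a deliberately simplified, hypothetical illustration of that loop, not RAGFlow's implementation; on the canvas, RAGFlow wires this up for you.

```python
# Hypothetical reason-act loop for an Agent that can call a Retrieval tool.
def retrieval_tool(query: str) -> str:
    """Stand-in for a Retrieval component exposed to the Agent as a tool."""
    return f"chunks relevant to: {query!r}"

def agent(query: str, max_steps: int = 3) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        if not observations:
            # Reason: no evidence yet, so act by calling the tool.
            observations.append(retrieval_tool(query))
        else:
            # Reflect: enough evidence gathered; produce the final answer.
            return f"answer grounded in {observations}"
    return "no answer within the step budget"

print(agent("What does the Agent component do?"))
```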
### Agent
You use an **Agent** component as a collaborator that reasons and reflects with the aid of subagents or other tools, forming a multi-agent system.