Refact: update description for max_token in embedding #12792 (#12845)

### What problem does this PR solve?

Refact: update description for max_token in embedding #12792

### Type of change


- [x] Refactoring

Co-authored-by: Liu An <asiro@qq.com>
Committed by Magicbook1108 via GitHub on 2026-01-28 09:52:32 +08:00
Parent: ceff119f89 · Commit: ee654f08d2
15 changed files with 85 additions and 45 deletions

```diff
@@ -135,7 +135,7 @@ Click the dropdown menu of **Model** to show the model configuration window.
 - A higher **frequency penalty** value results in the model being more conservative in its use of repeated tokens.
 - Defaults to 0.7.
 - **Max tokens**:
-  This sets the maximum length of the model's output, measured in the number of tokens (words or pieces of words). It is disabled by default, allowing the model to determine the number of tokens in its responses.
+  The maximum context size of the agent.
 :::tip NOTE
 - It is not necessary to stick with the same model for all components. If a specific model is not performing well for a particular task, consider using a different one.
```
```diff
@@ -46,7 +46,7 @@ Click the dropdown menu of **Model** to show the model configuration window.
 - A higher **frequency penalty** value results in the model being more conservative in its use of repeated tokens.
 - Defaults to 0.7.
 - **Max tokens**:
-  This sets the maximum length of the model's output, measured in the number of tokens (words or pieces of words). It is disabled by default, allowing the model to determine the number of tokens in its responses.
+  The maximum context size of the agent.
 :::tip NOTE
 - It is not necessary to stick with the same model for all components. If a specific model is not performing well for a particular task, consider using a different one.
```
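The distinction behind this doc change is that "max tokens" means different things for different model types: for a chat model it typically caps output length, while for an embedding model it is the maximum input context size, so overlong text must be truncated (or chunked) before embedding. A minimal sketch of that input-side limit, using a hypothetical whitespace tokenizer as a stand-in for a real one (this is illustrative only, not RAGFlow code):

```python
# Illustrative sketch (not RAGFlow code): for an embedding model,
# "max tokens" bounds the INPUT context, so text that exceeds it
# must be trimmed before being sent to the model.
# The whitespace tokenizer below is a hypothetical placeholder.

def tokenize(text: str) -> list[str]:
    """Toy whitespace tokenizer standing in for a real one."""
    return text.split()

def truncate_to_context(text: str, max_tokens: int) -> str:
    """Trim input so it fits within an embedding model's context window."""
    tokens = tokenize(text)
    if len(tokens) <= max_tokens:
        return text
    return " ".join(tokens[:max_tokens])

if __name__ == "__main__":
    sample = "alpha beta gamma delta epsilon zeta"
    print(truncate_to_context(sample, 4))  # prints: alpha beta gamma delta
```

In practice a real tokenizer (e.g. the model's own) would be used instead of whitespace splitting, since token counts differ substantially from word counts.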