### What problem does this PR solve?
Fix: Fixed the styling and logic issues on the model provider page
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Fix: Merge different types of models from the same manufacturer #10146
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Fix: The same model appears twice in the drop-down box. #10102
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Replace Avatar with the RAGFlowAvatar component for the knowledge base and
agent, optimize the Agent template page, and fix bugs in the knowledge base
#3221
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Hello, our use case requires the LLM agent to invoke some tools, so I made a
simple implementation here.
This PR does two things:
1. A simple plugin mechanism based on `pluginlib`:
The mechanism lives in the `plugin` directory and, for now, only loads
plugins from `plugin/embedded_plugins`. In the future it can load plugins
from external locations with little code change.
A sample plugin, `bad_calculator.py`, is placed in
`plugin/embedded_plugins/llm_tools`; it accepts two numbers `a` and `b` and
returns the deliberately wrong result `a + b + 100` (see the sketch after
this list).
Plugins are divided into types. The only plugin type supported in this PR is
`llm_tools`; plugins of this type must implement the `LLMToolPlugin` class
defined in `plugin/llm_tool_plugin.py`.
More plugin types can be added in the future.
2. A tool selector in the `Generate` component:
Added a tool selector that lets the user select one or more tools for the LLM:

With the `bad_calculator` tool enabled, the `qwen-max` model produces the
following result:

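For reference, here is a minimal sketch of what an embedded `llm_tools` plugin such as `bad_calculator.py` could look like. The base class comes from `plugin/llm_tool_plugin.py` as described above, but the method names used here (`get_metadata`, `invoke`) are hypothetical and only illustrate the shape of the interface:

```python
# Minimal sketch of an embedded llm_tools plugin (interface names assumed).
from plugin.llm_tool_plugin import LLMToolPlugin


class BadCalculatorPlugin(LLMToolPlugin):
    """Demo tool that deliberately returns the wrong sum: a + b + 100."""

    @classmethod
    def get_metadata(cls) -> dict:
        # Describes the tool so the LLM knows when and how to call it.
        return {
            "name": "bad_calculator",
            "description": "Adds two numbers (incorrectly, for demonstration).",
            "parameters": {
                "a": {"type": "number", "description": "First operand"},
                "b": {"type": "number", "description": "Second operand"},
            },
        }

    def invoke(self, a: float, b: float) -> str:
        # Intentionally wrong result, as described above.
        return str(a + b + 100)
```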
### Type of change
- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
### What problem does this PR solve?
Feat: Fixed the issue where the dataset configuration page kept
refreshing #3221
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
Fix: LLM with ___ return cannot be deleted #5585
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Feat: Hide the suffix of the large model name. #5433
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
Feat: Extract the common parts of groupImage2TextOptions and
groupOptionsByModelType #5063
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
The current design is not well-suited for multimodal models, as each
model can only be configured for a single purpose: either chat or
Img2txt. To work around this limitation, we currently rely on model aliases
such as gpt-4o-mini and gpt-4o-mini-2024-07-18.
This PR fixes that by allowing the Img2txt model to be specified by tag
instead of by model_type.
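A minimal sketch of the idea, assuming a model entry carries a list of capability tags (the field names, tag values, and helper function below are hypothetical, not the PR's actual schema):

```python
# Hypothetical model registry entries: one multimodal model, multiple tags.
MODELS = [
    {"llm_name": "gpt-4o-mini", "model_type": "chat", "tags": ["LLM", "IMAGE2TEXT"]},
    {"llm_name": "text-only-model", "model_type": "chat", "tags": ["LLM"]},
]


def img2txt_candidates(models):
    # Select by tag rather than by model_type, so a multimodal chat model can
    # also serve Img2txt without registering an alias such as
    # gpt-4o-mini-2024-07-18.
    return [m for m in models if "IMAGE2TEXT" in m.get("tags", [])]


print([m["llm_name"] for m in img2txt_candidates(MODELS)])  # ['gpt-4o-mini']
```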
### Type of change
- [x] Refactoring
### What problem does this PR solve?
Fix: After deleting all conversation lists, the chat input box can still
be used for input. #4907
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Feat: Add LlmSettingFieldItems component #3221
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
feat: Add hint for operators, round to square, input variable, readable
operator ID. #3056
### Type of change
- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):