Chore(config): remove Youdao and BAAI embedding model providers (#10873)

### What problem does this PR solve?

This commit removes the Youdao and BAAI entries from the LLM factories
configuration as they are no longer needed or supported.

### Type of change

- [x] Config update
Liu An committed via GitHub on 2025-10-29 19:38:57 +08:00
parent 55eb525fdc · commit 40b2c48957
2 changed files with 1 addition and 29 deletions


```diff
@@ -368,7 +368,7 @@ def my_llms():
 @manager.route('/list', methods=['GET'])  # noqa: F821
 @login_required
 def list_app():
-    self_deployed = ["Youdao", "FastEmbed", "BAAI", "Ollama", "Xinference", "LocalAI", "LM-Studio", "GPUStack"]
+    self_deployed = ["FastEmbed", "Ollama", "Xinference", "LocalAI", "LM-Studio", "GPUStack"]
     weighted = []
     model_type = request.args.get("model_type")
     try:
```
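For context, a minimal sketch of how a `self_deployed` allowlist like the one above might be used to separate locally run providers from hosted ones. The helper names here are illustrative assumptions, not RAGFlow's actual API:

```python
# Illustrative sketch only: SELF_DEPLOYED mirrors the updated list in the
# diff; is_self_deployed and split_factories are hypothetical helpers.
SELF_DEPLOYED = ["FastEmbed", "Ollama", "Xinference", "LocalAI", "LM-Studio", "GPUStack"]

def is_self_deployed(factory_name: str) -> bool:
    """True when the provider runs locally rather than behind a hosted API."""
    return factory_name in SELF_DEPLOYED

def split_factories(factories: list[str]) -> tuple[list[str], list[str]]:
    """Split provider names into (self-deployed, hosted) groups."""
    local = [f for f in factories if is_self_deployed(f)]
    hosted = [f for f in factories if not is_self_deployed(f)]
    return local, hosted
```

After this change, "Youdao" and "BAAI" no longer match the allowlist, so they would fall into the hosted group (or be absent entirely once their factory entries are removed).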