Fix: Reduce excessive IO operations by loading LLM factory configurations once (#6047)

### What problem does this PR solve?

This PR fixes an issue where the application repeatedly read the
llm_factories.json file from disk in multiple places, which could lead
to "Too many open files" errors under high load. The fix centralizes
the file read in the settings.py module and stores the parsed data in a
global variable (`FACTORY_LLM_INFOS`) that other modules can access.
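
For reference, a minimal sketch of the centralized load this PR describes. The `init_settings` entry point is an assumption; the path expression, the `"factory_llm_infos"` key, and the `FACTORY_LLM_INFOS` name are taken from the diff below.

```python
import json
import os

from api.utils.file_utils import get_project_base_directory

# Populated once at startup; other modules read settings.FACTORY_LLM_INFOS
# instead of re-opening conf/llm_factories.json on every call.
FACTORY_LLM_INFOS = []


def init_settings():
    # Assumed initialization hook: read the factory config from disk a
    # single time and cache the parsed list in the module-level global.
    global FACTORY_LLM_INFOS
    path = os.path.join(get_project_base_directory(), "conf/llm_factories.json")
    with open(path, "r") as f:
        FACTORY_LLM_INFOS = json.load(f)["factory_llm_infos"]
```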

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [x] Performance Improvement
- [ ] Other (please describe):
Author: utopia2077
Date: 2025-03-14 09:54:38 +08:00
Committed by: GitHub
Parent commit: 47926f7d21
Commit: 2d4a60cae6
4 changed files with 18 additions and 17 deletions

@@ -13,13 +13,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-import json
 import logging
-import os
 from api.db.services.user_service import TenantService
 from api.utils.file_utils import get_project_base_directory
 from rag.llm import EmbeddingModel, CvModel, ChatModel, RerankModel, Seq2txtModel, TTSModel
 from api import settings
 from api.db import LLMType
 from api.db.db_models import DB
 from api.db.db_models import LLMFactories, LLM, TenantLLM
@@ -75,7 +73,7 @@ class TenantLLMService(CommonService):
         # model name must be xxx@yyy
         try:
-            model_factories = json.load(open(os.path.join(get_project_base_directory(), "conf/llm_factories.json"), "r"))["factory_llm_infos"]
+            model_factories = settings.FACTORY_LLM_INFOS
             model_providers = set([f["name"] for f in model_factories])
             if arr[-1] not in model_providers:
                 return model_name, None
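
For context, a hypothetical reconstruction of the helper around the changed line; the function name and the fallback branches are assumptions, and only the statements visible in the hunk above are confirmed.

```python
from api import settings


def split_model_name_and_factory(model_name):
    # Model names may carry their provider as a suffix: "xxx@yyy".
    arr = model_name.split("@")
    if len(arr) < 2:
        return model_name, None
    try:
        # Use the list cached by settings at startup instead of re-reading
        # conf/llm_factories.json from disk on every lookup.
        model_factories = settings.FACTORY_LLM_INFOS
        model_providers = set([f["name"] for f in model_factories])
        if arr[-1] not in model_providers:
            return model_name, None
        return "@".join(arr[:-1]), arr[-1]
    except Exception:
        return model_name, None
```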