Mirror of https://github.com/infiniflow/ragflow.git (synced 2025-12-08 20:42:30 +08:00)
Feature/feat1017 (#2872)
### What problem does this PR solve?

1. fix: mind map rendering error in the knowledge graph, caused by a changed `@antv/g6` version
2. feat: configurable number of concurrent threads in the graph extractor (see the configuration note and sketch below)
3. fix: used-token count not updated for the tenant
4. feat: timeout configuration support for the LLM
5. fix: regex error in the graph extractor
6. feat: Qwen rerank (`gte-rerank`) support
7. fix: timeout handling in the knowledge graph indexing process; chat now uses streaming output, and this is configurable
8. feat: `qwen-long` model configuration

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: chongchuanbing <chongchuanbing@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
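As a usage note on item 2: the new concurrency setting is read from the environment by the mind map extractor and defaults to 12 workers. A minimal, hypothetical way to override it for an in-process run, before the extractor builds its thread pool:

```python
import os

# Hypothetical override: cap the mind map extractor's thread pool at 4 workers
# instead of the default of 12 introduced by this PR. The variable must be set
# before the extractor reads it.
os.environ["MINDMAP_EXTRACTOR_MAX_WORKERS"] = "4"
```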
```diff
@@ -16,6 +16,7 @@
 import collections
 import logging
+import os
 import re
 import logging
 import traceback
```
```diff
@@ -89,7 +90,8 @@ class MindMapExtractor:
         prompt_variables = {}

         try:
-            exe = ThreadPoolExecutor(max_workers=12)
+            max_workers = int(os.environ.get('MINDMAP_EXTRACTOR_MAX_WORKERS', 12))
+            exe = ThreadPoolExecutor(max_workers=max_workers)
             threads = []
             token_count = max(self._llm.max_length * 0.8, self._llm.max_length - 512)
             texts = []
```
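For context, a minimal, self-contained sketch of the pattern this hunk introduces: the worker count comes from `MINDMAP_EXTRACTOR_MAX_WORKERS` (default 12), a per-batch token budget is derived from the model's context window, and the batches are fanned out over a thread pool. The helpers `extract_mind_map` and `num_tokens` below are hypothetical stand-ins, not ragflow APIs.

```python
import os
from concurrent.futures import ThreadPoolExecutor, as_completed

def extract_mind_map(batch: str) -> dict:
    # Hypothetical stand-in for the per-batch LLM call made by the extractor.
    return {"text": batch, "children": []}

def num_tokens(text: str) -> int:
    # Hypothetical stand-in for the tokenizer; a whitespace split keeps the
    # sketch self-contained.
    return len(text.split())

def run_extraction(sections: list[str], llm_max_length: int = 8192) -> list[dict]:
    # Worker count is read from the environment, defaulting to 12 as in the PR.
    max_workers = int(os.environ.get("MINDMAP_EXTRACTOR_MAX_WORKERS", 12))

    # Per-batch token budget, as in the hunk above: the larger of 80% of the
    # context window and (max_length - 512), so large windows keep roughly a
    # 512-token headroom and small ones keep 20%.
    token_count = max(llm_max_length * 0.8, llm_max_length - 512)

    # Pack sections into batches that stay under the token budget.
    batches, current, used = [], [], 0
    for sec in sections:
        n = num_tokens(sec)
        if current and used + n > token_count:
            batches.append("\n".join(current))
            current, used = [], 0
        current.append(sec)
        used += n
    if current:
        batches.append("\n".join(current))

    # Fan the batches out over the configurable thread pool.
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as exe:
        futures = [exe.submit(extract_mind_map, b) for b in batches]
        for f in as_completed(futures):
            results.append(f.result())
    return results

if __name__ == "__main__":
    os.environ.setdefault("MINDMAP_EXTRACTOR_MAX_WORKERS", "4")
    print(run_extraction(["first section of a document", "second section"]))
```

With the default `llm_max_length` of 8192 in this sketch, the budget resolves to max(6553.6, 7680) = 7680 tokens per batch.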