## Summary
This PR fixes two critical bugs in the `chunk_list()` method that prevent large documents (>128 chunks) from being processed correctly in GraphRAG and other workflows.
## Bugs Fixed
### Bug 1: Incorrect pagination offset calculation
**Location:** `rag/nlp/search.py` lines 530-531
**Problem:** The loop variable `p` was used directly as the offset, causing incorrect pagination:
```python
# BEFORE (BUGGY):
for p in range(offset, max_count, bs):  # p = 0, 128, 256, 384...
    es_res = self.dataStore.search(..., p, bs, ...)  # p used as offset
```

**Fix:** Use the page number multiplied by the batch size:

```python
# AFTER (FIXED):
for page_num, p in enumerate(range(offset, max_count, bs)):
    es_res = self.dataStore.search(..., page_num * bs, bs, ...)
```
### Bug 2: Premature loop termination

**Location:** `rag/nlp/search.py` lines 538-539

**Problem:** The loop terminates when any page returns fewer than 128 chunks, even when thousands more remain:

```python
# BEFORE (BUGGY):
if len(dict_chunks.values()) < bs:  # breaks at 126 chunks even if 3,000+ remain
    break
```

**Fix:** Only terminate when zero chunks are returned:

```python
# AFTER (FIXED):
if len(dict_chunks.values()) == 0:
    break
```
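Taken together, the two fixes give a pagination loop that derives each offset from the page number and stops only on an empty page. A minimal standalone sketch of that pattern (here `fetch_page` is just a placeholder for the real `dataStore.search` call):

```python
def iterate_chunks(fetch_page, max_count=10000, bs=128):
    """Illustrative pagination loop mirroring both fixes above."""
    all_chunks = []
    for page_num, _ in enumerate(range(0, max_count, bs)):
        # Bug 1 fix: the offset passed to the backend is page_num * bs.
        page = fetch_page(page_num * bs, bs)
        all_chunks.extend(page)
        # Bug 2 fix: stop only when a page comes back empty, so a short page
        # (e.g. 126 of 128) no longer ends the loop early.
        if len(page) == 0:
            break
    return all_chunks
```

With a backend holding 3,207 chunks this returns all of them, whereas the old check would have stopped at the first page shorter than 128.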
## Enhancement: Add `max_count` parameter to GraphRAG

**Location:** `graphrag/general/index.py` line 60

Added a `max_count=10000` parameter to chunk loading for both the LightRAG and General GraphRAG paths to ensure all chunks are processed.
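A rough sketch of the call-site change; everything here except `max_count` is a placeholder, since the exact `chunk_list()` signature is not reproduced in this description:

```python
def load_all_chunks(retriever, doc_id, tenant_id, kb_ids):
    # Hypothetical wrapper; argument names other than max_count stand in for
    # whatever graphrag/general/index.py actually passes.
    return retriever.chunk_list(
        doc_id,
        tenant_id,
        kb_ids,
        max_count=10000,  # the added parameter: raise the ceiling so all chunks are loaded
    )
```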
## Testing

Validated with a 314-page legal document containing 3,207 chunks.

**Before fixes:**
- Only 2-126 chunks processed
- GraphRAG generated 25 nodes, 8 edges

**After fixes:**
- All 3,207 chunks processed ✅
- GraphRAG processes the complete dataset
## Impact

These bugs affect any workflow that uses `chunk_list()` with large documents, particularly:
- GraphRAG knowledge graph generation
- RAPTOR hierarchical summarization
- Document processing pipelines with >128 chunks
## Related Issue

Fixes #11687

## Checklist

- [x] Code follows project style guidelines
- [x] Tested with large documents (3,207+ chunks)
- [x] Both bugs validated by Dosu bot in issue #11687
- [x] No breaking changes to API
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
### What problem does this PR solve?
Fix: `doc_aggs` is not correctly returned when no chunks are retrieved.
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Incorrect retrieval total count with pagination enabled.
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Fix: OpenSearch retrieval returns no results #11006
Add documentation #11072
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Documentation Update
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
### What problem does this PR solve?
Fix: OpenSearch retrieval error #10828
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
- Rename `rmSpace` to `remove_redundant_spaces`
- Move `clean_markdown_block` to the common module
- Add unit tests for `remove_redundant_spaces` and `clean_markdown_block`
### Type of change
- [x] Refactoring
---------
Signed-off-by: Jin Hai <haijin.chn@gmail.com>
### What problem does this PR solve?
Fix: parsing Excel files with a chartsheet #10815
Fix: Clamp `begin` to a minimum of 0 to prevent negative indexing #10804
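The second fix is essentially a clamp; a tiny illustrative helper (the function name is made up) shows the idea:

```python
def clamp_begin(begin: int) -> int:
    """Illustrative only: never let a computed start index go negative."""
    return max(begin, 0)

print(clamp_begin(-5))   # 0  -> avoids Python's negative indexing from the end
print(clamp_begin(12))   # 12 -> valid offsets pass through unchanged
```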
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Reranking is not needed for Infinity, since Infinity normalizes each way's score before fusion.
### Type of change
- [x] Refactoring
### What problem does this PR solve?
#9082 #6365
<u> **WARNING: it's not compatible with the older version of `Agent`
module, which means that `Agent` from older versions can not work
anymore.**</u>
### Type of change
- [x] New Feature (non-breaking change which adds functionality)
### What problem does this PR solve?
1. Remove the redundant pop logic, since the condition is already checked by the preceding if statement.
2. Merge the logging logic.
### Type of change
- [x] Refactoring
### What problem does this PR solve?
Remove the restriction that forces `similarity_threshold=0` and `page_size=30` when `doc_ids` is not empty.
#8228
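A hedged before/after sketch of the behavior change (the real code differs; the names come from the description above):

```python
def effective_params(similarity_threshold: float, page_size: int, doc_ids: list):
    # Illustrative only. Before the fix, a non-empty doc_ids list forced
    # similarity_threshold = 0 and page_size = 30, ignoring what the caller asked for.
    # After the fix, the caller's values are passed through unchanged:
    return similarity_threshold, page_size

print(effective_params(0.2, 10, ["doc-1"]))  # (0.2, 10) even though doc_ids is not empty
```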
---------
Co-authored-by: shiqing.wusq <shiqing.wusq@dtzhejiang.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
### What problem does this PR solve?
Change the citation mark to `[ID:n]`; it's easier for LLMs to follow the instruction :) #7904
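For illustration, a small snippet showing how answers with the new-style marks can be scanned for chunk references (the example sentence is made up; only the `[ID:n]` format comes from this change):

```python
import re

answer = "The warranty lasts 24 months [ID:0] and renews automatically [ID:7]."
cited_chunk_ids = [int(m) for m in re.findall(r"\[ID:(\d+)\]", answer)]
print(cited_chunk_ids)  # [0, 7]
```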
### Type of change
- [x] Refactoring
### What problem does this PR solve?
Fix wrong pagination in retrieval testing. #7171
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
---------
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
### Related Issue:
https://github.com/infiniflow/ragflow/issues/6741
### Environment:
Using nightly version
Commit version:
[[6051abb](6051abb4a3)]
### Bug Description:
The retrieval function in `rag/nlp/search.py` returns the original total chunk count even after chunks are filtered by `similarity_threshold`. This creates an inconsistency between the chunks actually returned and the reported total.
### Changes Made:
- Added code to count how many search results actually meet or exceed the configured similarity threshold.
- Positioned the calculation after the `doc_ids` conditional logic so that special cases are handled correctly.
- Updated the `ranks["total"]` value to store this filtered count instead of the raw search result count.
- Used NumPy for the counting, leveraging optimized C-level batch operations for speed.
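A minimal sketch of the counting step described above (the arrays and names here are illustrative, not the actual variables in `search.py`):

```python
import numpy as np

# Count, in one vectorized pass, how many retrieved chunks meet the threshold.
similarities = np.array([0.91, 0.42, 0.77, 0.18, 0.66])
similarity_threshold = 0.5
filtered_total = int((similarities >= similarity_threshold).sum())
print(filtered_total)  # 3 -> what ranks["total"] now reports instead of the raw hit count
```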
### What problem does this PR solve?
This PR fixes an `AttributeError` in the `all_tags` method of the `Dealer` class. Previously, the method incorrectly called `self.docStoreConn.indexExist` instead of `self.dataStore.indexExist`. Since `self.docStoreConn` was never set (and `self.dataStore` is already initialized in `__init__`), this resulted in an error when checking whether the index exists. This change ensures that the proper connector is used for the index existence check, thereby resolving the issue.
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
Use `json.loads()` instead.
### What problem does this PR solve?
Using `eval()` can lead to code injections. I think this loads a JSON
field, right? If yes, why is this done via `eval()` and not
`json.loads()`?
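A short illustration of the difference (the JSON content is made up):

```python
import json

raw = '{"doc_id": "abc", "chunk_num": 3207}'  # a stored JSON field, illustrative

# json.loads only parses data; malicious text stays text.
parsed = json.loads(raw)
print(parsed["chunk_num"])  # 3207

# eval, by contrast, executes arbitrary Python, so a crafted value such as
# '__import__("os").system("...")' would run instead of merely being parsed.
```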
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
### What problem does this PR solve?
Stop model-based reranking when the search result is empty; otherwise reranking may raise an error (qwen).
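A hedged sketch of the guard (function and attribute names are placeholders for the actual retrieval code):

```python
def maybe_rerank(rerank_model, query, chunks):
    # Illustrative only: skip model-based reranking entirely when retrieval
    # returned nothing, since some backends (e.g. qwen) raise on an empty list.
    if not chunks:
        return chunks
    return rerank_model.rerank(query, chunks)  # hypothetical rerank call
```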
### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
Co-authored-by: 刘博 <liubo@ynby.cn>