Compare commits

...

391 Commits

Author SHA1 Message Date
49494d4e3c Update version number for poetry (#3637)
### What problem does this PR solve?



### Type of change

- [x] Refactoring
2024-11-25 17:57:08 +08:00
3839d8abc7 Updated FAQ and Upgrade guide (#3636)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-25 17:29:56 +08:00
d8b150a34c Add support for folder deletion (#3635)
### What problem does this PR solve?

Add support for folder deletion.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-25 16:59:43 +08:00
4454b33e51 Feat: Add PricingCard component #3221 (#3634)
### What problem does this PR solve?

Feat: Add PricingCard component #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-25 16:21:47 +08:00
ce6b4c0e05 Fix a bug in completions (#3632)
### What problem does this PR solve?

Fix a bug in completions

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-25 16:21:28 +08:00
ddf01e0450 Fix missing info in README and config error (#3633)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-25 15:53:29 +08:00
86e48179a1 Fix bugs in api (#3624)
### What problem does this PR solve?

#3488

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-25 15:25:48 +08:00
b2c33b4df7 DOC: Added health check and team management (#3630)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-25 14:31:01 +08:00
9348616659 Handle infinity empty response (#3627)
### What problem does this PR solve?

Handle infinity empty response. Close #3623
Show version in docker build log

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-25 14:09:42 +08:00
a0e9b62de5 Feat: Add TeamManagement component #3221 (#3626)
### What problem does this PR solve?

Feat: Add TeamManagement component #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-25 12:21:37 +08:00
7874aaaf60 doc: rewrite some confusing statements (#3607)
rewrite some confusing statements to improve user experience
2024-11-25 11:57:31 +08:00
08ead81dde Bump infinity to v0.5.0-dev5 (#3520)
### What problem does this PR solve?

Bump infinity to v0.5.0-dev5

### Type of change

- [x] Refactoring
2024-11-25 11:53:58 +08:00
e5af18d5ea Update docs for v0.14.0 (#3625)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-11-25 11:37:56 +08:00
609236f5c1 Let 'One' be applicable to tables in docx (#3619)
### What problem does this PR solve?

#3598

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Performance Improvement
2024-11-25 09:57:54 +08:00
6a3f9bc32a Restore version info to v0.13.0 (#3605)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-11-23 17:06:22 +08:00
934d6d9ad1 Update quickstart.mdx (#3603)
### What problem does this PR solve?

Fixed typo from RAGFlow_IMAGE to RAGFlOW_IMAGE

### Type of change

- [X] Documentation Update
2024-11-23 15:27:13 +08:00
875096384b When the Qwen rerank model does not return OK, raise an exception to notify the user (#3593)
### What problem does this PR solve?

When calling the Qwen rerank model, if the model does not return
correctly, an exception should be raised to notify the user, rather than
simply returning a value of 0, as this would be confusing to the user.
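
A minimal sketch of the intended behavior (the response shape below is an assumption for illustration, not the exact DashScope SDK object):

```python
def rerank_scores(resp):
    # Raise on failure instead of silently returning 0 for every document.
    if resp.status_code != 200:
        raise ValueError(f"Qwen rerank failed: {resp.status_code} {resp.message}")
    return [d["relevance_score"] for d in resp.output["results"]]
```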
### Type of change          

- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 22:34:34 +08:00
a10c2f2eff Fix: Solve the problem of model files in the image being soft links pointing to a non-existent address. #3584 (#3586)
### What problem does this PR solve?

Fix: Solve the problem of model files in the image being soft links
pointing to a non-existent address. #3584

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-22 21:54:39 +08:00
646ac1f2b4 Improved image build instructions (#3580)
### What problem does this PR solve?

Improved arm64 image build instructions

### Type of change

- [x] Documentation Update
- [x] Refactoring
2024-11-22 20:24:32 +08:00
8872aed512 Feat: Modify the prompt text for deleting team members #2834 (#3599)
### What problem does this PR solve?
Feat: Modify the prompt text for deleting team members #2834

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 18:50:53 +08:00
55692e4da6 Feat: Capitalize the first letter of the team's role #2834 (#3597)
### What problem does this PR solve?

Feat: Capitalize the first letter of the team's role #2834

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 18:19:34 +08:00
6314d3c727 Updates for readme (#3595)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-11-22 16:47:45 +08:00
06b9256972 Feat: remove useSetLlmSetting from GenerateForm #3591 (#3592)
### What problem does this PR solve?

Feat: remove useSetLlmSetting from GenerateForm #3591

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-22 16:31:44 +08:00
cc219ff648 Fix agent session API (#3589)
### What problem does this PR solve?

#3585
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-22 16:19:00 +08:00
ee33bf71eb Feat: Add ProfileSetting page #3221 (#3588)
### What problem does this PR solve?

Feat: Add ProfileSetting page  #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 15:11:38 +08:00
ee7fd71fdc Add an agent template: SEO blogger (#3582)
### What problem does this PR solve?

#3560

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 13:12:05 +08:00
d56f52eef8 Fix agent api invocation issue (#3581)
### What problem does this PR solve?

#3570 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-22 12:37:27 +08:00
9f3141804f Fix chunk enable/disable issue (#3579)
### What problem does this PR solve?

#3576

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-22 12:25:42 +08:00
60a3e1a8dc Update Docker .env (#3576)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-11-22 12:03:46 +08:00
9541d7e7bc Added TRACE_MALLOC_DELTA and TRACE_MALLOC_FULL (#3555)
### What problem does this PR solve?

Added TRACE_MALLOC_DELTA and TRACE_MALLOC_FULL to debug task_executor.py
heap. Relates to #3518
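
A sketch of what env-driven tracemalloc reporting can look like (the reporting hook is illustrative, not the exact task_executor.py code):

```python
import os
import tracemalloc

TRACE_MALLOC_DELTA = int(os.environ.get("TRACE_MALLOC_DELTA", "0"))
TRACE_MALLOC_FULL = int(os.environ.get("TRACE_MALLOC_FULL", "0"))

tracemalloc.start()
snapshot0 = tracemalloc.take_snapshot()

def report_heap():
    global snapshot0
    snapshot1 = tracemalloc.take_snapshot()
    if TRACE_MALLOC_DELTA > 0:
        # Top allocation growth since the previous report.
        for stat in snapshot1.compare_to(snapshot0, "lineno")[:10]:
            print(stat)
    if TRACE_MALLOC_FULL > 0:
        # Full view of the current top allocations.
        for stat in snapshot1.statistics("lineno")[:10]:
            print(stat)
    snapshot0 = snapshot1
```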

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-22 12:00:25 +08:00
811c49d7a2 Updated obsolete faqs (#3575)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2024-11-22 11:11:06 +08:00
482c1b59c8 Check tika.parser return result (#3564)
### What problem does this PR solve?

Check tika.parser return result. Close #3229
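
A minimal sketch of the kind of guard this adds (tika-python's `parser.from_buffer` can return a dict whose `content` is `None`):

```python
from tika import parser

def extract_text(binary: bytes) -> str:
    parsed = parser.from_buffer(binary)
    # Guard both a missing result and an empty "content" field instead
    # of crashing downstream.
    if not parsed or not parsed.get("content"):
        raise RuntimeError("tika.parser returned an empty result")
    return parsed["content"]
```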

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Yingfeng <yingfeng.zhang@gmail.com>
2024-11-22 11:05:06 +08:00
691ea287c2 Added RAGFlow image version into bug report template (#3561)
### What problem does this PR solve?

Add RAGFlow image version into bug report template

### Type of change

- [x] Other (please describe):
2024-11-22 10:50:13 +08:00
b87d14492f Revert "Updated obsolete faqs (#3554)" (#3573)
This reverts commit 13ff463845.

### What problem does this PR solve?


### Type of change

- [x] Other (please describe):
2024-11-22 10:42:10 +08:00
cc5960b88e Vietnamese language support on web (#3549)
### What problem does this PR solve?


### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2024-11-21 18:43:08 +08:00
ee50f78d99 Add component 'Template' (#3562)
### What problem does this PR solve?

#3560

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-21 18:26:22 +08:00
193b08a3ed Fix: Limit node version #3547 (#3563)
### What problem does this PR solve?

Fix: Limit node version #3547

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-21 18:14:22 +08:00
3a3e23d8d9 Feat: Add Template operator #3556 (#3559)
### What problem does this PR solve?

Feat: Add Template operator #3560

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-21 18:05:31 +08:00
30f111edb3 Fixes for translation agent (#3557)
### What problem does this PR solve?

#3556 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-11-21 16:22:25 +08:00
d47ee88454 Feat: When saving the canvas, other dsl parameters passed from the backend are spliced into the dsl parameters #3355 (#3558)
### What problem does this PR solve?

Feat: When saving the canvas, other dsl parameters passed from the
backend are spliced into the dsl parameters #3355
#3556

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-21 16:21:54 +08:00
13ff463845 Updated obsolete faqs (#3554)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-11-21 16:15:37 +08:00
bf9ebda3c8 Add tests for frontend API (#3552)
### What problem does this PR solve?

Add tests for frontend API

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-21 15:39:25 +08:00
85dd9fde43 Fix a bug in agent api (#3551)
### What problem does this PR solve?

Fix a bug in agent api
#3539

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-21 12:45:22 +08:00
c7c8b3812f Add test for document (#3548)
### What problem does this PR solve?

Add test for document

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-21 12:02:26 +08:00
0ac6dc8f8c Cut down the number of ES retry attempts (#3550)
### What problem does this PR solve?

#3541
### Type of change


- [x] Refactoring
- [x] Performance Improvement
2024-11-21 11:37:45 +08:00
58a2200b80 Fix: search issue for graphrag (#3546)
### What problem does this PR solve?

#3538

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-21 11:13:29 +08:00
e10b0e6b60 Fix: Inaccurate description (#3540)
### What problem does this PR solve?

Correction of inaccurate front-end description.

![image](https://github.com/user-attachments/assets/4a7330b5-1384-4092-825d-56675fb5799c)

![image](https://github.com/user-attachments/assets/b3737912-4f5a-4928-a629-404a4fa4da6e)

### Type of change

- [x] Refactoring
2024-11-21 10:09:20 +08:00
d9c882399d Ensure LIGHTEN work (#3542)
### What problem does this PR solve?
#3531

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-21 09:56:28 +08:00
8930bfcff8 Fix bugs (#3535)
### What problem does this PR solve?

1. System monitor icon and text missing
2. Team knowledge base can't be searched

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-20 20:55:42 +08:00
9b9afa9d6e Feat: Display the input parameters of begin in the output result table #3355 (#3534)
### What problem does this PR solve?

Feat: Display the input parameters of begin in the output result table
#3355

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-20 20:55:31 +08:00
362db857d0 New: a new interpreter based on Andrew Ng's theory. (#3532)
### What problem does this PR solve?

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-20 20:55:22 +08:00
541272eb99 Fix: authorization issue (#3530)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:55:10 +08:00
5b44b99cfd Removed beartype (#3528)
### What problem does this PR solve?

The beartype configuration on main (64f50992e0) is:
```
from beartype import BeartypeConf
from beartype.claw import beartype_all  # <-- you didn't sign up for this
beartype_all(conf=BeartypeConf(violation_type=UserWarning))    # <-- emit warnings from all code
```

ragflow_server failed at a third-party package:

```
(ragflow-py3.10) zhichyu@iris:~/github.com/infiniflow/ragflow$ rm -rf logs/* && bash docker/launch_backend_service.sh 
Starting task_executor.py for task 0 (Attempt 1)
Starting ragflow_server.py (Attempt 1)
Traceback (most recent call last):
  File "/home/zhichyu/github.com/infiniflow/ragflow/api/ragflow_server.py", line 22, in <module>
    from api.utils.log_utils import initRootLogger
  File "/home/zhichyu/github.com/infiniflow/ragflow/api/utils/__init__.py", line 25, in <module>
    import requests
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/requests/__init__.py", line 43, in <module>
    import urllib3
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/__init__.py", line 15, in <module>
    from ._base_connection import _TYPE_BODY
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/_base_connection.py", line 5, in <module>
    from .util.connection import _TYPE_SOCKET_OPTIONS
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/__init__.py", line 4, in <module>
    from .connection import is_connection_dropped
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 7, in <module>
    from .timeout import _DEFAULT_TIMEOUT, _TYPE_TIMEOUT
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/timeout.py", line 20, in <module>
    _DEFAULT_TIMEOUT: Final[_TYPE_DEFAULT] = _TYPE_DEFAULT.token
NameError: name 'Final' is not defined
Traceback (most recent call last):
  File "/home/zhichyu/github.com/infiniflow/ragflow/rag/svr/task_executor.py", line 22, in <module>
    from api.utils.log_utils import initRootLogger
  File "/home/zhichyu/github.com/infiniflow/ragflow/api/utils/__init__.py", line 25, in <module>
    import requests
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/requests/__init__.py", line 43, in <module>
    import urllib3
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/__init__.py", line 15, in <module>
    from ._base_connection import _TYPE_BODY
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/_base_connection.py", line 5, in <module>
    from .util.connection import _TYPE_SOCKET_OPTIONS
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/__init__.py", line 4, in <module>
    from .connection import is_connection_dropped
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 7, in <module>
    from .timeout import _DEFAULT_TIMEOUT, _TYPE_TIMEOUT
  File "/home/zhichyu/github.com/infiniflow/ragflow/.venv/lib/python3.10/site-packages/urllib3/util/timeout.py", line 20, in <module>
    _DEFAULT_TIMEOUT: Final[_TYPE_DEFAULT] = _TYPE_DEFAULT.token
NameError: name 'Final' is not defined
```

This third-party package is out of our control, so I have to remove
beartype entirely.


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:54:57 +08:00
6be7901df2 Warning instead of exception on type mismatch (#3523)
### What problem does this PR solve?

Warning instead of exception on type mismatch.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:54:42 +08:00
4d42bcd517 Feat: merge the begin operator's url and file into one field #3355 (#3521)
### What problem does this PR solve?

Feat: merge the begin operator's url and file into one field #3355
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-20 20:54:27 +08:00
9b4c2868bd Fix set_output type hint (#3516)
### What problem does this PR solve?

Fix set_output type hint

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:54:09 +08:00
d02a2b131a Fix: potential risk (#3515)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-11-20 20:53:58 +08:00
81c7b6afc5 Make Spark model more robust to model name (#3514)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:53:44 +08:00
cad341e794 Added kb_id filter to knn. Fix #3458 (#3513)
### What problem does this PR solve?

Added kb_id filter to knn. Fix #3458
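
A sketch of the filtered knn request with the Elasticsearch 8 client, assuming `es` is an `elasticsearch.Elasticsearch` instance and `qv`, `kb_ids`, `idx_name` are in scope (field names follow RAGFlow's index conventions but are assumptions here):

```python
knn = {
    "field": "q_1024_vec",
    "query_vector": qv,
    "k": 1024,
    "num_candidates": 2048,
    "filter": {"terms": {"kb_id": kb_ids}},  # the added kb_id filter
}
res = es.search(index=idx_name, knn=knn)
```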

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:53:30 +08:00
e559cebcdc fix: keyerror issue (#3512)
### What problem does this PR solve?

#3511

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:53:20 +08:00
8b4407a68c feat: Add Datasets component to home page #3221 (#3508)
### What problem does this PR solve?

feat: Add Datasets component to home page #3221
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-20 20:53:07 +08:00
289034f36e smooth term weight (#3510)
### What problem does this PR solve?

#3499

### Type of change

- [x] Performance Improvement
2024-11-20 20:52:51 +08:00
17a7ea42eb fix synonym bug (#3506)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-20 20:52:36 +08:00
2044bb0039 Fix bugs (#3502)
### What problem does this PR solve?

1. Remove unused code
2. Fix type mismatch, in nlp search and infinity search interface
3. Fix chunk list, get all chunks of this user.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-20 20:52:23 +08:00
c4f2464935 fix: laws.py added missing import logging (#3501)
### What problem does this PR solve?

_Choosing Laws Chunk Method results in an error when parsing a document.
The error is caused by a missing import in the `laws.py` file._

```
Traceback (most recent call last):
  File "/ragflow/rag/svr/task_executor.py", line 445, in handle_task
    do_handle_task(task)
  File "/ragflow/rag/svr/task_executor.py", line 384, in do_handle_task
    cks = build(r)
          ^^^^^^^^
  File "/ragflow/rag/svr/task_executor.py", line 196, in build
    cks = chunker.chunk(row["name"], binary=binary, from_page=row["from_page"],
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ragflow/rag/app/laws.py", line 161, in chunk
    for txt, poss in pdf_parser(filename if not binary else binary,
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ragflow/rag/app/laws.py", line 124, in __call__
    logging.debug("layouts:".format(
    ^^^^^^^
NameError: name 'logging' is not defined. Did you forget to import 'logging'

```
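
The fix itself is the one missing import at the top of `rag/app/laws.py`:

```python
import logging  # previously absent, causing the NameError above
```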

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Michal Masrna <m.marna1@gmail.com>
2024-11-20 20:52:05 +08:00
bcb6f7168f [bug] import typing.cast for leiden.py (#3493)
### What problem does this PR solve?

The leiden algorithm throws an exception because the `cast` function definition is missing.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-19 18:42:30 +08:00
361cff34fc add input variables to begin component (#3498)
### What problem does this PR solve?

#3355 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-19 18:41:48 +08:00
0cd5b64c3b Changed requirement to python 3.10 (#3496)
### What problem does this PR solve?

Changed requirement to python 3.10.
Changed image base to Ubuntu 22.04 since it contains python 3.10.

### Type of change

- [x] Refactoring
2024-11-19 18:25:04 +08:00
16fbe9920d feat: Stream the greetings of the agent dialogue #3355 (#3490)
### What problem does this PR solve?

feat: Stream the greetings of the agent dialogue #3355

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-19 15:14:35 +08:00
e4280be5e5 minor (#3483)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-11-19 14:59:21 +08:00
d42362deb6 Add api for sessions and add max_tokens for tenant_llm (#3472)
### What problem does this PR solve?

Add api for sessions and add max_tokens for tenant_llm

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-19 14:51:33 +08:00
883fafde72 Fix elasticsearch status display (#3487)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-19 14:40:58 +08:00
568322aeaf fix(rag): fix error in viewing document chunk and cannot start task_executor server (#3481)
### What problem does this PR solve?

1. Fix error in viewing document chunk

<img width="1677" alt="Pasted Graphic"
src="https://github.com/user-attachments/assets/acd84cde-f38c-4190-b135-5e5139ae2613">

Viewing document chunk details raises a BeartypeCallHintParamViolation
error.

```
Traceback (most recent call last):
  File "ragflow/.venv/lib/python3.12/site-packages/flask/app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
  File "ragflow/.venv/lib/python3.12/site-packages/flask/app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "ragflow/.venv/lib/python3.12/site-packages/flask_login/utils.py", line 290, in decorated_view
    return current_app.ensure_sync(func)(*args, **kwargs)
  File "ragflow/api/apps/chunk_app.py", line 311, in knowledge_graph
    sres = settings.retrievaler.search(req, search.index_name(tenant_id), kb_ids)
  File "<@beartype(rag.nlp.search.Dealer.search) at 0x3381fd800>", line 39, in search
beartype.roar.BeartypeCallHintParamViolation: Method rag.nlp.search.Dealer.search() parameter idx_names='ragflow_0e1e67f431d711ef98fc00155d29195d' violates type hint list[str], as str 'ragflow_0e1e67f431d711ef98fc00155d29195d' not instance of list.

2024-11-19 11:30:29,817 ERROR 91013 Method rag.nlp.search.Dealer.search() parameter idx_names='ragflow_0e1e67f431d711ef98fc00155d29195d' violates type hint list[str], as str 'ragflow_0e1e67f431d711ef98fc00155d29195d' not instance of list.
Traceback (most recent call last):
  File "ragflow/api/apps/chunk_app.py", line 60, in list_chunk
    sres = settings.retrievaler.search(query, search.index_name(tenant_id), kb_ids, highlight=True)
  File "<@beartype(rag.nlp.search.Dealer.search) at 0x3381fd800>", line 39, in search
beartype.roar.BeartypeCallHintParamViolation: Method rag.nlp.search.Dealer.search() parameter idx_names='ragflow_0e1e67f431d711ef98fc00155d29195d' violates type hint list[str], as str 'ragflow_0e1e67f431d711ef98fc00155d29195d' not instance of list.
```


because in nlp/search.py, idx_names is typed as list only (a one-line
normalization, sketched after this list, fixes it)

<img width="1098" alt="Pasted Graphic 2"
src="https://github.com/user-attachments/assets/4998cb1e-94bc-470b-b2f4-41ecb5b08f8a">

but the DocStoreConnection.search method accepts list or str
<img width="1175" alt="Pasted Graphic 3"
src="https://github.com/user-attachments/assets/ee918b4a-87a5-42c9-a6d2-d0db0884b875">


and its implementations also accept list and str
es_conn.py

<img width="1121" alt="Pasted Graphic 4"
src="https://github.com/user-attachments/assets/3e6dc030-0a0d-416c-8fd4-0b4cfd576f8c">

infinity_conn.py

<img width="1221" alt="Pasted Graphic 5"
src="https://github.com/user-attachments/assets/44edac2b-6b81-45b0-a3fc-cb1c63219015">

2. Fix: cannot start the task_executor server due to an unresolved
reference 'Mapping'
<img width="1283" alt="Pasted Graphic 6"
src="https://github.com/user-attachments/assets/421f17b8-d0a5-46d3-bc4d-d05dc9dfc934">
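
A sketch of the normalization that resolves item 1 (the surrounding signature is abridged):

```python
class Dealer:
    def search(self, req, idx_names, kb_ids, **kwargs):
        # Accept both a single index name and a list, satisfying the
        # beartype-checked list[str] hint either way.
        if isinstance(idx_names, str):
            idx_names = [idx_names]
        ...
```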

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2024-11-19 14:36:10 +08:00
31decadd8e feat: Let the top navigation bar open in a tab when you right click on it #3018 (#3486)
### What problem does this PR solve?

feat: Let the top navigation bar open in a tab when you right click on
it #3018

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-19 14:20:00 +08:00
dec9b3e540 Fix logs. Use dict.pop instead of del. Close #3473 (#3484)
### What problem does this PR solve?

Fix logs. Use dict.pop instead of del.
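
The difference, as a tiny sketch:

```python
d = {"a": 1}

# `del d["b"]` raises KeyError when the key is absent.
# dict.pop with a default removes the key if present and never raises:
d.pop("b", None)
```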

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-19 14:15:25 +08:00
d0f94a42ff Refactor ragflow server (#3482)
### What problem does this PR solve?

1. Fix document error
2. Move server startup out of the `if __name__ == "__main__"` branch.

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-19 12:31:20 +08:00
ed0d47fc8a feat: "Open link in new tab" doesn't work for profile image #3018 (#3480)
### What problem does this PR solve?

feat:  "Open link in new tab" doesn't work for profile image #3018

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-19 11:40:59 +08:00
aa9a16e073 Fix document error (#3456)
### What problem does this PR solve?

1. Update README.md
2. Fix error description in docker/.env

### Type of change

- [x] Documentation Update

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-19 11:31:11 +08:00
eef84a86bf feat: Disable clicking the Next button while uploading files in RunDrawer #3355 (#3477)
### What problem does this PR solve?

feat: Disable clicking the Next button while uploading files in
RunDrawer #3355

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-19 11:03:39 +08:00
ed72d1100b minor format updates (#3471)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-18 19:19:28 +08:00
f4e9dae33a Updated upgrade guide to avoid confusion (#3469)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-18 19:07:05 +08:00
50f7b7e0a3 feat: Capture task executor null value exception #3409 (#3468)
### What problem does this PR solve?

feat: Capture task executor null value exception #3409

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-18 19:06:52 +08:00
01c2712941 feat: Display task executor tooltip with json-view #3409 (#3467)
### What problem does this PR solve?

feat: Display task executor  tooltip with json-view #3409

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-18 18:36:18 +08:00
4413683898 Introduced beartype (#3460)
### What problem does this PR solve?

Introduced [beartype](https://github.com/beartype/beartype) for runtime
type-checking.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-18 17:38:17 +08:00
3824c1fec0 feat: Show task_executor heartbeat #3409 (#3461)
### What problem does this PR solve?

feat: Show task_executor heartbeat #3409
### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-18 17:23:49 +08:00
4b3eeaa6ef Added LocalAI support for rerank models (#3446)
### What problem does this PR solve?

Hi there!
LocalAI added support for rerank models:
https://localai.io/features/reranker/

I've implemented the LocalAIRerank class (largely copied from the
OpenAI_APIRerank class).
Also, the LocalAI model responds with a 500 error code if the length of
"documents" is less than 2 in the similarity check, so I've added a
second "document" to the RERANK model connection check in
`api/apps/llm_app.py`.
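
Hypothetical usage mirroring that check (constructor arguments are illustrative):

```python
mdl = LocalAIRerank(key="x", model_name="my-reranker",
                    base_url="http://localhost:8080/v1/rerank")
# Probe with two documents: LocalAI answers HTTP 500 for fewer than two.
scores, tokens = mdl.similarity(
    "What is the weather?",
    ["It is sunny today.", "Rain is expected tomorrow."],
)
```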

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-18 12:05:52 +08:00
70cd5c1599 Remove unused code (#3448)
### What problem does this PR solve?

1. Remove unused code.
2. Move some codes from settings to constants

### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-18 12:05:38 +08:00
f9643adc43 Adjusted heartbeat format (#3459)
### What problem does this PR solve?

Adjusted heartbeat format

### Type of change

- [x] Refactoring
2024-11-18 12:03:28 +08:00
7b9e0723d6 build(deps): bump cross-spawn from 7.0.3 to 7.0.5 in /web (#3455)
Bumps [cross-spawn](https://github.com/moxystudio/node-cross-spawn) from
7.0.3 to 7.0.5.

Changelog, sourced from [cross-spawn's changelog](https://github.com/moxystudio/node-cross-spawn/blob/master/CHANGELOG.md):

- [7.0.5](https://github.com/moxystudio/node-cross-spawn/compare/v7.0.4...v7.0.5) (2024-11-07): fix escaping bug introduced by backtracking (640d391fde)
- [7.0.4](https://github.com/moxystudio/node-cross-spawn/compare/v7.0.3...v7.0.4) (2024-11-07): disable regexp backtracking ([#160](https://redirect.github.com/moxystudio/node-cross-spawn/issues/160)) (5ff3a07d9a)

Commits:

- 0852683 chore(release): 7.0.5
- 640d391 fix: fix escaping bug introduced by backtracking
- bff0c87 chore: remove codecov
- a7c6abc chore: replace travis with github workflows
- 9b9246e chore(release): 7.0.4
- 5ff3a07 fix: disable regexp backtracking (#160)
- 9521e2d chore: fix tests in recent node js versions
- 97ded39 chore: convert package lock
- d52b6b9 chore: remove unused argument (#156)
- 5d84384 chore: add travis jobs on ppc64le (#142)
- Additional commits viewable in the [compare view](https://github.com/moxystudio/node-cross-spawn/compare/v7.0.3...v7.0.5)


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cross-spawn&package-manager=npm_and_yarn&previous-version=7.0.3&new-version=7.0.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 11:19:44 +08:00
a1d01a1b2f enlarge the default token length of RAPTOR summarization (#3454)
### What problem does this PR solve?

#3426

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-18 10:15:26 +08:00
dc05f43eee Minor: Fixed a broken link (#3451)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-16 20:22:53 +08:00
77bdeb32bd Added current task into task executor's heartbeat (#3444)
### What problem does this PR solve?

Added current task into task executor's heartbeat

### Type of change

- [x] Refactoring
2024-11-15 22:55:41 +08:00
af18217d78 feat: Add banner #3221 (#3442)
### What problem does this PR solve?

feat: Add banner #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-15 19:18:23 +08:00
4ed5ca2666 handle_task catch all exception (#3441)
### What problem does this PR solve?

Make handle_task catch all exceptions.
Report heartbeats.
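
A minimal sketch of the pattern (the task shape and helper names are assumptions, not the exact task_executor.py code):

```python
import logging

def handle_task(task):
    try:
        do_handle_task(task)  # the actual parsing/embedding work
    except Exception:
        # Catch everything so one failing task cannot kill the executor
        # loop; log it and let the heartbeat report the state.
        logging.exception("handle_task failed for task %s", task)
```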

### Type of change

- [x] Refactoring
2024-11-15 18:51:09 +08:00
1e90a1bf36 Move settings initialization after module init phase (#3438)
### What problem does this PR solve?

1. Module init no longer connects to the database.
2. Configs in settings need to be accessed as settings.CONFIG_NAME
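
In practice the access pattern changes roughly like this (attribute names are illustrative):

```python
# Before: values were imported at module load time, which forced side
# effects such as connecting to the database during import.
# from api.settings import DATABASE_TYPE

# After: import the module and read attributes only after settings have
# been initialized, keeping module import side-effect free.
from api import settings

def get_database_type():
    return settings.DATABASE_TYPE
```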

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-15 17:30:56 +08:00
ac033b62cf fix tika-server issue (#3439)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-15 16:57:01 +08:00
cb3b9d7ada refine the message of queuing a task (#3437)
### What problem does this PR solve?



### Type of change

- [x] Refactoring
2024-11-15 15:59:54 +08:00
ca9e97d2f2 Enlarge the term weight difference (#3435)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-11-15 15:41:50 +08:00
6d451dbe06 Updated retrieval testing UI (#3433)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-15 15:17:34 +08:00
e0659a4f0e feat: Add RunDrawer #3355 (#3434)
### What problem does this PR solve?

feat: Translation test run form #3355
feat: Wrap QueryTable with Collapse #3355
feat: If the required fields are not filled in, the submit button will
be grayed out. #3355
feat: Add RunDrawer #3355

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-15 15:17:23 +08:00
a854bc22d1 Rework task executor heartbeat (#3430)
### What problem does this PR solve?

Rework task executor heartbeat, and print in console.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-15 14:43:55 +08:00
48e060aa53 rm es query escape chars (#3428)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-15 13:19:07 +08:00
47abfc32d4 Remove unused settings (#3427)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-15 13:18:16 +08:00
a1ba228bc2 fix: empty token bug (#3424)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-15 10:33:03 +08:00
996c94a8e7 Move cl100k_base tokenizer to docker image (#3411)
### What problem does this PR solve?

Move the tiktoken of cl100k_base into docker image

issue: #3338 
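
A sketch of why baking the files in works: tiktoken honors a cache directory, so if the encoding files ship with the image no download happens at runtime (the path below is an assumption):

```python
import os
import tiktoken

os.environ["TIKTOKEN_CACHE_DIR"] = "/ragflow/rag/res"  # baked-in cache
enc = tiktoken.get_encoding("cl100k_base")             # no network fetch
print(len(enc.encode("hello world")))
```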

### Type of change

- [x] Refactoring

Signed-off-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-15 10:18:40 +08:00
220aaddc62 fix: synonym bug (#3423)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-15 10:14:51 +08:00
6878d23a57 Print configs when the RAGFlow server starts up (#3414)
### What problem does this PR solve?

Print configs at the RAGFlow startup phase.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

```
2024-11-14 21:27:53,090 INFO     962231 Current configs, from /home/weilongma/Documents/development/ragflow/conf/service_conf.yaml:
2024-11-14 21:27:53,090 INFO     962231 ragflow: {'host': '0.0.0.0', 'http_port': 9380}
2024-11-14 21:27:53,090 INFO     962231 mysql: {'name': 'rag_flow', 'user': 'root', 'password': 'infini_rag_flow', 'host': 'mysql', 'port': 5455, 'max_connections': 100, 'stale_timeout': 30}
2024-11-14 21:27:53,090 INFO     962231 minio: {'user': 'rag_flow', 'password': 'infini_rag_flow', 'host': 'minio:9000'}
2024-11-14 21:27:53,090 INFO     962231 es: {'hosts': 'http://es01:1200', 'username': 'elastic', 'password': 'infini_rag_flow'}
2024-11-14 21:27:53,090 INFO     962231 redis: {'db': 1, 'password': 'infini_rag_flow', 'host': 'redis:6379'}
```

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-15 09:29:40 +08:00
df9d054551 Updated descriptions of agent APIs (#3407)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-11-14 18:44:37 +08:00
30c1f7ee29 make variables access robuster (#3406)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-11-14 18:28:41 +08:00
e4c4fdabbd Update version display on web UI (#3405)
### What problem does this PR solve?


### Type of change

- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
2024-11-14 17:51:21 +08:00
30f6421760 Use consistent log file names, introduced initLogger (#3403)
### What problem does this PR solve?

Use consistent log file names, introduced initLogger

### Type of change

- [x] Refactoring
2024-11-14 17:13:48 +08:00
ab4384e011 Updates on parsing progress, including more detailed time cost inform… (#3402)
### What problem does this PR solve?

#3401 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-14 16:28:10 +08:00
201bbef7c0 Print version when the RAGFlow server starts up (#3393)
### What problem does this PR solve?

Print version when the RAGFlow server starts up

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-14 15:51:30 +08:00
95d21e5d9f fix: standardize property name from 'chat' to 'chat_id' (#3383)
### What problem does this PR solve?

This PR addresses the inconsistency in property naming within the
codebase by renaming the 'chat' property to 'chat_id' in the session.py
file. This change aims to align the naming convention with other parts
of the application that refer to chat identifiers as 'chat_id', thereby
improving code clarity and maintainability.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-14 15:49:03 +08:00
c5368c7745 resolve halt while starting up (#3397)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-14 13:20:17 +08:00
0657a09e2c Update llm_factories.json (#3396)
### What problem does this PR solve?
Added: Claude-3-5-sonnet-20241022 version. 

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2024-11-14 13:00:16 +08:00
4caf932808 fix bug about fetching knowledge graph (#3394)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-14 12:29:15 +08:00
400fc3f5e9 Add environment validation at server startup phase (#3388)
### What problem does this PR solve?

1. Validate that the Python version is >= 3.11
2. Download nltk data
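
A minimal sketch of such startup checks (the nltk package names are illustrative):

```python
import sys
import nltk

# 1. Abort early on an unsupported interpreter.
if sys.version_info < (3, 11):
    raise SystemExit(f"Python >= 3.11 required, got {sys.version}")

# 2. Fetch the NLTK data the server depends on.
for pkg in ("punkt", "wordnet"):
    nltk.download(pkg)
```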

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Refactoring

---------

Signed-off-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: jinhai <haijin.chn@gmail.com>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
2024-11-14 11:01:08 +08:00
e44e3a67b0 adapt to lower case cohere (#3392)
### What problem does this PR solve?

#3384

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-14 10:18:25 +08:00
9d395ab74e Added doc for switching elasticsearch to infinity (#3370)
### What problem does this PR solve?

Added doc for switching elasticsearch to infinity

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
2024-11-14 00:08:55 +08:00
83c6b1f308 set DLA active for KG (#3386)
### What problem does this PR solve?

### Type of change


- [x] Refactoring
2024-11-13 16:59:19 +08:00
7ab9715b0e refine the error message (#3382)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-11-13 16:20:59 +08:00
632b23486f Fix the value issue of anthropic (#3351)
### What problem does this PR solve?

This pull request fixes the issue mentioned in
https://github.com/infiniflow/ragflow/issues/3263.

1. response should be parsed as dict, prevent the following code from
failing to take values:
ans = response["content"][0]["text"]
2. API Model ```claude-instant-1.2``` has retired (by
[model-deprecations](https://docs.anthropic.com/en/docs/resources/model-deprecations)),
it will trigger errors in the code, so I deleted it from the
conf/llm_factories.json file and updated the latest API Model
```claude-3-5-sonnet-20241022```
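
A sketch of the first fix, assuming a current Anthropic SDK where the messages API returns a pydantic object (`client` and `history` are assumed to be set up elsewhere):

```python
msg = client.messages.create(model="claude-3-5-sonnet-20241022",
                             max_tokens=1024, messages=history)
response = msg.model_dump()            # parse the response as a dict first
ans = response["content"][0]["text"]   # now this subscripting works
```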



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: chenhaodong <chenhaodong@ctrlvideo.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-13 16:13:52 +08:00
ccf189cb7f mv service_conf.yaml to conf/ and fix: add 'answer' as a parameter to 'generate' (#3379)
### What problem does this PR solve?
#3373

### Type of change

- [x] Refactoring
- [x] Bug fix
2024-11-13 15:56:40 +08:00
1fe9a2e6fd feat: Add input parameter to begin operator #3355 (#3375)
### What problem does this PR solve?

feat: Add input parameter to begin operator #3355

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-13 14:54:10 +08:00
9fc092a911 Updated RAGFlow's dataset configuration UI (#3376)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-11-13 14:45:55 +08:00
fa54cd5f5c Extract model dir from model's full name (#3368)
### What problem does this PR solve?

When a model's group name contains 0-9, we can't find the downloaded
model, because we do not correctly extract the model dir's name from the
model's full name.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: 王志鹏 <zhipeng3.wang@midea.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-13 14:10:16 +08:00
667d0e5537 Turn the relative date in question to absolute date (#3372)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-11-13 13:49:18 +08:00
91332fa0f8 Refine english synonym (#3371)
### What problem does this PR solve?

#3361

### Type of change

- [x] Performance Improvement
2024-11-13 12:58:37 +08:00
0c95a3382b Dynamically create the service_conf.yaml file by replacing environment variables from .env (#3341)
### What problem does this PR solve?

This pull request implements the feature mentioned in #3322. 

Instead of manually having to edit the `service_conf.yaml` file when
changes have been made to `.env` and mapping it into the docker
container at runtime, a template file is used and the values are
replaced by the environment variables from the `.env` file when the
container is started.
 

### Type of change

- [X] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
2024-11-12 22:56:53 +08:00
7274420ecd Updated RAGFlow UI (#3362)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-12 19:56:56 +08:00
a2a5631da4 Rework logging (#3358)
### What problem does this PR solve?

Unified all log files into one.

### Type of change

- [x] Refactoring
2024-11-12 17:35:13 +08:00
567a7563e7 Fix bugs in api and add examples (#3353)
### What problem does this PR solve?

Fix bugs in api.
Add simple examples for api.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-12 17:14:33 +08:00
62a9afd382 Change launch backend script to handle errors gracefully (#3334)
### What problem does this PR solve?

The `launch_backend_service.sh` script enters infinite loops for both
the task executors and the backend server. When an error occurs in any
of these processes, the script continuously restarts them without
properly handling termination signals. This behavior causes the script
to even ignore interrupts, leading to persistent error messages and
making it difficult to exit the script gracefully.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

### Explanation of Modifications

1. **Signal Trapping with `trap`:** 
- The `trap cleanup SIGINT SIGTERM` line ensures that when a `SIGINT` or
`SIGTERM` signal is received, the cleanup function is invoked.
- The `cleanup` function sets the `STOP` flag to `true`, iterates
through all child process IDs stored in the `PIDS` array, and sends a
`kill` signal to each process to terminate them gracefully.
2. **Retry Limits:**
- Introduced a `MAX_RETRIES` variable to limit the number of restart
attempts for both `task_executor.py` and `ragflow_server.py`
- The loops now check if the retry count has reached the maximum limit.
If so, they invoke the `cleanup` function to terminate all processes and
exit the script.
3. **Process Tracking with `PIDS` Array:**
- After launching each background process (`task_exe` and `run_server`),
their Process IDs (PIDs) are stored in the `PIDS` array.
- This allows the `cleanup` function to terminate all child processes
effectively when needed.
4. **Graceful Shutdown:**
- When the `cleanup` function is called, it iterates over all child PIDs
and sends a termination signal (`kill`) to each, ensuring that all
subprocesses are stopped before the script exits.
5. **Logging Enhancements:**
- Added `echo` statements to provide clearer logs about the state of
each process, including attempts, successes, failures, and retries.
6. **Exit on Successful Completion:**
- If `ragflow_server.py` or a `task_executor.py` process exits with a
success code (0), the loop breaks, preventing unnecessary retries.
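
A Python analogue of the pattern the script now follows, shown for one process (a sketch under the assumptions above, not the shell script itself):

```python
import signal
import subprocess
import sys

MAX_RETRIES = 5
PIDS = []

def cleanup(signum, frame):
    # Invoked on SIGINT/SIGTERM: terminate every tracked child, then exit.
    for p in PIDS:
        p.terminate()
    sys.exit(0)

signal.signal(signal.SIGINT, cleanup)
signal.signal(signal.SIGTERM, cleanup)

retries = 0
while retries < MAX_RETRIES:
    proc = subprocess.Popen([sys.executable, "api/ragflow_server.py"])
    PIDS.append(proc)
    if proc.wait() == 0:   # clean exit: stop retrying
        break
    retries += 1           # crash: retry up to the limit
```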

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-12 15:51:38 +08:00
aa68d3b8db fix: Cannot copy and paste text on agent page #3356 (#3357)
### What problem does this PR solve?

fix: Cannot copy and paste text on agent page #3356

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-12 15:49:45 +08:00
784ae896d1 add dependencies of chrome (#3352)
### What problem does this PR solve?



### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-12 15:49:33 +08:00
f4c52371ab Integration with Infinity (#2894)
### What problem does this PR solve?

Integration with Infinity

- Replaced ELASTICSEARCH with dataStoreConn
- Renamed deleteByQuery to delete
- Renamed bulk to upsertBulk
- Added getHighlight, getAggregation
- Fixed KGSearch.search
- Moved Dealer.sql_retrieval to es_conn.py


### Type of change

- [x] Refactoring
2024-11-12 14:59:41 +08:00
00b6000b76 feat: Disable automatic saving of the agent while it is running #3349 (#3350)
### What problem does this PR solve?

feat: Disable automatic saving of the agent while it is running #3349

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-12 12:47:36 +08:00
db23d62827 feat: Add background colors such as inverse-strong #3221 (#3346)
### What problem does this PR solve?

feat: Add background colors such as inverse-strong #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-12 11:29:48 +08:00
70ea6661ed add new models for zhipu-ai (#3348)
### What problem does this PR solve?

#3345

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-12 11:27:43 +08:00
a01fceb328 feat: updates all readme to have a link to redirect to new locale (#3339)
### What problem does this PR solve?

Updates all readme to have a link to redirect to README_id.md for
documentation in Bahasa Indonesia

### Type of change

- [x] Documentation Update
2024-11-12 09:26:14 +08:00
e9e98ea093 feat: documentation updates to support Bahasa Indonesia (#3315)
### What problem does this PR solve?

Add Readme docs in Indonesia's native language (Bahasa Indonesia) for
ragflow

### Type of change

- [x] Documentation Update
2024-11-11 19:33:23 +08:00
528646a958 feat: Add custom background color #3221 (#3336)
### What problem does this PR solve?

feat: Add custom background color #3221

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-11 19:29:56 +08:00
8536335e63 Miscellaneous edits to RAGFlow's UI (#3337)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-11-11 19:29:34 +08:00
88072b1e90 fix: double brace issue (#3328)
### What problem does this PR solve?

#3299

### Type of change

- [x] Performance Improvement
2024-11-11 12:07:02 +08:00
34d1daac67 fix: Anthropic param error (#3327)
### What problem does this PR solve?

#3263

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-11 11:54:14 +08:00
3faae0b2c2 Trivial (#3325)
### What problem does this PR solve?



### Type of change


- [x] Performance Improvement
2024-11-11 10:39:49 +08:00
5e5a35191e fix benchmark issue (#3324)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-11 10:14:30 +08:00
7c486ee3f9 Fix typo (#3319)
### What problem does this PR solve?

Fix typo

### Type of change

- [x] Refactoring
2024-11-11 09:36:39 +08:00
20d686737a feat: Switch the login page to the registration component by changing the routing parameters #3221 (#3307)
### What problem does this PR solve?
feat: Switch the login page to the registration component by changing
the routing parameters #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 19:47:22 +08:00
85047e7e36 Added configuration guideline (#3309)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-08 19:46:18 +08:00
ac64e35a45 feat: Translate autosaved #3301 (#3304)
### What problem does this PR solve?

feat: Translate autosaved #3301

### Type of change


- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-08 18:27:42 +08:00
004487cca0 fix term weight issue (#3306)
### What problem does this PR solve?

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-08 18:25:23 +08:00
74d1eeb4d3 feat: Automatically save agent page data #3301 (#3302)
### What problem does this PR solve?

feat: Automatically save agent page data #3301

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 17:28:11 +08:00
464a4d6ead Added env. MACOS (#3297)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-08 16:50:35 +08:00
3d3913419b Updated .env and Docker README (#3295)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-11-08 16:31:52 +08:00
63f7d3bae2 feat: Support shortcut keys to copy nodes #3283 (#3293)
### What problem does this PR solve?

feat: Support shortcut keys to copy nodes #3283

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 15:50:01 +08:00
8b6e272197 fix: term weight issue (#3294)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-08 15:49:44 +08:00
5205bdab24 feat: Modify the name of the answer operator #1739 (#3288)
### What problem does this PR solve?
feat: Modify the name of the answer operator #1739

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 13:56:43 +08:00
37d4708880 TS file with the translation for the Spanish language (#3274)
### What problem does this PR solve?

This TS file is a translation file used to provide multilingual support
in applications. By adding the Spanish translation, it facilitates
changing the user interface language for Spanish-speaking users.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 13:44:03 +08:00
d88f0d43ea make language judgment more robust (#3287)
### What problem does this PR solve?



### Type of change

- [x] Performance Improvement
2024-11-08 12:48:11 +08:00
a2153d61ce feat: Support shortcut keys to delete nodes #3283 (#3284)
### What problem does this PR solve?
feat: Support shortcut keys to delete nodes #3283

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-08 12:07:26 +08:00
f16ef57979 fix switch bug (#3280)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-08 10:08:57 +08:00
ff2bbb487f fix index out of range (#3279)
### What problem does this PR solve?

#3273

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-08 09:58:11 +08:00
416efbe7e8 Updated Docker README (#3272)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-11-08 09:22:40 +08:00
9c6cc20356 Fix:#3230 When parsing a docx file using the Book parsing method, to_page is always -1, resulting in a block count of 0 even if parsing is successful (#3249)
### What problem does this PR solve?

When parsing a docx file using the Book parsing method, to_page is
always -1, resulting in a block count of 0 even if parsing is successful

Fix:#3230

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-11-08 09:21:42 +08:00
7c0d28b62d Refactored Docker README (#3269)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-11-07 19:38:50 +08:00
48ab6d7a45 Update authorization for team (#3262)
### What problem does this PR solve?

Update authorization for team.
#3253 #3233
### Type of change

- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-07 19:26:03 +08:00
96b5d2b3a9 fix: The name of the copy operator is displayed the same as before #3265 (#3266)
### What problem does this PR solve?

fix: The name of the copy operator is displayed the same as before
#3265

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-07 17:53:31 +08:00
f45c29360c New locale for Bahasa Indonesia support (#3255)
### What problem does this PR solve?
Add a native translation in locales for Bahasa Indonesia to support a
new local language.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-07 17:53:19 +08:00
cdcbe6c2b3 Fixed broken links (#3256)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-07 14:12:32 +08:00
5038552ed9 fix: improve embedding model validation logic for dataset operations (#3235)
### What problem does this PR solve?
When creating or updating datasets with custom embedding models (e.g.,
Ollama), the validation logic was too restrictive and prevented valid
models from being used. The previous implementation would reject valid
custom models if they weren't in the predefined list, even when they
existed in TenantLLMService.

Changes:
- Simplify and improve the embedding model validation flow in
create/update endpoints
- Check TenantLLMService for custom models before rejecting
- Make validation logic more consistent between create and update
operations

### What problem does this PR solve?

This fix allows users to successfully create and update datasets with
custom embedding models while maintaining proper validation checks.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
Co-authored-by: liuhua <10215101452@stu.ecnu.edu.cn>
2024-11-07 10:36:28 +08:00
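For illustration, the validation flow this PR describes could look like the following minimal sketch. `SUPPORTED_EMBEDDING_MODELS` and the `tenant_llm_service.query` lookup are hypothetical names, not RAGFlow's actual code:

```python
# Hypothetical names throughout; this mirrors the flow described above,
# not RAGFlow's actual code.
SUPPORTED_EMBEDDING_MODELS = {"BAAI/bge-large-zh-v1.5", "text-embedding-3-small"}

def validate_embedding_model(tenant_llm_service, tenant_id: str, model: str) -> bool:
    # 1) predefined models pass immediately
    if model in SUPPORTED_EMBEDDING_MODELS:
        return True
    # 2) otherwise, check the tenant's own registered models (e.g. Ollama)
    #    before rejecting
    return bool(tenant_llm_service.query(tenant_id=tenant_id,
                                         llm_name=model,
                                         model_type="embedding"))
```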
1b3e39dd12 fix ci issue (#3245)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-06 19:32:30 +08:00
fbcc0bb408 accelerate tokenize (#3244)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-11-06 18:54:41 +08:00
d3bb5e9f3d feat: Display input parameters on operator nodes #3240 (#3241)
### What problem does this PR solve?
feat: Display input parameters on operator nodes #3240


### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-06 18:48:05 +08:00
4097912d59 add inputs to display to every component (#3242)
### What problem does this PR solve?

#3240

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-06 18:47:53 +08:00
f3aaa0d453 Add sdk for Agent API (#3220)
### What problem does this PR solve?

Add sdk for Agent API

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-06 18:03:45 +08:00
0dff64f6ad fix: TypeError: only length-1 arrays can be converted to Python scalars (#3211)
### What problem does this PR solve?
fix "TypeError: only length-1 arrays can be converted to Python scalars"
while using cohere embedding model.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)


![image](https://github.com/user-attachments/assets/2c21a69f-cd76-4d25-b320-058964812db8)
2024-11-06 11:15:00 +08:00
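That TypeError typically comes from NumPy refusing to convert a multi-element array into a scalar (e.g. `float(arr)`). A sketch of one defensive fix, not necessarily the PR's exact change:

```python
import numpy as np

# Normalizing embeddings to plain float lists avoids scalar conversions
# on multi-element arrays (the source of this TypeError).
def to_float_lists(embeddings):
    return [np.asarray(e, dtype=np.float32).tolist() for e in embeddings]

vecs = to_float_lists([[0.1, 0.2], np.array([0.3, 0.4])])
assert all(isinstance(v, list) for v in vecs)
```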
601a128cd3 feat: Add next login page with shadcn/ui #3221 (#3231)
### What problem does this PR solve?

feat: Add next login page with shadcn/ui  #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-11-06 11:13:04 +08:00
af74bf01c0 Editorial updates to Docker README (#3223)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-06 09:43:54 +08:00
a418a343d1 doc updates (#3216)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2024-11-05 16:41:43 +08:00
ab6e6019a7 Added a list of supported models (#3214)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change


- [x] Documentation Update
2024-11-05 15:21:37 +08:00
13053172cb fix: Fixed the issue of api markdown document display being messy #2346 (#3212)
### What problem does this PR solve?

fix: Fixed the issue of api markdown document display being messy #2346

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-05 15:00:07 +08:00
38ebf6b2c0 Remove dead code (#3210)
### What problem does this PR solve?

Remove dead code

### Type of change

- [x] Refactoring
2024-11-05 14:34:49 +08:00
a7bf4ca8fc Fix bugs in API (#3204)
### What problem does this PR solve?

Fix bugs in API

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-05 14:07:31 +08:00
7e89be5ed1 feat: add qwen 2.5 models for silicon flow (#3203)
### What problem does this PR solve?

add qwen 2.5 models for silicon flow

### Type of change

- [X] New Feature (non-breaking change which adds functionality)
2024-11-05 13:58:29 +08:00
b7b30c4b57 add keyword extraction time elapsed to the little lamp. (#3207)
### What problem does this PR solve?



### Type of change

- [x] Refactoring
2024-11-05 13:39:50 +08:00
55953819c1 accelerate term weight calculation (#3206)
### What problem does this PR solve?



### Type of change

- [x] Performance Improvement
2024-11-05 13:11:26 +08:00
677f02c2a7 rm unused file (#3205)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-11-05 11:56:09 +08:00
185c6a0c71 Unified API response json schema (#3170)
### What problem does this PR solve?

Unified API response json schema

### Type of change

- [x] Refactoring
2024-11-05 11:02:31 +08:00
339639a9db add assistant to canvas chat history (#3201)
### What problem does this PR solve?

#3185

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-05 10:04:31 +08:00
18ae8a4091 raise exception if embedding model not found (#3199)
### What problem does this PR solve?

#3173 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-05 09:29:01 +08:00
cbca7dfce6 fix bugs in test (#3196)
### What problem does this PR solve?

fix bugs in test

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-11-04 20:03:14 +08:00
a9344e6838 Docs update for upgrading DLA models (#3198)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-11-04 20:02:40 +08:00
aa733b1ea4 update poetry.lock (#3197)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-04 19:18:08 +08:00
8305632852 add agent completion API (#3192)
### What problem does this PR solve?

#3105

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-04 17:20:16 +08:00
57f23e0808 feat: Add agent interface document link to agent page #3189 (#3190)
### What problem does this PR solve?

feat: Add agent interface document link to agent page #3189

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-04 17:04:35 +08:00
16b6a78c1e feat: Add tooltip to clean_html item #2908 (#3183)
### What problem does this PR solve?

feat: Add tooltip to clean_html item #2908

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-11-04 15:36:57 +08:00
dd1146ec64 feat: docs for api endpoints to generate openapi specification (#3109)
### What problem does this PR solve?

**Added openapi specification for API routes. This creates swagger UI
similar to FastAPI to better use the API.**
Using python package `flasgger`

### Type of change
- [x] New Feature (non-breaking change which adds functionality)

Not all routes are included since this is a work in progress.

Docs can be accessed on: `{host}:{port}/apidocs`
2024-11-04 15:35:36 +08:00
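Consistent with the PR's description (flasgger, docs served at `/apidocs`), a minimal setup looks roughly like this sketch; the route and its docstring spec are illustrative:

```python
from flask import Flask
from flasgger import Swagger

app = Flask(__name__)
Swagger(app)  # serves the swagger UI at {host}:{port}/apidocs by default

@app.route("/api/v1/ping")
def ping():
    """Health check.
    ---
    responses:
      200:
        description: pong
    """
    return {"reply": "pong"}
```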
07c453500b set default LLM to new registered user (#3180)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-04 15:03:07 +08:00
3e4fc12d30 Updated instructions on upgrading to RAGFlow dev (#3175)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-04 13:21:29 +08:00
285fd6ae14 feat: Proxy the SDK address to the backend #3166 (#3171)
### What problem does this PR solve?

feat: Proxy the SDK address to the backend #3166

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-04 11:37:19 +08:00
8d9238db14 fix es search parameter error (#3169)
### What problem does this PR solve?

#3151

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-04 09:53:41 +08:00
c06e765a5b Added tika jar into image to avoid downloading (#3167)
### What problem does this PR solve?

Added tika jar into image to avoid downloading. Close #3017

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-03 00:20:26 +08:00
c7ea7e9974 Added JDK to make tika happy (#3165)
### What problem does this PR solve?

Added a JDK to make tika (https://pypi.org/project/tika/) happy. The
image size becomes ~400MB bigger. Close #2886

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-11-02 22:21:17 +08:00
37d71dfa90 Replaced redis with Valkey (#3164)
### What problem does this PR solve?

Replaced redis with Valkey. Close #3070

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-11-02 20:05:12 +08:00
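Since Valkey is wire-compatible with Redis, client code is expected to keep working unchanged; only the server image changes. A small sketch, where the `valkey` hostname is an assumed docker-compose service name:

```python
import redis  # the standard redis-py client also speaks to Valkey

# Only the server side changes; "valkey" is an assumed service hostname.
r = redis.Redis(host="valkey", port=6379, db=0)
r.set("healthcheck", "ok")
print(r.get("healthcheck"))  # b'ok'
```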
44ad9a6cd7 Add test for API (#3134)
### What problem does this PR solve?

Add test for API

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
2024-11-01 22:59:17 +08:00
7eafccf78a Updated descriptions for knowledge graph (#3154)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-11-01 18:57:58 +08:00
b42d24575c Updated an obsolete response (#3152)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change


- [x] Documentation Update
2024-11-01 18:00:15 +08:00
3963aaa23e feat: Add DynamicInputVariable to RetrievalForm #1739 (#3112)
### What problem does this PR solve?

feat: Add DynamicInputVariable to RetrievalForm #1739
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-11-01 13:31:48 +08:00
33e5e5db5b Update gif for readme and add input param to every component (#3145)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
2024-11-01 13:31:34 +08:00
039cde7893 Updated obsolete screenshot. (#3141)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change


- [x] Documentation Update
2024-10-31 19:21:34 +08:00
fa9d76224b Changed version to 0.13.0 (#3140)
### What problem does this PR solve?

Changed version to 0.13.0

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe):
2024-10-31 19:11:09 +08:00
35a451c024 Updated screenshots for Starting an AI chat (#3139)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-10-31 19:10:49 +08:00
1d0a5606b2 minor (#3137)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change


- [x] Documentation Update
2024-10-31 18:37:08 +08:00
4ad031e97d Reworded descriptions for development versions and latest version (#3132)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-31 17:09:52 +08:00
0081d0f05f Moved the Upgrade Manual out of FAQ (#3131)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-31 16:35:13 +08:00
800c25a6b4 Updated list_documents() (#3126)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-31 14:10:35 +08:00
9aeb07d830 Add test for CI (#3114)
### What problem does this PR solve?

Add test for CI

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-31 14:07:23 +08:00
5590a823c6 Fixed a Docusaurus display issue. (#3125)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-31 13:34:04 +08:00
3fa570f49b fix: remove useless test code (#3122)
### What problem does this PR solve?

remove useless test code

### Type of change

- [X] Refactoring
2024-10-31 11:56:46 +08:00
60053e7b02 Fixed a docusaurus display issue. (#3124)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-31 11:56:30 +08:00
fa1b873280 feat: Delete http_api_reference.md from api folder #1102 (#3121)
### What problem does this PR solve?

feat: Delete http_api_reference.md from  api folder #1102

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-31 11:14:11 +08:00
578f70817e Fixed a docusaurus display issue (#3120)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-31 10:37:13 +08:00
6c6b658ffe add yi-lightning (#3119)
### What problem does this PR solve?

#3111
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-31 10:30:23 +08:00
9a5ff320f3 Manage ragflow-sdk with poetry (#3115)
### What problem does this PR solve?

Manage ragflow-sdk with poetry
### Type of change

- [x] Refactoring
2024-10-30 21:13:59 +08:00
48688afa5e Tried to fix a link issue (#3117)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-30 20:04:45 +08:00
a2b35098c6 Publish RAGFlow's HTTP and Python API references (#3116)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-30 19:40:39 +08:00
4d5354387b docs updates for 0.13 release (#3108)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-10-30 19:29:27 +08:00
c6512e689b Added chunk methods (#3110)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-30 17:59:23 +08:00
b7aff4f560 Differentiated API key names to avoid confusion. (#3107)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-30 16:56:55 +08:00
18dfa2900c Fix bugs in API (#3103)
### What problem does this PR solve?

Fix bugs in API


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-30 16:15:42 +08:00
86b546f657 Updated parser_config description (#3104)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-30 15:33:36 +08:00
3fb2bc7613 Update README (#3092)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-29 21:05:38 +08:00
f4cb939317 Updated HTTP API reference and Python API reference based on test results (#3090)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-29 19:56:46 +08:00
d868c283c4 feat: The order of the category operator form is messed up after refreshing the page #3088 (#3089)
### What problem does this PR solve?

feat: The order of the category operator form is messed up after
refreshing the page #3088

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-29 19:21:52 +08:00
c7dfb0193b feat: Allow the component id drop-down box to select the answer operator #3085 (#3087)
### What problem does this PR solve?

feat: Allow the component id drop-down box to select the answer operator
#3085

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-29 18:01:26 +08:00
f7705d6bc9 let 'Generate' take user's input as parameter (#3086)
### What problem does this PR solve?

#3085

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-29 17:58:47 +08:00
3ed096fd3f feat: Add InvokeNode #1908 (#3081)
### What problem does this PR solve?

feat: Add InvokeNode #1908

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-10-29 16:39:56 +08:00
2d1fbefdb5 search across multiple indices for the team function (#3079)
### What problem does this PR solve?

#2834 
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-29 13:19:01 +08:00
c5a3146a8c fix: modify the response of metadata in Dify retrieval api (#3076)
### What problem does this PR solve?

Modify the response of metadata in Dify retrieval api

resolve   #2914

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-29 11:06:02 +08:00
1c364e0e5c feat: If the model supplier is not set, click the OK button to jump directly to the page for setting the model supplier. #3068 (#3069)
### What problem does this PR solve?
feat: If the model supplier is not set, clicking the OK button jumps
directly to the page for setting the model supplier. #3068

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-29 11:05:31 +08:00
9906526a91 Update 'api key' (#3078)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-29 11:04:27 +08:00
7e0148c058 fix local variable ans (#3077)
### What problem does this PR solve?
#3064

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-29 10:42:45 +08:00
f86826b7a0 refactor error message of qwen (#3074)
### What problem does this PR solve?
#3055

### Type of change
- [x] Refactoring
2024-10-29 10:08:08 +08:00
497bc1438a feat: Add component Invoke #2908 (#3067)
### What problem does this PR solve?

feat: Add component Invoke #2908

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-28 18:56:38 +08:00
d133cc043b remove file size check for sdk API (#3066)
### What problem does this PR solve?

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-28 16:13:40 +08:00
e56bd770ea agent template upgrade (#3060)
### What problem does this PR solve?
#3056

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-28 15:49:14 +08:00
07bb2a6fd6 Turn resources to plural form (#3061)
### What problem does this PR solve?

Turn resources to plural form

### Type of change
- [x] Refactoring

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-28 15:06:18 +08:00
396feadd4b feat: Add hint for operators, round to square, input variable, readable operator ID. #3056 (#3057)
### What problem does this PR solve?

feat: Add hint for operators, round to square, input variable, readable
operator ID. #3056

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-28 14:31:19 +08:00
f93f485696 Turn resources to plural form (#3059)
### What problem does this PR solve?

Turn resources to plural form

### Type of change

- [x] Refactoring

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-28 14:15:36 +08:00
a813736194 Bump werkzeug from 3.0.3 to 3.0.6 (#3026)
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 3.0.3 to
3.0.6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/werkzeug/releases">werkzeug's
releases</a>.</em></p>
<blockquote>
<h2>3.0.6</h2>
<p>This is the Werkzeug 3.0.6 security fix release, which fixes security
issues but does not otherwise change behavior and should not result in
breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.6/">https://pypi.org/project/Werkzeug/3.0.6/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-6">https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-6</a></p>
<ul>
<li>Fix how <code>max_form_memory_size</code> is applied when parsing
large non-file fields. <a
href="https://github.com/advisories/GHSA-q34m-jh98-gwm2">GHSA-q34m-jh98-gwm2</a></li>
<li><code>safe_join</code> catches certain paths on Windows that were
not caught by <code>ntpath.isabs</code> on Python &lt; 3.11. <a
href="https://github.com/advisories/GHSA-f9vj-2wh5-fj8j">GHSA-f9vj-2wh5-fj8j</a></li>
</ul>
<h2>3.0.5</h2>
<p>This is the Werkzeug 3.0.5 fix release, which fixes bugs but does not
otherwise change behavior and should not result in breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.5/">https://pypi.org/project/Werkzeug/3.0.5/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-5">https://werkzeug.palletsprojects.com/en/stable/changes/#version-3-0-5</a>
Milestone: <a
href="https://github.com/pallets/werkzeug/milestone/37?closed=1">https://github.com/pallets/werkzeug/milestone/37?closed=1</a></p>
<ul>
<li>The Watchdog reloader ignores file closed no write events. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2945">#2945</a></li>
<li>Logging works with client addresses containing an IPv6 scope. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2952">#2952</a></li>
<li>Ignore invalid authorization parameters. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2955">#2955</a></li>
<li>Improve type annotation for <code>SharedDataMiddleware</code>. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2958">#2958</a></li>
<li>Compatibility with Python 3.13 when generating debugger pin and the
current UID does not have an associated name. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2957">#2957</a></li>
</ul>
<h2>3.0.4</h2>
<p>This is the Werkzeug 3.0.4 fix release, which fixes bugs but does not
otherwise change behavior and should not result in breaking changes.</p>
<p>PyPI: <a
href="https://pypi.org/project/Werkzeug/3.0.4/">https://pypi.org/project/Werkzeug/3.0.4/</a>
Changes: <a
href="https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-4">https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-4</a>
Milestone: <a
href="https://github.com/pallets/werkzeug/milestone/36?closed=1">https://github.com/pallets/werkzeug/milestone/36?closed=1</a></p>
<ul>
<li>Restore behavior where parsing
<code>multipart/x-www-form-urlencoded</code> data with
invalid UTF-8 bytes in the body results in no form data parsed rather
than a
413 error. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2930">#2930</a></li>
<li>Improve <code>parse_options_header</code> performance when parsing
unterminated
quoted string values. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2904">#2904</a></li>
<li>Debugger pin auth is synchronized across threads/processes when
tracking
failed entries. <a
href="https://redirect.github.com/pallets/werkzeug/issues/2916">#2916</a></li>
<li>Dev server handles unexpected <code>SSLEOFError</code> due to issue
in Python &lt; 3.13.
<a
href="https://redirect.github.com/pallets/werkzeug/issues/2926">#2926</a></li>
<li>Debugger pin auth works when the URL already contains a query
string.
<a
href="https://redirect.github.com/pallets/werkzeug/issues/2918">#2918</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/werkzeug/blob/main/CHANGES.rst">werkzeug's
changelog</a>.</em></p>
<blockquote>
<h2>Version 3.0.6</h2>
<p>Released 2024-10-25</p>
<ul>
<li>Fix how <code>max_form_memory_size</code> is applied when parsing
large non-file
fields. :ghsa:<code>q34m-jh98-gwm2</code></li>
<li><code>safe_join</code> catches certain paths on Windows that were
not caught by
<code>ntpath.isabs</code> on Python &lt; 3.11.
:ghsa:<code>f9vj-2wh5-fj8j</code></li>
</ul>
<h2>Version 3.0.5</h2>
<p>Released 2024-10-24</p>
<ul>
<li>The Watchdog reloader ignores file closed no write events.
:issue:<code>2945</code></li>
<li>Logging works with client addresses containing an IPv6 scope
:issue:<code>2952</code></li>
<li>Ignore invalid authorization parameters.
:issue:<code>2955</code></li>
<li>Improve type annotation for <code>SharedDataMiddleware</code>.
:issue:<code>2958</code></li>
<li>Compatibility with Python 3.13 when generating debugger pin and the
current
UID does not have an associated name. :issue:<code>2957</code></li>
</ul>
<h2>Version 3.0.4</h2>
<p>Released 2024-08-21</p>
<ul>
<li>Restore behavior where parsing
<code>multipart/x-www-form-urlencoded</code> data with
invalid UTF-8 bytes in the body results in no form data parsed rather
than a
413 error. :issue:<code>2930</code></li>
<li>Improve <code>parse_options_header</code> performance when parsing
unterminated
quoted string values. :issue:<code>2904</code></li>
<li>Debugger pin auth is synchronized across threads/processes when
tracking
failed entries. :issue:<code>2916</code></li>
<li>Dev server handles unexpected <code>SSLEOFError</code> due to issue
in Python &lt; 3.13.
:issue:<code>2926</code></li>
<li>Debugger pin auth works when the URL already contains a query
string.
:issue:<code>2918</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="5eaefc3996"><code>5eaefc3</code></a>
release version 3.0.6</li>
<li><a
href="2767bcb10a"><code>2767bcb</code></a>
Merge commit from fork</li>
<li><a
href="87cc78a25f"><code>87cc78a</code></a>
catch special absolute path on Windows Python &lt; 3.11</li>
<li><a
href="50cfeebcb0"><code>50cfeeb</code></a>
Merge commit from fork</li>
<li><a
href="8760275afb"><code>8760275</code></a>
apply max_form_memory_size another level up in the parser</li>
<li><a
href="8d6a12e2af"><code>8d6a12e</code></a>
start version 3.0.6</li>
<li><a
href="a7b121abc7"><code>a7b121a</code></a>
release version 3.0.5 (<a
href="https://redirect.github.com/pallets/werkzeug/issues/2961">#2961</a>)</li>
<li><a
href="9caf72ac06"><code>9caf72a</code></a>
release version 3.0.5</li>
<li><a
href="e28a2451e9"><code>e28a245</code></a>
catch OSError from getpass.getuser (<a
href="https://redirect.github.com/pallets/werkzeug/issues/2960">#2960</a>)</li>
<li><a
href="e6b4cce97e"><code>e6b4cce</code></a>
catch OSError from getpass.getuser</li>
<li>Additional commits viewable in <a
href="https://github.com/pallets/werkzeug/compare/3.0.3...3.0.6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=werkzeug&package-manager=pip&previous-version=3.0.3&new-version=3.0.6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/infiniflow/ragflow/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-28 11:58:25 +08:00
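A quick, purely illustrative sanity check that a deployment actually picked up the patched release (stdlib only, nothing RAGFlow-specific):

```python
from importlib.metadata import version

# Illustrative guard only: confirm the environment picked up the patched
# Werkzeug (3.0.6 fixes GHSA-q34m-jh98-gwm2 and GHSA-f9vj-2wh5-fj8j).
parts = tuple(int(x) for x in version("werkzeug").split(".")[:3])
assert parts >= (3, 0, 6), "upgrade werkzeug to >= 3.0.6"
```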
322bafdf2a fix baidufanyi param error (#3053)
### What problem does this PR solve?

#3052

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-28 11:05:32 +08:00
8257eeb3f2 add model moonshot-v1-auto (#3051)
### What problem does this PR solve?

#3048

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-28 10:37:22 +08:00
00810525d6 Minor editorial updates to the HTTP API reference (#3027)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-27 08:00:51 +08:00
391b950be6 Fix non-null violation when inviting people to team (#3015)
### What problem does this PR solve?

Not sure why MySQL inserts an empty string in this case, but when I
use Postgres I get `null value in column "invited_by" of relation
"user_tenant" violates not-null constraint`.

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)
2024-10-25 18:39:09 +08:00
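A sketch of the shape of such a fix, with illustrative model and field names (Postgres enforces NOT NULL strictly, while MySQL may coerce a missing value to an empty string):

```python
# Illustrative model/field names; a peewee-style create() is assumed.
def invite_user(user_tenant_model, inviter_id: str, invitee_id: str,
                tenant_id: str):
    return user_tenant_model.create(
        user_id=invitee_id,
        tenant_id=tenant_id,
        invited_by=inviter_id,  # set explicitly so Postgres never sees NULL
        role="normal",
    )
```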
d78f215caa Final touches to HTTP and Python API references (#3019)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-25 17:11:58 +08:00
9457d20ef1 make gemini robust (#3012)
### What problem does this PR solve?

#3003

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-25 10:50:44 +08:00
648f8e81d1 Fix issues in API (#3008)
### What problem does this PR solve?

Fix issues in API

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-24 20:10:47 +08:00
161c7a231b Fix some issues in API and test (#3001)
### What problem does this PR solve?

Fix some issues in API and test

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-24 20:05:21 +08:00
e997b42504 DRAFT: miscellaneous updates to HTTP API Reference (#3005)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-10-24 20:04:50 +08:00
524699da7d Miscellaneous updates to HTTP and Python APIs (#3000)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-24 16:14:07 +08:00
765a114be7 minor (#2998)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change


- [x] Documentation Update
2024-10-24 11:02:09 +08:00
c86afff447 feat: Limit the maximum value of auto keywords to 30 #2687 (#2991)
### What problem does this PR solve?

feat: Limit the maximum value of auto keywords to 30 #2687

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-24 09:35:34 +08:00
b73fe0cc3c Integrating RAGFlow API as a Plugin into ChatGPT-on-WeChat (#2988)
### What problem does this PR solve?

This PR introduces the `ragflow_chat` plugin for the ChatGPT-on-WeChat
project, extending its functionality by integrating Retrieval-Augmented
Generation (RAG) capabilities. It allows users to have more contextually
relevant conversations by retrieving information from external knowledge
sources (via the RAGFlow API) and incorporating it into their chat
interactions.

The primary goal of this PR is to enable seamless communication between
ChatGPT-on-WeChat and the RAGFlow server, improving response accuracy by
embedding knowledge-based data into conversational flows. This is
particularly useful for users who need more complex, data-driven
responses.

This PR adds a new plugin that enhances ChatGPT-on-WeChat with Ragflow
capabilities, allowing for a more enriched conversational experience
driven by external knowledge.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-24 09:35:11 +08:00
2a614e0e23 Remove defaults to 'None' (#2996)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-23 22:58:37 +08:00
50b425cf89 Test Cases (#2993)
### What problem does this PR solve?

Test Cases

### Type of change

- [x] Refactoring

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-23 22:58:27 +08:00
2174c350be Updated HTTP API Reference (document, chat assistant, session, chat) (#2994)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-23 20:07:47 +08:00
7f81fc8f9b refactor auto keywords and auto question (#2990)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2024-10-23 17:00:56 +08:00
f090075cb2 allowing docker container to access service on host (#2895)
### What problem does this PR solve?

1. services (e.g., ollama) running on the host could not be accessed
from docker containers

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Zhichang Yu <yuzhichang@gmail.com>
2024-10-23 16:43:21 +08:00
ec6d942d83 feat: Added auto_keywords and auto_questions fields to the parsing configuration page #2687 (#2987)
### What problem does this PR solve?

feat: Added auto_keywords and auto_questions fields to the parsing
configuration page #2687

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-23 15:45:03 +08:00
8714754afc Fix some issues in API (#2982)
### What problem does this PR solve?

Fix some issues in API

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-23 12:02:18 +08:00
43b959fe58 minor (#2984)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-23 11:00:35 +08:00
320e8f6553 fix generate string join issue (#2983)
### What problem does this PR solve?

#2921

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-23 10:54:04 +08:00
89d5b2414e fix SILICONFLOW rerank error (#2980)
### What problem does this PR solve?

#2977

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-23 10:12:39 +08:00
91ea559f9e DRAFT: Updated python and http api references (#2973)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-10-22 17:10:23 +08:00
445dce4363 [Bug]: unnecessary auto-increment calculations in the tokens statistics of the chat model (#2969)
### What problem does this PR solve?

The details are shown in
https://github.com/infiniflow/ragflow/issues/2968

### Type of change

- [X] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-22 16:26:04 +08:00
1fce6caf80 make titles in markdown not be split from the following content (#2971)
### What problem does this PR solve?

#2970 
### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
2024-10-22 15:25:23 +08:00
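An illustrative chunker in the spirit of this change: a heading opens a new chunk and stays attached to the body text that follows it. This is a sketch, not the PR's actual code:

```python
import re

# A heading line starts a new chunk and stays glued to the text that
# follows it, so a title is never split from its section body.
def split_markdown(md: str) -> list[str]:
    chunks, current = [], []
    for line in md.splitlines():
        if re.match(r"^#{1,6}\s", line) and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks

print(split_markdown("# A\ntext under A\n## B\ntext under B"))
# -> ['# A\ntext under A', '## B\ntext under B']
```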
adb0a93d95 add component invoke (#2967)
### What problem does this PR solve?

#2908

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-22 14:16:44 +08:00
226bdd6e99 add auto keywords and auto-question (#2965)
### What problem does this PR solve?

#2687

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-22 13:12:49 +08:00
5aa9d7787e [Bug]: When using OpenAI chat model, raise ERROR: 'CompletionUsage' object has no attribute 'get' #2948 (#2949)
[Bug]: When using the OpenAI chat model, ERROR: 'CompletionUsage'
object has no attribute 'get' is raised #2948

### What problem does this PR solve?

The details of this PR are shown at
https://github.com/infiniflow/ragflow/issues/2948

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-22 11:40:05 +08:00
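Under openai>=1.0 the response carries typed objects (e.g. `CompletionUsage`) rather than dicts, so `usage.get(...)` raises. The usual shape of such a fix, sketched here rather than quoted from the PR:

```python
# Sketch of the usual fix shape: attribute access instead of dict .get().
def total_tokens(resp) -> int:
    usage = getattr(resp, "usage", None)  # CompletionUsage or None
    if usage is None:
        return 0
    return getattr(usage, "total_tokens", 0) or 0
```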
b2524eec49 fix sequence2txt error and usage total token issue (#2961)
### What problem does this PR solve?

#1363

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-22 11:38:37 +08:00
6a4858a7ee Fix thumbnail_img NoneType error (#2941)
### What problem does this PR solve?

fix thumbnail_img NoneType error

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-22 09:21:05 +08:00
1a623df849 DRAFT: Miscellaneous updates to HTTP API reference (#2923)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-21 19:50:45 +08:00
bfc07fe4f9 bigger resolution for OCR (#2919)
### What problem does this PR solve?



### Type of change

- [x] Performance Improvement
2024-10-21 16:25:42 +08:00
3e702aa4ac add package crawl4ai (#2918)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-21 15:07:45 +08:00
2ced25c676 fix thumbnail issue (#2917)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-21 14:33:26 +08:00
1935c3be1a Fix some issues in API (#2902)
### What problem does this PR solve?

Fix some issues in API

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-21 14:29:06 +08:00
609cfa7b5f feat: Replace crawler icon #2915 (#2916)
### What problem does this PR solve?

feat: Replace crawler icon #2915

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-21 14:28:30 +08:00
ac26d09a59 Feature/feat1017 (#2872)
### What problem does this PR solve?

1. fix: mind map display error in knowledge graph, caused by an
```@antv/g6``` version change
2. feat: concurrent threads configuration support in graph extractor
3. fix: used-token updates failed for tenant
4. feat: timeout configuration support for llm
5. fix: regex error in graph extractor
6. feat: qwen rerank (```gte-rerank```) support
7. fix: timeout handling in the knowledge graph indexing process. Chat
now streams its output, and this is configurable.
8. feat: ```qwen-long``` model configuration

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: chongchuanbing <chongchuanbing@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-21 12:11:08 +08:00
4bdf3fd48e Add agent component for web crawler (#2878)
### What problem does this PR solve?

Add agent component for  web crawler

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-21 11:38:41 +08:00
c1d0473f49 add zhipu glm-4-9b (#2912)
### What problem does this PR solve?

#2910

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-21 10:30:35 +08:00
e5f7733b31 Resolves #2905: openai-compatible model provider adds llama.cpp rerank support (#2906)
### What problem does this PR solve?
Resolve #2905 



Due to the inconsistent token sizes, I made it safe by limiting to 500
in code, since there is no config param to control it.

My llama.cpp run sets -ub to 1024:

```
${llama_path}/bin/llama-server --host 0.0.0.0 --port 9901 -ub 1024 -ngl 99 -m $gguf_file --reranking "$@"
```
### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Here is my test of RAGFlow using llama.cpp:

```
lot update_slots: id  0 | task 458 | prompt done, n_past = 416, n_tokens = 416
slot      release: id  0 | task 458 | stop processing: n_past = 416, truncated = 0
slot launch_slot_: id  0 | task 459 | processing task
slot update_slots: id  0 | task 459 | tokenizing prompt, len = 2
slot update_slots: id  0 | task 459 | prompt tokenized, n_ctx_slot = 8192, n_keep = 0, n_prompt_tokens = 111
slot update_slots: id  0 | task 459 | kv cache rm [0, end)
slot update_slots: id  0 | task 459 | prompt processing progress, n_past = 111, n_tokens = 111, progress = 1.000000
slot update_slots: id  0 | task 459 | prompt done, n_past = 111, n_tokens = 111
slot      release: id  0 | task 459 | stop processing: n_past = 111, truncated = 0
srv  update_slots: all slots are idle
request: POST /rerank 172.23.0.4 200

```
2024-10-21 10:06:29 +08:00
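A sketch of the client-side cap the PR describes; the payload and response shapes are assumed from the Jina-style rerank API that llama.cpp's server exposes, and the whitespace truncation is a rough stand-in for a real tokenizer:

```python
import requests

RERANK_URL = "http://localhost:9901/rerank"  # matches the server flags above
MAX_DOC_TOKENS = 500  # conservative cap, as described in the PR

def rerank(query: str, docs: list[str]) -> list[float]:
    # Whitespace truncation is a rough stand-in for the real tokenizer.
    clipped = [" ".join(d.split()[:MAX_DOC_TOKENS]) for d in docs]
    resp = requests.post(RERANK_URL,
                         json={"query": query, "documents": clipped},
                         timeout=60)
    resp.raise_for_status()
    results = resp.json()["results"]  # assumed: [{"index": i, "relevance_score": s}, ...]
    return [r["relevance_score"] for r in sorted(results, key=lambda r: r["index"])]
```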
5aec1e3e17 DRAFT: Miscellaneous updates to HTTP API. Tried to finish off Python API reference but failed. (#2909)

### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-21 09:47:59 +08:00
1d6bcf5aa2 add nginx path for sdk handlers (#2899) (#2900)
### What problem does this PR solve?

add the nginx path `/api` for sdk handlers 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-21 09:47:45 +08:00
1e6d44d6ef DRAFT: Miscellaneous proofedits on Python APIs (#2903)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-19 19:46:13 +08:00
cec208051f DRAFT: Updated chunk APIs (#2901)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2024-10-18 20:56:33 +08:00
526fcbbfde fix: Fixed the issue of error reporting when uploading files in the chat box #2897 (#2898)
### What problem does this PR solve?

fix: Fixed the issue of error reporting when uploading files in the chat
box #2897

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-18 17:21:12 +08:00
c760f058df add owner check for team work (#2892)
### What problem does this PR solve?

#2834

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-18 13:48:57 +08:00
8fdfa0f669 feat: Use Badge.Ribbon to distinguish the teams to which the knowledge base belongs #2846 (#2891)
### What problem does this PR solve?

feat: Use Badge.Ribbon to distinguish the teams to which the knowledge
base belongs #2846

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-18 12:50:06 +08:00
ceecac69e9 Delete useless files (#2889)
### What problem does this PR solve?

Delete useless files

### Type of change


- [x] Other (please describe):
Delete useless files

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-18 11:30:40 +08:00
e0c0bdeb0a add team tag to kb (#2890)
### What problem does this PR solve?
#2834

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-18 11:30:19 +08:00
cf3106040a feat: Bind data to TenantTable #2846 (#2883)
### What problem does this PR solve?

feat: Bind data to TenantTable #2846
feat: Add TenantTable

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-18 09:21:01 +08:00
791afbba15 Miscellaneous minor updates (#2885)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-17 19:52:35 +08:00
8358245f64 Draft: Updated file management-related APIs (#2882)
### What problem does this PR solve?

Updated file management-related APIs

### Type of change

- [x] Documentation Update
2024-10-17 18:19:17 +08:00
396bb4b688 Fixed docker build (#2881)
### What problem does this PR solve?

Fixed docker build

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-17 16:34:21 +08:00
167b4af52b feat: Load markdown file with "asset/source" #1739 (#2880)
### What problem does this PR solve?

feat: Load markdown file with "asset/source" #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-17 16:03:13 +08:00
bedb05012d feat: Configure the root directory alias #1739 (#2875)
### What problem does this PR solve?

feat: Configure the root directory alias #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-17 11:36:01 +08:00
6a60e26020 update dashscope (#2871)
### What problem does this PR solve?
#2857
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-17 09:52:31 +08:00
6496055e23 Updated session APIs (#2868)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2024-10-16 20:38:19 +08:00
dab92ac1e8 Refactor Chunk API (#2855)
### What problem does this PR solve?

Refactor Chunk API
#2846
### Type of change


- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-16 18:41:24 +08:00
b9fa00f341 add API for tenant function (#2866)
### What problem does this PR solve?

feat: API access key management
https://github.com/infiniflow/ragflow/issues/2846
feat: Render markdown file with remark-loader
https://github.com/infiniflow/ragflow/issues/2846

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-16 16:10:24 +08:00
e5d3ab0332 feat: API access key management #2846 (#2865)
### What problem does this PR solve?

feat: API access key management #2846
feat: Render markdown file with remark-loader #2846

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-16 15:57:39 +08:00
4991107822 Fix keys of Xinference-deployed models, especially those sharing a model name with publicly hosted models (#2832)
### What problem does this PR solve?

Fix keys of Xinference-deployed models, especially those that share a
model name with publicly hosted models.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: 0000sir <0000sir@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-16 10:21:08 +08:00
51ecda0ff5 refactor (#2858)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2024-10-16 10:17:05 +08:00
6850fd69c6 Enhance email validation: Allow top-level domains with 5 letters (#2856)
### What problem does this PR solve?

Currently, signing up to ragflow using an email address whose
top-level domain has more than 4 characters will fail due to a regex
validation that enforces just this.

In our use case, we'd like to use email addresses with the `.swiss`
top-level domain, which is a valid TLD associated with Switzerland in
the IANA root database.

This change makes the validation accept 5-letter TLDs.


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] Other (please describe): Making validation more lenient,
accepting more valid input.
2024-10-16 09:34:45 +08:00
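An illustrative version of the relaxed rule (RAGFlow's actual regex may differ); the only change of interest is widening the TLD quantifier from `{2,4}` to `{2,5}`:

```python
import re

# Illustrative only; the key change is the TLD quantifier {2,5}.
EMAIL_RE = re.compile(r"^[\w.+-]+@(?:[\w-]+\.)+[A-Za-z]{2,5}$")

assert EMAIL_RE.match("someone@example.swiss")        # 5-letter TLD now passes
assert not EMAIL_RE.match("someone@example.toolong")  # >5 letters still fails
```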
e1e5711680 Feat: Compatible with Dify's External Knowledge API (#2848)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._
Fixes #2731 
### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-15 17:47:24 +08:00
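A minimal sketch of the response shape Dify's External Knowledge API expects from an endpoint like this; the retrieval itself is stubbed, and the record field names follow Dify's published contract:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/retrieval")  # the endpoint Dify's external knowledge feature calls
def retrieval():
    body = request.get_json(force=True)
    query = body["query"]
    # Stand-in for the real RAGFlow search; each hit becomes one record.
    hits = [{"text": f"chunk about {query}", "score": 0.87, "doc": "manual.pdf"}]
    records = [{
        "content": h["text"],
        "score": h["score"],
        "title": h["doc"],
        "metadata": {"document_id": h["doc"], "source": "ragflow"},
    } for h in hits]
    return jsonify({"records": records})
```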
4463128436 add rm token (#2850)
### What problem does this PR solve?

#2846

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-15 16:48:38 +08:00
c8783672d7 refactor api util (#2849)
### What problem does this PR solve?


### Type of change

- [x] Refactoring
2024-10-15 16:11:26 +08:00
ce495e4e3e refine API token application (#2847)
### What problem does this PR solve?

#2846

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-15 15:47:40 +08:00
fcabdf7745 feat: Generate operator names in an auto-incremental manner #1739 (#2844)
### What problem does this PR solve?

feat: Generate operator names in an auto-incremental manner #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-15 15:36:09 +08:00
b540d41cdc let presentation do raptor (#2838)
### What problem does this PR solve?

#2837

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-15 10:11:09 +08:00
260d694bbc Updated chat APIs (#2831)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Jin Hai <haijin.chn@gmail.com>
2024-10-14 20:48:23 +08:00
6329427ad5 Refactor Document API (#2833)
### What problem does this PR solve?

Refactor Document API

### Type of change


- [x] Refactoring

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-14 20:03:33 +08:00
df223eddf3 feat: Fix translation issue of message_history_window_size #1739 (#2828)
### What problem does this PR solve?

feat: Fix translation issue of message_history_window_size #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-14 15:30:20 +08:00
85b359556e feat: Add message_history_window_size to CategorizeForm #1739 (#2826)
### What problem does this PR solve?

feat: Add message_history_window_size to CategorizeForm #1739
### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-14 15:18:45 +08:00
b164116277 refine token similarity (#2824)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-10-14 13:33:18 +08:00
8e5efcc47f Updated dataset APIs (#2820)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2024-10-12 20:07:21 +08:00
6eed115723 Refactor API for document and session (#2819)
### What problem does this PR solve?

Refactor API for document and session.

### Type of change


- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-12 19:35:19 +08:00
7d80fc474c refine components retrieval and rewrite (#2818)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-10-12 18:47:25 +08:00
a20b82092f Refactor Chat API (#2804)
### What problem does this PR solve?

Refactor Chat API

### Type of change

- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-12 13:48:43 +08:00
2a86472b88 fix chat and thumbnail bug (#2803)
### What problem does this PR solve?

1. Fix the white-screen issue when rendering a chat response
2. Fix the thumbnail bug when the document type is not supported

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):

---------

Co-authored-by: chongchuanbing <chongchuanbing@gmail.com>
2024-10-11 16:10:27 +08:00
190eea7097 trivial (#2808)
### What problem does this PR solve?



### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-11 15:33:38 +08:00
2d1c83da59 fix LIGHTEN issue (#2806)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-11 15:01:27 +08:00
3f065c75da support chat model in huggingface (#2802)
### What problem does this PR solve?

#2794

### Type of change
- [x] New Feature (non-breaking change which adds functionality)
2024-10-11 14:45:48 +08:00
1bae479b37 Fixed broken links. (#2805)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-10-11 14:34:31 +08:00
5e7c1fb23a reduce rerank batch size (#2801)
### What problem does this PR solve?

### Type of change


- [x] Performance Improvement
2024-10-11 11:29:19 +08:00
bae30e5cc4 fix: document preview (#2795)
(cherry picked from commit 8d11a070b2fc88285a7cff1ab85ebefee84b6c64)

### What problem does this PR solve?

fix document preview error in file manager.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Co-authored-by: chongchuanbing <chongchuanbing@gmail.com>
2024-10-11 11:26:59 +08:00
18f80743eb support api-version and change default-model in adding azure-openai and openai (#2799)
### What problem does this PR solve?
#2701 #2712 #2749

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-11 11:26:42 +08:00
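A minimal sketch of why an explicit api-version matters when registering Azure OpenAI models, using the `openai` Python client (the endpoint, version, and deployment name below are placeholders, not values from this PR):

```python
from openai import AzureOpenAI

# Azure requires an api_version; omitting or mismatching it is a common
# source of failures when adding a model provider.
client = AzureOpenAI(
    api_key="<AZURE_OPENAI_KEY>",
    api_version="2024-02-01",                      # placeholder version
    azure_endpoint="https://<resource>.openai.azure.com",
)
resp = client.chat.completions.create(
    model="<deployment-name>",                     # Azure deployment name, not model family
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```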
bfaef2cca6 feat: Arrange the form of generate, categorize, switch, retrieval operators vertically #1739 (#2800)
### What problem does this PR solve?

feat: Arrange the form of generate, categorize, switch, retrieval
operators vertically #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-11 11:06:14 +08:00
cbd7cd7c4d Refactor Dataset API (#2783)
### What problem does this PR solve?

Refactor Dataset API

### Type of change

- [x] Refactoring

---------

Co-authored-by: liuhua <10215101452@stu.ecun.edu.cn>
2024-10-11 09:55:27 +08:00
a2f9c03a95 fix: Change document status with @tanstack/react-query #13306 (#2788)
### What problem does this PR solve?

fix: Change document status with @tanstack/react-query #13306

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-11 08:45:10 +08:00
2c56d274d8 Updated instructions on downloading RAGFlow Slim and RAGFlow all-in-one. (#2785)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2024-10-10 19:24:54 +08:00
7742f67481 refine agent (#2787)
### What problem does this PR solve?



### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [x] Performance Improvement
- [ ] Other (please describe):
2024-10-10 17:07:36 +08:00
6af9d4e5f9 Refactor README on different docker version. (#2775)
### What problem does this PR solve?

1. Use two env files for slim and full docker image.
2. Update README

### Type of change

- [x] Documentation Update
- [x] Refactoring

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: writinwaters <93570324+writinwaters@users.noreply.github.com>
2024-10-10 15:30:32 +08:00
51efecf4b5 trivial (#2779)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-10 11:05:03 +08:00
9dfcae2b5d Fix error commands (#2778)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-10 10:38:57 +08:00
66172cef3e fix: torch dependency start error (#2777)
### What problem does this PR solve?

When using the slim image, remove the `torch` dependency.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: chongchuanbing <chongchuanbing@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-10 10:06:03 +08:00
29f022c91c fix bedrock issue (#2776)
### What problem does this PR solve?

#2722

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-10 09:13:35 +08:00
485bfd6c08 fix: Large document thumbnail display failed (#2763)
### What problem does this PR solve?

In MySQL, when a document's base64-encoded thumbnail is relatively large,
displaying the thumbnail fails.
The document thumbnail is now stored in MinIO instead.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Co-authored-by: chongchuanbing <chongchuanbing@gmail.com>
2024-10-10 09:09:29 +08:00
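A minimal sketch of the approach, assuming the `minio` Python client and a pre-created `thumbnails` bucket (bucket and object names are illustrative, not RAGFlow's actual storage layout):

```python
from io import BytesIO
from minio import Minio

client = Minio("localhost:9000", access_key="<KEY>", secret_key="<SECRET>", secure=False)

thumbnail = b"<binary PNG bytes>"  # previously stored as base64 in MySQL
client.put_object(
    "thumbnails",          # assumed bucket; must already exist
    "doc-123.png",         # assumed object key derived from the document ID
    BytesIO(thumbnail),
    length=len(thumbnail),
    content_type="image/png",
)
# The database row can then hold a small object reference instead of the blob.
```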
f7a73c5149 Fix README and some comments (#2774)
### What problem does this PR solve?

1. Fix typo
2. Update comments.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-09 23:30:00 +08:00
5d966b1120 Fix README description (#2773)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-09 23:03:18 +08:00
ce79144e75 Use slim image as the default (#2772)
### What problem does this PR solve?

Use the slim docker image as the default.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-09 22:51:47 +08:00
d8566f0ddf HTTP API documents, part1 (#2713)
### What problem does this PR solve?

1. dataset: create/delete/list/get/update
2. files in dataset: upload/download/list/delete/get_info

### Type of change

- [x] Documentation Update

---------

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2024-10-09 20:26:51 +08:00
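As a hedged sketch of what calling the documented dataset endpoints might look like with `requests` (the base URL, header, path, and payload fields are assumptions; the HTTP API reference added in this PR is authoritative):

```python
import requests

BASE = "http://localhost:9380"                     # assumed server address
HEADERS = {"Authorization": "Bearer <API_KEY>"}    # assumed auth scheme

# Create a dataset (path is illustrative).
created = requests.post(f"{BASE}/api/v1/datasets", json={"name": "demo"}, headers=HEADERS)
print(created.json())

# List datasets.
listed = requests.get(f"{BASE}/api/v1/datasets", headers=HEADERS)
print(listed.json())
```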
e904c134e7 feat: Add a note node to the agent canvas #2767 (#2768)
### What problem does this PR solve?

feat: Add a note node to the agent canvas #2767

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-09 19:39:04 +08:00
7fc3bb3241 Docs:fix user guide links (#2761)
### What problem does this PR solve?

_Briefly describe what this PR aims to solve. Include background context
that will help reviewers understand the purpose of the PR._

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [x] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-09 19:38:10 +08:00
20e63f8ec4 Fix docx images (#2756)
### What problem does this PR solve?

#2755 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-09 19:37:32 +08:00
2df15742fc fix xinference add rerank model bug (#2758)
### What problem does this PR solve?

Fix a bug when adding a rerank model from Xinference; see
https://github.com/infiniflow/ragflow/issues/2294#issue-2510788135

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-09 19:37:11 +08:00
8f815a6c1e Fix component exesql (#2754)
### What problem does this PR solve?

#2700 

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-09 17:53:36 +08:00
8f4bd10b19 Initial draft of Python APIs (#2719)
### What problem does this PR solve?



### Type of change

- [x] Documentation Update
2024-10-09 15:30:22 +08:00
511d272d0d feat: The same query conditions on the search page should not request the interface every time the mind map drawer is opened. #2759 (#2760)
### What problem does this PR solve?

feat: On the search page, avoid re-requesting the interface every time the
mind map drawer is opened when the query conditions are unchanged. #2759

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-09 15:28:28 +08:00
7f44cf543a move import positions (#2753)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2024-10-09 10:34:58 +08:00
16472eb3ea solve knowledgegraph issue when calling gemini model (#2738)
### What problem does this PR solve?
#2720

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-08 18:27:04 +08:00
d92acdcf1d Update azure_sas_conn.py - fixing container_url typo (#2740)
### What problem does this PR solve?

Fixes #2739

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-08 18:26:30 +08:00
2e33ed3ba0 Modified download_deps.py (#2747)
### What problem does this PR solve?

Modified download_deps.py

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-10-08 17:40:06 +08:00
04ff9cda7c expand rerank range (#2746)
### What problem does this PR solve?


### Type of change

- [x] Performance Improvement
2024-10-08 16:34:33 +08:00
5cc9981a4d Fix LIGHTEN. Close #2723 (#2744)
### What problem does this PR solve?

Fix LIGHTEN
#2726 
#2723

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-08 15:58:14 +08:00
5845b2b137 feat: Add component TuShare #1739 (#2745)
### What problem does this PR solve?

feat: Add component TuShare #1739

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [x] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-08 15:35:30 +08:00
b3b54680e7 Add query parts to lamp (#2736)
### What problem does this PR solve?


### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-10-08 12:53:04 +08:00
a3ab5ba9ac support sequence2txt and tts model in Xinference (#2696)
### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-10-08 10:43:18 +08:00
c552a02e7f chore: update operators.py (#2724)
### What problem does this PR solve?
substract -> subtract
### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-10-08 10:34:52 +08:00
a005be7c74 fix re.escape problem (#2735)
### What problem does this PR solve?

#2716

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-08 10:07:03 +08:00
6f7fcdc897 remove redeclared get_json_result func (#2675)
### What problem does this PR solve?

Redeclared 'get_json_result' defined above without usage (line 99 vs. line 218).

### Type of change

- [x] Other (please describe): remove redeclared func
2024-10-05 16:47:47 +08:00
34761fa4ca Fix/bedrock issues (#2718)
### What problem does this PR solve?

Adding a Bedrock API key for Claude Sonnet was broken. The issue surfaced
when testing the LLM configuration: `system` is a required parameter in boto3.

There were also problems in the Bedrock embeddings implementation when
encoding queries.

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-05 16:44:50 +08:00
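A minimal sketch of the boto3 requirement mentioned above, using the Bedrock Converse API: `system` is passed as its own top-level parameter rather than as a `system`-role message (the model ID and prompt are placeholders; RAGFlow's actual call site may differ):

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

resp = client.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    system=[{"text": "You are a helpful assistant."}],   # top-level parameter
    messages=[{"role": "user", "content": [{"text": "Hello"}]}],
)
print(resp["output"]["message"]["content"][0]["text"])
```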
abe9995a7c build multi-arch image (#2710)
### What problem does this PR solve?
build multi-arch image

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-10-03 21:00:26 +08:00
7f2ee3bbe9 docs: Migrate README to Compose V2 syntax (#2711)
### What problem does this PR solve?

This PR updates the README to reflect the migration from Compose V1
(`docker-compose`) to Compose V2 (`docker compose`).

### Type of change

- [x] Documentation Update

### Source

The migration details and differences between Compose V1 and Compose V2
are based on the official Docker documentation:
[Docker Docs: Migrate to Compose
V2](https://docs.docker.com/compose/releases/migrate/#what-are-the-differences-between-compose-v1-and-compose-v2)
2024-10-03 15:31:04 +08:00
a1ffc7fa2c Fix poetry cache (#2709)
### What problem does this PR solve?

Fix poetry cache

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-10-02 21:15:30 +08:00
70c6b5a7f9 Fix Dockerfile.slim (#2708)
### What problem does this PR solve?
Fix Dockerfile.slim

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-10-02 20:02:53 +08:00
1b80a693ba Updated Build Docker Image (#2706)
### What problem does this PR solve?



### Type of change


- [x] Documentation Update
2024-10-02 19:43:22 +08:00
e46a4d1875 Fix Dockerfile for arm64 (#2705)
### What problem does this PR solve?

Fix Dockerfile for arm64

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):

---------

Co-authored-by: Ubuntu <ubuntu@arm-test.us-central1-f.c.ragflow-01.internal>
2024-10-02 19:41:56 +08:00
5f4d2dc4fe Updated Dockerfile to use cache (#2703)
### What problem does this PR solve?

Updated Dockerfile to use cache

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-10-01 17:41:38 +08:00
62202b7eff fix: Fixed the issue where no error message was displayed when uploading a file that was too large #2258 (#2697)
### What problem does this PR solve?

fix: Fixed the issue where no error message was displayed when uploading
a file that was too large #2258

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-10-01 16:37:46 +08:00
1518824b0c Updated Dockerfile (#2695)
### What problem does this PR solve?

Updated Dockerfile

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [x] Refactoring
- [ ] Performance Improvement
- [ ] Other (please describe):
2024-09-30 18:13:06 +08:00
0a7654c747 fix error in exception (#2694)
### What problem does this PR solve?
#2670

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-09-30 17:54:27 +08:00
d6db805885 Refactoring entity_resolution (#2692)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2024-09-30 17:18:09 +08:00
570ad420a8 remove unused import (#2679)
### What problem does this PR solve?

### Type of change

- [x] Refactoring
2024-09-30 16:59:39 +08:00
ae5a877ed4 Simplify the usage of dict (#2681)
### What problem does this PR solve?
Simplify the usage of dictionaries

### Type of change

- [x] Refactoring

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-09-30 16:54:25 +08:00
9945988e44 format mind_map_extractor code (#2686)
### What problem does this PR solve?

format mind_map_extractor code

### Type of change

- [x] Refactoring
2024-09-30 16:29:15 +08:00
79b8210498 refine readme (#2689)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
2024-09-30 16:21:56 +08:00
c80d311474 fix tests.yml (#2688)
### What problem does this PR solve?

fix tests.yml

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe):
CI
2024-09-30 16:05:54 +08:00
64429578da added tests.yml (#2685)
### What problem does this PR solve?

added tests.yml

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [x] Other (please describe): CI
2024-09-30 14:58:34 +08:00
548 changed files with 46206 additions and 21600 deletions

View File

@ -15,16 +15,16 @@ body:
value: "Please provide the following information to help us understand the issue."
- type: input
attributes:
label: Branch name
description: Enter the name of the branch where you encountered the issue.
placeholder: e.g., main
label: RAGFlow workspace code commit ID
description: Enter the commit ID associated with the issue.
placeholder: e.g., 26d3480e
validations:
required: true
- type: input
attributes:
label: Commit ID
description: Enter the commit ID associated with the issue.
placeholder: e.g., c3b2a1
label: RAGFlow image version
description: Enter the image version (shown on the `System` page of the RAGFlow UI) associated with the issue.
placeholder: e.g., 26d3480e(v0.13.0~174)
validations:
required: true
- type: textarea

.github/workflows/tests.yml (new file)
View File

@ -0,0 +1,130 @@
name: tests
on:
  push:
    branches:
      - 'main'
      - '*.*.*'
    paths-ignore:
      - 'docs/**'
      - '*.md'
      - '*.mdx'
  pull_request:
    types: [ opened, synchronize, reopened, labeled ]
    paths-ignore:
      - 'docs/**'
      - '*.md'
      - '*.mdx'
# https://docs.github.com/en/actions/using-jobs/using-concurrency
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true
jobs:
  ragflow_tests:
    name: ragflow_tests
    # https://docs.github.com/en/actions/using-jobs/using-conditions-to-control-job-execution
    # https://github.com/orgs/community/discussions/26261
    if: ${{ github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci') }}
    runs-on: [ "self-hosted", "debug" ]
    steps:
      # https://github.com/hmarr/debug-action
      #- uses: hmarr/debug-action@v2
      - name: Show PR labels
        run: |
          echo "Workflow triggered by ${{ github.event_name }}"
          if [[ ${{ github.event_name }} == 'pull_request' ]]; then
            echo "PR labels: ${{ join(github.event.pull_request.labels.*.name, ', ') }}"
          fi
      - name: Ensure workspace ownership
        run: echo "chown -R $USER $GITHUB_WORKSPACE" && sudo chown -R $USER $GITHUB_WORKSPACE
      # https://github.com/actions/checkout/issues/1781
      - name: Check out code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          fetch-tags: true
      - name: Build ragflow:dev-slim
        run: |
          RUNNER_WORKSPACE_PREFIX=${RUNNER_WORKSPACE_PREFIX:-$HOME}
          cp -r ${RUNNER_WORKSPACE_PREFIX}/huggingface.co ${RUNNER_WORKSPACE_PREFIX}/nltk_data ${RUNNER_WORKSPACE_PREFIX}/libssl*.deb ${RUNNER_WORKSPACE_PREFIX}/tika-server*.jar* ${RUNNER_WORKSPACE_PREFIX}/chrome* ${RUNNER_WORKSPACE_PREFIX}/cl100k_base.tiktoken .
          sudo docker pull ubuntu:22.04
          sudo docker build --progress=plain -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
      - name: Build ragflow:dev
        run: |
          sudo docker build --progress=plain -f Dockerfile -t infiniflow/ragflow:dev .
      - name: Start ragflow:dev-slim
        run: |
          sudo docker compose -f docker/docker-compose.yml up -d
      - name: Stop ragflow:dev-slim
        if: always() # always run this step even if previous steps failed
        run: |
          sudo docker compose -f docker/docker-compose.yml down -v
      - name: Start ragflow:dev
        run: |
          echo "RAGFLOW_IMAGE=infiniflow/ragflow:dev" >> docker/.env
          sudo docker compose -f docker/docker-compose.yml up -d
      - name: Run sdk tests against Elasticsearch
        run: |
          export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
          export HOST_ADDRESS=http://host.docker.internal:9380
          until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
            echo "Waiting for service to be available..."
            sleep 5
          done
          cd sdk/python && poetry install && source .venv/bin/activate && cd test/test_sdk_api && pytest -s --tb=short get_email.py t_dataset.py t_chat.py t_session.py t_document.py t_chunk.py
      - name: Run frontend api tests against Elasticsearch
        run: |
          export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
          export HOST_ADDRESS=http://host.docker.internal:9380
          until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
            echo "Waiting for service to be available..."
            sleep 5
          done
          cd sdk/python && poetry install && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
      - name: Stop ragflow:dev
        if: always() # always run this step even if previous steps failed
        run: |
          sudo docker compose -f docker/docker-compose.yml down -v
      - name: Start ragflow:dev
        run: |
          sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml up -d
      - name: Run sdk tests against Infinity
        run: |
          export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
          export HOST_ADDRESS=http://host.docker.internal:9380
          until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
            echo "Waiting for service to be available..."
            sleep 5
          done
          cd sdk/python && poetry install && source .venv/bin/activate && cd test/test_sdk_api && pytest -s --tb=short get_email.py t_dataset.py t_chat.py t_session.py t_document.py t_chunk.py
      - name: Run frontend api tests against Infinity
        run: |
          export http_proxy=""; export https_proxy=""; export no_proxy=""; export HTTP_PROXY=""; export HTTPS_PROXY=""; export NO_PROXY=""
          export HOST_ADDRESS=http://host.docker.internal:9380
          until sudo docker exec ragflow-server curl -s --connect-timeout 5 ${HOST_ADDRESS} > /dev/null; do
            echo "Waiting for service to be available..."
            sleep 5
          done
          cd sdk/python && poetry install && source .venv/bin/activate && cd test/test_frontend_api && pytest -s --tb=short get_email.py test_dataset.py
      - name: Stop ragflow:dev
        if: always() # always run this step even if previous steps failed
        run: |
          sudo DOC_ENGINE=infinity docker compose -f docker/docker-compose.yml down -v

View File

@ -1,6 +1,7 @@
# base stage
FROM ubuntu:24.04 AS base
FROM ubuntu:22.04 AS base
USER root
SHELL ["/bin/bash", "-c"]
ENV LIGHTEN=0
@ -9,26 +10,48 @@ WORKDIR /ragflow
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
apt update && apt-get --no-install-recommends install -y ca-certificates
# if you located in China, you can use tsinghua mirror to speed up apt
RUN sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list.d/ubuntu.sources
# Setup apt mirror site
RUN sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y curl libpython3-dev nginx libglib2.0-0 libglx-mesa0 pkg-config libicu-dev libgdiplus \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sSL https://install.python-poetry.org | python3 -
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
apt update && DEBIAN_FRONTEND=noninteractive apt install -y curl libpython3-dev nginx libglib2.0-0 libglx-mesa0 pkg-config libicu-dev libgdiplus default-jdk python3-pip pipx \
libatk-bridge2.0-0 libgtk-4-1 libnss3 xdg-utils unzip libgbm-dev wget git \
&& rm -rf /var/lib/apt/lists/*
RUN curl -o libssl1.deb http://archive.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5_amd64.deb && dpkg -i libssl1.deb && rm -f libssl1.deb
RUN pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple && pip3 config set global.trusted-host "pypi.tuna.tsinghua.edu.cn mirrors.pku.edu.cn" && pip3 config set global.extra-index-url "https://mirrors.pku.edu.cn/pypi/web/simple" \
&& pipx install poetry \
&& /root/.local/bin/poetry self add poetry-plugin-pypi-mirror
# https://forum.aspose.com/t/aspose-slides-for-net-no-usable-version-of-libssl-found-with-linux-server/271344/13
# aspose-slides on linux/arm64 is unavailable
RUN --mount=type=bind,source=libssl1.1_1.1.1f-1ubuntu2_amd64.deb,target=/root/libssl1.1_1.1.1f-1ubuntu2_amd64.deb \
--mount=type=bind,source=libssl1.1_1.1.1f-1ubuntu2_arm64.deb,target=/root/libssl1.1_1.1.1f-1ubuntu2_arm64.deb \
if [ "$(uname -m)" = "x86_64" ]; then \
dpkg -i /root/libssl1.1_1.1.1f-1ubuntu2_amd64.deb; \
elif [ "$(uname -m)" = "aarch64" ]; then \
dpkg -i /root/libssl1.1_1.1.1f-1ubuntu2_arm64.deb; \
fi
ENV PYTHONDONTWRITEBYTECODE=1 DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
ENV PATH=/root/.local/bin:$PATH
# Configure Poetry
ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV POETRY_VIRTUALENVS_CREATE=true
ENV POETRY_REQUESTS_TIMEOUT=15
ENV POETRY_PYPI_MIRROR_URL=https://pypi.tuna.tsinghua.edu.cn/simple/
# nodejs 12.22 on Ubuntu 22.04 is too old
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
apt purge -y nodejs npm && \
apt autoremove && \
apt update && \
apt install -y nodejs cargo && \
rm -rf /var/lib/apt/lists/*
# builder stage
FROM base AS builder
@ -36,21 +59,38 @@ USER root
WORKDIR /ragflow
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y nodejs npm cargo && \
rm -rf /var/lib/apt/lists/*
COPY .git /ragflow/.git
RUN current_commit=$(git rev-parse --short HEAD); \
last_tag=$(git describe --tags --abbrev=0); \
commit_count=$(git rev-list --count "$last_tag..HEAD"); \
version_info=""; \
if [ "$commit_count" -eq 0 ]; then \
version_info=$last_tag; \
else \
version_info="$current_commit($last_tag~$commit_count)"; \
fi; \
if [ "$LIGHTEN" == "1" ]; then \
version_info="$version_info slim"; \
else \
version_info="$version_info full"; \
fi; \
echo "RAGFlow version: $version_info"; \
echo $version_info > /ragflow/VERSION
COPY web web
RUN cd web && npm i --force && npm run build
COPY docs docs
RUN --mount=type=cache,id=ragflow_builder_npm,target=/root/.npm,sharing=locked \
cd web && npm install --force && npm run build
# install dependencies from poetry.lock file
COPY pyproject.toml poetry.toml poetry.lock ./
RUN --mount=type=cache,target=/root/.cache/pypoetry,sharing=locked \
if [ "$LIGHTEN" -eq 0 ]; then \
/root/.local/bin/poetry install --sync --no-cache --no-root --with=full; \
RUN --mount=type=cache,id=ragflow_builder_poetry,target=/root/.cache/pypoetry,sharing=locked \
if [ "$LIGHTEN" == "1" ]; then \
poetry install --no-root; \
else \
/root/.local/bin/poetry install --sync --no-cache --no-root; \
poetry install --no-root --with=full; \
fi
# production stage
@ -59,9 +99,11 @@ USER root
WORKDIR /ragflow
COPY --from=builder /ragflow/VERSION /ragflow/VERSION
# Install python packages' dependencies
# cv2 requires libGL.so.1
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
RUN --mount=type=cache,id=ragflow_production_apt,target=/var/cache/apt,sharing=locked \
apt update && apt install -y --no-install-recommends nginx libgl1 vim less && \
rm -rf /var/lib/apt/lists/*
@ -89,19 +131,39 @@ RUN --mount=type=bind,source=huggingface.co,target=/huggingface.co \
/huggingface.co/maidalun1020/bce-reranker-base_v1 \
| tar -xf - --strip-components=2 -C /root/.ragflow
# Copy nltk data downloaded via download_deps.py
COPY nltk_data /root/nltk_data
# https://github.com/chrismattmann/tika-python
# This is the only way to run python-tika without internet access. Without this set, the default is to check the tika version and pull latest every time from Apache.
COPY tika-server-standard-3.0.0.jar /ragflow/tika-server-standard.jar
COPY tika-server-standard-3.0.0.jar.md5 /ragflow/tika-server-standard.jar.md5
ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard.jar"
# Copy cl100k_base
COPY cl100k_base.tiktoken /ragflow/9b5ad71b2ce5302211f9c61530b329a4922fc6a4
# Add dependencies of selenium
RUN --mount=type=bind,source=chrome-linux64-121-0-6167-85,target=/chrome-linux64.zip \
unzip /chrome-linux64.zip && \
mv chrome-linux64 /opt/chrome && \
ln -s /opt/chrome/chrome /usr/local/bin/
RUN --mount=type=bind,source=chromedriver-linux64-121-0-6167-85,target=/chromedriver-linux64.zip \
unzip -j /chromedriver-linux64.zip chromedriver-linux64/chromedriver && \
mv chromedriver /usr/local/bin/ && \
rm -f /usr/bin/google-chrome
# Copy compiled web pages
COPY --from=builder /ragflow/web/dist /ragflow/web/dist
# Copy Python environment and packages
ENV VIRTUAL_ENV=/ragflow/.venv
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:/root/.local/bin:${PATH}"
# Download nltk data
RUN python3 -m nltk.downloader wordnet punkt punkt_tab
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
ENV PYTHONPATH=/ragflow/
COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template
COPY docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh

View File

@ -26,6 +26,7 @@ RUN dnf install -y nginx
ADD ./web ./web
ADD ./api ./api
ADD ./docs ./docs
ADD ./conf ./conf
ADD ./deepdoc ./deepdoc
ADD ./rag ./rag
@ -37,7 +38,7 @@ RUN dnf install -y openmpi openmpi-devel python3-openmpi
ENV C_INCLUDE_PATH /usr/include/openmpi-x86_64:$C_INCLUDE_PATH
ENV LD_LIBRARY_PATH /usr/lib64/openmpi/lib:$LD_LIBRARY_PATH
RUN rm /root/miniconda3/envs/py11/compiler_compat/ld
RUN cd ./web && npm i --force && npm run build
RUN cd ./web && npm i && npm run build
RUN conda run -n py11 pip install $(grep -ivE "mpi4py" ./requirements.txt) # without mpi4py==3.1.5
RUN conda run -n py11 pip install redis
@ -52,6 +53,7 @@ RUN conda run -n py11 python -m nltk.downloader wordnet
ENV PYTHONPATH=/ragflow/
ENV HF_ENDPOINT=https://hf-mirror.com
COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template
ADD docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh

View File

@ -1,6 +1,7 @@
# base stage
FROM ubuntu:24.04 AS base
FROM ubuntu:22.04 AS base
USER root
SHELL ["/bin/bash", "-c"]
ENV LIGHTEN=1
@ -9,26 +10,48 @@ WORKDIR /ragflow
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
apt update && apt-get --no-install-recommends install -y ca-certificates
# if you located in China, you can use tsinghua mirror to speed up apt
RUN sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list.d/ubuntu.sources
# Setup apt mirror site
RUN sed -i 's|http://archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y curl libpython3-dev nginx libglib2.0-0 libglx-mesa0 pkg-config libicu-dev libgdiplus \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sSL https://install.python-poetry.org | python3 -
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
apt update && DEBIAN_FRONTEND=noninteractive apt install -y curl libpython3-dev nginx libglib2.0-0 libglx-mesa0 pkg-config libicu-dev libgdiplus default-jdk python3-pip pipx \
libatk-bridge2.0-0 libgtk-4-1 libnss3 xdg-utils unzip libgbm-dev wget git \
&& rm -rf /var/lib/apt/lists/*
RUN curl -o libssl1.deb http://archive.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu5_amd64.deb && dpkg -i libssl1.deb && rm -f libssl1.deb
RUN pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple && pip3 config set global.trusted-host "pypi.tuna.tsinghua.edu.cn mirrors.pku.edu.cn" && pip3 config set global.extra-index-url "https://mirrors.pku.edu.cn/pypi/web/simple" \
&& pipx install poetry \
&& /root/.local/bin/poetry self add poetry-plugin-pypi-mirror
# https://forum.aspose.com/t/aspose-slides-for-net-no-usable-version-of-libssl-found-with-linux-server/271344/13
# aspose-slides on linux/arm64 is unavailable
RUN --mount=type=bind,source=libssl1.1_1.1.1f-1ubuntu2_amd64.deb,target=/root/libssl1.1_1.1.1f-1ubuntu2_amd64.deb \
--mount=type=bind,source=libssl1.1_1.1.1f-1ubuntu2_arm64.deb,target=/root/libssl1.1_1.1.1f-1ubuntu2_arm64.deb \
if [ "$(uname -m)" = "x86_64" ]; then \
dpkg -i /root/libssl1.1_1.1.1f-1ubuntu2_amd64.deb; \
elif [ "$(uname -m)" = "aarch64" ]; then \
dpkg -i /root/libssl1.1_1.1.1f-1ubuntu2_arm64.deb; \
fi
ENV PYTHONDONTWRITEBYTECODE=1 DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
ENV PATH=/root/.local/bin:$PATH
# Configure Poetry
ENV POETRY_NO_INTERACTION=1
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
ENV POETRY_VIRTUALENVS_CREATE=true
ENV POETRY_REQUESTS_TIMEOUT=15
ENV POETRY_PYPI_MIRROR_URL=https://pypi.tuna.tsinghua.edu.cn/simple/
# nodejs 12.22 on Ubuntu 22.04 is too old
RUN --mount=type=cache,id=ragflow_base_apt,target=/var/cache/apt,sharing=locked \
curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
apt purge -y nodejs npm && \
apt autoremove && \
apt update && \
apt install -y nodejs cargo && \
rm -rf /var/lib/apt/lists/*
# builder stage
FROM base AS builder
@ -36,21 +59,38 @@ USER root
WORKDIR /ragflow
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
apt update && apt install -y nodejs npm cargo && \
rm -rf /var/lib/apt/lists/*
COPY .git /ragflow/.git
RUN current_commit=$(git rev-parse --short HEAD); \
last_tag=$(git describe --tags --abbrev=0); \
commit_count=$(git rev-list --count "$last_tag..HEAD"); \
version_info=""; \
if [ "$commit_count" -eq 0 ]; then \
version_info=$last_tag; \
else \
version_info="$current_commit($last_tag~$commit_count)"; \
fi; \
if [ "$LIGHTEN" == "1" ]; then \
version_info="$version_info slim"; \
else \
version_info="$version_info full"; \
fi; \
echo "RAGFlow version: $version_info"; \
echo $version_info > /ragflow/VERSION
COPY web web
RUN cd web && npm i --force && npm run build
COPY docs docs
RUN --mount=type=cache,id=ragflow_builder_npm,target=/root/.npm,sharing=locked \
cd web && npm install --force && npm run build
# install dependencies from poetry.lock file
COPY pyproject.toml poetry.toml poetry.lock ./
RUN --mount=type=cache,target=/root/.cache/pypoetry,sharing=locked \
if [ "$LIGHTEN" -eq 0 ]; then \
/root/.local/bin/poetry install --sync --no-cache --no-root --with=full; \
RUN --mount=type=cache,id=ragflow_builder_poetry,target=/root/.cache/pypoetry,sharing=locked \
if [ "$LIGHTEN" == "1" ]; then \
poetry install --no-root; \
else \
/root/.local/bin/poetry install --sync --no-cache --no-root; \
poetry install --no-root --with=full; \
fi
# production stage
@ -59,9 +99,11 @@ USER root
WORKDIR /ragflow
COPY --from=builder /ragflow/VERSION /ragflow/VERSION
# Install python packages' dependencies
# cv2 requires libGL.so.1
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
RUN --mount=type=cache,id=ragflow_production_apt,target=/var/cache/apt,sharing=locked \
apt update && apt install -y --no-install-recommends nginx libgl1 vim less && \
rm -rf /var/lib/apt/lists/*
@ -82,19 +124,39 @@ RUN --mount=type=bind,source=huggingface.co,target=/huggingface.co \
/huggingface.co/InfiniFlow/deepdoc \
| tar -xf - --strip-components=3 -C /ragflow/rag/res/deepdoc
# Copy nltk data downloaded via download_deps.py
COPY nltk_data /root/nltk_data
# https://github.com/chrismattmann/tika-python
# This is the only way to run python-tika without internet access. Without this set, the default is to check the tika version and pull latest every time from Apache.
COPY tika-server-standard-3.0.0.jar /ragflow/tika-server-standard.jar
COPY tika-server-standard-3.0.0.jar.md5 /ragflow/tika-server-standard.jar.md5
ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard.jar"
# Copy cl100k_base
COPY cl100k_base.tiktoken /ragflow/9b5ad71b2ce5302211f9c61530b329a4922fc6a4
# Add dependencies of selenium
RUN --mount=type=bind,source=chrome-linux64-121-0-6167-85,target=/chrome-linux64.zip \
unzip /chrome-linux64.zip && \
mv chrome-linux64 /opt/chrome && \
ln -s /opt/chrome/chrome /usr/local/bin/
RUN --mount=type=bind,source=chromedriver-linux64-121-0-6167-85,target=/chromedriver-linux64.zip \
unzip -j /chromedriver-linux64.zip chromedriver-linux64/chromedriver && \
mv chromedriver /usr/local/bin/ && \
rm -f /usr/bin/google-chrome
# Copy compiled web pages
COPY --from=builder /ragflow/web/dist /ragflow/web/dist
# Copy Python environment and packages
ENV VIRTUAL_ENV=/ragflow/.venv
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:/root/.local/bin:${PATH}"
# Download nltk data
RUN python3 -m nltk.downloader wordnet punkt punkt_tab
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
ENV PYTHONPATH=/ragflow/
COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template
COPY docker/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh

README.md
View File

@ -8,20 +8,26 @@
<a href="./README.md">English</a> |
<a href="./README_zh.md">简体中文</a> |
<a href="./README_ja.md">日本語</a> |
<a href="./README_ko.md">한국어</a>
<a href="./README_ko.md">한국어</a> |
<a href="./README_id.md">Bahasa Indonesia</a>
</p>
<p align="center">
<a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
<img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="follow on X(Twitter)">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.14.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.14.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
</p>
<h4 align="center">
@ -34,7 +40,7 @@
<details open>
<summary></b>📕 Table of Contents</b></summary>
- 💡 [What is RAGFlow?](#-what-is-ragflow)
- 🎮 [Demo](#-demo)
- 📌 [Latest Updates](#-latest-updates)
@ -42,8 +48,8 @@
- 🔎 [System Architecture](#-system-architecture)
- 🎬 [Get Started](#-get-started)
- 🔧 [Configurations](#-configurations)
- 🪛 [Build the docker image without embedding models](#-build-the-docker-image-without-embedding-models)
- 🪚 [Build the docker image including embedding models](#-build-the-docker-image-including-embedding-models)
- 🔧 [Build a docker image without embedding models](#-build-a-docker-image-without-embedding-models)
- 🔧 [Build a docker image including embedding models](#-build-a-docker-image-including-embedding-models)
- 🔨 [Launch service from source for development](#-launch-service-from-source-for-development)
- 📚 [Documentation](#-documentation)
- 📜 [Roadmap](#-roadmap)
@ -54,35 +60,41 @@
## 💡 What is RAGFlow?
[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.
[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document
understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models)
to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted
data.
## 🎮 Demo
Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
<img src="https://github.com/infiniflow/ragflow/assets/12318111/b083d173-dadc-4ea9-bdeb-180d7df514eb" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
</div>
## 🔥 Latest Updates
- 2024-09-29 Optimizes multi-round conversations.
- 2024-11-22 Adds more variables to Agent.
- 2024-11-01 Adds keyword extraction and related question generation to the parsed chunks to improve the accuracy of retrieval.
- 2024-09-13 Adds search mode for knowledge base Q&A.
- 2024-09-09 Adds a medical consultant agent template.
- 2024-08-22 Support text to SQL statements through RAG.
- 2024-08-02 Supports GraphRAG inspired by [graphrag](https://github.com/microsoft/graphrag) and mind map.
- 2024-07-23 Supports audio file parsing.
- 2024-07-08 Supports workflow based on [Graph](./agent/README.md).
- 2024-06-27 Supports Markdown and Docx in the Q&A parsing method, extracting images from Docx files, extracting tables from Markdown files.
- 2024-05-23 Supports [RAPTOR](https://arxiv.org/html/2401.18059v1) for better text retrieval.
## 🎉 Stay Tuned
⭐️ Star our repository to stay up-to-date with exciting new features and improvements! Get instant notifications for new
releases! 🌟
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
</div>
## 🌟 Key Features
### 🍭 **"Quality in, quality out"**
- [Deep document understanding](./deepdoc/README.md)-based knowledge extraction from unstructured data with complicated formats.
- [Deep document understanding](./deepdoc/README.md)-based knowledge extraction from unstructured data with complicated
formats.
- Finds "needle in a data haystack" of literally unlimited tokens.
### 🍱 **Template-based chunking**
@ -120,7 +132,8 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
- RAM >= 16 GB
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1
> If you have not installed Docker on your local machine (Windows, Mac, or Linux), see [Install Docker Engine](https://docs.docker.com/engine/install/).
> If you have not installed Docker on your local machine (Windows, Mac, or Linux),
see [Install Docker Engine](https://docs.docker.com/engine/install/).
### 🚀 Start up the server
@ -139,7 +152,8 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
> $ sudo sysctl -w vm.max_map_count=262144
> ```
>
> This change will be reset after a system reboot. To ensure your change remains permanent, add or update the `vm.max_map_count` value in **/etc/sysctl.conf** accordingly:
> This change will be reset after a system reboot. To ensure your change remains permanent, add or update the
`vm.max_map_count` value in **/etc/sysctl.conf** accordingly:
>
> ```bash
> vm.max_map_count=262144
@ -152,14 +166,28 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
```
3. Build the pre-built Docker images and start up the server:
> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_IMAGE` in **docker/.env** to the intended version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.12.0`, before running the following commands.
> The command below downloads the dev version Docker image for RAGFlow slim (`dev-slim`). Note that RAGFlow slim
Docker images do not include embedding models or Python libraries and hence are approximately 1GB in size.
```bash
$ cd ragflow/docker
$ docker compose up -d
$ docker compose -f docker-compose.yml up -d
```
> The core image is about 9 GB in size and may take a while to load.
> - To download a RAGFlow slim Docker image of a specific version, update the `RAGFLOW_IMAGE` variable in
**docker/.env** to your desired version. For example, `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0-slim`. After
making this change, rerun the command above to initiate the download.
> - To download the dev version of RAGFlow Docker image *including* embedding models and Python libraries, update the
`RAGFLOW_IMAGE` variable in **docker/.env** to `RAGFLOW_IMAGE=infiniflow/ragflow:dev`. After making this change,
rerun the command above to initiate the download.
> - To download a specific version of RAGFlow Docker image *including* embedding models and Python libraries, update
the `RAGFLOW_IMAGE` variable in **docker/.env** to your desired version. For example,
`RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0`. After making this change, rerun the command above to initiate the
download.
> **NOTE:** A RAGFlow Docker image that includes embedding models and Python libraries is approximately 9GB in size
and may take significantly longer time to load.
4. Check the server status after having the server up and running:
@ -170,23 +198,26 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
_The following output confirms a successful launch of the system:_
```bash
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```
> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network abnormal` error because, at that moment, your RAGFlow may not be fully initialized.
> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network abnormal`
error because, at that moment, your RAGFlow may not be fully initialized.
5. In your web browser, enter the IP address of your server and log in to RAGFlow.
> With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number) as the default HTTP serving port `80` can be omitted when using the default configurations.
6. In [service_conf.yaml](./docker/service_conf.yaml), select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.
> With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number) as the default
HTTP serving port `80` can be omitted when using the default configurations.
6. In [service_conf.yaml.template](./docker/service_conf.yaml.template), select the desired LLM factory in `user_default_llm` and update
the `API_KEY` field with the corresponding API key.
> See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.
@ -196,104 +227,126 @@ Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
When it comes to system configurations, you will need to manage the following files:
- [.env](./docker/.env): Keeps the fundamental setups for the system, such as `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and `MINIO_PASSWORD`.
- [service_conf.yaml](./docker/service_conf.yaml): Configures the back-end services.
- [.env](./docker/.env): Keeps the fundamental setups for the system, such as `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and
`MINIO_PASSWORD`.
- [service_conf.yaml.template](./docker/service_conf.yaml.template): Configures the back-end services. The environment variables in this file will be automatically populated when the Docker container starts. Any environment variables set within the Docker container will be available for use, allowing you to customize service behavior based on the deployment environment.
- [docker-compose.yml](./docker/docker-compose.yml): The system relies on [docker-compose.yml](./docker/docker-compose.yml) to start up.
You must ensure that changes to the [.env](./docker/.env) file are in line with what are in the [service_conf.yaml](./docker/service_conf.yaml) file.
> The [./docker/README](./docker/README.md) file provides a detailed description of the environment settings and service
> configurations which can be used as `${ENV_VARS}` in the [service_conf.yaml.template](./docker/service_conf.yaml.template) file.
> The [./docker/README](./docker/README.md) file provides a detailed description of the environment settings and service configurations, and you are REQUIRED to ensure that all environment settings listed in the [./docker/README](./docker/README.md) file are aligned with the corresponding configurations in the [service_conf.yaml](./docker/service_conf.yaml) file.
To update the default HTTP serving port (80), go to [docker-compose.yml](./docker/docker-compose.yml) and change `80:80` to `<YOUR_SERVING_PORT>:80`.
To update the default HTTP serving port (80), go to [docker-compose.yml](./docker/docker-compose.yml) and change `80:80`
to `<YOUR_SERVING_PORT>:80`.
Updates to the above configurations require a reboot of all containers to take effect:
> ```bash
> $ docker-compose -f docker/docker-compose.yml up -d
> $ docker compose -f docker/docker-compose.yml up -d
> ```
## 🪛 Build the Docker image without embedding models
### Switch doc engine from Elasticsearch to Infinity
RAGFlow uses Elasticsearch by default for storing full text and vectors. To switch to [Infinity](https://github.com/infiniflow/infinity/), follow these steps:
1. Stop all running containers:
```bash
$ docker compose -f docker/docker-compose.yml down -v
```
2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.
3. Start the containers:
```bash
$ docker compose -f docker/docker-compose.yml up -d
```
> [!WARNING]
> Switching to Infinity on a Linux/arm64 machine is not yet officially supported.
## 🔧 Build a Docker image without embedding models
This image is approximately 1 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🪚 Build the Docker image including embedding models
## 🔧 Build a Docker image including embedding models
This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.
This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
## 🔨 Launch service from source for development
1. Install Poetry, or skip this step if it is already installed:
1. Install Poetry, or skip this step if it is already installed:
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
2. Clone the source code and install Python dependencies:
2. Clone the source code and install Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
~/.local/bin/poetry install --sync --no-root --with=full # install RAGFlow dependent python modules
```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/.env** to `127.0.0.1`:
```
127.0.0.1 es01 mysql minio redis
127.0.0.1 es01 infinity mysql minio redis
```
In **docker/service_conf.yaml**, update mysql port to `5455` and es port to `1200`, as specified in **docker/.env**.
In **docker/service_conf.yaml.template**, update mysql port to `5455` and es port to `1200`, as specified in **docker/.env**.
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. Launch backend service:
5. Launch backend service:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
6. Install frontend dependencies:
6. Install frontend dependencies:
```bash
cd web
npm install --force
```
7. Configure frontend to update `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`:
8. Launch frontend service:
8. Launch frontend service:
```bash
npm run dev
```
_The following output confirms a successful launch of the system:_
_The following output confirms a successful launch of the system:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 Documentation
- [Quickstart](https://ragflow.io/docs/dev/)
- [User guide](https://ragflow.io/docs/dev/category/user-guides)
- [User guide](https://ragflow.io/docs/dev/category/guides)
- [References](https://ragflow.io/docs/dev/category/references)
- [FAQ](https://ragflow.io/docs/dev/faq)
@ -309,4 +362,5 @@ See the [RAGFlow Roadmap 2024](https://github.com/infiniflow/ragflow/issues/162)
## 🙌 Contributing
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./CONTRIBUTING.md) first.
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community.
If you would like to be a part, review our [Contribution Guidelines](./CONTRIBUTING.md) first.

README_id.md (new file)
View File

@ -0,0 +1,341 @@
<div align="center">
<a href="https://demo.ragflow.io/">
<img src="web/src/assets/logo-with-text.png" width="520" alt="ragflow logo">
</a>
</div>
<p align="center">
<a href="./README.md">English</a> |
<a href="./README_zh.md">简体中文</a> |
<a href="./README_ja.md">日本語</a> |
<a href="./README_ko.md">한국어</a> |
<a href="./README_id.md">Bahasa Indonesia</a>
</p>
<p align="center">
<a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
<img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="Follow on X (Twitter)">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Online Demo Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.14.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.14.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Rilis%20Terbaru" alt="Latest Release">
</a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/Lisensi-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="License">
</a>
</p>
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Documentation</a> |
<a href="https://github.com/infiniflow/ragflow/issues/162">Roadmap</a> |
<a href="https://twitter.com/infiniflowai">Twitter</a> |
<a href="https://discord.gg/4XxujFgUN7">Discord</a> |
<a href="https://demo.ragflow.io">Demo</a>
</h4>
<details open>
<summary><b>📕 Table of Contents</b></summary>
- 💡 [What Is RAGFlow?](#-apa-itu-ragflow)
- 🎮 [Demo](#-demo)
- 📌 [Latest Updates](#-pembaruan-terbaru)
- 🌟 [Key Features](#-fitur-utama)
- 🔎 [System Architecture](#-arsitektur-sistem)
- 🎬 [Get Started](#-mulai)
- 🔧 [Configurations](#-konfigurasi)
- 🔧 [Build a Docker Image without Embedding Models](#-membangun-image-docker-tanpa-model-embedding)
- 🔧 [Build a Docker Image with Embedding Models](#-membangun-image-docker-dengan-model-embedding)
- 🔨 [Launch the Service from Source for Development](#-meluncurkan-aplikasi-dari-sumber-untuk-pengembangan)
- 📚 [Documentation](#-dokumentasi)
- 📜 [Roadmap](#-peta-jalan)
- 🏄 [Community](#-komunitas)
- 🙌 [Contributing](#-kontribusi)
</details>
## 💡 What Is RAGFlow?
[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLMs (Large Language Models) to provide truthful question-answering capabilities backed by citations from complex structured data.
## 🎮 Demo
Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
</div>
## 🔥 Latest Updates
- 2024-11-22: Improved the definition and usage of variables in the Agent.
- 2024-11-01: Added keyword extraction and related-question generation to improve retrieval accuracy.
- 2024-09-13: Added a search mode for knowledge base Q&A.
- 2024-08-22: Support for text-to-SQL statements through RAG.
- 2024-08-02: GraphRAG support inspired by [graphrag](https://github.com/microsoft/graphrag), plus mind maps.
## 🎉 Stay Tuned
⭐️ Star our repository to stay up to date with exciting new features and improvements! 🌟
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
</div>
## 🌟 Key Features
### 🍭 **"Quality in, quality out"**
- Deep-document-understanding-based knowledge extraction from unstructured data with complicated formats.
- Finds the "needle in a data haystack" with virtually unlimited tokens.
### 🍱 **Template-based chunking**
- Intelligent and explainable.
- Plenty of template options to choose from.
### 🌱 **Grounded citations to reduce hallucinations**
- Visualization of text chunking allows human intervention.
- Quick view of key references and traceable citations to support fact-grounded answers.
### 🍔 **Compatibility with heterogeneous data sources**
- Supports Word, slides, Excel, txt, images, scanned copies, structured data, web pages, and more.
### 🛀 **Automated and effortless RAG workflow**
- Streamlined RAG orchestration for both small and large businesses.
- Configurable LLMs as well as embedding models.
- Multiple recall paired with fused re-ranking.
- Intuitive APIs for seamless integration with business.
## 🔎 System Architecture
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/d6ac5664-c237-4200-a7c2-a4a00691b485" width="1000"/>
</div>
## 🎬 Get Started
### 📝 Prerequisites
- CPU >= 4 cores
- RAM >= 16 GB
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1
### 🚀 Start up the server
1. Ensure `vm.max_map_count` >= 262144:
> To check the value of `vm.max_map_count`:
>
> ```bash
> $ sysctl vm.max_map_count
> ```
>
> If it is less than 262144, reset `vm.max_map_count` to a value of at least 262144:
>
> ```bash
> # In this case, we set it to 262144:
> $ sudo sysctl -w vm.max_map_count=262144
> ```
>
> This change will be reset after a system reboot. To make the change permanent, add or update the `vm.max_map_count` value in **/etc/sysctl.conf**:
>
> ```bash
> vm.max_map_count=262144
> ```
2. Clone the repo:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
```
3. Start the server using the pre-built Docker images:
> The command below downloads the dev version of the RAGFlow slim Docker image (`dev-slim`). A RAGFlow slim image does not include embedding models or Python libraries and is about 1 GB in size.
```bash
$ cd ragflow/docker
$ docker compose -f docker-compose.yml up -d
```
> - To download a specific version of the RAGFlow slim Docker image, update the `RAGFLOW_IMAGE` variable in **docker/.env** to the desired version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0-slim`. After making this change, rerun the command above to start the download.
> - To download the dev version of the RAGFlow Docker image *including* embedding models and Python libraries, update the `RAGFLOW_IMAGE` variable in **docker/.env** to `RAGFLOW_IMAGE=infiniflow/ragflow:dev`. After making this change, rerun the command above to start the download.
> - To download a specific version of the RAGFlow Docker image *including* embedding models and Python libraries, update the `RAGFLOW_IMAGE` variable in **docker/.env** to the desired version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0`. After making this change, rerun the command above to start the download.
> **NOTE:** A RAGFlow Docker image that includes embedding models and Python libraries is about 9 GB in size and may take much longer to load.
4. Check the server status after having the server up and running:
```bash
$ docker logs -f ragflow-server
```
_The following output confirms a successful launch of the system:_
```bash
     ____   ___    ______ ______ __
    / __ \ /   |  / ____// ____// /____  _      __
   / /_/ // /| | / / __ / /_   / // __ \| | /| / /
  / _, _// ___ |/ /_/ // __/  / // /_/ /| |/ |/ /
 /_/ |_|/_/  |_|\____//_/    /_/ \____/ |__/|__/

 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:9380
 * Running on http://x.x.x.x:9380
 INFO:werkzeug:Press CTRL+C to quit
```
> If you skip this confirmation step and log in to RAGFlow directly, your browser may show a `network anormal` error because RAGFlow may not be fully ready.
5. In your web browser, enter the IP address of your server and log in to RAGFlow.
> With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**without** the port number), as the default HTTP serving port `80` can be omitted when using the default configuration.
6. In [service_conf.yaml](./docker/service_conf.yaml), select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key, as sketched below.
> See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.
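For illustration, the relevant fragment might look like the sketch below. The factory name and key are placeholders, and the exact field casing (`API_KEY` vs. `api_key`) should be checked against your copy of the file:
```yaml
# docker/service_conf.yaml — user_default_llm sketch with placeholder values
user_default_llm:
  factory: 'OpenAI'          # the LLM factory you selected
  api_key: 'sk-xxxxxxxx'     # placeholder — replace with your real API key
```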
_The system is now ready to use!_
## 🔧 Configurations
When it comes to system configurations, you will need to manage the following files:
- [.env](./docker/.env): Keeps fundamental system settings, such as `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and `MINIO_PASSWORD`.
- [service_conf.yaml](./docker/service_conf.yaml): Configures the back-end services.
- [docker-compose.yml](./docker/docker-compose.yml): The system relies on [docker-compose.yml](./docker/docker-compose.yml) to start up.
You must ensure that changes to the [.env](./docker/.env) file are consistent with what is in the [service_conf.yaml](./docker/service_conf.yaml) file.
> The [./docker/README](./docker/README.md) file provides a detailed description of the environment settings and service configurations,
> and you are REQUIRED to ensure that all environment settings listed in
> [./docker/README](./docker/README.md) are aligned with the corresponding configurations in
> [service_conf.yaml](./docker/service_conf.yaml).
To update the default HTTP serving port (80), go to [docker-compose.yml](./docker/docker-compose.yml) and change `80:80`
to `<YOUR_SERVING_PORT>:80`, as sketched below.
This configuration update requires a reboot of all containers to take effect:
> ```bash
> $ docker compose -f docker/docker-compose.yml up -d
> ```
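Sketched against an assumed service layout (the service name and surrounding keys may differ in the actual compose file), the edit looks like this:
```yaml
# docker/docker-compose.yml — ports mapping sketch
services:
  ragflow:
    ports:
      - "8080:80"   # was "80:80"; 8080 stands in for <YOUR_SERVING_PORT>
```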
## 🔧 Build a Docker image without embedding models
This image is about 1 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🔧 Build a Docker image including embedding models
This image is about 9 GB in size. As it includes embedding models, it relies on external LLM services only.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
## 🔨 Launch the service from source for development
1. Install Poetry, or skip this step if it is already installed:
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
2. Clone the source code and install Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
~/.local/bin/poetry install --sync --no-root # install RAGFlow dependent python modules
```
3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 infinity mysql minio redis
```
In **docker/service_conf.yaml**, update the mysql port to `5455` and the es port to `1200`, as specified in **docker/.env**.
4. If you cannot access HuggingFace, set the `HF_ENDPOINT` environment variable to use a mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
5. Launch the backend service:
```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
6. Install frontend dependencies:
```bash
cd web
npm install --force
```
7. Configure the frontend by updating `proxy.target` in **.umirc.ts** to `http://127.0.0.1:9380`, as sketched below.
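A compact sketch of that proxy block — the `/v1` prefix is an assumption, mirroring the sketch in the English README:
```typescript
// web/.umirc.ts — sketch of the proxy section only
import { defineConfig } from 'umi';

export default defineConfig({
  proxy: {
    '/v1': {
      target: 'http://127.0.0.1:9380',
      changeOrigin: true,
    },
  },
});
```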
8. Launch the frontend service:
```bash
npm run dev
```
_The following output confirms a successful launch of the system:_
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)
## 📚 Documentation
- [Quickstart](https://ragflow.io/docs/dev/)
- [User guide](https://ragflow.io/docs/dev/category/guides)
- [References](https://ragflow.io/docs/dev/category/references)
- [FAQ](https://ragflow.io/docs/dev/faq)
## 📜 Roadmap
See the [RAGFlow Roadmap 2024](https://github.com/infiniflow/ragflow/issues/162)
## 🏄 Community
- [Discord](https://discord.gg/4XxujFgUN7)
- [Twitter](https://twitter.com/infiniflowai)
- [GitHub Discussions](https://github.com/orgs/infiniflow/discussions)
## 🙌 Contributing
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community.
If you would like to take part, review our [Contribution Guidelines](./CONTRIBUTING.md) first.

README_ja.md
<a href="./README.md">English</a> |
<a href="./README_zh.md">简体中文</a> |
<a href="./README_ja.md">日本語</a> |
<a href="./README_ko.md">한국어</a>
<a href="./README_ko.md">한국어</a> |
<a href="./README_id.md">Bahasa Indonesia</a>
</p>
<p align="center">
<a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
<img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="follow on X(Twitter)">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.14.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.14.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen"
alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
</p>
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/162">Roadmap</a> |
[...]
Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
</div>
## 🔥 Latest Updates
- 2024-11-22 Improved the definition and usage of variables in the Agent.
- 2024-11-01 Added keyword extraction and related-question generation to the parsed chunks to improve retrieval accuracy.
- 2024-09-13 Added a search mode for knowledge base Q&A.
- 2024-09-09 Added a medical consultation template to the Agent.
- 2024-08-22 Supports text to SQL statements through RAG.
- 2024-08-02 Supports GraphRAG, inspired by [graphrag](https://github.com/microsoft/graphrag), and mind maps.
- 2024-07-23 Supports parsing audio files.
- 2024-07-08 Supports workflows based on [Graph](./agent/README.md).
- 2024-06-27 Supports Markdown and Docx in the Q&A parsing method; supports extracting images from Docx files and tables from Markdown files.
- 2024-05-23 Supports [RAPTOR](https://arxiv.org/html/2401.18059v1) for better text retrieval.
## 🎉 Stay Tuned
⭐️ Star our repository to stay up to date with exciting new features and updates, and get instant notifications for every new release! 🌟
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
</div>
## 🌟 Key Features
[...]
3. Start the server using the pre-built Docker images:
> The command below downloads the dev version of the RAGFlow slim Docker image (`dev-slim`). A RAGFlow slim Docker image does not include embedding models or Python libraries and is about 1 GB in size.
```bash
$ cd ragflow/docker
$ docker compose -f docker-compose.yml up -d
```
> - To download a specific version of the RAGFlow slim Docker image, update the `RAGFLOW_IMAGE` variable in **docker/.env** to the desired version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0-slim`. After making this change, run the command above to start the download.
> - To download the dev version of the RAGFlow Docker image including embedding models and Python libraries, update the `RAGFLOW_IMAGE` variable in **docker/.env** to `RAGFLOW_IMAGE=infiniflow/ragflow:dev`. After making this change, rerun the command above to start the download.
> - To download a specific version of the RAGFlow Docker image including embedding models and Python libraries, update the `RAGFLOW_IMAGE` variable in **docker/.env** to the desired version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0`. After making this change, rerun the command above to start the download.
> **NOTE:** A RAGFlow Docker image that includes embedding models and Python libraries is about 9 GB in size and may take considerably longer to load.
4. Check the server status after the server is up and running:
[...]
> A system reboot is required for all system configuration updates to take effect:
>
> ```bash
> $ docker compose -f docker/docker-compose.yml up -d
> ```
### Switching the document engine from Elasticsearch to Infinity
RAGFlow uses Elasticsearch by default to store full text and vectors. To switch to [Infinity](https://github.com/infiniflow/infinity/), follow these steps:
1. Stop all running containers:
```bash
$ docker compose -f docker/docker-compose.yml down -v
```
2. Set `DOC_ENGINE` in **docker/.env** to `infinity` (see the sketch after this section).
3. Start the containers:
```bash
$ docker compose -f docker/docker-compose.yml up -d
```
> [!WARNING]
> Switching to Infinity on a Linux/arm64 machine is not yet officially supported.
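Concretely, step 2 above is a one-line change in **docker/.env**; the `elasticsearch` default is an assumption based on the text above:
```bash
# docker/.env — document engine selection
DOC_ENGINE=infinity   # set back to "elasticsearch" to revert
```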
## 🔧 Build a Docker image without embedding models
This Docker image is about 1 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🔧 Build a Docker image including embedding models
This Docker image is about 9 GB in size. As it already includes embedding models, it only depends on external LLM services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
[...]
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 infinity mysql minio redis
```
In **docker/service_conf.yaml**, update the mysql port to `5455` and the es port to `1200`, as specified in **docker/.env**.
[...]
## 📚 Documentation
- [Quickstart](https://ragflow.io/docs/dev/)
- [User guide](https://ragflow.io/docs/dev/category/guides)
- [References](https://ragflow.io/docs/dev/category/references)
- [FAQ](https://ragflow.io/docs/dev/faq)

README_ko.md
<a href="./README_zh.md">简体中文</a> |
<a href="./README_ja.md">日本語</a> |
<a href="./README_ko.md">한국어</a> |
<a href="./README_id.md">Bahasa Indonesia</a>
</p>
<p align="center">
<a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
<img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="follow on X(Twitter)">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.14.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.14.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
</p>
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/162">Roadmap</a> |
[...]
Try the demo at [https://demo.ragflow.io](https://demo.ragflow.io).
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
</div>
## 🔥 Latest Updates
- 2024-11-22 Improved the definition and usage of variables in the Agent.
- 2024-11-01 Added keyword extraction and related-question generation to parsed chunks to improve recall.
- 2024-09-13 Added a search mode for knowledge base Q&A.
- 2024-09-09 Added a medical consultation template to the Agent.
- 2024-08-22 Supports text to SQL statements through RAG.
- 2024-08-02: Supports GraphRAG, inspired by [graphrag](https://github.com/microsoft/graphrag), and mind maps.
- 2024-07-23: Supports audio file analysis.
- 2024-07-08: Supports workflows based on [Graph](./agent/README.md).
- 2024-06-27 Supports Markdown and Docx in the Q&A parsing method; supports extracting images from Docx files and tables from Markdown files.
- 2024-05-23: Supports [RAPTOR](https://arxiv.org/html/2401.18059v1) for better text retrieval.
## 🎉 Stay Tuned
⭐️ Star our repository to stay up to date with exciting new features and updates, and get instant notifications for every new release! 🌟
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
</div>
## 🌟 Key Features
[...]
3. Start the server using the pre-built Docker images:
> The command below downloads the dev version of the RAGFlow slim Docker image (`dev-slim`). A RAGFlow slim Docker image does not include embedding models or Python libraries and is about 1 GB in size.
```bash
$ cd ragflow/docker
$ docker compose -f docker-compose.yml up -d
```
> - To download a specific version of the RAGFlow slim Docker image, update the `RAGFLOW_IMAGE` variable in **docker/.env** to the desired version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0-slim`. After making this change, rerun the command above to start the download.
> - To download the dev version of the RAGFlow Docker image including embedding models and Python libraries, update the `RAGFLOW_IMAGE` variable in **docker/.env** to `RAGFLOW_IMAGE=infiniflow/ragflow:dev`. After making this change, rerun the command above to start the download.
> - To download a specific version of the RAGFlow Docker image including embedding models and Python libraries, update the `RAGFLOW_IMAGE` variable in **docker/.env** to the desired version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0`. After making this change, rerun the command above to start the download.
> **NOTE:** A RAGFlow Docker image that includes embedding models and Python libraries is about 9 GB in size and may take considerably longer to load.
4. Check the server status after the server has started:
[...]
```bash
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```
> If you skip the confirmation step and log in to RAGFlow directly, your browser may show a `network anormal` error because RAGFlow may not be fully initialized.
5. In your web browser, enter the IP address of your server and log in to RAGFlow.
> With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (without the port number), as the default HTTP serving port `80` can be omitted when using the default configuration.
[...]
> All system configuration updates require a system reboot to take effect:
>
> ```bash
> $ docker compose -f docker/docker-compose.yml up -d
> ```
### Switching the document engine from Elasticsearch to Infinity
RAGFlow uses Elasticsearch by default to store full text and vectors. To switch to [Infinity](https://github.com/infiniflow/infinity/), follow these steps:
1. Stop all running containers:
```bash
$ docker compose -f docker/docker-compose.yml down -v
```
2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.
3. Start the containers:
```bash
$ docker compose -f docker/docker-compose.yml up -d
```
> [!WARNING]
> Switching to Infinity on a Linux/arm64 machine is not yet officially supported.
## 🔧 Build a Docker image without embedding models
This Docker image is about 1 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🔧 Build a Docker image including embedding models
This Docker image is about 9 GB in size. As it already includes embedding models, it only depends on external LLM services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
[...]
Add the following line to `/etc/hosts` to resolve all hosts specified in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 infinity mysql minio redis
```
In **docker/service_conf.yaml**, update the mysql port to `5455` and the es port to `1200`, as specified in **docker/.env**.
[...]
## 📚 Documentation
- [Quickstart](https://ragflow.io/docs/dev/)
- [User guide](https://ragflow.io/docs/dev/category/guides)
- [References](https://ragflow.io/docs/dev/category/references)
- [FAQ](https://ragflow.io/docs/dev/faq)

README_zh.md
<a href="./README.md">English</a> |
<a href="./README_zh.md">简体中文</a> |
<a href="./README_ja.md">日本語</a> |
<a href="./README_ko.md">한국어</a>
<a href="./README_ko.md">한국어</a> |
<a href="./README_id.md">Bahasa Indonesia</a>
</p>
<p align="center">
<a href="https://x.com/intent/follow?screen_name=infiniflowai" target="_blank">
<img src="https://img.shields.io/twitter/follow/infiniflow?logo=X&color=%20%23f5f5f5" alt="follow on X(Twitter)">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99">
</a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.14.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.14.0">
</a>
<a href="https://github.com/infiniflow/ragflow/releases/latest">
<img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
</a>
<a href="https://demo.ragflow.io" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
<a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
<img src="https://img.shields.io/badge/docker_pull-ragflow:v0.12.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.12.0"></a>
<a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
</p>
<h4 align="center">
<a href="https://ragflow.io/docs/dev/">Document</a> |
<a href="https://github.com/infiniflow/ragflow/issues/162">Roadmap</a> |
[...]
Visit [https://demo.ragflow.io](https://demo.ragflow.io) to try the demo.
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
<img src="https://github.com/user-attachments/assets/504bbbf1-c9f7-4d83-8cc5-e9cb63c26db6" width="1200"/>
</div>
## 🔥 Latest Updates
- 2024-11-22 Improved the definition and usage of variables in the Agent.
- 2024-11-01 Added keyword extraction and related-question generation to parsed chunks to improve retrieval accuracy.
- 2024-09-13 Added a search mode for knowledge base Q&A.
- 2024-09-09 Added a medical consultation template to the Agent.
- 2024-08-22 Supports converting natural language to SQL statements via RAG.
- 2024-08-02 Supports GraphRAG, inspired by [graphrag](https://github.com/microsoft/graphrag), and mind maps.
- 2024-07-23 Supports parsing audio files.
- 2024-07-08 Supports Agentic RAG: workflows based on [Graph](./agent/README.md).
- 2024-06-27 The Q&A parsing method supports Markdown and Docx files, and supports extracting images from Docx files and tables from Markdown files.
- 2024-05-23 Implemented [RAPTOR](https://arxiv.org/html/2401.18059v1) for better text retrieval.
## 🎉 Stay Tuned
⭐️ Click Star in the upper-right corner to follow RAGFlow and get real-time notifications of new releases! 🌟
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/user-attachments/assets/18c9707e-b8aa-4caf-a154-037089c105ba" width="1200"/>
</div>
## 🌟 Key Features
[...]
3. Enter the **docker** folder and start the server using the pre-built Docker images:
> Running the following command automatically downloads the dev version of the RAGFlow slim Docker image (`dev-slim`), which does not include embedding models or some Python libraries and is therefore about 1 GB in size.
```bash
$ cd ragflow/docker
$ docker compose -f docker-compose.yml up -d
```
> - To download and run a specific version of the RAGFlow slim Docker image, find the `RAGFLOW_IMAGE` variable in the **docker/.env** file and change it to the corresponding version, for example `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0-slim`, then run the command above.
> - To install a dev version of the Docker image with built-in embedding models and Python libraries, change the `RAGFLOW_IMAGE` variable in **docker/.env** to `RAGFLOW_IMAGE=infiniflow/ragflow:dev`.
> - To install a specific version of the RAGFlow Docker image with built-in embedding models and Python libraries, change the `RAGFLOW_IMAGE` variable in **docker/.env** to `RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0`, then rerun the command above.
> **NOTE:** A RAGFlow Docker image with built-in embedding models and Python libraries is about 9 GB in size and may take considerably longer to download; please be patient.
4. Confirm the server status once it has started:
```bash
[...]
* Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```
> If you skip this confirmation step and log in to RAGFlow directly, your browser may prompt a `network anormal` or `网络异常` error, because RAGFlow may not have fully started.
5. Enter the IP address of your server in your browser and log in to RAGFlow.
> In the example above, you only need to enter http://IP_OF_YOUR_MACHINE; if the configuration is unchanged, the port can be omitted (the default HTTP serving port is 80).
[...]
> ```bash
> $ docker compose -f docker-compose.yml up -d
> ```
### Switching the document engine from Elasticsearch to Infinity
RAGFlow uses Elasticsearch by default to store text and vector data. To switch to [Infinity](https://github.com/infiniflow/infinity/), follow these steps:
1. Stop all running containers:
```bash
$ docker compose -f docker/docker-compose.yml down -v
```
2. Set `DOC_ENGINE` in **docker/.env** to `infinity`.
3. Start the containers:
```bash
$ docker compose -f docker/docker-compose.yml up -d
```
> [!WARNING]
> Running Infinity on Linux/arm64 machines is not yet officially supported.
## 🔧 Build a Docker image without embedding models
This Docker image is about 1 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .
```
## 🔧 Build a Docker image including embedding models
This Docker image is about 9 GB in size. As it already includes embedding models, it only needs to rely on external LLM services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .
```
[...]
Add the following line to `/etc/hosts` to resolve all host addresses in **docker/service_conf.yaml** to `127.0.0.1`:
```
127.0.0.1 es01 infinity mysql minio redis
```
In **docker/service_conf.yaml**, update the mysql port to `5455` and the es port to `1200`, as configured in **docker/.env**.
[...]
## 📚 Documentation
- [Quickstart](https://ragflow.io/docs/dev/)
- [User guide](https://ragflow.io/docs/dev/category/guides)
- [References](https://ragflow.io/docs/dev/category/references)
- [FAQ](https://ragflow.io/docs/dev/faq)

agent/canvas.py
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import json
from abc import ABC
from copy import deepcopy
from functools import partial
import pandas as pd
from agent.component import component_class
from agent.component.base import ComponentBase
class Canvas(ABC):
"""
[...]
self.components[k]["obj"].reset()
self._embed_id = ""
def get_compnent_name(self, cid):
for n in self.dsl["graph"]["nodes"]:
if cid == n["id"]: return n["data"]["name"]
return ""
def run(self, **kwargs):
ans = ""
if self.answer:
cpn_id = self.answer[0]
self.answer.pop(0)
[...]
ans = ComponentBase.be_output(str(e))
self.path[-1].append(cpn_id)
if kwargs.get("stream"):
for an in ans():
yield an
else: yield ans
return
if not self.path:
self.components["begin"]["obj"].run(self.history, **kwargs)
[...]
self.path.append([])
ran = -1
waiting = []
without_dependent_checking = []
def prepare2run(cpns):
nonlocal ran, ans
[...]
if cpn.component_name == "Answer":
self.answer.append(c)
else:
if DEBUG: print("RUN: ", c)
if cpn.component_name == "Generate":
logging.debug(f"Canvas.prepare2run: {c}")
if c not in without_dependent_checking:
cpids = cpn.get_dependent_components()
if any([cc not in self.path[-1] for cc in cpids]):
if c not in waiting: waiting.append(c)
continue
yield "*'{}'* is running...🕞".format(self.get_compnent_name(c))
ans = cpn.run(self.history, **kwargs)
self.path[-1].append(c)
ran += 1
for m in prepare2run(self.components[self.path[-2][-1]]["downstream"]):
yield {"content": m, "running_status": True}
while 0 <= ran < len(self.path[-1]):
logging.debug(f"Canvas.run: {ran} {self.path}")
cpn_id = self.path[-1][ran]
cpn = self.get_component(cpn_id)
if not cpn["downstream"]: break
[...]
assert switch_out in self.components, \
"{}'s output: {} not valid.".format(cpn_id, switch_out)
try:
for m in prepare2run([switch_out]):
yield {"content": m, "running_status": True}
except Exception as e:
yield {"content": "*Exception*: {}".format(e), "running_status": True}
logging.exception("Canvas.run got exception")
continue
try:
prepare2run(cpn["downstream"])
for m in prepare2run(cpn["downstream"]):
yield {"content": m, "running_status": True}
except Exception as e:
yield {"content": "*Exception*: {}".format(e), "running_status": True}
logging.exception("Canvas.run got exception")
if ran >= len(self.path[-1]) and waiting:
without_dependent_checking = waiting
waiting = []
for m in prepare2run(without_dependent_checking):
yield {"content": m, "running_status": True}
ran -= 1
if self.answer:
cpn_id = self.answer[0]
[...]
self.path[-1].append(cpn_id)
if kwargs.get("stream"):
for an in ans():
yield an
else:
yield ans
self.history.append(("assistant", ans.to_dict("records")))
return ans
else:
raise Exception("The dialog flow has no way to interact with you. Please add an 'Interact' component to the end of the flow.")
def get_component(self, cpn_id):
return self.components[cpn_id]
[...]
def get_history(self, window_size):
convs = []
for role, obj in self.history[window_size * -1:]:
if isinstance(obj, list) and obj and all([isinstance(o, dict) for o in obj]):
convs.append({"role": role, "content": '\n'.join([str(s.get("content", "")) for s in obj])})
else:
convs.append({"role": role, "content": str(obj)})
return convs
def add_user_input(self, question):

agent/component/__init__.py
from .wencai import WenCai, WenCaiParam
from .jin10 import Jin10, Jin10Param
from .tushare import TuShare, TuShareParam
from .akshare import AkShare, AkShareParam
from .crawler import Crawler, CrawlerParam
from .invoke import Invoke, InvokeParam
from .template import Template, TemplateParam
def component_class(class_name):

agent/component/arxiv.py
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import arxiv
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
class ArXivParam(ComponentParamBase):
"""
Define the ArXiv component parameters.
[...]
return ArXiv.be_output("")
df = pd.DataFrame(arxiv_res)
logging.debug(f"df: {str(df)}")
return df

agent/component/baidu.py
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
from functools import partial
import pandas as pd
import requests
import re
from agent.component.base import ComponentBase, ComponentParamBase
[...]
return Baidu.be_output("")
df = pd.DataFrame(baidu_res)
logging.debug(f"df: {str(df)}")
return df

agent/component/baidu_fanyi.py
self.domain = 'finance'
def check(self):
self.check_positive_integer(self.top_n, "Top N")
self.check_empty(self.appid, "BaiduFanyi APPID")
self.check_empty(self.secret_key, "BaiduFanyi Secret Key")
self.check_valid_value(self.trans_type, "Translate type", ['translate', 'fieldtranslate'])

agent/component/base.py
from abc import ABC
import builtins
import json
import os
import logging
from functools import partial
from typing import Tuple, Union
import pandas as pd
from agent import settings
_FEEDED_DEPRECATED_PARAMS = "_feeded_deprecated_params"
_DEPRECATED_PARAMS = "_deprecated_params"
[...]
def __init__(self):
self.output_var_name = "output"
self.message_history_window_size = 22
self.query = []
self.inputs = []
def set_name(self, name: str):
self._name = name
[...]
return {name: True for name in self.get_feeded_deprecated_params()}
def __str__(self):
return json.dumps(self.as_dict(), ensure_ascii=False)
def as_dict(self):
[...]
def _warn_deprecated_param(self, param_name, descr):
if self._deprecated_params_set.get(param_name):
logging.warning(
f"{descr} {param_name} is deprecated and ignored in this version."
)
def _warn_to_deprecate_param(self, param_name, descr, new_param):
if self._deprecated_params_set.get(param_name):
logging.warning(
f"{descr} {param_name} will be deprecated in future release; "
f"please use {new_param} instead."
)
[...]
"""
return """{{
"component_name": "{}",
"params": {}
"params": {},
"output": {},
"inputs": {}
}}""".format(self.component_name,
self._param,
json.dumps(json.loads(str(self._param)).get("output", {}), ensure_ascii=False),
json.dumps(json.loads(str(self._param)).get("inputs", []), ensure_ascii=False)
)
def __init__(self, canvas, id, param: ComponentParamBase):
self._canvas = canvas
[...]
self._param = param
self._param.check()
def get_dependent_components(self):
cpnts = set([para["component_id"].split("@")[0] for para in self._param.query \
if para.get("component_id") \
and para["component_id"].lower().find("answer") < 0 \
and para["component_id"].lower().find("begin") < 0])
return list(cpnts)
def run(self, history, **kwargs):
logging.debug("{}, history: {}, kwargs: {}".format(self, json.dumps(history, ensure_ascii=False),
json.dumps(kwargs, ensure_ascii=False)))
try:
res = self._run(history, **kwargs)
[...]
def reset(self):
setattr(self._param, self._param.output_var_name, None)
self._param.inputs = []
def set_output(self, v: partial | pd.DataFrame):
setattr(self._param, self._param.output_var_name, v)
def get_input(self):
reversed_cpnts = []
if len(self._canvas.path) > 1:
reversed_cpnts.extend(self._canvas.path[-2])
reversed_cpnts.extend(self._canvas.path[-1])
if self._param.query:
self._param.inputs = []
outs = []
for q in self._param.query:
if q["component_id"]:
if q["component_id"].split("@")[0].lower().find("begin") >= 0:
cpn_id, key = q["component_id"].split("@")
for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
if p["key"] == key:
outs.append(pd.DataFrame([{"content": p.get("value", "")}]))
self._param.inputs.append({"component_id": q["component_id"],
"content": p.get("value", "")})
break
else:
assert False, f"Can't find parameter '{key}' for {cpn_id}"
continue
outs.append(self._canvas.get_component(q["component_id"])["obj"].output(allow_partial=False)[1])
self._param.inputs.append({"component_id": q["component_id"],
"content": "\n".join(
[str(d["content"]) for d in outs[-1].to_dict('records')])})
elif q["value"]:
self._param.inputs.append({"component_id": None, "content": q["value"]})
outs.append(pd.DataFrame([{"content": q["value"]}]))
if outs:
df = pd.concat(outs, ignore_index=True)
if "content" in df: df = df.drop_duplicates(subset=['content']).reset_index(drop=True)
return df
upstream_outs = []
for u in reversed_cpnts[::-1]:
if self.get_component_name(u) in ["switch", "concentrator"]: continue
if self.component_name.lower() == "generate" and self.get_component_name(u) == "retrieval":
o = self._canvas.get_component(u)["obj"].output(allow_partial=False)[1]
if o is not None:
o["component_id"] = u
upstream_outs.append(o)
continue
if u not in self._canvas.get_component(self._id)["upstream"]: continue
#if self.component_name.lower()!="answer" and u not in self._canvas.get_component(self._id)["upstream"]: continue
if self.component_name.lower().find("switch") < 0 \
and self.get_component_name(u) in ["relevant", "categorize"]:
continue
if u.lower().find("answer") >= 0:
for r, c in self._canvas.history[::-1]:
if r == "user":
upstream_outs.append(pd.DataFrame([{"content": c}]))
upstream_outs.append(pd.DataFrame([{"content": c, "component_id": u}]))
break
break
if self.component_name.lower().find("answer") >= 0 and self.get_component_name(u) in ["relevant"]:
continue
o = self._canvas.get_component(u)["obj"].output(allow_partial=False)[1]
if o is not None:
o["component_id"] = u
upstream_outs.append(o)
break
assert upstream_outs, "Can't inference the where the component input is. Please identify whose output is this component's input."
df = pd.concat(upstream_outs, ignore_index=True)
if "content" in df:
df = df.drop_duplicates(subset=['content']).reset_index(drop=True)
self._param.inputs = []
for _, r in df.iterrows():
self._param.inputs.append({"component_id": r["component_id"], "content": r["content"]})
return df
def get_stream_input(self):
reversed_cpnts = []

agent/component/bing.py
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import requests
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
class BingParam(ComponentParamBase):
"""
Define the Bing component parameters.
[...]
return Bing.be_output("")
df = pd.DataFrame(bing_res)
logging.debug(f"df: {str(df)}")
return df

agent/component/categorize.py
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from agent.component import GenerateParam, Generate
class CategorizeParam(GenerateParam):
[...]
super().check()
self.check_empty(self.category_description, "[Categorize] Category examples")
for k, v in self.category_description.items():
if not k: raise ValueError(f"[Categorize] Category name can not be empty!")
if not k: raise ValueError("[Categorize] Category name can not be empty!")
if not v.get("to"): raise ValueError(f"[Categorize] 'To' of category {k} can not be empty!")
def get_prompt(self):
[...]
def _run(self, history, **kwargs):
input = self.get_input()
input = "Question: " + ("; ".join(input["content"]) if "content" in input else "") + "Category: "
input = "Question: " + (list(input["content"])[-1] if "content" in input else "") + "\tCategory: "
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": input}],
self._param.gen_conf())
logging.debug(f"input: {input}, answer: {str(ans)}")
for c in self._param.category_description.keys():
if ans.lower().find(c.lower()) >= 0:
return Categorize.be_output(self._param.category_description[c]["to"])

agent/component/cite.py (file deleted)
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import pandas as pd
from api.db import LLMType
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api.settings import retrievaler
from agent.component.base import ComponentBase, ComponentParamBase
class CiteParam(ComponentParamBase):
"""
Define the Retrieval component parameters.
"""
def __init__(self):
super().__init__()
self.cite_sources = []
def check(self):
self.check_empty(self.cite_source, "Please specify where you want to cite from.")
class Cite(ComponentBase, ABC):
component_name = "Cite"
def _run(self, history, **kwargs):
input = "\n- ".join(self.get_input()["content"])
sources = [self._canvas.get_component(cpn_id).output()[1] for cpn_id in self._param.cite_source]
query = []
for role, cnt in history[::-1][:self._param.message_history_window_size]:
if role != "user":continue
query.append(cnt)
query = "\n".join(query)
kbs = KnowledgebaseService.get_by_ids(self._param.kb_ids)
if not kbs:
raise ValueError("Can't find knowledgebases by {}".format(self._param.kb_ids))
embd_nms = list(set([kb.embd_id for kb in kbs]))
assert len(embd_nms) == 1, "Knowledge bases use different embedding models."
embd_mdl = LLMBundle(kbs[0].tenant_id, LLMType.EMBEDDING, embd_nms[0])
rerank_mdl = None
if self._param.rerank_id:
rerank_mdl = LLMBundle(kbs[0].tenant_id, LLMType.RERANK, self._param.rerank_id)
kbinfos = retrievaler.retrieval(query, embd_mdl, kbs[0].tenant_id, self._param.kb_ids,
1, self._param.top_n,
self._param.similarity_threshold, 1 - self._param.keywords_similarity_weight,
aggs=False, rerank_mdl=rerank_mdl)
if not kbinfos["chunks"]: return pd.DataFrame()
df = pd.DataFrame(kbinfos["chunks"])
df["content"] = df["content_with_weight"]
del df["content_with_weight"]
return df

agent/component/crawler.py (new file)
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import ABC
import asyncio
from crawl4ai import AsyncWebCrawler
from agent.component.base import ComponentBase, ComponentParamBase
class CrawlerParam(ComponentParamBase):
"""
Define the Crawler component parameters.
"""
def __init__(self):
super().__init__()
self.proxy = None
self.extract_type = "markdown"
def check(self):
self.check_valid_value(self.extract_type, "Type of content from the crawler", ['html', 'markdown', 'content'])
class Crawler(ComponentBase, ABC):
component_name = "Crawler"
def _run(self, history, **kwargs):
ans = self.get_input()
ans = " - ".join(ans["content"]) if "content" in ans else ""
if not ans:
return Crawler.be_output("")
try:
result = asyncio.run(self.get_web(ans))
return Crawler.be_output(result)
except Exception as e:
return Crawler.be_output(f"An unexpected error occurred: {str(e)}")
async def get_web(self, url):
proxy = self._param.proxy if self._param.proxy else None
async with AsyncWebCrawler(verbose=True, proxy=proxy) as crawler:
result = await crawler.arun(
url=url,
bypass_cache=True
)
if self._param.extract_type == 'html':
return result.cleaned_html
elif self._param.extract_type == 'markdown':
return result.markdown
elif self._param.extract_type == 'content':
return result.extracted_content
return result.markdown

agent/component/duckduckgo.py
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
from duckduckgo_search import DDGS
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
[...]
return DuckDuckGo.be_output("")
df = pd.DataFrame(duck_res)
logging.debug(f"df: {df}")
return df

agent/component/exesql.py
from abc import ABC
import re
import pandas as pd
import pymysql
import psycopg2
from agent.component.base import ComponentBase, ComponentParamBase
[...]
self.check_positive_integer(self.port, "IP Port")
self.check_empty(self.password, "Database password")
self.check_positive_integer(self.top_n, "Number of records")
if self.database == "rag_flow":
if self.host == "ragflow-mysql": raise ValueError("The host is not accessible.")
if self.password == "infini_rag_flow": raise ValueError("The host is not accessible.")
class ExeSQL(ComponentBase, ABC):
[...]
raise Exception("SQL statement not found!")
if self._param.db_type in ["mysql", "mariadb"]:
db = pymysql.connect(db=self._param.database, user=self._param.username, host=self._param.host,
port=self._param.port, password=self._param.password)
elif self._param.db_type == 'postgresql':
db = psycopg2.connect(dbname=self._param.database, user=self._param.username, host=self._param.host,
port=self._param.port, password=self._param.password)
try:
cursor = db.cursor()
except Exception as e:
raise Exception("Database Connection Failed! \n" + str(e))
sql_res = []
[...]
if not single_sql:
continue
try:
cursor.execute(single_sql)
if cursor.rowcount == 0:
sql_res.append({"content": "\nTotal: 0\n No record in the database!"})
continue
single_res = pd.DataFrame([i for i in cursor.fetchmany(size=self._param.top_n)])
single_res.columns = [i[0] for i in cursor.description]
sql_res.append({"content": "\nTotal: " + str(cursor.rowcount) + "\n" + single_res.to_markdown()})
except Exception as e:
sql_res.append({"content": "**Error**:" + str(e) + "\nError SQL Statement:" + single_sql})
pass

agent/component/generate.py
import re
from functools import partial
import pandas as pd
from api.db import LLMType
from api.db.services.dialog_service import message_fit_in
from api.db.services.llm_service import LLMBundle
from api import settings
from agent.component.base import ComponentBase, ComponentParamBase
[...]
component_name = "Generate"
def get_dependent_components(self):
cpnts = [para["component_id"] for para in self._param.parameters]
return cpnts
cpnts = set([para["component_id"].split("@")[0] for para in self._param.parameters \
if para.get("component_id") \
and para["component_id"].lower().find("answer") < 0 \
and para["component_id"].lower().find("begin") < 0])
return list(cpnts)
def set_cite(self, retrieval_res, answer):
retrieval_res = retrieval_res.dropna(subset=["vector", "content_ltks"]).reset_index(drop=True)
if "empty_response" in retrieval_res.columns:
retrieval_res["empty_response"].fillna("", inplace=True)
answer, idx = settings.retrievaler.insert_citations(answer,
[ck["content_ltks"] for _, ck in retrieval_res.iterrows()],
[ck["vector"] for _, ck in retrieval_res.iterrows()],
LLMBundle(self._canvas.get_tenant_id(), LLMType.EMBEDDING,
self._canvas.get_embedding_model()), tkweight=0.7,
vtweight=0.3)
doc_ids = set([])
recall_docs = []
for i in idx:
[...]
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
prompt = self._param.prompt
retrieval_res = []
self._param.inputs = []
for para in self._param.parameters:
cpn = self._canvas.get_component(para["component_id"])["obj"]
if not para.get("component_id"): continue
component_id = para["component_id"].split("@")[0]
if para["component_id"].lower().find("@") >= 0:
cpn_id, key = para["component_id"].split("@")
for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
if p["key"] == key:
kwargs[para["key"]] = p.get("value", "")
self._param.inputs.append(
{"component_id": para["component_id"], "content": kwargs[para["key"]]})
break
else:
assert False, f"Can't find parameter '{key}' for {cpn_id}"
continue
cpn = self._canvas.get_component(component_id)["obj"]
if cpn.component_name.lower() == "answer":
hist = self._canvas.get_history(1)
if hist:
hist = hist[0]["content"]
else:
hist = ""
kwargs[para["key"]] = hist
continue
_, out = cpn.output(allow_partial=False)
if "content" not in out.columns:
kwargs[para["key"]] = "Nothing"
kwargs[para["key"]] = ""
else:
kwargs[para["key"]] = " - " + "\n - ".join(out["content"])
if cpn.component_name.lower() == "retrieval":
retrieval_res.append(out)
kwargs[para["key"]] = " - "+"\n - ".join([o if isinstance(o, str) else str(o) for o in out["content"]])
self._param.inputs.append({"component_id": para["component_id"], "content": kwargs[para["key"]]})
if retrieval_res:
retrieval_res = pd.concat(retrieval_res, ignore_index=True)
else: retrieval_res = pd.DataFrame([])
kwargs["input"] = input
for n, v in kwargs.items():
prompt = re.sub(r"\{%s\}" % n, re.escape(str(v)), prompt)
prompt = re.sub(r"\{%s\}" % re.escape(n), str(v).replace("\\", " "), prompt)
if not self._param.inputs and prompt.find("{input}") >= 0:
retrieval_res = self.get_input()
input = (" - " + "\n - ".join(
[c for c in retrieval_res["content"] if isinstance(c, str)])) if "content" in retrieval_res else ""
prompt = re.sub(r"\{input\}", re.escape(input), prompt)
downstreams = self._canvas.get_component(self._id)["downstream"]
if kwargs.get("stream") and len(downstreams) == 1 and self._canvas.get_component(downstreams[0])[
@ -124,8 +163,12 @@ class Generate(ComponentBase):
retrieval_res["empty_response"]) else "Nothing found in knowledgebase!", "reference": []}
return pd.DataFrame([res])
ans = chat_mdl.chat(prompt, self._canvas.get_history(self._param.message_history_window_size),
self._param.gen_conf())
msg = self._canvas.get_history(self._param.message_history_window_size)
if len(msg) < 1: msg.append({"role": "user", "content": ""})
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(chat_mdl.max_length * 0.97))
if len(msg) < 2: msg.append({"role": "user", "content": ""})
ans = chat_mdl.chat(msg[0]["content"], msg[1:], self._param.gen_conf())
if self._param.cite and "content_ltks" in retrieval_res.columns and "vector" in retrieval_res.columns:
res = self.set_cite(retrieval_res, ans)
return pd.DataFrame([res])
@ -141,9 +184,12 @@ class Generate(ComponentBase):
self.set_output(res)
return
msg = self._canvas.get_history(self._param.message_history_window_size)
if len(msg) < 1: msg.append({"role": "user", "content": ""})
_, msg = message_fit_in([{"role": "system", "content": prompt}, *msg], int(chat_mdl.max_length * 0.97))
if len(msg) < 2: msg.append({"role": "user", "content": ""})
answer = ""
for ans in chat_mdl.chat_streamly(prompt, self._canvas.get_history(self._param.message_history_window_size),
self._param.gen_conf()):
for ans in chat_mdl.chat_streamly(msg[0]["content"], msg[1:], self._param.gen_conf()):
res = {"content": ans, "reference": []}
answer = ans
yield res
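
The two substitution fixes in this file are subtle: the old code escaped the *value* (`re.escape(str(v))`), which leaked literal backslashes into the prompt, while the placeholder name went into the pattern unescaped. The new code escapes the name and strips backslashes from the value, because `re.sub` interprets backslash sequences in the replacement string. A small illustration of the failure mode (values here are made up):

```python
import re

prompt = "Summarize this path: {input}"
value = r"C:\docs\notes"

# Naive replacement: re.sub treats "\d", "\n", ... in the value as escape
# sequences, so it either mangles the text or raises re.error("bad escape").
# re.sub(r"\{input\}", value, prompt)

# The pattern adopted above: escape the placeholder name, neutralize
# backslashes in the value before substituting.
safe = re.sub(r"\{%s\}" % re.escape("input"), value.replace("\\", " "), prompt)
print(safe)  # Summarize this path: C: docs notes
```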

View File

@ -13,10 +13,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import pandas as pd
import requests
from agent.settings import DEBUG
from agent.component.base import ComponentBase, ComponentParamBase
@ -57,5 +57,5 @@ class GitHub(ComponentBase, ABC):
return GitHub.be_output("")
df = pd.DataFrame(github_res)
if DEBUG: print(df, ":::::::::::::::::::::::::::::::::")
logging.debug(f"df: {df}")
return df
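
This same change — `if DEBUG: print(...)` replaced by `logging.debug(...)` — repeats across the Google, GoogleScholar, PubMed, Wikipedia and KeywordExtract components below. The practical difference, in a standalone sketch:

```python
import logging
import pandas as pd

# Stdlib logging is configured once, centrally; per-module DEBUG flags and
# bare print() calls are gone. Use level=logging.WARNING in production.
logging.basicConfig(level=logging.DEBUG)

df = pd.DataFrame([{"content": "example row"}])
logging.debug(f"df: {df}")  # emitted only when the DEBUG level is enabled
```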

View File

@ -13,10 +13,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
from serpapi import GoogleSearch
import pandas as pd
from agent.settings import DEBUG
from agent.component.base import ComponentBase, ComponentParamBase
@ -85,12 +85,12 @@ class Google(ComponentBase, ABC):
"hl": self._param.language, "num": self._param.top_n})
google_res = [{"content": '<a href="' + i["link"] + '">' + i["title"] + '</a> ' + i["snippet"]} for i in
client.get_dict()["organic_results"]]
except Exception as e:
except Exception:
return Google.be_output("**ERROR**: Existing Unavailable Parameters!")
if not google_res:
return Google.be_output("")
df = pd.DataFrame(google_res)
if DEBUG: print(df, ":::::::::::::::::::::::::::::::::")
logging.debug(f"df: {df}")
return df

View File

@ -13,9 +13,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import pandas as pd
from agent.settings import DEBUG
from agent.component.base import ComponentBase, ComponentParamBase
from scholarly import scholarly
@ -58,13 +58,13 @@ class GoogleScholar(ComponentBase, ABC):
'pub_url'] + '"></a> ' + "\n author: " + ",".join(pub['bib']['author']) + '\n Abstract: ' + pub[
'bib'].get('abstract', 'no abstract')})
except StopIteration or Exception as e:
print("**ERROR** " + str(e))
except (StopIteration, Exception):  # "or" here would catch only StopIteration
logging.exception("GoogleScholar")
break
if not scholar_res:
return GoogleScholar.be_output("")
df = pd.DataFrame(scholar_res)
if DEBUG: print(df, ":::::::::::::::::::::::::::::::::")
logging.debug(f"df: {df}")
return df

agent/component/invoke.py (new file, +106 lines)
View File

@ -0,0 +1,106 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import re
from abc import ABC
import requests
from deepdoc.parser import HtmlParser
from agent.component.base import ComponentBase, ComponentParamBase
class InvokeParam(ComponentParamBase):
"""
Define the Invoke component parameters.
"""
def __init__(self):
super().__init__()
self.proxy = None
self.headers = ""
self.method = "get"
self.variables = []
self.url = ""
self.timeout = 60
self.clean_html = False
def check(self):
self.check_valid_value(self.method.lower(), "HTTP method", ['get', 'post', 'put'])
self.check_empty(self.url, "End point URL")
self.check_positive_integer(self.timeout, "Timeout in seconds")
self.check_boolean(self.clean_html, "Clean HTML")
class Invoke(ComponentBase, ABC):
component_name = "Invoke"
def _run(self, history, **kwargs):
args = {}
for para in self._param.variables:
if para.get("component_id"):
cpn = self._canvas.get_component(para["component_id"])["obj"]
if cpn.component_name.lower() == "answer":
args[para["key"]] = self._canvas.get_history(1)[0]["content"]
continue
_, out = cpn.output(allow_partial=False)
args[para["key"]] = "\n".join(out["content"])
else:
args[para["key"]] = "\n".join(para["value"])
url = self._param.url.strip()
if url.find("http") != 0:
url = "http://" + url
method = self._param.method.lower()
headers = {}
if self._param.headers:
headers = json.loads(self._param.headers)
proxies = None
if self._param.proxy and re.sub(r"https?:?/?/?", "", self._param.proxy):
proxies = {"http": self._param.proxy, "https": self._param.proxy}
if method == 'get':
response = requests.get(url=url,
params=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html:
sections = HtmlParser()(None, response.content)
return Invoke.be_output("\n".join(sections))
return Invoke.be_output(response.text)
if method == 'put':
response = requests.put(url=url,
data=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html:
sections = HtmlParser()(None, response.content)
return Invoke.be_output("\n".join(sections))
return Invoke.be_output(response.text)
if method == 'post':
response = requests.post(url=url,
json=args,
headers=headers,
proxies=proxies,
timeout=self._param.timeout)
if self._param.clean_html:
sections = HtmlParser()(None, response.content)
return Invoke.be_output("\n".join(sections))
return Invoke.be_output(response.text)
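
Taken together, the new component maps its `variables` onto a single HTTP call, with one idiom per verb: GET sends them as query parameters, PUT as form data, POST as a JSON body. A condensed, runnable sketch of that dispatch (URL and payload are invented):

```python
import requests

url, method, args = "http://httpbin.org/anything", "post", {"q": "ragflow"}
headers, proxies, timeout = {}, None, 60

if method == "get":
    response = requests.get(url=url, params=args, headers=headers,
                            proxies=proxies, timeout=timeout)
elif method == "put":
    response = requests.put(url=url, data=args, headers=headers,
                            proxies=proxies, timeout=timeout)
else:  # "post"
    response = requests.post(url=url, json=args, headers=headers,
                             proxies=proxies, timeout=timeout)
print(response.status_code, response.text[:120])
```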

View File

@ -13,12 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import re
from abc import ABC
from api.db import LLMType
from api.db.services.llm_service import LLMBundle
from agent.component import GenerateParam, Generate
from agent.settings import DEBUG
class KeywordExtractParam(GenerateParam):
@ -50,16 +50,13 @@ class KeywordExtract(Generate, ABC):
component_name = "KeywordExtract"
def _run(self, history, **kwargs):
q = ""
for r, c in self._canvas.history[::-1]:
if r == "user":
q += c
break
query = self.get_input()
query = str(query["content"][0]) if "content" in query else ""
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": q}],
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": query}],
self._param.gen_conf())
ans = re.sub(r".*keyword:", "", ans).strip()
if DEBUG: print(ans, ":::::::::::::::::::::::::::::::::")
logging.debug(f"ans: {ans}")
return KeywordExtract.be_output(ans)
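
The component now takes its query from `get_input()` instead of scanning `canvas.history` for the last user turn, and still post-processes the model output with the same regex. What that regex does, in isolation:

```python
import re

# The prompt asks the model to answer in the form "keyword: a, b, c";
# everything up to and including the "keyword:" tag is stripped.
ans = "Sure, here you go! keyword: retrieval, agents, RAG"
print(re.sub(r".*keyword:", "", ans).strip())  # -> retrieval, agents, RAG
```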

View File

@ -13,12 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
from Bio import Entrez
import re
import pandas as pd
import xml.etree.ElementTree as ET
from agent.settings import DEBUG
from agent.component.base import ComponentBase, ComponentParamBase
@ -65,5 +65,5 @@ class PubMed(ComponentBase, ABC):
return PubMed.be_output("")
df = pd.DataFrame(pubmed_res)
if DEBUG: print(df, ":::::::::::::::::::::::::::::::::")
logging.debug(f"df: {df}")
return df

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
from api.db import LLMType
from api.db.services.llm_service import LLMBundle
@ -70,7 +71,7 @@ class Relevant(Generate, ABC):
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": ans}],
self._param.gen_conf())
print(ans, ":::::::::::::::::::::::::::::::::")
logging.debug(ans)
if ans.lower().find("yes") >= 0:
return Relevant.be_output(self._param.yes)
if ans.lower().find("no") >= 0:

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import pandas as pd
@ -20,7 +21,7 @@ import pandas as pd
from api.db import LLMType
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api.settings import retrievaler
from api import settings
from agent.component.base import ComponentBase, ComponentParamBase
@ -43,22 +44,19 @@ class RetrievalParam(ComponentParamBase):
self.check_decimal_float(self.similarity_threshold, "[Retrieval] Similarity threshold")
self.check_decimal_float(self.keywords_similarity_weight, "[Retrieval] Keywords similarity weight")
self.check_positive_number(self.top_n, "[Retrieval] Top N")
self.check_empty(self.kb_ids, "[Retrieval] Knowledge bases")
class Retrieval(ComponentBase, ABC):
component_name = "Retrieval"
def _run(self, history, **kwargs):
query = []
for role, cnt in history[::-1][:self._param.message_history_window_size]:
if role != "user":continue
query.append(cnt)
# query = "\n".join(query)
query = query[0]
query = self.get_input()
query = str(query["content"][0]) if "content" in query else ""
kbs = KnowledgebaseService.get_by_ids(self._param.kb_ids)
if not kbs:
raise ValueError("Can't find knowledgebases by {}".format(self._param.kb_ids))
return Retrieval.be_output("")
embd_nms = list(set([kb.embd_id for kb in kbs]))
assert len(embd_nms) == 1, "Knowledge bases use different embedding models."
@ -69,7 +67,7 @@ class Retrieval(ComponentBase, ABC):
if self._param.rerank_id:
rerank_mdl = LLMBundle(kbs[0].tenant_id, LLMType.RERANK, self._param.rerank_id)
kbinfos = retrievaler.retrieval(query, embd_mdl, kbs[0].tenant_id, self._param.kb_ids,
kbinfos = settings.retrievaler.retrieval(query, embd_mdl, kbs[0].tenant_id, self._param.kb_ids,
1, self._param.top_n,
self._param.similarity_threshold, 1 - self._param.keywords_similarity_weight,
aggs=False, rerank_mdl=rerank_mdl)
@ -83,7 +81,7 @@ class Retrieval(ComponentBase, ABC):
df = pd.DataFrame(kbinfos["chunks"])
df["content"] = df["content_with_weight"]
del df["content_with_weight"]
print(">>>>>>>>>>>>>>>>>>>>>>>>>>\n", query, df)
logging.debug("{} {}".format(query, df))
return df
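
One line worth pausing on is the assertion that all selected knowledge bases share a single embedding model: vectors produced by different models live in incompatible spaces, so mixing them would make similarity scores meaningless. The check itself is just a set-dedup (ids below are illustrative):

```python
# Each kb record carries the id of the model that embedded its chunks.
kb_embd_ids = ["bge-large-en", "bge-large-en"]
embd_nms = list(set(kb_embd_ids))
assert len(embd_nms) == 1, "Knowledge bases use different embedding models."
```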

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
from api.db import LLMType
from api.db.services.llm_service import LLMBundle
@ -33,7 +34,7 @@ class RewriteQuestionParam(GenerateParam):
def check(self):
super().check()
def get_prompt(self):
def get_prompt(self, conv):
self.prompt = """
You are an expert at query expansion to generate a paraphrasing of a question.
I can't retrieve relevant information from the knowledge base by using the user's question directly.
@ -43,6 +44,40 @@ class RewriteQuestionParam(GenerateParam):
And return 5 versions of question and one is from translation.
Just list the question. No other words are needed.
"""
return f"""
Role: A helpful assistant
Task: Generate a full user question that would follow the conversation.
Requirements & Restrictions:
- Text generated MUST be in the same language of the original user's question.
- If the user's latest question is already complete, don't do anything, just return the original question.
- DON'T generate anything except a refined question.
######################
-Examples-
######################
# Example 1
## Conversation
USER: What is the name of Donald Trump's father?
ASSISTANT: Fred Trump.
USER: And his mother?
###############
Output: What's the name of Donald Trump's mother?
------------
# Example 2
## Conversation
USER: What is the name of Donald Trump's father?
ASSISTANT: Fred Trump.
USER: And his mother?
ASSISTANT: Mary Trump.
User: What's her full name?
###############
Output: What's the full name of Donald Trump's mother Mary Trump?
######################
# Real Data
## Conversation
{conv}
###############
"""
return self.prompt
@ -56,19 +91,21 @@ class RewriteQuestion(Generate, ABC):
self._loop = 0
raise Exception("Sorry! Nothing relevant found.")
self._loop += 1
q = "Question: "
for r, c in self._canvas.history[::-1]:
if r == "user":
q += c
break
hist = self._canvas.get_history(4)
conv = []
for m in hist:
if m["role"] not in ["user", "assistant"]: continue
conv.append("{}: {}".format(m["role"].upper(), m["content"]))
conv = "\n".join(conv)
chat_mdl = LLMBundle(self._canvas.get_tenant_id(), LLMType.CHAT, self._param.llm_id)
ans = chat_mdl.chat(self._param.get_prompt(), [{"role": "user", "content": q}],
ans = chat_mdl.chat(self._param.get_prompt(conv), [{"role": "user", "content": "Output: "}],
self._param.gen_conf())
self._canvas.history.pop()
self._canvas.history.append(("user", ans))
print(ans, ":::::::::::::::::::::::::::::::::")
logging.debug(ans)
return RewriteQuestion.be_output(ans)
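
The rewrite now feeds the model a flattened transcript rather than just the last user message, matching the `{conv}` slot in the new prompt. How the last few turns are serialized (sample history invented, shaped like `get_history()`'s role/content dicts):

```python
hist = [
    {"role": "user", "content": "What is the name of Donald Trump's father?"},
    {"role": "assistant", "content": "Fred Trump."},
    {"role": "user", "content": "And his mother?"},
]
conv = "\n".join("{}: {}".format(m["role"].upper(), m["content"])
                 for m in hist if m["role"] in ("user", "assistant"))
print(conv)
# USER: What is the name of Donald Trump's father?
# ASSISTANT: Fred Trump.
# USER: And his mother?
```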

View File

@ -47,13 +47,35 @@ class SwitchParam(ComponentParamBase):
class Switch(ComponentBase, ABC):
component_name = "Switch"
def get_dependent_components(self):
res = []
for cond in self._param.conditions:
for item in cond["items"]:
if not item["cpn_id"]: continue
if item["cpn_id"].find("begin") >= 0:
continue
cid = item["cpn_id"].split("@")[0]
res.append(cid)
return list(set(res))
def _run(self, history, **kwargs):
for cond in self._param.conditions:
res = []
for item in cond["items"]:
out = self._canvas.get_component(item["cpn_id"])["obj"].output()[1]
cpn_input = "" if "content" not in out.columns else " ".join(out["content"])
res.append(self.process_operator(cpn_input, item["operator"], item["value"]))
if not item["cpn_id"]:continue
cid = item["cpn_id"].split("@")[0]
if item["cpn_id"].find("@") > 0:
cpn_id, key = item["cpn_id"].split("@")
for p in self._canvas.get_component(cid)["obj"]._param.query:
if p["key"] == key:
res.append(self.process_operator(p.get("value",""), item["operator"], item.get("value", "")))
break
else:
out = self._canvas.get_component(cid)["obj"].output()[1]
cpn_input = "" if "content" not in out.columns else " ".join([str(s) for s in out["content"]])
res.append(self.process_operator(cpn_input, item["operator"], item.get("value", "")))
if cond["logical_operator"] != "and" and any(res):
return Switch.be_output(cond["to"])
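
Switch now understands the same `component_id@key` addressing that Generate and Template use: a bare id compares against a component's output, while something like `begin@user_name` reaches into that component's query parameters. The parsing rule, stripped of canvas plumbing (ids are illustrative):

```python
def resolve(cpn_ref: str) -> str:
    cid = cpn_ref.split("@")[0]
    if "@" in cpn_ref:
        cpn_id, key = cpn_ref.split("@")
        return f"parameter {key!r} of component {cpn_id!r}"
    return f"output of component {cid!r}"

print(resolve("retrieval_0"))      # output of component 'retrieval_0'
print(resolve("begin@user_name"))  # parameter 'user_name' of component 'begin'
```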

View File

@ -0,0 +1,85 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import re
from agent.component.base import ComponentBase, ComponentParamBase
class TemplateParam(ComponentParamBase):
"""
Define the Template component parameters.
"""
def __init__(self):
super().__init__()
self.content = ""
self.parameters = []
def check(self):
self.check_empty(self.content, "[Template] Content")
return True
class Template(ComponentBase):
component_name = "Template"
def get_dependent_components(self):
cpnts = set([para["component_id"].split("@")[0] for para in self._param.parameters \
if para.get("component_id") \
and para["component_id"].lower().find("answer") < 0 \
and para["component_id"].lower().find("begin") < 0])
return list(cpnts)
def _run(self, history, **kwargs):
content = self._param.content
self._param.inputs = []
for para in self._param.parameters:
if not para.get("component_id"): continue
component_id = para["component_id"].split("@")[0]
if para["component_id"].lower().find("@") >= 0:
cpn_id, key = para["component_id"].split("@")
for p in self._canvas.get_component(cpn_id)["obj"]._param.query:
if p["key"] == key:
kwargs[para["key"]] = p.get("value", "")
self._param.inputs.append(
{"component_id": para["component_id"], "content": kwargs[para["key"]]})
break
else:
assert False, f"Can't find parameter '{key}' for {cpn_id}"
continue
cpn = self._canvas.get_component(component_id)["obj"]
if cpn.component_name.lower() == "answer":
hist = self._canvas.get_history(1)
if hist:
hist = hist[0]["content"]
else:
hist = ""
kwargs[para["key"]] = hist
continue
_, out = cpn.output(allow_partial=False)
if "content" not in out.columns:
kwargs[para["key"]] = ""
else:
kwargs[para["key"]] = " - "+"\n - ".join([o if isinstance(o, str) else str(o) for o in out["content"]])
self._param.inputs.append({"component_id": para["component_id"], "content": kwargs[para["key"]]})
for n, v in kwargs.items():
content = re.sub(r"\{%s\}" % re.escape(n), str(v).replace("\\", " "), content)
return Template.be_output(content)
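
`get_dependent_components` here mirrors the one added to Generate above: deduplicate the referenced ids, drop any `@key` suffix, and exclude the `answer`/`begin` pseudo-components, which are resolved from history rather than executed upstream. A toy run (parameter list invented):

```python
parameters = [
    {"component_id": "retrieval_0"},
    {"component_id": "begin@user_name"},
    {"component_id": "answer_1"},
    {"component_id": "retrieval_0"},   # duplicate on purpose
]
cpnts = set(p["component_id"].split("@")[0] for p in parameters
            if p.get("component_id")
            and p["component_id"].lower().find("answer") < 0
            and p["component_id"].lower().find("begin") < 0)
print(list(cpnts))  # ['retrieval_0']
```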

View File

@ -13,12 +13,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import random
import logging
from abc import ABC
from functools import partial
import wikipedia
import pandas as pd
from agent.settings import DEBUG
from agent.component.base import ComponentBase, ComponentParamBase
@ -65,5 +63,5 @@ class Wikipedia(ComponentBase, ABC):
return Wikipedia.be_output("")
df = pd.DataFrame(wiki_res)
if DEBUG: print(df, ":::::::::::::::::::::::::::::::::")
logging.debug(f"df: {df}")
return df

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from abc import ABC
import pandas as pd
from agent.component.base import ComponentBase, ComponentParamBase
@ -74,8 +75,8 @@ class YahooFinance(ComponentBase, ABC):
{"content": "quarterly cash flow statement:\n" + msft.quarterly_cashflow.to_markdown() + "\n"})
if self._param.news:
yohoo_res.append({"content": "news:\n" + pd.DataFrame(msft.news).to_markdown() + "\n"})
except Exception as e:
print("**ERROR** " + str(e))
except Exception:
logging.exception("YahooFinance got exception")
if not yohoo_res:
return YahooFinance.be_output("")

View File

@ -13,22 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Logger
import os
from api.utils.file_utils import get_project_base_directory
from api.utils.log_utils import LoggerFactory, getLogger
DEBUG = 0
LoggerFactory.set_directory(
os.path.join(
get_project_base_directory(),
"logs",
"flow"))
# {CRITICAL: 50, FATAL:50, ERROR:40, WARNING:30, WARN:30, INFO:20, DEBUG:10, NOTSET:0}
LoggerFactory.LEVEL = 30
flow_logger = getLogger("flow")
database_logger = getLogger("database")
FLOAT_ZERO = 1e-8
PARAM_MAXDEPTH = 5

File diff suppressed because one or more lines are too long (10 files)

View File

@ -13,14 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import os
import sys
import logging
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path
from flask import Blueprint, Flask
from werkzeug.wrappers.request import Request
from flask_cors import CORS
from flasgger import Swagger
from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer
from api.db import StatusEnum
from api.db.db_models import close_connection
@ -29,32 +31,60 @@ from api.utils import CustomJSONEncoder, commands
from flask_session import Session
from flask_login import LoginManager
from api.settings import SECRET_KEY, stat_logger
from api.settings import API_VERSION, access_logger
from api import settings
from api.utils.api_utils import server_error_response
from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer
from api.constants import API_VERSION
__all__ = ['app']
logger = logging.getLogger('flask.app')
for h in access_logger.handlers:
logger.addHandler(h)
__all__ = ["app"]
Request.json = property(lambda self: self.get_json(force=True, silent=True))
app = Flask(__name__)
CORS(app, supports_credentials=True,max_age=2592000)
# Add this at the beginning of your file to configure Swagger UI
swagger_config = {
"headers": [],
"specs": [
{
"endpoint": "apispec",
"route": "/apispec.json",
"rule_filter": lambda rule: True, # Include all endpoints
"model_filter": lambda tag: True, # Include all models
}
],
"static_url_path": "/flasgger_static",
"swagger_ui": True,
"specs_route": "/apidocs/",
}
swagger = Swagger(
app,
config=swagger_config,
template={
"swagger": "2.0",
"info": {
"title": "RAGFlow API",
"description": "",
"version": "1.0.0",
},
"securityDefinitions": {
"ApiKeyAuth": {"type": "apiKey", "name": "Authorization", "in": "header"}
},
},
)
CORS(app, supports_credentials=True, max_age=2592000)
app.url_map.strict_slashes = False
app.json_encoder = CustomJSONEncoder
app.errorhandler(Exception)(server_error_response)
## convenience for dev and debug
#app.config["LOGIN_DISABLED"] = True
# app.config["LOGIN_DISABLED"] = True
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
app.config['MAX_CONTENT_LENGTH'] = int(os.environ.get("MAX_CONTENT_LENGTH", 128 * 1024 * 1024))
app.config["MAX_CONTENT_LENGTH"] = int(
os.environ.get("MAX_CONTENT_LENGTH", 128 * 1024 * 1024)
)
Session(app)
login_manager = LoginManager()
@ -64,17 +94,23 @@ commands.register_commands(app)
def search_pages_path(pages_dir):
app_path_list = [path for path in pages_dir.glob('*_app.py') if not path.name.startswith('.')]
api_path_list = [path for path in pages_dir.glob('*sdk/*.py') if not path.name.startswith('.')]
app_path_list = [
path for path in pages_dir.glob("*_app.py") if not path.name.startswith(".")
]
api_path_list = [
path for path in pages_dir.glob("*sdk/*.py") if not path.name.startswith(".")
]
app_path_list.extend(api_path_list)
return app_path_list
def register_page(page_path):
path = f'{page_path}'
path = f"{page_path}"
page_name = page_path.stem.rstrip('_app')
module_name = '.'.join(page_path.parts[page_path.parts.index('api'):-1] + (page_name,))
page_name = page_path.stem.rstrip("_app")
module_name = ".".join(
page_path.parts[page_path.parts.index("api"): -1] + (page_name,)
)
spec = spec_from_file_location(module_name, page_path)
page = module_from_spec(spec)
@ -82,8 +118,10 @@ def register_page(page_path):
page.manager = Blueprint(page_name, module_name)
sys.modules[module_name] = page
spec.loader.exec_module(page)
page_name = getattr(page, 'page_name', page_name)
url_prefix = f'/api/{API_VERSION}/{page_name}' if "/sdk/" in path else f'/{API_VERSION}/{page_name}'
page_name = getattr(page, "page_name", page_name)
url_prefix = (
f"/api/{API_VERSION}" if "/sdk/" in path else f"/{API_VERSION}/{page_name}"
)
app.register_blueprint(page.manager, url_prefix=url_prefix)
return url_prefix
@ -91,31 +129,31 @@ def register_page(page_path):
pages_dir = [
Path(__file__).parent,
Path(__file__).parent.parent / 'api' / 'apps',
Path(__file__).parent.parent / 'api' / 'apps' / 'sdk',
Path(__file__).parent.parent / "api" / "apps",
Path(__file__).parent.parent / "api" / "apps" / "sdk",
]
client_urls_prefix = [
register_page(path)
for dir in pages_dir
for path in search_pages_path(dir)
register_page(path) for dir in pages_dir for path in search_pages_path(dir)
]
@login_manager.request_loader
def load_user(web_request):
jwt = Serializer(secret_key=SECRET_KEY)
jwt = Serializer(secret_key=settings.SECRET_KEY)
authorization = web_request.headers.get("Authorization")
if authorization:
try:
access_token = str(jwt.loads(authorization))
user = UserService.query(access_token=access_token, status=StatusEnum.VALID.value)
user = UserService.query(
access_token=access_token, status=StatusEnum.VALID.value
)
if user:
return user[0]
else:
return None
except Exception as e:
stat_logger.exception(e)
except Exception:
logging.exception("load_user got exception")
return None
else:
return None
@ -123,4 +161,4 @@ def load_user(web_request):
@app.teardown_request
def _db_close(exc):
close_connection()
close_connection()
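
The flasgger block above is self-contained enough to lift out; this minimal sketch (dummy app, no RAGFlow imports) serves the same interactive docs at `/apidocs/` with the spec at `/apispec.json`:

```python
from flasgger import Swagger
from flask import Flask

app = Flask(__name__)
Swagger(
    app,
    config={
        "headers": [],
        "specs": [{"endpoint": "apispec", "route": "/apispec.json",
                   "rule_filter": lambda rule: True,    # include all endpoints
                   "model_filter": lambda tag: True}],  # include all models
        "static_url_path": "/flasgger_static",
        "swagger_ui": True,
        "specs_route": "/apidocs/",
    },
    template={"swagger": "2.0",
              "info": {"title": "Demo API", "version": "1.0.0"}},
)

if __name__ == "__main__":
    app.run()  # then browse http://localhost:5000/apidocs/
```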

View File

@ -22,35 +22,29 @@ from api.db.services.llm_service import TenantLLMService
from flask_login import login_required, current_user
from api.db import FileType, LLMType, ParserType, FileSource
from api.db.db_models import APIToken, API4Conversation, Task, File
from api.db.db_models import APIToken, Task, File
from api.db.services import duplicate_name
from api.db.services.api_service import APITokenService, API4ConversationService
from api.db.services.dialog_service import DialogService, chat
from api.db.services.dialog_service import DialogService, chat, keyword_extraction
from api.db.services.document_service import DocumentService, doc_upload_and_parse
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.task_service import queue_tasks, TaskService
from api.db.services.user_service import UserTenantService
from api.settings import RetCode, retrievaler
from api import settings
from api.utils import get_uuid, current_timestamp, datetime_format
from api.utils.api_utils import server_error_response, get_data_error_result, get_json_result, validate_request
from itsdangerous import URLSafeTimedSerializer
from api.utils.api_utils import server_error_response, get_data_error_result, get_json_result, validate_request, \
generate_confirmation_token
from api.utils.file_utils import filename_type, thumbnail
from rag.nlp import keyword_extraction
from rag.utils.storage_factory import STORAGE_IMPL
from api.db.services.canvas_service import CanvasTemplateService, UserCanvasService
from api.db.services.canvas_service import UserCanvasService
from agent.canvas import Canvas
from functools import partial
def generate_confirmation_token(tenent_id):
serializer = URLSafeTimedSerializer(tenent_id)
return "ragflow-" + serializer.dumps(get_uuid(), salt=tenent_id)[2:34]
@manager.route('/new_token', methods=['POST'])
@login_required
def new_token():
@ -58,7 +52,7 @@ def new_token():
try:
tenants = UserTenantService.query(user_id=current_user.id)
if not tenants:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
tenant_id = tenants[0].tenant_id
obj = {"tenant_id": tenant_id, "token": generate_confirmation_token(tenant_id),
@ -74,7 +68,7 @@ def new_token():
obj["dialog_id"] = req["dialog_id"]
if not APITokenService.save(**obj):
return get_data_error_result(retmsg="Fail to new a dialog!")
return get_data_error_result(message="Fail to new a dialog!")
return get_json_result(data=obj)
except Exception as e:
@ -87,7 +81,7 @@ def token_list():
try:
tenants = UserTenantService.query(user_id=current_user.id)
if not tenants:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
id = request.args["dialog_id"] if "dialog_id" in request.args else request.args["canvas_id"]
objs = APITokenService.query(tenant_id=tenants[0].tenant_id, dialog_id=id)
@ -116,7 +110,7 @@ def stats():
try:
tenants = UserTenantService.query(user_id=current_user.id)
if not tenants:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
objs = API4ConversationService.stats(
tenants[0].tenant_id,
request.args.get(
@ -147,7 +141,7 @@ def set_conversation():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
req = request.json
try:
if objs[0].source == "agent":
@ -169,7 +163,7 @@ def set_conversation():
else:
e, dia = DialogService.get_by_id(objs[0].dialog_id)
if not e:
return get_data_error_result(retmsg="Dialog not found")
return get_data_error_result(message="Dialog not found")
conv = {
"id": get_uuid(),
"dialog_id": dia.id,
@ -189,11 +183,11 @@ def completion():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
req = request.json
e, conv = API4ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(retmsg="Conversation not found!")
return get_data_error_result(message="Conversation not found!")
if "quote" not in req: req["quote"] = False
msg = []
@ -263,19 +257,20 @@ def completion():
ans = {"answer": ans["content"], "reference": ans.get("reference", [])}
fillin_conv(ans)
rename_field(ans)
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans},
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans},
ensure_ascii=False) + "\n\n"
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
canvas.history.append(("assistant", final_ans["content"]))
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
API4ConversationService.append_message(conv.id, conv.to_dict())
except Exception as e:
yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
resp = Response(sse(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
@ -295,12 +290,12 @@ def completion():
API4ConversationService.append_message(conv.id, conv.to_dict())
rename_field(result)
return get_json_result(data=result)
#******************For dialog******************
# ******************For dialog******************
conv.message.append(msg[-1])
e, dia = DialogService.get_by_id(conv.dialog_id)
if not e:
return get_data_error_result(retmsg="Dialog not found!")
return get_data_error_result(message="Dialog not found!")
del req["conversation_id"]
del req["messages"]
@ -315,14 +310,14 @@ def completion():
for ans in chat(dia, msg, True, **req):
fillin_conv(ans)
rename_field(ans)
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans},
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans},
ensure_ascii=False) + "\n\n"
API4ConversationService.append_message(conv.id, conv.to_dict())
except Exception as e:
yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
if req.get("stream", True):
resp = Response(stream(), mimetype="text/event-stream")
@ -331,7 +326,7 @@ def completion():
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
answer = None
for ans in chat(dia, msg, **req):
answer = ans
@ -352,18 +347,18 @@ def get(conversation_id):
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
try:
e, conv = API4ConversationService.get_by_id(conversation_id)
if not e:
return get_data_error_result(retmsg="Conversation not found!")
return get_data_error_result(message="Conversation not found!")
conv = conv.to_dict()
if token != APIToken.query(dialog_id=conv['dialog_id'])[0].token:
return get_json_result(data=False, retmsg='Token is not valid for this conversation_id!"',
retcode=RetCode.AUTHENTICATION_ERROR)
return get_json_result(data=False, message='Token is not valid for this conversation_id!"',
code=settings.RetCode.AUTHENTICATION_ERROR)
for referenct_i in conv['reference']:
if referenct_i is None or len(referenct_i) == 0:
continue
@ -383,7 +378,7 @@ def upload():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
kb_name = request.form.get("kb_name").strip()
tenant_id = objs[0].tenant_id
@ -392,19 +387,19 @@ def upload():
e, kb = KnowledgebaseService.get_by_name(kb_name, tenant_id)
if not e:
return get_data_error_result(
retmsg="Can't find this knowledgebase!")
message="Can't find this knowledgebase!")
kb_id = kb.id
except Exception as e:
return server_error_response(e)
if 'file' not in request.files:
return get_json_result(
data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
file = request.files['file']
if file.filename == '':
return get_json_result(
data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
root_folder = FileService.get_root_folder(tenant_id)
pf_id = root_folder["id"]
@ -415,7 +410,7 @@ def upload():
try:
if DocumentService.get_doc_count(kb.tenant_id) >= int(os.environ.get('MAX_FILE_NUM_PER_USER', 8192)):
return get_data_error_result(
retmsg="Exceed the maximum file number of a free user!")
message="Exceed the maximum file number of a free user!")
filename = duplicate_name(
DocumentService.query,
@ -424,7 +419,7 @@ def upload():
filetype = filename_type(filename)
if not filetype:
return get_data_error_result(
retmsg="This type of file has not been supported yet!")
message="This type of file has not been supported yet!")
location = filename
while STORAGE_IMPL.obj_exist(kb_id, location):
@ -473,7 +468,7 @@ def upload():
# if str(req["run"]) == TaskStatus.CANCEL.value:
tenant_id = DocumentService.get_tenant_id(doc["id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
# e, doc = DocumentService.get_by_id(doc["id"])
TaskService.filter_delete([Task.doc_id == doc["id"]])
@ -495,17 +490,17 @@ def upload_parse():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
if 'file' not in request.files:
return get_json_result(
data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist('file')
for file_obj in file_objs:
if file_obj.filename == '':
return get_json_result(
data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
doc_ids = doc_upload_and_parse(request.form.get("conversation_id"), file_objs, objs[0].tenant_id)
return get_json_result(data=doc_ids)
@ -518,7 +513,7 @@ def list_chunks():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
req = request.json
@ -532,15 +527,16 @@ def list_chunks():
doc_id = req['doc_id']
else:
return get_json_result(
data=False, retmsg="Can't find doc_name or doc_id"
data=False, message="Can't find doc_name or doc_id"
)
kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
res = retrievaler.chunk_list(doc_id=doc_id, tenant_id=tenant_id)
res = settings.retrievaler.chunk_list(doc_id, tenant_id, kb_ids)
res = [
{
"content": res_item["content_with_weight"],
"doc_name": res_item["docnm_kwd"],
"img_id": res_item["img_id"]
"image_id": res_item["img_id"]
} for res_item in res
]
@ -557,7 +553,7 @@ def list_kb_docs():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
req = request.json
tenant_id = objs[0].tenant_id
@ -567,7 +563,7 @@ def list_kb_docs():
e, kb = KnowledgebaseService.get_by_name(kb_name, tenant_id)
if not e:
return get_data_error_result(
retmsg="Can't find this knowledgebase!")
message="Can't find this knowledgebase!")
kb_id = kb.id
except Exception as e:
@ -589,6 +585,7 @@ def list_kb_docs():
except Exception as e:
return server_error_response(e)
@manager.route('/document/infos', methods=['POST'])
@validate_request("doc_ids")
def docinfos():
@ -596,7 +593,7 @@ def docinfos():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
req = request.json
doc_ids = req["doc_ids"]
docs = DocumentService.get_by_ids(doc_ids)
@ -610,7 +607,7 @@ def document_rm():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
tenant_id = objs[0].tenant_id
req = request.json
@ -622,7 +619,7 @@ def document_rm():
if not doc_ids:
return get_json_result(
data=False, retmsg="Can't find doc_names or doc_ids"
data=False, message="Can't find doc_names or doc_ids"
)
except Exception as e:
@ -637,16 +634,16 @@ def document_rm():
try:
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
tenant_id = DocumentService.get_tenant_id(doc_id)
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
retmsg="Database error (Document removal)!")
message="Database error (Document removal)!")
f2d = File2DocumentService.get_by_document_id(doc_id)
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
@ -657,7 +654,7 @@ def document_rm():
errors += str(e)
if errors:
return get_json_result(data=False, retmsg=errors, retcode=RetCode.SERVER_ERROR)
return get_json_result(data=False, message=errors, code=settings.RetCode.SERVER_ERROR)
return get_json_result(data=True)
@ -672,11 +669,11 @@ def completion_faq():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
e, conv = API4ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(retmsg="Conversation not found!")
return get_data_error_result(message="Conversation not found!")
if "quote" not in req: req["quote"] = True
msg = []
@ -757,7 +754,7 @@ def completion_faq():
conv.message.append(msg[-1])
e, dia = DialogService.get_by_id(conv.dialog_id)
if not e:
return get_data_error_result(retmsg="Dialog not found!")
return get_data_error_result(message="Dialog not found!")
del req["conversation_id"]
if not conv.reference:
@ -809,10 +806,10 @@ def retrieval():
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Token is not valid!"', code=settings.RetCode.AUTHENTICATION_ERROR)
req = request.json
kb_ids = req.get("kb_id",[])
kb_ids = req.get("kb_id", [])
doc_ids = req.get("doc_ids", [])
question = req.get("question")
page = int(req.get("page", 1))
@ -826,26 +823,26 @@ def retrieval():
embd_nms = list(set([kb.embd_id for kb in kbs]))
if len(embd_nms) != 1:
return get_json_result(
data=False, retmsg='Knowledge bases use different embedding models or does not exist."', retcode=RetCode.AUTHENTICATION_ERROR)
data=False, message='Knowledge bases use different embedding models or do not exist.',
code=settings.RetCode.AUTHENTICATION_ERROR)
embd_mdl = TenantLLMService.model_instance(
kbs[0].tenant_id, LLMType.EMBEDDING.value, llm_name=kbs[0].embd_id)
rerank_mdl = None
if req.get("rerank_id"):
rerank_mdl = TenantLLMService.model_instance(
kbs[0].tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])
kbs[0].tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])
if req.get("keyword", False):
chat_mdl = TenantLLMService.model_instance(kbs[0].tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question)
ranks = retrievaler.retrieval(question, embd_mdl, kbs[0].tenant_id, kb_ids, page, size,
similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl)
ranks = settings.retrievaler.retrieval(question, embd_mdl, kbs[0].tenant_id, kb_ids, page, size,
similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl)
for c in ranks["chunks"]:
if "vector" in c:
del c["vector"]
c.pop("vector", None)
return get_json_result(data=ranks)
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, retmsg=f'No chunk found! Check the chunk status please!',
retcode=RetCode.DATA_ERROR)
return get_json_result(data=False, message='No chunk found! Check the chunk status please!',
code=settings.RetCode.DATA_ERROR)
return server_error_response(e)
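
The bulk of this file is the mechanical `retcode`/`retmsg` → `code`/`message` rename, which also reshapes every SSE frame the completion endpoints stream. The new frame layout, reduced to its essentials (payload fabricated):

```python
import json

ans = {"answer": "Hello!", "reference": []}
frame = "data:" + json.dumps({"code": 0, "message": "", "data": ans},
                             ensure_ascii=False) + "\n\n"
print(frame, end="")
# data:{"code": 0, "message": "", "data": {"answer": "Hello!", "reference": []}}
# A final frame with "data": true marks the end of the stream.
```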

View File

@ -13,13 +13,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import json
import traceback
from functools import partial
from flask import request, Response
from flask_login import login_required, current_user
from api.db.services.canvas_service import CanvasTemplateService, UserCanvasService
from api.db.services.dialog_service import full_question
from api.db.services.user_service import TenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_json_result, server_error_response, validate_request, get_data_error_result
@ -48,8 +48,8 @@ def rm():
for i in request.json["canvas_ids"]:
if not UserCanvasService.query(user_id=current_user.id,id=i):
return get_json_result(
data=False, retmsg=f'Only owner of canvas authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
UserCanvasService.delete_by_id(i)
return get_json_result(data=True)
@ -68,12 +68,12 @@ def save():
return server_error_response(ValueError("Duplicated title."))
req["id"] = get_uuid()
if not UserCanvasService.save(**req):
return get_data_error_result(retmsg="Fail to save canvas.")
return get_data_error_result(message="Fail to save canvas.")
else:
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
return get_json_result(
data=False, retmsg=f'Only owner of canvas authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
UserCanvasService.update_by_id(req["id"], req)
return get_json_result(data=req)
@ -83,7 +83,7 @@ def save():
def get(canvas_id):
e, c = UserCanvasService.get_by_id(canvas_id)
if not e:
return get_data_error_result(retmsg="canvas not found.")
return get_data_error_result(message="canvas not found.")
return get_json_result(data=c.to_dict())
@ -95,11 +95,11 @@ def run():
stream = req.get("stream", True)
e, cvs = UserCanvasService.get_by_id(req["id"])
if not e:
return get_data_error_result(retmsg="canvas not found.")
return get_data_error_result(message="canvas not found.")
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
return get_json_result(
data=False, retmsg=f'Only owner of canvas authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
@ -110,39 +110,40 @@ def run():
canvas = Canvas(cvs.dsl, current_user.id)
if "message" in req:
canvas.messages.append({"role": "user", "content": req["message"], "id": message_id})
if len([m for m in canvas.messages if m["role"] == "user"]) > 1:
ten = TenantService.get_by_user_id(current_user.id)[0]
req["message"] = full_question(ten["tenant_id"], ten["llm_id"], canvas.messages)
canvas.add_user_input(req["message"])
answer = canvas.run(stream=stream)
print(canvas)
except Exception as e:
return server_error_response(e)
assert answer is not None, "Nothing. Is it over?"
if stream:
assert isinstance(answer, partial), "Nothing. Is it over?"
def sse():
nonlocal answer, cvs
try:
for ans in answer():
for ans in canvas.run(stream=True):
if ans.get("running_status"):
yield "data:" + json.dumps({"code": 0, "message": "",
"data": {"answer": ans["content"],
"running_status": True}},
ensure_ascii=False) + "\n\n"
continue
for k in ans.keys():
final_ans[k] = ans[k]
ans = {"answer": ans["content"], "reference": ans.get("reference", [])}
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
canvas.history.append(("assistant", final_ans["content"]))
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
UserCanvasService.update_by_id(req["id"], cvs.to_dict())
except Exception as e:
yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
cvs.dsl = json.loads(str(canvas))
UserCanvasService.update_by_id(req["id"], cvs.to_dict())
traceback.print_exc()
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
resp = Response(sse(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
@ -151,13 +152,15 @@ def run():
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
UserCanvasService.update_by_id(req["id"], cvs.to_dict())
return get_json_result(data={"answer": final_ans["content"], "reference": final_ans.get("reference", [])})
for answer in canvas.run(stream=False):
if answer.get("running_status"): continue
final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
UserCanvasService.update_by_id(req["id"], cvs.to_dict())
return get_json_result(data={"answer": final_ans["content"], "reference": final_ans.get("reference", [])})
@manager.route('/reset', methods=['POST'])
@ -168,11 +171,11 @@ def reset():
try:
e, user_canvas = UserCanvasService.get_by_id(req["id"])
if not e:
return get_data_error_result(retmsg="canvas not found.")
return get_data_error_result(message="canvas not found.")
if not UserCanvasService.query(user_id=current_user.id, id=req["id"]):
return get_json_result(
data=False, retmsg=f'Only owner of canvas authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of canvas authorized for this operation.',
code=RetCode.OPERATING_ERROR)
canvas = Canvas(json.dumps(user_canvas.dsl), current_user.id)
canvas.reset()
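
With the new `running_status` frames, a client of `/run` has three frame kinds to handle: progress pings, answer updates, and the terminal `"data": true` marker. A sketch of the consuming side (frames hard-coded for illustration):

```python
import json

frames = [
    'data:{"code": 0, "message": "", "data": {"answer": "Running retrieval...", "running_status": true}}',
    'data:{"code": 0, "message": "", "data": {"answer": "Final answer.", "reference": []}}',
    'data:{"code": 0, "message": "", "data": true}',
]
for line in frames:
    payload = json.loads(line[len("data:"):])
    if payload["data"] is True:      # terminal marker
        break
    if payload["data"].get("running_status"):
        continue                     # progress ping; not an answer yet
    print(payload["data"]["answer"])  # -> Final answer.
```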

View File

@ -15,15 +15,13 @@
#
import datetime
import json
import traceback
from flask import request
from flask_login import login_required, current_user
from elasticsearch_dsl import Q
from api.db.services.dialog_service import keyword_extraction
from rag.app.qa import rmPrefix, beAdoc
from rag.nlp import search, rag_tokenizer, keyword_extraction
from rag.utils.es_conn import ELASTICSEARCH
from rag.nlp import search, rag_tokenizer
from rag.utils import rmSpace
from api.db import LLMType, ParserType
from api.db.services.knowledgebase_service import KnowledgebaseService
@ -31,7 +29,7 @@ from api.db.services.llm_service import LLMBundle
from api.db.services.user_service import UserTenantService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.db.services.document_service import DocumentService
from api.settings import RetCode, retrievaler, kg_retrievaler
from api import settings
from api.utils.api_utils import get_json_result
import hashlib
import re
@ -49,16 +47,17 @@ def list_chunk():
try:
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
query = {
"doc_ids": [doc_id], "page": page, "size": size, "question": question, "sort": True
}
if "available_int" in req:
query["available_int"] = int(req["available_int"])
sres = retrievaler.search(query, search.index_name(tenant_id), highlight=True)
sres = settings.retrievaler.search(query, search.index_name(tenant_id), kb_ids, highlight=True)
res = {"total": sres.total, "chunks": [], "doc": doc.to_dict()}
for id in sres.ids:
d = {
@ -69,22 +68,18 @@ def list_chunk():
"doc_id": sres.field[id]["doc_id"],
"docnm_kwd": sres.field[id]["docnm_kwd"],
"important_kwd": sres.field[id].get("important_kwd", []),
"img_id": sres.field[id].get("img_id", ""),
"image_id": sres.field[id].get("img_id", ""),
"available_int": sres.field[id].get("available_int", 1),
"positions": sres.field[id].get("position_int", "").split("\t")
"positions": json.loads(sres.field[id].get("position_list", "[]")),
}
if len(d["positions"]) % 5 == 0:
poss = []
for i in range(0, len(d["positions"]), 5):
poss.append([float(d["positions"][i]), float(d["positions"][i + 1]), float(d["positions"][i + 2]),
float(d["positions"][i + 3]), float(d["positions"][i + 4])])
d["positions"] = poss
assert isinstance(d["positions"], list)
assert len(d["positions"]) == 0 or (isinstance(d["positions"][0], list) and len(d["positions"][0]) == 5)
res["chunks"].append(d)
return get_json_result(data=res)
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, retmsg=f'No chunk found!',
retcode=RetCode.DATA_ERROR)
return get_json_result(data=False, message='No chunk found!',
code=settings.RetCode.DATA_ERROR)
return server_error_response(e)
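
The `positions` change above replaces a flat, tab-separated string of 5-tuples (which the old loop had to re-chunk by hand) with a JSON `position_list` that deserializes straight into nested lists. Equivalence of the two formats, with the five numbers assumed to be a page index plus box coordinates:

```python
import json

old = "1\t10\t200\t30\t60\t2\t15\t180\t40\t70"   # flat, tab-separated string
vals = old.split("\t")
poss = [[float(v) for v in vals[i:i + 5]] for i in range(0, len(vals), 5)]

new = json.loads("[[1, 10, 200, 30, 60], [2, 15, 180, 40, 70]]")
assert poss == new  # same data, parsed without manual re-chunking
```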
@ -95,27 +90,25 @@ def get():
try:
tenants = UserTenantService.query(user_id=current_user.id)
if not tenants:
return get_data_error_result(retmsg="Tenant not found!")
res = ELASTICSEARCH.get(
chunk_id, search.index_name(
tenants[0].tenant_id))
if not res.get("found"):
return get_data_error_result(message="Tenant not found!")
tenant_id = tenants[0].tenant_id
kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
chunk = settings.docStoreConn.get(chunk_id, search.index_name(tenant_id), kb_ids)
if chunk is None:
return server_error_response("Chunk not found")
id = res["_id"]
res = res["_source"]
res["chunk_id"] = id
k = []
for n in res.keys():
for n in chunk.keys():
if re.search(r"(_vec$|_sm_|_tks|_ltks)", n):
k.append(n)
for n in k:
del res[n]
del chunk[n]
return get_json_result(data=res)
return get_json_result(data=chunk)
except Exception as e:
if str(e).find("NotFoundError") >= 0:
return get_json_result(data=False, retmsg=f'Chunk not found!',
retcode=RetCode.DATA_ERROR)
return get_json_result(data=False, message='Chunk not found!',
code=settings.RetCode.DATA_ERROR)
return server_error_response(e)
@ -138,14 +131,14 @@ def set():
try:
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
embd_id = DocumentService.get_embd_id(req["doc_id"])
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embd_id)
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
if doc.parser_id == ParserType.QA:
arr = [
@ -154,7 +147,7 @@ def set():
req["content_with_weight"]) if len(t) > 1]
if len(arr) != 2:
return get_data_error_result(
retmsg="Q&A must be separated by TAB/ENTER key.")
message="Q&A must be separated by TAB/ENTER key.")
q, a = rmPrefix(arr[0]), rmPrefix(arr[1])
d = beAdoc(d, arr[0], arr[1], not any(
[rag_tokenizer.is_chinese(t) for t in q + a]))
@ -162,7 +155,7 @@ def set():
v, c = embd_mdl.encode([doc.name, req["content_with_weight"]])
v = 0.1 * v[0] + 0.9 * v[1] if doc.parser_id != ParserType.QA else v[1]
d["q_%d_vec" % len(v)] = v.tolist()
ELASTICSEARCH.upsert([d], search.index_name(tenant_id))
settings.docStoreConn.insert([d], search.index_name(tenant_id), doc.kb_id)
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
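
The vector written for an edited chunk blends the document-title embedding with the content embedding at 10/90, except for Q&A chunks, which keep the content vector alone. A small numpy sketch of that blend (the weights and the dimension-tagged field name come from the code above):

import numpy as np

def chunk_vector(title_vec, body_vec, is_qa: bool):
    # Q&A pairs match on content only; other parsers keep a little of the
    # title signal so document-name terms stay retrievable.
    return body_vec if is_qa else 0.1 * title_vec + 0.9 * body_vec

title_vec, body_vec = np.random.rand(1024), np.random.rand(1024)
v = chunk_vector(title_vec, body_vec, is_qa=False)
field_name = "q_%d_vec" % len(v)   # -> "q_1024_vec": the field name encodes the dimension
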
@ -174,12 +167,15 @@ def set():
def switch():
req = request.json
try:
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
if not ELASTICSEARCH.upsert([{"id": i, "available_int": int(req["available_int"])} for i in req["chunk_ids"]],
search.index_name(tenant_id)):
return get_data_error_result(retmsg="Index updating failure")
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
return get_data_error_result(message="Document not found!")
for cid in req["chunk_ids"]:
if not settings.docStoreConn.update({"id": cid},
{"available_int": int(req["available_int"])},
search.index_name(DocumentService.get_tenant_id(req["doc_id"])),
doc.kb_id):
return get_data_error_result(message="Index updating failure")
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
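
The toggle now issues one store-agnostic update per chunk instead of a single bulk Elasticsearch upsert, which is what lets the route work against either document store. A hedged sketch of the loop, with doc_store standing in for settings.docStoreConn:

def toggle_chunks(doc_store, chunk_ids, available_int, index_name, kb_id) -> bool:
    # Stop at the first failed update so the route can report it immediately.
    for cid in chunk_ids:
        if not doc_store.update({"id": cid}, {"available_int": int(available_int)},
                                index_name, kb_id):
            return False
    return True

One small refinement the hunk leaves on the table: DocumentService.get_tenant_id(req["doc_id"]) is re-resolved inside the loop, so hoisting that lookup out of the loop would avoid a database round-trip per chunk.
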
@ -191,12 +187,11 @@ def switch():
def rm():
req = request.json
try:
if not ELASTICSEARCH.deleteByQuery(
Q("ids", values=req["chunk_ids"]), search.index_name(current_user.id)):
return get_data_error_result(retmsg="Index updating failure")
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
if not settings.docStoreConn.delete({"id": req["chunk_ids"]}, search.index_name(current_user.id), doc.kb_id):
return get_data_error_result(message="Index updating failure")
deleted_chunk_ids = req["chunk_ids"]
chunk_number = len(deleted_chunk_ids)
DocumentService.decrement_chunk_num(doc.id, doc.kb_id, 1, chunk_number, 0)
@ -224,14 +219,14 @@ def create():
try:
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
d["kb_id"] = [doc.kb_id]
d["docnm_kwd"] = doc.name
d["doc_id"] = doc.id
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
embd_id = DocumentService.get_embd_id(req["doc_id"])
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING.value, embd_id)
@ -239,7 +234,7 @@ def create():
v, c = embd_mdl.encode([doc.name, req["content_with_weight"]])
v = 0.1 * v[0] + 0.9 * v[1]
d["q_%d_vec" % len(v)] = v.tolist()
ELASTICSEARCH.upsert([d], search.index_name(tenant_id))
settings.docStoreConn.insert([d], search.index_name(tenant_id), doc.kb_id)
DocumentService.increment_chunk_num(
doc.id, doc.kb_id, c, 1, 0)
@ -256,28 +251,31 @@ def retrieval_test():
page = int(req.get("page", 1))
size = int(req.get("size", 30))
question = req["question"]
kb_id = req["kb_id"]
if isinstance(kb_id, str): kb_id = [kb_id]
kb_ids = req["kb_id"]
if isinstance(kb_ids, str):
kb_ids = [kb_ids]
doc_ids = req.get("doc_ids", [])
similarity_threshold = float(req.get("similarity_threshold", 0.0))
vector_similarity_weight = float(req.get("vector_similarity_weight", 0.3))
top = int(req.get("top_k", 1024))
tenant_ids = []
try:
tenants = UserTenantService.query(user_id=current_user.id)
for kid in kb_id:
for kb_id in kb_ids:
for tenant in tenants:
if KnowledgebaseService.query(
tenant_id=tenant.tenant_id, id=kid):
tenant_id=tenant.tenant_id, id=kb_id):
tenant_ids.append(tenant.tenant_id)
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of knowledgebase authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_id[0])
e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
if not e:
return get_data_error_result(retmsg="Knowledgebase not found!")
return get_data_error_result(message="Knowledgebase not found!")
embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
@ -289,19 +287,18 @@ def retrieval_test():
chat_mdl = LLMBundle(kb.tenant_id, LLMType.CHAT)
question += keyword_extraction(chat_mdl, question)
retr = retrievaler if kb.parser_id != ParserType.KG else kg_retrievaler
ranks = retr.retrieval(question, embd_mdl, kb.tenant_id, kb_id, page, size,
retr = settings.retrievaler if kb.parser_id != ParserType.KG else settings.kg_retrievaler
ranks = retr.retrieval(question, embd_mdl, tenant_ids, kb_ids, page, size,
similarity_threshold, vector_similarity_weight, top,
doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"))
for c in ranks["chunks"]:
if "vector" in c:
del c["vector"]
c.pop("vector", None)
return get_json_result(data=ranks)
except Exception as e:
if str(e).find("not_found") > 0:
return get_json_result(data=False, retmsg=f'No chunk found! Check the chunk status please!',
retcode=RetCode.DATA_ERROR)
return get_json_result(data=False, message='No chunk found! Check the chunk status please!',
code=settings.RetCode.DATA_ERROR)
return server_error_response(e)
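
For orientation, a hedged client-side sketch of the request this endpoint now expects: kb_id may be a single id or a list, and vectors are stripped from the returned chunks. Host, port, and route prefix here are assumptions, not taken from the diff:

import requests

payload = {
    "kb_id": ["kb-id-1"],              # a bare string is also accepted
    "question": "What is RAGFlow?",
    "page": 1, "size": 30,
    "similarity_threshold": 0.0,
    "vector_similarity_weight": 0.3,
    "top_k": 1024,
    "doc_ids": [],
}
r = requests.post("http://localhost:9380/v1/chunk/retrieval_test",  # hypothetical URL
                  json=payload)
ranks = r.json()["data"]               # {"total": ..., "chunks": [...]} with no "vector" field
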
@ -309,19 +306,38 @@ def retrieval_test():
@login_required
def knowledge_graph():
doc_id = request.args["doc_id"]
tenant_id = DocumentService.get_tenant_id(doc_id)
kb_ids = KnowledgebaseService.get_kb_ids(tenant_id)
req = {
"doc_ids":[doc_id],
"doc_ids": [doc_id],
"knowledge_graph_kwd": ["graph", "mind_map"]
}
tenant_id = DocumentService.get_tenant_id(doc_id)
sres = retrievaler.search(req, search.index_name(tenant_id))
sres = settings.retrievaler.search(req, search.index_name(tenant_id), kb_ids)
obj = {"graph": {}, "mind_map": {}}
for id in sres.ids[:2]:
ty = sres.field[id]["knowledge_graph_kwd"]
try:
obj[ty] = json.loads(sres.field[id]["content_with_weight"])
except Exception as e:
print(traceback.format_exc(), flush=True)
content_json = json.loads(sres.field[id]["content_with_weight"])
except Exception:
continue
if ty == 'mind_map':
node_dict = {}
def repeat_deal(content_json, node_dict):
if 'id' in content_json:
if content_json['id'] in node_dict:
node_name = content_json['id']
content_json['id'] += f"({node_dict[content_json['id']]})"
node_dict[node_name] += 1
else:
node_dict[content_json['id']] = 1
if 'children' in content_json and content_json['children']:
for item in content_json['children']:
repeat_deal(item, node_dict)
repeat_deal(content_json, node_dict)
obj[ty] = content_json
return get_json_result(data=obj)
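
The new repeat_deal walk gives duplicate mind-map node ids a numeric suffix so the front-end tree does not collapse same-named nodes. A self-contained sketch of the same dedup on a toy tree:

def dedup_ids(node: dict, seen: dict) -> None:
    # Rename repeated ids in place: "A", "A(1)", "A(2)", ...
    if "id" in node:
        name = node["id"]
        if name in seen:
            node["id"] = f"{name}({seen[name]})"
            seen[name] += 1
        else:
            seen[name] = 1
    for child in node.get("children") or []:
        dedup_ids(child, seen)

tree = {"id": "root", "children": [{"id": "A"}, {"id": "A"}, {"id": "A"}]}
dedup_ids(tree, {})
# children ids -> "A", "A(1)", "A(2)"
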


@ -25,8 +25,7 @@ from api.db import LLMType
from api.db.services.dialog_service import DialogService, ConversationService, chat, ask
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle, TenantService, TenantLLMService
from api.settings import RetCode, retrievaler
from api.utils import get_uuid
from api import settings
from api.utils.api_utils import get_json_result
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from graphrag.mind_map_extractor import MindMapExtractor
@ -43,11 +42,11 @@ def set_conversation():
del req["conversation_id"]
try:
if not ConversationService.update_by_id(conv_id, req):
return get_data_error_result(retmsg="Conversation not found!")
return get_data_error_result(message="Conversation not found!")
e, conv = ConversationService.get_by_id(conv_id)
if not e:
return get_data_error_result(
retmsg="Fail to update a conversation!")
message="Fail to update a conversation!")
conv = conv.to_dict()
return get_json_result(data=conv)
except Exception as e:
@ -56,7 +55,7 @@ def set_conversation():
try:
e, dia = DialogService.get_by_id(req["dialog_id"])
if not e:
return get_data_error_result(retmsg="Dialog not found")
return get_data_error_result(message="Dialog not found")
conv = {
"id": conv_id,
"dialog_id": req["dialog_id"],
@ -66,7 +65,7 @@ def set_conversation():
ConversationService.save(**conv)
e, conv = ConversationService.get_by_id(conv["id"])
if not e:
return get_data_error_result(retmsg="Fail to new a conversation!")
return get_data_error_result(message="Fail to new a conversation!")
conv = conv.to_dict()
return get_json_result(data=conv)
except Exception as e:
@ -80,15 +79,15 @@ def get():
try:
e, conv = ConversationService.get_by_id(conv_id)
if not e:
return get_data_error_result(retmsg="Conversation not found!")
return get_data_error_result(message="Conversation not found!")
tenants = UserTenantService.query(user_id=current_user.id)
for tenant in tenants:
if DialogService.query(tenant_id=tenant.tenant_id, id=conv.dialog_id):
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of conversation authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of conversation authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
conv = conv.to_dict()
return get_json_result(data=conv)
except Exception as e:
@ -103,15 +102,15 @@ def rm():
for cid in conv_ids:
exist, conv = ConversationService.get_by_id(cid)
if not exist:
return get_data_error_result(retmsg="Conversation not found!")
return get_data_error_result(message="Conversation not found!")
tenants = UserTenantService.query(user_id=current_user.id)
for tenant in tenants:
if DialogService.query(tenant_id=tenant.tenant_id, id=conv.dialog_id):
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of conversation authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of conversation authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
ConversationService.delete_by_id(cid)
return get_json_result(data=True)
except Exception as e:
@ -125,8 +124,8 @@ def list_convsersation():
try:
if not DialogService.query(tenant_id=current_user.id, id=dialog_id):
return get_json_result(
data=False, retmsg=f'Only owner of dialog authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of dialog authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
convs = ConversationService.query(
dialog_id=dialog_id,
order_by=ConversationService.model.create_time,
@ -142,9 +141,6 @@ def list_convsersation():
@validate_request("conversation_id", "messages")
def completion():
req = request.json
# req = {"conversation_id": "9aaaca4c11d311efa461fa163e197198", "messages": [
# {"role": "user", "content": "上海有吗?"}
# ]}
msg = []
for m in req["messages"]:
if m["role"] == "system":
@ -156,11 +152,11 @@ def completion():
try:
e, conv = ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(retmsg="Conversation not found!")
return get_data_error_result(message="Conversation not found!")
conv.message = deepcopy(req["messages"])
e, dia = DialogService.get_by_id(conv.dialog_id)
if not e:
return get_data_error_result(retmsg="Dialog not found!")
return get_data_error_result(message="Dialog not found!")
del req["conversation_id"]
del req["messages"]
@ -184,13 +180,14 @@ def completion():
try:
for ans in chat(dia, msg, True, **req):
fillin_conv(ans)
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
ConversationService.update_by_id(conv.id, conv.to_dict())
except Exception as e:
yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
traceback.print_exc()
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
if req.get("stream", True):
resp = Response(stream(), mimetype="text/event-stream")
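
Every streamed event is now a data: line whose JSON body carries code/message/data instead of the old retcode/retmsg, and the stream ends with a data value of true. A minimal consumer sketch under that framing (the endpoint URL and request body are assumptions):

import json
import requests

body = {"conversation_id": "<conv-id>",          # hypothetical id
        "messages": [{"role": "user", "content": "hi"}]}
with requests.post("http://localhost:9380/v1/conversation/completion",
                   json=body, stream=True) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue
        event = json.loads(line[len("data:"):])
        if event["data"] is True:                # end-of-stream sentinel
            break
        if event["code"] != 0:
            raise RuntimeError(event["message"])
        print(event["data"]["answer"])
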
@ -218,13 +215,13 @@ def tts():
req = request.json
text = req["text"]
tenants = TenantService.get_by_user_id(current_user.id)
tenants = TenantService.get_info_by(current_user.id)
if not tenants:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
tts_id = tenants[0]["tts_id"]
if not tts_id:
return get_data_error_result(retmsg="No default TTS model is set")
return get_data_error_result(message="No default TTS model is set")
tts_mdl = LLMBundle(tenants[0]["tenant_id"], LLMType.TTS, tts_id)
@ -234,7 +231,7 @@ def tts():
for chunk in tts_mdl.tts(txt):
yield chunk
except Exception as e:
yield ("data:" + json.dumps({"retcode": 500, "retmsg": str(e),
yield ("data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e)}},
ensure_ascii=False)).encode('utf-8')
@ -253,7 +250,7 @@ def delete_msg():
req = request.json
e, conv = ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(retmsg="Conversation not found!")
return get_data_error_result(message="Conversation not found!")
conv = conv.to_dict()
for i, msg in enumerate(conv["message"]):
@ -276,7 +273,7 @@ def thumbup():
req = request.json
e, conv = ConversationService.get_by_id(req["conversation_id"])
if not e:
return get_data_error_result(retmsg="Conversation not found!")
return get_data_error_result(message="Conversation not found!")
up_down = req.get("set")
feedback = req.get("feedback", "")
conv = conv.to_dict()
@ -300,16 +297,17 @@ def thumbup():
def ask_about():
req = request.json
uid = current_user.id
def stream():
nonlocal req, uid
try:
for ans in ask(req["question"], req["kb_ids"], uid):
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
except Exception as e:
yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
resp = Response(stream(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
@ -327,13 +325,13 @@ def mindmap():
kb_ids = req["kb_ids"]
e, kb = KnowledgebaseService.get_by_id(kb_ids[0])
if not e:
return get_data_error_result(retmsg="Knowledgebase not found!")
return get_data_error_result(message="Knowledgebase not found!")
embd_mdl = TenantLLMService.model_instance(
kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
chat_mdl = LLMBundle(current_user.id, LLMType.CHAT)
ranks = retrievaler.retrieval(req["question"], embd_mdl, kb.tenant_id, kb_ids, 1, 12,
0.3, 0.3, aggs=False)
ranks = settings.retrievaler.retrieval(req["question"], embd_mdl, kb.tenant_id, kb_ids, 1, 12,
0.3, 0.3, aggs=False)
mindmap = MindMapExtractor(chat_mdl)
mind_map = mindmap([c["content_with_weight"] for c in ranks["chunks"]]).output
if "error" in mind_map:


@ -1,880 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pathlib
import re
import warnings
from functools import partial
from io import BytesIO
from elasticsearch_dsl import Q
from flask import request, send_file
from flask_login import login_required, current_user
from httpx import HTTPError
from api.contants import NAME_LENGTH_LIMIT
from api.db import FileType, ParserType, FileSource, TaskStatus
from api.db import StatusEnum
from api.db.db_models import File
from api.db.services import duplicate_name
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import construct_json_result, construct_error_response
from api.utils.api_utils import construct_result, validate_request
from api.utils.file_utils import filename_type, thumbnail
from rag.app import book, laws, manual, naive, one, paper, presentation, qa, resume, table, picture, audio, email
from rag.nlp import search
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.storage_factory import STORAGE_IMPL
MAXIMUM_OF_UPLOADING_FILES = 256
# ------------------------------ create a dataset ---------------------------------------
@manager.route("/", methods=["POST"])
@login_required # use login
@validate_request("name") # check name key
def create_dataset():
# Check if Authorization header is present
authorization_token = request.headers.get("Authorization")
if not authorization_token:
return construct_json_result(code=RetCode.AUTHENTICATION_ERROR, message="Authorization header is missing.")
# TODO: Login or API key
# objs = APIToken.query(token=authorization_token)
#
# # Authorization error
# if not objs:
# return construct_json_result(code=RetCode.AUTHENTICATION_ERROR, message="Token is invalid.")
#
# tenant_id = objs[0].tenant_id
tenant_id = current_user.id
request_body = request.json
# In case that there's no name
if "name" not in request_body:
return construct_json_result(code=RetCode.DATA_ERROR, message="Expected 'name' field in request body")
dataset_name = request_body["name"]
# empty dataset_name
if not dataset_name:
return construct_json_result(code=RetCode.DATA_ERROR, message="Empty dataset name")
# In case that there's space in the head or the tail
dataset_name = dataset_name.strip()
# In case that the length of the name exceeds the limit
dataset_name_length = len(dataset_name)
if dataset_name_length > NAME_LENGTH_LIMIT:
return construct_json_result(
code=RetCode.DATA_ERROR,
message=f"Dataset name: {dataset_name} with length {dataset_name_length} exceeds {NAME_LENGTH_LIMIT}!")
# In case that there are other fields in the data-binary
if len(request_body.keys()) > 1:
name_list = []
for key_name in request_body.keys():
if key_name != "name":
name_list.append(key_name)
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"fields: {name_list}, are not allowed in request body.")
# If there is a duplicate name, it will modify it to make it unique
request_body["name"] = duplicate_name(
KnowledgebaseService.query,
name=dataset_name,
tenant_id=tenant_id,
status=StatusEnum.VALID.value)
try:
request_body["id"] = get_uuid()
request_body["tenant_id"] = tenant_id
request_body["created_by"] = tenant_id
exist, t = TenantService.get_by_id(tenant_id)
if not exist:
return construct_result(code=RetCode.AUTHENTICATION_ERROR, message="Tenant not found.")
request_body["embd_id"] = t.embd_id
if not KnowledgebaseService.save(**request_body):
# failed to create new dataset
return construct_result()
return construct_json_result(code=RetCode.SUCCESS,
data={"dataset_name": request_body["name"], "dataset_id": request_body["id"]})
except Exception as e:
return construct_error_response(e)
# -----------------------------list datasets-------------------------------------------------------
@manager.route("/", methods=["GET"])
@login_required
def list_datasets():
offset = request.args.get("offset", 0)
count = request.args.get("count", -1)
orderby = request.args.get("orderby", "create_time")
desc = request.args.get("desc", True)
try:
tenants = TenantService.get_joined_tenants_by_user_id(current_user.id)
datasets = KnowledgebaseService.get_by_tenant_ids_by_offset(
[m["tenant_id"] for m in tenants], current_user.id, int(offset), int(count), orderby, desc)
return construct_json_result(data=datasets, code=RetCode.SUCCESS, message=f"List datasets successfully!")
except Exception as e:
return construct_error_response(e)
except HTTPError as http_err:
return construct_json_result(http_err)
# ---------------------------------delete a dataset ----------------------------
@manager.route("/<dataset_id>", methods=["DELETE"])
@login_required
def remove_dataset(dataset_id):
try:
datasets = KnowledgebaseService.query(created_by=current_user.id, id=dataset_id)
# according to the id, searching for the dataset
if not datasets:
return construct_json_result(message=f"The dataset cannot be found for your current account.",
code=RetCode.OPERATING_ERROR)
# Iterating the documents inside the dataset
for doc in DocumentService.query(kb_id=dataset_id):
if not DocumentService.remove_document(doc, datasets[0].tenant_id):
# the process of deleting failed
return construct_json_result(code=RetCode.DATA_ERROR,
message="There was an error during the document removal process. "
"Please check the status of the RAGFlow server and try the removal again.")
# delete the other files
f2d = File2DocumentService.get_by_document_id(doc.id)
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
File2DocumentService.delete_by_document_id(doc.id)
# delete the dataset
if not KnowledgebaseService.delete_by_id(dataset_id):
return construct_json_result(code=RetCode.DATA_ERROR,
message="There was an error during the dataset removal process. "
"Please check the status of the RAGFlow server and try the removal again.")
# success
return construct_json_result(code=RetCode.SUCCESS, message=f"Remove dataset: {dataset_id} successfully")
except Exception as e:
return construct_error_response(e)
# ------------------------------ get details of a dataset ----------------------------------------
@manager.route("/<dataset_id>", methods=["GET"])
@login_required
def get_dataset(dataset_id):
try:
dataset = KnowledgebaseService.get_detail(dataset_id)
if not dataset:
return construct_json_result(code=RetCode.DATA_ERROR, message="Can't find this dataset!")
return construct_json_result(data=dataset, code=RetCode.SUCCESS)
except Exception as e:
return construct_json_result(e)
# ------------------------------ update a dataset --------------------------------------------
@manager.route("/<dataset_id>", methods=["PUT"])
@login_required
def update_dataset(dataset_id):
req = request.json
try:
# the request cannot be empty
if not req:
return construct_json_result(code=RetCode.DATA_ERROR, message="Please input at least one parameter that "
"you want to update!")
# check whether the dataset can be found
if not KnowledgebaseService.query(created_by=current_user.id, id=dataset_id):
return construct_json_result(message=f"Only the owner of knowledgebase is authorized for this operation!",
code=RetCode.OPERATING_ERROR)
exist, dataset = KnowledgebaseService.get_by_id(dataset_id)
# check whether there is this dataset
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR, message="This dataset cannot be found!")
if "name" in req:
name = req["name"].strip()
# check whether there is duplicate name
if name.lower() != dataset.name.lower() \
and len(KnowledgebaseService.query(name=name, tenant_id=current_user.id,
status=StatusEnum.VALID.value)) > 1:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"The name: {name.lower()} is already used by other "
f"datasets. Please choose a different name.")
dataset_updating_data = {}
chunk_num = req.get("chunk_num")
# modify the value of 11 parameters
# 2 parameters: embedding id and chunk method
# only if chunk_num is 0, the user can update the embedding id
if req.get("embedding_model_id"):
if chunk_num == 0:
dataset_updating_data["embd_id"] = req["embedding_model_id"]
else:
return construct_json_result(code=RetCode.DATA_ERROR,
message="You have already parsed the document in this "
"dataset, so you cannot change the embedding "
"model.")
# only if chunk_num is 0, the user can update the chunk_method
if "chunk_method" in req:
type_value = req["chunk_method"]
if is_illegal_value_for_enum(type_value, ParserType):
return construct_json_result(message=f"Illegal value {type_value} for 'chunk_method' field.",
code=RetCode.DATA_ERROR)
if chunk_num != 0:
return construct_json_result(code=RetCode.DATA_ERROR, message="You have already parsed the document "
"in this dataset, so you cannot "
"change the chunk method.")
dataset_updating_data["parser_id"] = req["template_type"]
# convert the photo parameter to avatar
if req.get("photo"):
dataset_updating_data["avatar"] = req["photo"]
# layout_recognize
if "layout_recognize" in req:
if "parser_config" not in dataset_updating_data:
dataset_updating_data['parser_config'] = {}
dataset_updating_data['parser_config']['layout_recognize'] = req['layout_recognize']
# TODO: updating use_raptor needs to construct a class
# 6 parameters
for key in ["name", "language", "description", "permission", "id", "token_num"]:
if key in req:
dataset_updating_data[key] = req.get(key)
# update
if not KnowledgebaseService.update_by_id(dataset.id, dataset_updating_data):
return construct_json_result(code=RetCode.OPERATING_ERROR, message="Failed to update! "
"Please check the status of RAGFlow "
"server and try again!")
exist, dataset = KnowledgebaseService.get_by_id(dataset.id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR, message="Failed to get the dataset "
"using the dataset ID.")
return construct_json_result(data=dataset.to_json(), code=RetCode.SUCCESS)
except Exception as e:
return construct_error_response(e)
# --------------------------------content management ----------------------------------------------
# ----------------------------upload files-----------------------------------------------------
@manager.route("/<dataset_id>/documents/", methods=["POST"])
@login_required
def upload_documents(dataset_id):
# no files
if not request.files:
return construct_json_result(
message="There is no file!", code=RetCode.ARGUMENT_ERROR)
# the number of uploading files exceeds the limit
file_objs = request.files.getlist("file")
num_file_objs = len(file_objs)
if num_file_objs > MAXIMUM_OF_UPLOADING_FILES:
return construct_json_result(code=RetCode.DATA_ERROR, message=f"You try to upload {num_file_objs} files, "
f"which exceeds the maximum number of uploading files: {MAXIMUM_OF_UPLOADING_FILES}")
# no dataset
exist, dataset = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(message="Can't find this dataset", code=RetCode.DATA_ERROR)
for file_obj in file_objs:
file_name = file_obj.filename
# no name
if not file_name:
return construct_json_result(
message="There is a file without name!", code=RetCode.ARGUMENT_ERROR)
# TODO: support the remote files
if 'http' in file_name:
return construct_json_result(code=RetCode.ARGUMENT_ERROR, message="Remote files are not supported yet.")
# get the root_folder
root_folder = FileService.get_root_folder(current_user.id)
# get the id of the root_folder
parent_file_id = root_folder["id"] # document id
# this is for the new user, create '.knowledgebase' file
FileService.init_knowledgebase_docs(parent_file_id, current_user.id)
# go inside this folder, get the kb_root_folder
kb_root_folder = FileService.get_kb_folder(current_user.id)
# link the file management to the kb_folder
kb_folder = FileService.new_a_file_from_kb(dataset.tenant_id, dataset.name, kb_root_folder["id"])
# grab all the errs
err = []
MAX_FILE_NUM_PER_USER = int(os.environ.get("MAX_FILE_NUM_PER_USER", 0))
uploaded_docs_json = []
for file in file_objs:
try:
# TODO: get this value from the database as some tenants have this limit while others don't
if MAX_FILE_NUM_PER_USER > 0 and DocumentService.get_doc_count(dataset.tenant_id) >= MAX_FILE_NUM_PER_USER:
return construct_json_result(code=RetCode.DATA_ERROR,
message="Exceed the maximum file number of a free user!")
# deal with the duplicate name
filename = duplicate_name(
DocumentService.query,
name=file.filename,
kb_id=dataset.id)
# deal with the unsupported type
filetype = filename_type(filename)
if filetype == FileType.OTHER.value:
return construct_json_result(code=RetCode.DATA_ERROR,
message="This type of file has not been supported yet!")
# upload to the minio
location = filename
while STORAGE_IMPL.obj_exist(dataset_id, location):
location += "_"
blob = file.read()
# the content is empty, raising a warning
if blob == b'':
warnings.warn(f"[WARNING]: The content of the file {filename} is empty.")
STORAGE_IMPL.put(dataset_id, location, blob)
doc = {
"id": get_uuid(),
"kb_id": dataset.id,
"parser_id": dataset.parser_id,
"parser_config": dataset.parser_config,
"created_by": current_user.id,
"type": filetype,
"name": filename,
"location": location,
"size": len(blob),
"thumbnail": thumbnail(filename, blob)
}
if doc["type"] == FileType.VISUAL:
doc["parser_id"] = ParserType.PICTURE.value
if doc["type"] == FileType.AURAL:
doc["parser_id"] = ParserType.AUDIO.value
if re.search(r"\.(ppt|pptx|pages)$", filename):
doc["parser_id"] = ParserType.PRESENTATION.value
if re.search(r"\.(eml)$", filename):
doc["parser_id"] = ParserType.EMAIL.value
DocumentService.insert(doc)
FileService.add_file_from_kb(doc, kb_folder["id"], dataset.tenant_id)
uploaded_docs_json.append(doc)
except Exception as e:
err.append(file.filename + ": " + str(e))
if err:
# return all the errors
return construct_json_result(message="\n".join(err), code=RetCode.SERVER_ERROR)
# success
return construct_json_result(data=uploaded_docs_json, code=RetCode.SUCCESS)
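
Two collision strategies sit in this upload path: document names go through duplicate_name, and storage keys get an underscore appended until the bucket key is free. A sketch of the key probe, with exists standing in for STORAGE_IMPL.obj_exist:

def free_location(exists, bucket: str, name: str) -> str:
    # Mirrors the loop above: append "_" until the key is unused in the bucket.
    location = name
    while exists(bucket, location):
        location += "_"
    return location
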
# ----------------------------delete a file-----------------------------------------------------
@manager.route("/<dataset_id>/documents/<document_id>", methods=["DELETE"])
@login_required
def delete_document(document_id, dataset_id): # string
# get the root folder
root_folder = FileService.get_root_folder(current_user.id)
# parent file's id
parent_file_id = root_folder["id"]
# consider the new user
FileService.init_knowledgebase_docs(parent_file_id, current_user.id)
# store all the errors that may have
errors = ""
try:
# whether there is this document
exist, doc = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(message=f"Document {document_id} not found!", code=RetCode.DATA_ERROR)
# whether this doc is authorized by this tenant
tenant_id = DocumentService.get_tenant_id(document_id)
if not tenant_id:
return construct_json_result(
message=f"You cannot delete this document {document_id} due to the authorization"
f" reason!", code=RetCode.AUTHENTICATION_ERROR)
# get the doc's id and location
real_dataset_id, location = File2DocumentService.get_storage_address(doc_id=document_id)
if real_dataset_id != dataset_id:
return construct_json_result(message=f"The document {document_id} is not in the dataset: {dataset_id}, "
f"but in the dataset: {real_dataset_id}.", code=RetCode.ARGUMENT_ERROR)
# there is an issue when removing
if not DocumentService.remove_document(doc, tenant_id):
return construct_json_result(
message="There was an error during the document removal process. Please check the status of the "
"RAGFlow server and try the removal again.", code=RetCode.OPERATING_ERROR)
# fetch the File2Document record associated with the provided document ID.
file_to_doc = File2DocumentService.get_by_document_id(document_id)
# delete the associated File record.
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == file_to_doc[0].file_id])
# delete the File2Document record itself using the document ID. This removes the
# association between the document and the file after the File record has been deleted.
File2DocumentService.delete_by_document_id(document_id)
# delete it from minio
STORAGE_IMPL.rm(dataset_id, location)
except Exception as e:
errors += str(e)
if errors:
return construct_json_result(data=False, message=errors, code=RetCode.SERVER_ERROR)
return construct_json_result(data=True, code=RetCode.SUCCESS)
# ----------------------------list files-----------------------------------------------------
@manager.route('/<dataset_id>/documents/', methods=['GET'])
@login_required
def list_documents(dataset_id):
if not dataset_id:
return construct_json_result(
data=False, message="Lack of 'dataset_id'", code=RetCode.ARGUMENT_ERROR)
# searching keywords
keywords = request.args.get("keywords", "")
offset = request.args.get("offset", 0)
count = request.args.get("count", -1)
order_by = request.args.get("order_by", "create_time")
descend = request.args.get("descend", True)
try:
docs, total = DocumentService.list_documents_in_dataset(dataset_id, int(offset), int(count), order_by,
descend, keywords)
return construct_json_result(data={"total": total, "docs": docs}, message=RetCode.SUCCESS)
except Exception as e:
return construct_error_response(e)
# ----------------------------update: enable rename-----------------------------------------------------
@manager.route("/<dataset_id>/documents/<document_id>", methods=["PUT"])
@login_required
def update_document(dataset_id, document_id):
req = request.json
try:
legal_parameters = set()
legal_parameters.add("name")
legal_parameters.add("enable")
legal_parameters.add("template_type")
for key in req.keys():
if key not in legal_parameters:
return construct_json_result(code=RetCode.ARGUMENT_ERROR, message=f"{key} is an illegal parameter.")
# The request body cannot be empty
if not req:
return construct_json_result(
code=RetCode.DATA_ERROR,
message="Please input at least one parameter that you want to update!")
# Check whether there is this dataset
exist, dataset = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR, message=f"This dataset {dataset_id} cannot be found!")
# The document does not exist
exist, document = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(message=f"This document {document_id} cannot be found!",
code=RetCode.ARGUMENT_ERROR)
# Deal with the different keys
updating_data = {}
if "name" in req:
new_name = req["name"]
updating_data["name"] = new_name
# Check whether the new_name is suitable
# 1. no name value
if not new_name:
return construct_json_result(code=RetCode.DATA_ERROR, message="There is no new name.")
# 2. In case that there's space in the head or the tail
new_name = new_name.strip()
# 3. Check whether the new_name has the same extension of file as before
if pathlib.Path(new_name.lower()).suffix != pathlib.Path(
document.name.lower()).suffix:
return construct_json_result(
data=False,
message="The extension of file cannot be changed",
code=RetCode.ARGUMENT_ERROR)
# 4. Check whether the new name has already been occupied by other file
for d in DocumentService.query(name=new_name, kb_id=document.kb_id):
if d.name == new_name:
return construct_json_result(
message="Duplicated document name in the same dataset.",
code=RetCode.ARGUMENT_ERROR)
if "enable" in req:
enable_value = req["enable"]
if is_illegal_value_for_enum(enable_value, StatusEnum):
return construct_json_result(message=f"Illegal value {enable_value} for 'enable' field.",
code=RetCode.DATA_ERROR)
updating_data["status"] = enable_value
# TODO: Chunk-method - update parameters inside the json object parser_config
if "template_type" in req:
type_value = req["template_type"]
if is_illegal_value_for_enum(type_value, ParserType):
return construct_json_result(message=f"Illegal value {type_value} for 'template_type' field.",
code=RetCode.DATA_ERROR)
updating_data["parser_id"] = req["template_type"]
# The process of updating
if not DocumentService.update_by_id(document_id, updating_data):
return construct_json_result(
code=RetCode.OPERATING_ERROR,
message="Failed to update document in the database! "
"Please check the status of RAGFlow server and try again!")
# name part: file service
if "name" in req:
# Get file by document id
file_information = File2DocumentService.get_by_document_id(document_id)
if file_information:
exist, file = FileService.get_by_id(file_information[0].file_id)
FileService.update_by_id(file.id, {"name": req["name"]})
exist, document = DocumentService.get_by_id(document_id)
# Success
return construct_json_result(data=document.to_json(), message="Success", code=RetCode.SUCCESS)
except Exception as e:
return construct_error_response(e)
# Helper method to judge whether it's an illegal value
def is_illegal_value_for_enum(value, enum_class):
return value not in enum_class.__members__.values()
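
A note on this helper: enum_class.__members__.values() yields enum members, not their raw .value attributes, so the membership test only accepts raw strings when the enum mixes in str (member == raw value); with a plain Enum, every raw string would be flagged illegal. A sketch under the str-mixin assumption:

from enum import Enum

class Status(str, Enum):     # hypothetical stand-in for StatusEnum
    VALID = "1"
    INVALID = "0"

def is_illegal_value_for_enum(value, enum_class):
    return value not in enum_class.__members__.values()

assert not is_illegal_value_for_enum("1", Status)   # "1" == Status.VALID via the str mixin
assert is_illegal_value_for_enum("2", Status)
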
# ----------------------------download a file-----------------------------------------------------
@manager.route("/<dataset_id>/documents/<document_id>", methods=["GET"])
@login_required
def download_document(dataset_id, document_id):
try:
# Check whether there is this dataset
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset '{dataset_id}' cannot be found!")
# Check whether there is this document
exist, document = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(message=f"This document '{document_id}' cannot be found!",
code=RetCode.ARGUMENT_ERROR)
# The process of downloading
doc_id, doc_location = File2DocumentService.get_storage_address(doc_id=document_id) # minio address
file_stream = STORAGE_IMPL.get(doc_id, doc_location)
if not file_stream:
return construct_json_result(message="This file is empty.", code=RetCode.DATA_ERROR)
file = BytesIO(file_stream)
# Use send_file with a proper filename and MIME type
return send_file(
file,
as_attachment=True,
download_name=document.name,
mimetype='application/octet-stream' # Set a default MIME type
)
# Error
except Exception as e:
return construct_error_response(e)
# ----------------------------start parsing a document-----------------------------------------------------
# helper method for parsing
# callback method
def doc_parse_callback(doc_id, prog=None, msg=""):
cancel = DocumentService.do_cancel(doc_id)
if cancel:
raise Exception("The parsing process has been cancelled!")
"""
def doc_parse(binary, doc_name, parser_name, tenant_id, doc_id):
match parser_name:
case "book":
book.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "laws":
laws.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "manual":
manual.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "naive":
# It's the mode by default, which is general in the front-end
naive.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "one":
one.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "paper":
paper.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "picture":
picture.chunk(doc_name, binary=binary, tenant_id=tenant_id, lang="Chinese",
callback=partial(doc_parse_callback, doc_id))
case "presentation":
presentation.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "qa":
qa.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "resume":
resume.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "table":
table.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "audio":
audio.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case "email":
email.chunk(doc_name, binary=binary, callback=partial(doc_parse_callback, doc_id))
case _:
return False
return True
"""
@manager.route("/<dataset_id>/documents/<document_id>/status", methods=["POST"])
@login_required
def parse_document(dataset_id, document_id):
try:
# valid dataset
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset '{dataset_id}' cannot be found!")
return parsing_document_internal(document_id)
except Exception as e:
return construct_error_response(e)
# ----------------------------start parsing documents-----------------------------------------------------
@manager.route("/<dataset_id>/documents/status", methods=["POST"])
@login_required
def parse_documents(dataset_id):
doc_ids = request.json["doc_ids"]
try:
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset '{dataset_id}' cannot be found!")
# two conditions
if not doc_ids:
# documents inside the dataset
docs, total = DocumentService.list_documents_in_dataset(dataset_id, 0, -1, "create_time",
True, "")
doc_ids = [doc["id"] for doc in docs]
message = ""
# for loop
for id in doc_ids:
res = parsing_document_internal(id)
res_body = res.json
if res_body["code"] == RetCode.SUCCESS:
message += res_body["message"]
else:
return res
return construct_json_result(data=True, code=RetCode.SUCCESS, message=message)
except Exception as e:
return construct_error_response(e)
# helper method for parsing the document
def parsing_document_internal(id):
message = ""
try:
# Check whether there is this document
exist, document = DocumentService.get_by_id(id)
if not exist:
return construct_json_result(message=f"This document '{id}' cannot be found!",
code=RetCode.ARGUMENT_ERROR)
tenant_id = DocumentService.get_tenant_id(id)
if not tenant_id:
return construct_json_result(message="Tenant not found!", code=RetCode.AUTHENTICATION_ERROR)
info = {"run": "1", "progress": 0}
info["progress_msg"] = ""
info["chunk_num"] = 0
info["token_num"] = 0
DocumentService.update_by_id(id, info)
ELASTICSEARCH.deleteByQuery(Q("match", doc_id=id), idxnm=search.index_name(tenant_id))
_, doc_attributes = DocumentService.get_by_id(id)
doc_attributes = doc_attributes.to_dict()
doc_id = doc_attributes["id"]
bucket, doc_name = File2DocumentService.get_storage_address(doc_id=doc_id)
binary = STORAGE_IMPL.get(bucket, doc_name)
parser_name = doc_attributes["parser_id"]
if binary:
res = doc_parse(binary, doc_name, parser_name, tenant_id, doc_id)
if res is False:
message += f"The parser id: {parser_name} of the document {doc_id} is not supported; "
else:
message += f"Empty data in the document: {doc_name}; "
# failed in parsing
if doc_attributes["status"] == TaskStatus.FAIL.value:
message += f"Failed in parsing the document: {doc_id}; "
return construct_json_result(code=RetCode.SUCCESS, message=message)
except Exception as e:
return construct_error_response(e)
# ----------------------------stop parsing a doc-----------------------------------------------------
@manager.route("<dataset_id>/documents/<document_id>/status", methods=["DELETE"])
@login_required
def stop_parsing_document(dataset_id, document_id):
try:
# valid dataset
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset '{dataset_id}' cannot be found!")
return stop_parsing_document_internal(document_id)
except Exception as e:
return construct_error_response(e)
# ----------------------------stop parsing docs-----------------------------------------------------
@manager.route("<dataset_id>/documents/status", methods=["DELETE"])
@login_required
def stop_parsing_documents(dataset_id):
doc_ids = request.json["doc_ids"]
try:
# valid dataset?
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset '{dataset_id}' cannot be found!")
if not doc_ids:
# documents inside the dataset
docs, total = DocumentService.list_documents_in_dataset(dataset_id, 0, -1, "create_time",
True, "")
doc_ids = [doc["id"] for doc in docs]
message = ""
# for loop
for id in doc_ids:
res = stop_parsing_document_internal(id)
res_body = res.json
if res_body["code"] == RetCode.SUCCESS:
message += res_body["message"]
else:
return res
return construct_json_result(data=True, code=RetCode.SUCCESS, message=message)
except Exception as e:
return construct_error_response(e)
# Helper method
def stop_parsing_document_internal(document_id):
try:
# valid doc?
exist, doc = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(message=f"This document '{document_id}' cannot be found!",
code=RetCode.ARGUMENT_ERROR)
doc_attributes = doc.to_dict()
# only when the status is parsing, we need to stop it
if doc_attributes["status"] == TaskStatus.RUNNING.value:
tenant_id = DocumentService.get_tenant_id(document_id)
if not tenant_id:
return construct_json_result(message="Tenant not found!", code=RetCode.AUTHENTICATION_ERROR)
# update successfully?
if not DocumentService.update_by_id(document_id, {"status": "2"}): # cancel
return construct_json_result(
code=RetCode.OPERATING_ERROR,
message="There was an error during the stopping parsing the document process. "
"Please check the status of the RAGFlow server and try the update again."
)
_, doc_attributes = DocumentService.get_by_id(document_id)
doc_attributes = doc_attributes.to_dict()
# failed in stop parsing
if doc_attributes["status"] == TaskStatus.RUNNING.value:
return construct_json_result(message=f"Failed in parsing the document: {document_id}; ", code=RetCode.SUCCESS)
return construct_json_result(code=RetCode.SUCCESS, message="")
except Exception as e:
return construct_error_response(e)
# ----------------------------show the status of the file-----------------------------------------------------
@manager.route("/<dataset_id>/documents/<document_id>/status", methods=["GET"])
@login_required
def show_parsing_status(dataset_id, document_id):
try:
# valid dataset
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This dataset: '{dataset_id}' cannot be found!")
# valid document
exist, _ = DocumentService.get_by_id(document_id)
if not exist:
return construct_json_result(code=RetCode.DATA_ERROR,
message=f"This document: '{document_id}' is not a valid document.")
_, doc = DocumentService.get_by_id(document_id) # get doc object
doc_attributes = doc.to_dict()
return construct_json_result(
data={"progress": doc_attributes["progress"], "status": TaskStatus(doc_attributes["status"]).name},
code=RetCode.SUCCESS
)
except Exception as e:
return construct_error_response(e)
# ----------------------------list the chunks of the file-----------------------------------------------------
# -- --------------------------delete the chunk-----------------------------------------------------
# ----------------------------edit the status of the chunk-----------------------------------------------------
# ----------------------------insert a new chunk-----------------------------------------------------
# ----------------------------upload a file-----------------------------------------------------
# ----------------------------get a specific chunk-----------------------------------------------------
# ----------------------------retrieval test-----------------------------------------------------


@ -20,7 +20,7 @@ from api.db.services.dialog_service import DialogService
from api.db import StatusEnum
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService, UserTenantService
from api.settings import RetCode
from api import settings
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid
from api.utils.api_utils import get_json_result
@ -68,17 +68,17 @@ def set_dialog():
continue
if prompt_config["system"].find("{%s}" % p["key"]) < 0:
return get_data_error_result(
retmsg="Parameter '{}' is not used".format(p["key"]))
message="Parameter '{}' is not used".format(p["key"]))
try:
e, tenant = TenantService.get_by_id(current_user.id)
if not e:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
llm_id = req.get("llm_id", tenant.llm_id)
if not dialog_id:
if not req.get("kb_ids"):
return get_data_error_result(
retmsg="Fail! Please select knowledgebase!")
message="Fail! Please select knowledgebase!")
dia = {
"id": get_uuid(),
"tenant_id": current_user.id,
@ -96,20 +96,20 @@ def set_dialog():
"icon": icon
}
if not DialogService.save(**dia):
return get_data_error_result(retmsg="Fail to new a dialog!")
return get_data_error_result(message="Fail to new a dialog!")
e, dia = DialogService.get_by_id(dia["id"])
if not e:
return get_data_error_result(retmsg="Fail to new a dialog!")
return get_data_error_result(message="Fail to new a dialog!")
return get_json_result(data=dia.to_json())
else:
del req["dialog_id"]
if "kb_names" in req:
del req["kb_names"]
if not DialogService.update_by_id(dialog_id, req):
return get_data_error_result(retmsg="Dialog not found!")
return get_data_error_result(message="Dialog not found!")
e, dia = DialogService.get_by_id(dialog_id)
if not e:
return get_data_error_result(retmsg="Fail to update a dialog!")
return get_data_error_result(message="Fail to update a dialog!")
dia = dia.to_dict()
dia["kb_ids"], dia["kb_names"] = get_kb_names(dia["kb_ids"])
return get_json_result(data=dia)
@ -124,7 +124,7 @@ def get():
try:
e, dia = DialogService.get_by_id(dialog_id)
if not e:
return get_data_error_result(retmsg="Dialog not found!")
return get_data_error_result(message="Dialog not found!")
dia = dia.to_dict()
dia["kb_ids"], dia["kb_names"] = get_kb_names(dia["kb_ids"])
return get_json_result(data=dia)
@ -174,8 +174,8 @@ def rm():
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of dialog authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of dialog authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
dialog_list.append({"id": id,"status":StatusEnum.INVALID.value})
DialogService.update_many_by_id(dialog_list)
return get_json_result(data=True)


@ -13,44 +13,34 @@
# See the License for the specific language governing permissions and
# limitations under the License
#
import datetime
import hashlib
import json
import os
import os.path
import pathlib
import re
import traceback
from concurrent.futures import ThreadPoolExecutor
from copy import deepcopy
from io import BytesIO
import flask
from elasticsearch_dsl import Q
from flask import request
from flask_login import login_required, current_user
from api.db.db_models import Task, File
from api.db.services.dialog_service import DialogService, ConversationService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.llm_service import LLMBundle
from api.db.services.task_service import TaskService, queue_tasks
from api.db.services.user_service import TenantService, UserTenantService
from graphrag.mind_map_extractor import MindMapExtractor
from rag.app import naive
from api.db.services.user_service import UserTenantService
from deepdoc.parser.html_parser import RAGFlowHtmlParser
from rag.nlp import search
from rag.utils.es_conn import ELASTICSEARCH
from api.db.services import duplicate_name
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid
from api.db import FileType, TaskStatus, ParserType, FileSource, LLMType
from api.db import FileType, TaskStatus, ParserType, FileSource
from api.db.services.document_service import DocumentService, doc_upload_and_parse
from api.settings import RetCode, stat_logger
from api import settings
from api.utils.api_utils import get_json_result
from rag.utils.storage_factory import STORAGE_IMPL
from api.utils.file_utils import filename_type, thumbnail, get_project_base_directory
from api.utils.web_utils import html2pdf, is_valid_url
from api.constants import IMG_BASE64_PREFIX
@manager.route('/upload', methods=['POST'])
@ -60,16 +50,16 @@ def upload():
kb_id = request.form.get("kb_id")
if not kb_id:
return get_json_result(
data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
if 'file' not in request.files:
return get_json_result(
data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist('file')
for file_obj in file_objs:
if file_obj.filename == '':
return get_json_result(
data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e:
@ -78,7 +68,7 @@ def upload():
err, _ = FileService.upload_document(kb, file_objs, current_user.id)
if err:
return get_json_result(
data=False, retmsg="\n".join(err), retcode=RetCode.SERVER_ERROR)
data=False, message="\n".join(err), code=settings.RetCode.SERVER_ERROR)
return get_json_result(data=True)
@ -89,12 +79,12 @@ def web_crawl():
kb_id = request.form.get("kb_id")
if not kb_id:
return get_json_result(
data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
name = request.form.get("name")
url = request.form.get("url")
if not is_valid_url(url):
return get_json_result(
data=False, retmsg='The URL format is invalid', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='The URL format is invalid', code=settings.RetCode.ARGUMENT_ERROR)
e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e:
raise LookupError("Can't find this knowledgebase!")
@ -156,17 +146,17 @@ def create():
kb_id = req["kb_id"]
if not kb_id:
return get_json_result(
data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
try:
e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e:
return get_data_error_result(
retmsg="Can't find this knowledgebase!")
message="Can't find this knowledgebase!")
if DocumentService.query(name=req["name"], kb_id=kb_id):
return get_data_error_result(
retmsg="Duplicated document name in the same knowledgebase.")
message="Duplicated document name in the same knowledgebase.")
doc = DocumentService.insert({
"id": get_uuid(),
@ -190,7 +180,7 @@ def list_docs():
kb_id = request.args.get("kb_id")
if not kb_id:
return get_json_result(
data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='Lack of "KB ID"', code=settings.RetCode.ARGUMENT_ERROR)
tenants = UserTenantService.query(user_id=current_user.id)
for tenant in tenants:
if KnowledgebaseService.query(
@ -198,8 +188,8 @@ def list_docs():
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of knowledgebase authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
keywords = request.args.get("keywords", "")
page_number = int(request.args.get("page", 1))
@ -209,29 +199,47 @@ def list_docs():
try:
docs, tol = DocumentService.get_by_kb_id(
kb_id, page_number, items_per_page, orderby, desc, keywords)
for doc_item in docs:
if doc_item['thumbnail'] and not doc_item['thumbnail'].startswith(IMG_BASE64_PREFIX):
doc_item['thumbnail'] = f"/v1/document/image/{kb_id}-{doc_item['thumbnail']}"
return get_json_result(data={"total": tol, "docs": docs})
except Exception as e:
return server_error_response(e)
@manager.route('/infos', methods=['POST'])
@login_required
def docinfos():
req = request.json
doc_ids = req["doc_ids"]
for doc_id in doc_ids:
if not DocumentService.accessible(doc_id, current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
docs = DocumentService.get_by_ids(doc_ids)
return get_json_result(data=list(docs.dicts()))
@manager.route('/thumbnails', methods=['GET'])
#@login_required
# @login_required
def thumbnails():
doc_ids = request.args.get("doc_ids").split(",")
if not doc_ids:
return get_json_result(
data=False, retmsg='Lack of "Document ID"', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='Lack of "Document ID"', code=settings.RetCode.ARGUMENT_ERROR)
try:
docs = DocumentService.get_thumbnails(doc_ids)
for doc_item in docs:
if doc_item['thumbnail'] and not doc_item['thumbnail'].startswith(IMG_BASE64_PREFIX):
doc_item['thumbnail'] = f"/v1/document/image/{doc_item['kb_id']}-{doc_item['thumbnail']}"
return get_json_result(data={d["id"]: d["thumbnail"] for d in docs})
except Exception as e:
return server_error_response(e)
@ -243,37 +251,34 @@ def thumbnails():
def change_status():
req = request.json
if str(req["status"]) not in ["0", "1"]:
get_json_result(
return get_json_result(
data=False,
retmsg='"Status" must be either 0 or 1!',
retcode=RetCode.ARGUMENT_ERROR)
message='"Status" must be either 0 or 1!',
code=settings.RetCode.ARGUMENT_ERROR)
if not DocumentService.accessible(req["doc_id"], current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR)
try:
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
e, kb = KnowledgebaseService.get_by_id(doc.kb_id)
if not e:
return get_data_error_result(
retmsg="Can't find this knowledgebase!")
message="Can't find this knowledgebase!")
if not DocumentService.update_by_id(
req["doc_id"], {"status": str(req["status"])}):
return get_data_error_result(
retmsg="Database error (Document update)!")
message="Database error (Document update)!")
if str(req["status"]) == "0":
ELASTICSEARCH.updateScriptByQuery(Q("term", doc_id=req["doc_id"]),
scripts="ctx._source.available_int=0;",
idxnm=search.index_name(
kb.tenant_id)
)
else:
ELASTICSEARCH.updateScriptByQuery(Q("term", doc_id=req["doc_id"]),
scripts="ctx._source.available_int=1;",
idxnm=search.index_name(
kb.tenant_id)
)
status = int(req["status"])
settings.docStoreConn.update({"doc_id": req["doc_id"]}, {"available_int": status},
search.index_name(kb.tenant_id), doc.kb_id)
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
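The `change_status` hunk above is representative of the storage migration running through this compare: direct `ELASTICSEARCH.updateScriptByQuery(...)` calls give way to the engine-agnostic `settings.docStoreConn`, so the same route works whether the document store is Elasticsearch or Infinity. Judging only from the call sites visible in this diff, the connection object exposes roughly the interface below; the actual signatures in `rag/utils` may differ:

```python
from typing import Protocol


class DocStoreConnection(Protocol):
    """Interface inferred from the call sites in this diff; an assumption,
    not copied from the repository."""

    def update(self, condition: dict, new_value: dict,
               index_name: str, kb_id: str) -> bool:
        """Apply new_value to every chunk matching condition, e.g.
        update({"doc_id": doc_id}, {"available_int": 0}, idx, kb_id)."""
        ...

    def delete(self, condition: dict, index_name: str, kb_id: str) -> int:
        """Remove every chunk matching condition."""
        ...

    def indexExist(self, index_name: str, kb_id: str) -> bool:
        """True once the per-tenant index has been created."""
        ...
```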
@ -286,6 +291,15 @@ def rm():
req = request.json
doc_ids = req["doc_id"]
if isinstance(doc_ids, str): doc_ids = [doc_ids]
for doc_id in doc_ids:
if not DocumentService.accessible4deletion(doc_id, current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
root_folder = FileService.get_root_folder(current_user.id)
pf_id = root_folder["id"]
FileService.init_knowledgebase_docs(pf_id, current_user.id)
@ -294,16 +308,16 @@ def rm():
try:
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
tenant_id = DocumentService.get_tenant_id(doc_id)
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
retmsg="Database error (Document removal)!")
message="Database error (Document removal)!")
f2d = File2DocumentService.get_by_document_id(doc_id)
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
@ -314,7 +328,7 @@ def rm():
errors += str(e)
if errors:
return get_json_result(data=False, retmsg=errors, retcode=RetCode.SERVER_ERROR)
return get_json_result(data=False, message=errors, code=settings.RetCode.SERVER_ERROR)
return get_json_result(data=True)
@ -324,6 +338,13 @@ def rm():
@validate_request("doc_ids", "run")
def run():
req = request.json
for doc_id in req["doc_ids"]:
if not DocumentService.accessible(doc_id, current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
try:
for id in req["doc_ids"]:
info = {"run": str(req["run"]), "progress": 0}
@ -335,9 +356,12 @@ def run():
# if str(req["run"]) == TaskStatus.CANCEL.value:
tenant_id = DocumentService.get_tenant_id(id)
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
ELASTICSEARCH.deleteByQuery(
Q("match", doc_id=id), idxnm=search.index_name(tenant_id))
return get_data_error_result(message="Tenant not found!")
e, doc = DocumentService.get_by_id(id)
if not e:
return get_data_error_result(message="Document not found!")
if settings.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id):
settings.docStoreConn.delete({"doc_id": id}, search.index_name(tenant_id), doc.kb_id)
if str(req["run"]) == TaskStatus.RUNNING.value:
TaskService.filter_delete([Task.doc_id == id])
@ -357,25 +381,31 @@ def run():
@validate_request("doc_id", "name")
def rename():
req = request.json
if not DocumentService.accessible(req["doc_id"], current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
try:
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
if pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
doc.name.lower()).suffix:
return get_json_result(
data=False,
retmsg="The extension of file can't be changed",
retcode=RetCode.ARGUMENT_ERROR)
message="The extension of file can't be changed",
code=settings.RetCode.ARGUMENT_ERROR)
for d in DocumentService.query(name=req["name"], kb_id=doc.kb_id):
if d.name == req["name"]:
return get_data_error_result(
retmsg="Duplicated document name in the same knowledgebase.")
message="Duplicated document name in the same knowledgebase.")
if not DocumentService.update_by_id(
req["doc_id"], {"name": req["name"]}):
return get_data_error_result(
retmsg="Database error (Document rename)!")
message="Database error (Document rename)!")
informs = File2DocumentService.get_by_document_id(req["doc_id"])
if informs:
@ -393,7 +423,7 @@ def get(doc_id):
try:
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
b, n = File2DocumentService.get_storage_address(doc_id=doc_id)
response = flask.make_response(STORAGE_IMPL.get(b, n))
@ -417,10 +447,17 @@ def get(doc_id):
@validate_request("doc_id", "parser_id")
def change_parser():
req = request.json
if not DocumentService.accessible(req["doc_id"], current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
try:
e, doc = DocumentService.get_by_id(req["doc_id"])
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
if doc.parser_id.lower() == req["parser_id"].lower():
if "parser_config" in req:
if req["parser_config"] == doc.parser_config:
@ -428,27 +465,28 @@ def change_parser():
else:
return get_json_result(data=True)
if doc.type == FileType.VISUAL or re.search(
r"\.(ppt|pptx|pages)$", doc.name):
return get_data_error_result(retmsg="Not supported yet!")
if ((doc.type == FileType.VISUAL and req["parser_id"] != "picture")
or (re.search(
r"\.(ppt|pptx|pages)$", doc.name) and req["parser_id"] != "presentation")):
return get_data_error_result(message="Not supported yet!")
e = DocumentService.update_by_id(doc.id,
{"parser_id": req["parser_id"], "progress": 0, "progress_msg": "",
"run": TaskStatus.UNSTART.value})
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
if "parser_config" in req:
DocumentService.update_parser_config(doc.id, req["parser_config"])
if doc.token_num > 0:
e = DocumentService.increment_chunk_num(doc.id, doc.kb_id, doc.token_num * -1, doc.chunk_num * -1,
doc.process_duation * -1)
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
tenant_id = DocumentService.get_tenant_id(req["doc_id"])
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
ELASTICSEARCH.deleteByQuery(
Q("match", doc_id=doc.id), idxnm=search.index_name(tenant_id))
return get_data_error_result(message="Tenant not found!")
if settings.docStoreConn.indexExist(search.index_name(tenant_id), doc.kb_id):
settings.docStoreConn.delete({"doc_id": doc.id}, search.index_name(tenant_id), doc.kb_id)
return get_json_result(data=True)
except Exception as e:
@ -473,14 +511,74 @@ def get_image(image_id):
def upload_and_parse():
if 'file' not in request.files:
return get_json_result(
data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist('file')
for file_obj in file_objs:
if file_obj.filename == '':
return get_json_result(
data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
doc_ids = doc_upload_and_parse(request.form.get("conversation_id"), file_objs, current_user.id)
return get_json_result(data=doc_ids)
@manager.route('/parse', methods=['POST'])
@login_required
def parse():
url = request.json.get("url") if request.json else ""
if url:
if not is_valid_url(url):
return get_json_result(
data=False, message='The URL format is invalid', code=settings.RetCode.ARGUMENT_ERROR)
download_path = os.path.join(get_project_base_directory(), "logs/downloads")
os.makedirs(download_path, exist_ok=True)
from seleniumwire.webdriver import Chrome, ChromeOptions
options = ChromeOptions()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
options.add_experimental_option('prefs', {
'download.default_directory': download_path,
'download.prompt_for_download': False,
'download.directory_upgrade': True,
'safebrowsing.enabled': True
})
driver = Chrome(options=options)
driver.get(url)
res_headers = [r.response.headers for r in driver.requests]
if len(res_headers) > 1:
sections = RAGFlowHtmlParser().parser_txt(driver.page_source)
driver.quit()
return get_json_result(data="\n".join(sections))
class File:
filename: str
filepath: str
def __init__(self, filename, filepath):
self.filename = filename
self.filepath = filepath
def read(self):
with open(self.filepath, "rb") as f:
return f.read()
r = re.search(r"filename=\"([^\"]+)\"", str(res_headers))
if not r or not r.group(1):
return get_json_result(
data=False, message="Can't not identify downloaded file", code=settings.RetCode.ARGUMENT_ERROR)
f = File(r.group(1), os.path.join(download_path, r.group(1)))
txt = FileService.parse_docs([f], current_user.id)
return get_json_result(data=txt)
if 'file' not in request.files:
return get_json_result(
data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist('file')
txt = FileService.parse_docs(file_objs, current_user.id)
return get_json_result(data=txt)
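The new `/parse` route accepts either uploaded files or a `url` field; for a URL it drives a headless seleniumwire Chrome session to fetch the page (or the file it serves), then hands the content to `FileService.parse_docs` / `RAGFlowHtmlParser`. A hedged usage sketch with `requests`; the base URL and cookie-based login are assumptions, not part of this diff:

```python
import requests

BASE = "http://localhost:9380/v1/document"  # assumed deployment address
session = requests.Session()
# Assumes the session is already authenticated (login cookie); how you
# obtain it depends on the deployment.

# Parse a remote page or downloadable file by URL:
r = session.post(f"{BASE}/parse", json={"url": "https://example.com/report.pdf"})
print(r.json()["data"])  # extracted text

# Or parse uploaded files directly:
with open("report.pdf", "rb") as f:
    r = session.post(f"{BASE}/parse", files={"file": f})
print(r.json()["data"])
```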

View File

@ -13,9 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License
#
from elasticsearch_dsl import Q
from api.db.db_models import File2Document
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
@ -26,10 +24,8 @@ from api.utils.api_utils import server_error_response, get_data_error_result, va
from api.utils import get_uuid
from api.db import FileType
from api.db.services.document_service import DocumentService
from api.settings import RetCode
from api import settings
from api.utils.api_utils import get_json_result
from rag.nlp import search
from rag.utils.es_conn import ELASTICSEARCH
@manager.route('/convert', methods=['POST'])
@ -54,13 +50,13 @@ def convert():
doc_id = inform.document_id
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
tenant_id = DocumentService.get_tenant_id(doc_id)
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
retmsg="Database error (Document removal)!")
message="Database error (Document removal)!")
File2DocumentService.delete_by_file_id(id)
# insert
@ -68,11 +64,11 @@ def convert():
e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e:
return get_data_error_result(
retmsg="Can't find this knowledgebase!")
message="Can't find this knowledgebase!")
e, file = FileService.get_by_id(id)
if not e:
return get_data_error_result(
retmsg="Can't find this file!")
message="Can't find this file!")
doc = DocumentService.insert({
"id": get_uuid(),
@ -104,26 +100,26 @@ def rm():
file_ids = req["file_ids"]
if not file_ids:
return get_json_result(
data=False, retmsg='Lack of "Files ID"', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='Lack of "Files ID"', code=settings.RetCode.ARGUMENT_ERROR)
try:
for file_id in file_ids:
informs = File2DocumentService.get_by_file_id(file_id)
if not informs:
return get_data_error_result(retmsg="Inform not found!")
return get_data_error_result(message="Inform not found!")
for inform in informs:
if not inform:
return get_data_error_result(retmsg="Inform not found!")
return get_data_error_result(message="Inform not found!")
File2DocumentService.delete_by_file_id(file_id)
doc_id = inform.document_id
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
tenant_id = DocumentService.get_tenant_id(doc_id)
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
retmsg="Database error (Document removal)!")
message="Database error (Document removal)!")
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)

View File

@ -18,7 +18,6 @@ import pathlib
import re
import flask
from elasticsearch_dsl import Q
from flask import request
from flask_login import login_required, current_user
@ -29,11 +28,9 @@ from api.utils import get_uuid
from api.db import FileType, FileSource
from api.db.services import duplicate_name
from api.db.services.file_service import FileService
from api.settings import RetCode
from api import settings
from api.utils.api_utils import get_json_result
from api.utils.file_utils import filename_type
from rag.nlp import search
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.storage_factory import STORAGE_IMPL
@ -49,24 +46,24 @@ def upload():
if 'file' not in request.files:
return get_json_result(
data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='No file part!', code=settings.RetCode.ARGUMENT_ERROR)
file_objs = request.files.getlist('file')
for file_obj in file_objs:
if file_obj.filename == '':
return get_json_result(
data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
data=False, message='No file selected!', code=settings.RetCode.ARGUMENT_ERROR)
file_res = []
try:
for file_obj in file_objs:
e, file = FileService.get_by_id(pf_id)
if not e:
return get_data_error_result(
retmsg="Can't find this folder!")
message="Can't find this folder!")
MAX_FILE_NUM_PER_USER = int(os.environ.get('MAX_FILE_NUM_PER_USER', 0))
if MAX_FILE_NUM_PER_USER > 0 and DocumentService.get_doc_count(current_user.id) >= MAX_FILE_NUM_PER_USER:
return get_data_error_result(
retmsg="Exceed the maximum file number of a free user!")
message="Exceed the maximum file number of a free user!")
# split file name path
if not file_obj.filename:
@ -85,13 +82,13 @@ def upload():
if file_len != len_id_list:
e, file = FileService.get_by_id(file_id_list[len_id_list - 1])
if not e:
return get_data_error_result(retmsg="Folder not found!")
return get_data_error_result(message="Folder not found!")
last_folder = FileService.create_folder(file, file_id_list[len_id_list - 1], file_obj_names,
len_id_list)
else:
e, file = FileService.get_by_id(file_id_list[len_id_list - 2])
if not e:
return get_data_error_result(retmsg="Folder not found!")
return get_data_error_result(message="Folder not found!")
last_folder = FileService.create_folder(file, file_id_list[len_id_list - 2], file_obj_names,
len_id_list)
@ -137,10 +134,10 @@ def create():
try:
if not FileService.is_parent_folder_exist(pf_id):
return get_json_result(
data=False, retmsg="Parent Folder Doesn't Exist!", retcode=RetCode.OPERATING_ERROR)
data=False, message="Parent Folder Doesn't Exist!", code=settings.RetCode.OPERATING_ERROR)
if FileService.query(name=req["name"], parent_id=pf_id):
return get_data_error_result(
retmsg="Duplicated folder name in the same folder.")
message="Duplicated folder name in the same folder.")
if input_file_type == FileType.FOLDER.value:
file_type = FileType.FOLDER.value
@ -181,14 +178,14 @@ def list_files():
try:
e, file = FileService.get_by_id(pf_id)
if not e:
return get_data_error_result(retmsg="Folder not found!")
return get_data_error_result(message="Folder not found!")
files, total = FileService.get_by_pf_id(
current_user.id, pf_id, page_number, items_per_page, orderby, desc, keywords)
parent_folder = FileService.get_parent_folder(pf_id)
if not FileService.get_parent_folder(pf_id):
return get_json_result(retmsg="File not found!")
return get_json_result(message="File not found!")
return get_json_result(data={"total": total, "files": files, "parent_folder": parent_folder.to_json()})
except Exception as e:
@ -212,7 +209,7 @@ def get_parent_folder():
try:
e, file = FileService.get_by_id(file_id)
if not e:
return get_data_error_result(retmsg="Folder not found!")
return get_data_error_result(message="Folder not found!")
parent_folder = FileService.get_parent_folder(file_id)
return get_json_result(data={"parent_folder": parent_folder.to_json()})
@ -227,7 +224,7 @@ def get_all_parent_folders():
try:
e, file = FileService.get_by_id(file_id)
if not e:
return get_data_error_result(retmsg="Folder not found!")
return get_data_error_result(message="Folder not found!")
parent_folders = FileService.get_all_parent_folders(file_id)
parent_folders_res = []
@ -248,9 +245,9 @@ def rm():
for file_id in file_ids:
e, file = FileService.get_by_id(file_id)
if not e:
return get_data_error_result(retmsg="File or Folder not found!")
return get_data_error_result(message="File or Folder not found!")
if not file.tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
if file.source_type == FileSource.KNOWLEDGEBASE:
continue
@ -259,13 +256,13 @@ def rm():
for inner_file_id in file_id_list:
e, file = FileService.get_by_id(inner_file_id)
if not e:
return get_data_error_result(retmsg="File not found!")
return get_data_error_result(message="File not found!")
STORAGE_IMPL.rm(file.parent_id, file.location)
FileService.delete_folder_by_pf_id(current_user.id, file_id)
else:
if not FileService.delete(file):
return get_data_error_result(
retmsg="Database error (File removal)!")
message="Database error (File removal)!")
# delete file2document
informs = File2DocumentService.get_by_file_id(file_id)
@ -273,13 +270,13 @@ def rm():
doc_id = inform.document_id
e, doc = DocumentService.get_by_id(doc_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
tenant_id = DocumentService.get_tenant_id(doc_id)
if not tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
if not DocumentService.remove_document(doc, tenant_id):
return get_data_error_result(
retmsg="Database error (Document removal)!")
message="Database error (Document removal)!")
File2DocumentService.delete_by_file_id(file_id)
return get_json_result(data=True)
@ -295,30 +292,30 @@ def rename():
try:
e, file = FileService.get_by_id(req["file_id"])
if not e:
return get_data_error_result(retmsg="File not found!")
return get_data_error_result(message="File not found!")
if file.type != FileType.FOLDER.value \
and pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
file.name.lower()).suffix:
return get_json_result(
data=False,
retmsg="The extension of file can't be changed",
retcode=RetCode.ARGUMENT_ERROR)
message="The extension of file can't be changed",
code=settings.RetCode.ARGUMENT_ERROR)
for file in FileService.query(name=req["name"], pf_id=file.parent_id):
if file.name == req["name"]:
return get_data_error_result(
retmsg="Duplicated file name in the same folder.")
message="Duplicated file name in the same folder.")
if not FileService.update_by_id(
req["file_id"], {"name": req["name"]}):
return get_data_error_result(
retmsg="Database error (File rename)!")
message="Database error (File rename)!")
informs = File2DocumentService.get_by_file_id(req["file_id"])
if informs:
if not DocumentService.update_by_id(
informs[0].document_id, {"name": req["name"]}):
return get_data_error_result(
retmsg="Database error (Document rename)!")
message="Database error (Document rename)!")
return get_json_result(data=True)
except Exception as e:
@ -331,7 +328,7 @@ def get(file_id):
try:
e, file = FileService.get_by_id(file_id)
if not e:
return get_data_error_result(retmsg="Document not found!")
return get_data_error_result(message="Document not found!")
b, n = File2DocumentService.get_storage_address(file_id=file_id)
response = flask.make_response(STORAGE_IMPL.get(b, n))
ext = re.search(r"\.([^.]+)$", file.name)
@ -359,12 +356,12 @@ def move():
for file_id in file_ids:
e, file = FileService.get_by_id(file_id)
if not e:
return get_data_error_result(retmsg="File or Folder not found!")
return get_data_error_result(message="File or Folder not found!")
if not file.tenant_id:
return get_data_error_result(retmsg="Tenant not found!")
return get_data_error_result(message="Tenant not found!")
fe, _ = FileService.get_by_id(parent_id)
if not fe:
return get_data_error_result(retmsg="Parent Folder not found!")
return get_data_error_result(message="Parent Folder not found!")
FileService.move_file(file_ids, parent_id)
return get_json_result(data=True)
except Exception as e:

View File

@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from elasticsearch_dsl import Q
from flask import request
from flask_login import login_required, current_user
@ -23,14 +22,13 @@ from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.user_service import TenantService, UserTenantService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid, get_format_time
from api.db import StatusEnum, UserTenantRole, FileSource
from api.utils import get_uuid
from api.db import StatusEnum, FileSource
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.db_models import Knowledgebase, File
from api.settings import stat_logger, RetCode
from api.db.db_models import File
from api.utils.api_utils import get_json_result
from api import settings
from rag.nlp import search
from rag.utils.es_conn import ELASTICSEARCH
@manager.route('/create', methods=['post'])
@ -50,7 +48,7 @@ def create():
req["created_by"] = current_user.id
e, t = TenantService.get_by_id(current_user.id)
if not e:
return get_data_error_result(retmsg="Tenant not found.")
return get_data_error_result(message="Tenant not found.")
req["embd_id"] = t.embd_id
if not KnowledgebaseService.save(**req):
return get_data_error_result()
@ -65,21 +63,27 @@ def create():
def update():
req = request.json
req["name"] = req["name"].strip()
if not KnowledgebaseService.accessible4deletion(req["kb_id"], current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
try:
if not KnowledgebaseService.query(
created_by=current_user.id, id=req["kb_id"]):
return get_json_result(
data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.', retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of knowledgebase authorized for this operation.', code=settings.RetCode.OPERATING_ERROR)
e, kb = KnowledgebaseService.get_by_id(req["kb_id"])
if not e:
return get_data_error_result(
retmsg="Can't find this knowledgebase!")
message="Can't find this knowledgebase!")
if req["name"].lower() != kb.name.lower() \
and len(KnowledgebaseService.query(name=req["name"], tenant_id=current_user.id, status=StatusEnum.VALID.value)) > 1:
return get_data_error_result(
retmsg="Duplicated knowledgebase name.")
message="Duplicated knowledgebase name.")
del req["kb_id"]
if not KnowledgebaseService.update_by_id(kb.id, req):
@ -88,7 +92,7 @@ def update():
e, kb = KnowledgebaseService.get_by_id(kb.id)
if not e:
return get_data_error_result(
retmsg="Database error (Knowledgebase rename)!")
message="Database error (Knowledgebase rename)!")
return get_json_result(data=kb.to_json())
except Exception as e:
@ -107,12 +111,12 @@ def detail():
break
else:
return get_json_result(
data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.',
retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of knowledgebase authorized for this operation.',
code=settings.RetCode.OPERATING_ERROR)
kb = KnowledgebaseService.get_detail(kb_id)
if not kb:
return get_data_error_result(
retmsg="Can't find this knowledgebase!")
message="Can't find this knowledgebase!")
return get_json_result(data=kb)
except Exception as e:
return server_error_response(e)
@ -139,24 +143,32 @@ def list_kbs():
@validate_request("kb_id")
def rm():
req = request.json
if not KnowledgebaseService.accessible4deletion(req["kb_id"], current_user.id):
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR
)
try:
kbs = KnowledgebaseService.query(
created_by=current_user.id, id=req["kb_id"])
if not kbs:
return get_json_result(
data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.', retcode=RetCode.OPERATING_ERROR)
data=False, message='Only owner of knowledgebase authorized for this operation.', code=settings.RetCode.OPERATING_ERROR)
for doc in DocumentService.query(kb_id=req["kb_id"]):
if not DocumentService.remove_document(doc, kbs[0].tenant_id):
return get_data_error_result(
retmsg="Database error (Document removal)!")
message="Database error (Document removal)!")
f2d = File2DocumentService.get_by_document_id(doc.id)
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.type == "folder", File.name == kbs[0].name])
File2DocumentService.delete_by_document_id(doc.id)
if not KnowledgebaseService.delete_by_id(req["kb_id"]):
return get_data_error_result(
retmsg="Database error (Knowledgebase removal)!")
message="Database error (Knowledgebase removal)!")
settings.docStoreConn.delete({"kb_id": req["kb_id"]}, search.index_name(kbs[0].tenant_id), req["kb_id"])
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)

View File

@ -13,12 +13,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import json
from flask import request
from flask_login import login_required, current_user
from api.db.services.llm_service import LLMFactoriesService, TenantLLMService, LLMService
from api.settings import LIGHTEN
from api import settings
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.db import StatusEnum, LLMType
from api.db.db_models import TenantLLM
@ -58,7 +59,7 @@ def set_api_key():
chat_passed, embd_passed, rerank_passed = False, False, False
factory = req["llm_factory"]
msg = ""
for llm in LLMService.query(fid=factory)[:3]:
for llm in LLMService.query(fid=factory):
if not embd_passed and llm.model_type == LLMType.EMBEDDING.value:
mdl = EmbeddingModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
@ -73,14 +74,14 @@ def set_api_key():
mdl = ChatModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
try:
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}],
{"temperature": 0.9,'max_tokens':50})
if m.find("**ERROR**") >=0:
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}],
{"temperature": 0.9, 'max_tokens': 50})
if m.find("**ERROR**") >= 0:
raise Exception(m)
chat_passed = True
except Exception as e:
msg += f"\nFail to access model({llm.llm_name}) using this api key." + str(
e)
chat_passed = True
elif not rerank_passed and llm.model_type == LLMType.RERANK:
mdl = RerankModel[factory](
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
@ -88,13 +89,17 @@ def set_api_key():
arr, tc = mdl.similarity("What's the weather?", ["Is it sunny today?"])
if len(arr) == 0 or tc == 0:
raise Exception("Fail")
rerank_passed = True
logging.debug(f'passed model rerank {llm.llm_name}')
except Exception as e:
msg += f"\nFail to access model({llm.llm_name}) using this api key." + str(
e)
rerank_passed = True
if any([embd_passed, chat_passed, rerank_passed]):
msg = ''
break
if msg:
return get_data_error_result(retmsg=msg)
return get_data_error_result(message=msg)
llm_config = {
"api_key": req["api_key"],
@ -105,6 +110,7 @@ def set_api_key():
llm_config[n] = req[n]
for llm in LLMService.query(fid=factory):
llm_config["max_tokens"]=llm.max_tokens
if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id,
TenantLLM.llm_factory == factory,
@ -116,7 +122,8 @@ def set_api_key():
llm_name=llm.llm_name,
model_type=llm.model_type,
api_key=llm_config["api_key"],
api_base=llm_config["api_base"]
api_base=llm_config["api_base"],
max_tokens=llm_config["max_tokens"]
)
return get_json_result(data=True)
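`set_api_key` now persists `max_tokens` alongside the key: for each model of the factory it first attempts `TenantLLMService.filter_update(...)` and falls back to `TenantLLMService.save(...)` when no row matched. The same update-or-insert shape reduced to a sketch (field names come from the diff; the helper itself is illustrative):

```python
def upsert_tenant_llm(tenant_id, factory, llm, llm_config):
    """Illustrative update-or-insert mirroring the loop above."""
    updated = TenantLLMService.filter_update(
        [TenantLLM.tenant_id == tenant_id,
         TenantLLM.llm_factory == factory,
         TenantLLM.llm_name == llm.llm_name],
        llm_config)
    if not updated:  # no existing row: create one
        TenantLLMService.save(
            tenant_id=tenant_id,
            llm_factory=factory,
            llm_name=llm.llm_name,
            model_type=llm.model_type,
            api_key=llm_config["api_key"],
            api_base=llm_config["api_base"],
            max_tokens=llm_config["max_tokens"])
```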
@ -153,23 +160,23 @@ def add_llm():
api_key = apikey_json(["bedrock_ak", "bedrock_sk", "bedrock_region"])
elif factory == "LocalAI":
llm_name = req["llm_name"]+"___LocalAI"
llm_name = req["llm_name"] + "___LocalAI"
api_key = "xxxxxxxxxxxxxxx"
elif factory == "HuggingFace":
llm_name = req["llm_name"]+"___HuggingFace"
llm_name = req["llm_name"] + "___HuggingFace"
api_key = "xxxxxxxxxxxxxxx"
elif factory == "OpenAI-API-Compatible":
llm_name = req["llm_name"]+"___OpenAI-API"
api_key = req.get("api_key","xxxxxxxxxxxxxxx")
llm_name = req["llm_name"] + "___OpenAI-API"
api_key = req.get("api_key", "xxxxxxxxxxxxxxx")
elif factory =="XunFei Spark":
elif factory == "XunFei Spark":
llm_name = req["llm_name"]
if req["model_type"] == "chat":
api_key = req.get("spark_api_password", "xxxxxxxxxxxxxxx")
elif req["model_type"] == "tts":
api_key = apikey_json(["spark_app_id", "spark_api_secret","spark_api_key"])
api_key = apikey_json(["spark_app_id", "spark_api_secret", "spark_api_key"])
elif factory == "BaiduYiyan":
llm_name = req["llm_name"]
@ -183,6 +190,10 @@ def add_llm():
llm_name = req["llm_name"]
api_key = apikey_json(["google_project_id", "google_region", "google_service_account_key"])
elif factory == "Azure-OpenAI":
llm_name = req["llm_name"]
api_key = apikey_json(["api_key", "api_version"])
else:
llm_name = req["llm_name"]
api_key = req.get("api_key", "xxxxxxxxxxxxxxx")
@ -193,14 +204,15 @@ def add_llm():
"model_type": req["model_type"],
"llm_name": llm_name,
"api_base": req.get("api_base", ""),
"api_key": api_key
"api_key": api_key,
"max_tokens": req.get("max_tokens")
}
msg = ""
if llm["model_type"] == LLMType.EMBEDDING.value:
mdl = EmbeddingModel[factory](
key=llm['api_key'],
model_name=llm["llm_name"],
model_name=llm["llm_name"],
base_url=llm["api_base"])
try:
arr, tc = mdl.encode(["Test if the api key is available"])
@ -216,7 +228,7 @@ def add_llm():
)
try:
m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}], {
"temperature": 0.9})
"temperature": 0.9})
if not tc:
raise Exception(m)
except Exception as e:
@ -224,12 +236,12 @@ def add_llm():
e)
elif llm["model_type"] == LLMType.RERANK:
mdl = RerankModel[factory](
key=llm["api_key"],
model_name=llm["llm_name"],
key=llm["api_key"],
model_name=llm["llm_name"],
base_url=llm["api_base"]
)
try:
arr, tc = mdl.similarity("Hello~ Ragflower!", ["Hi, there!"])
arr, tc = mdl.similarity("Hello~ Ragflower!", ["Hi, there!", "Ohh, my friend!"])
if len(arr) == 0 or tc == 0:
raise Exception("Not known.")
except Exception as e:
@ -237,8 +249,8 @@ def add_llm():
e)
elif llm["model_type"] == LLMType.IMAGE2TEXT.value:
mdl = CvModel[factory](
key=llm["api_key"],
model_name=llm["llm_name"],
key=llm["api_key"],
model_name=llm["llm_name"],
base_url=llm["api_base"]
)
try:
@ -270,10 +282,11 @@ def add_llm():
pass
if msg:
return get_data_error_result(retmsg=msg)
return get_data_error_result(message=msg)
if not TenantLLMService.filter_update(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory, TenantLLM.llm_name == llm["llm_name"]], llm):
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory,
TenantLLM.llm_name == llm["llm_name"]], llm):
TenantLLMService.save(**llm)
return get_json_result(data=True)
@ -285,7 +298,8 @@ def add_llm():
def delete_llm():
req = request.json
TenantLLMService.filter_delete(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"], TenantLLM.llm_name == req["llm_name"]])
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"],
TenantLLM.llm_name == req["llm_name"]])
return get_json_result(data=True)
@ -295,7 +309,7 @@ def delete_llm():
def delete_factory():
req = request.json
TenantLLMService.filter_delete(
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"]])
[TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"]])
return get_json_result(data=True)
@ -323,8 +337,8 @@ def my_llms():
@manager.route('/list', methods=['GET'])
@login_required
def list_app():
self_deploied = ["Youdao","FastEmbed", "BAAI", "Ollama", "Xinference", "LocalAI", "LM-Studio"]
weighted = ["Youdao","FastEmbed", "BAAI"] if LIGHTEN else []
self_deploied = ["Youdao", "FastEmbed", "BAAI", "Ollama", "Xinference", "LocalAI", "LM-Studio"]
weighted = ["Youdao", "FastEmbed", "BAAI"] if settings.LIGHTEN != 0 else []
model_type = request.args.get("model_type")
try:
objs = TenantLLMService.query(tenant_id=current_user.id)
@ -335,15 +349,15 @@ def list_app():
for m in llms:
m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in self_deploied
llm_set = set([m["llm_name"] for m in llms])
llm_set = set([m["llm_name"] + "@" + m["fid"] for m in llms])
for o in objs:
if not o.api_key:continue
if o.llm_name in llm_set:continue
if not o.api_key: continue
if o.llm_name + "@" + o.llm_factory in llm_set: continue
llms.append({"llm_name": o.llm_name, "model_type": o.model_type, "fid": o.llm_factory, "available": True})
res = {}
for m in llms:
if model_type and m["model_type"].find(model_type)<0:
if model_type and m["model_type"].find(model_type) < 0:
continue
if m["fid"] not in res:
res[m["fid"]] = []
@ -351,4 +365,4 @@ def list_app():
return get_json_result(data=res)
except Exception as e:
return server_error_response(e)
return server_error_response(e)
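A subtle fix in `list_app` above: the dedup set used to key on `llm_name` alone, so a tenant-added model was hidden whenever any factory already shipped a model of the same name; keying on `llm_name + "@" + fid` makes the pair unique per factory. A toy illustration:

```python
llms = [{"llm_name": "bge-m3", "fid": "BAAI"}]           # factory catalog
o = {"llm_name": "bge-m3", "llm_factory": "Ollama"}      # tenant's own model

old_set = {m["llm_name"] for m in llms}
new_set = {m["llm_name"] + "@" + m["fid"] for m in llms}

print(o["llm_name"] in old_set)                           # True: wrongly skipped before
print(o["llm_name"] + "@" + o["llm_factory"] in new_set)  # False: now listed
```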

View File

@ -1,304 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request
from api.db import StatusEnum
from api.db.db_models import TenantLLM
from api.db.services.dialog_service import DialogService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMService, TenantLLMService
from api.db.services.user_service import TenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_data_error_result, token_required
from api.utils.api_utils import get_json_result
@manager.route('/save', methods=['POST'])
@token_required
def save(tenant_id):
req = request.json
# dataset
if req.get("knowledgebases") == []:
return get_data_error_result(retmsg="knowledgebases can not be empty list")
kb_list = []
if req.get("knowledgebases"):
for kb in req.get("knowledgebases"):
if not kb["id"]:
return get_data_error_result(retmsg="knowledgebase needs id")
if not KnowledgebaseService.query(id=kb["id"], tenant_id=tenant_id):
return get_data_error_result(retmsg="you do not own the knowledgebase")
# if not DocumentService.query(kb_id=kb["id"]):
# return get_data_error_result(retmsg="There is a invalid knowledgebase")
kb_list.append(kb["id"])
req["kb_ids"] = kb_list
# llm
llm = req.get("llm")
if llm:
if "model_name" in llm:
req["llm_id"] = llm.pop("model_name")
req["llm_setting"] = req.pop("llm")
e, tenant = TenantService.get_by_id(tenant_id)
if not e:
return get_data_error_result(retmsg="Tenant not found!")
# prompt
prompt = req.get("prompt")
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
if prompt:
for new_key, old_key in key_mapping.items():
if old_key in prompt:
prompt[new_key] = prompt.pop(old_key)
for key in key_list:
if key in prompt:
req[key] = prompt.pop(key)
req["prompt_config"] = req.pop("prompt")
# create
if "id" not in req:
# dataset
if not kb_list:
return get_data_error_result(retmsg="knowledgebases are required!")
# init
req["id"] = get_uuid()
req["description"] = req.get("description", "A helpful Assistant")
req["icon"] = req.get("avatar", "")
req["top_n"] = req.get("top_n", 6)
req["top_k"] = req.get("top_k", 1024)
req["rerank_id"] = req.get("rerank_id", "")
if req.get("llm_id"):
if not TenantLLMService.query(llm_name=req["llm_id"]):
return get_data_error_result(retmsg="the model_name does not exist.")
else:
req["llm_id"] = tenant.llm_id
if not req.get("name"):
return get_data_error_result(retmsg="name is required.")
if DialogService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="Duplicated assistant name in creating dataset.")
# tenant_id
if req.get("tenant_id"):
return get_data_error_result(retmsg="tenant_id must not be provided.")
req["tenant_id"] = tenant_id
# prompt more parameter
default_prompt = {
"system": """你是一个智能助手,请总结知识库的内容来回答问题,请列举知识库中的数据详细回答。当所有知识库内容都与问题无关时,你的回答必须包括“知识库中未找到您要的答案!”这句话。回答需要考虑聊天历史。
以下是知识库:
{knowledge}
以上是知识库。""",
"prologue": "您好我是您的助手小樱长得可爱又善良can I help you?",
"parameters": [
{"key": "knowledge", "optional": False}
],
"empty_response": "Sorry! 知识库中未找到相关内容!"
}
key_list_2 = ["system", "prologue", "parameters", "empty_response"]
if "prompt_config" not in req:
req['prompt_config'] = {}
for key in key_list_2:
temp = req['prompt_config'].get(key)
if not temp:
req['prompt_config'][key] = default_prompt[key]
for p in req['prompt_config']["parameters"]:
if p["optional"]:
continue
if req['prompt_config']["system"].find("{%s}" % p["key"]) < 0:
return get_data_error_result(
retmsg="Parameter '{}' is not used".format(p["key"]))
# save
if not DialogService.save(**req):
return get_data_error_result(retmsg="Fail to new an assistant!")
# response
e, res = DialogService.get_by_id(req["id"])
if not e:
return get_data_error_result(retmsg="Fail to new an assistant!")
res = res.to_json()
renamed_dict = {}
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
del res["kb_ids"]
res["knowledgebases"] = req["knowledgebases"]
res["avatar"] = res.pop("icon")
return get_json_result(data=res)
else:
# authorization
if not DialogService.query(tenant_id=tenant_id, id=req["id"], status=StatusEnum.VALID.value):
return get_json_result(data=False, retmsg='You do not own the assistant', retcode=RetCode.OPERATING_ERROR)
# prompt
if not req["id"]:
return get_data_error_result(retmsg="id can not be empty")
e, res = DialogService.get_by_id(req["id"])
res = res.to_json()
if "llm_id" in req:
if not TenantLLMService.query(llm_name=req["llm_id"]):
return get_data_error_result(retmsg="the model_name does not exist.")
if "name" in req:
if not req.get("name"):
return get_data_error_result(retmsg="name is not empty.")
if req["name"].lower() != res["name"].lower() \
and len(
DialogService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value)) > 0:
return get_data_error_result(retmsg="Duplicated assistant name in updating dataset.")
if "prompt_config" in req:
res["prompt_config"].update(req["prompt_config"])
for p in res["prompt_config"]["parameters"]:
if p["optional"]:
continue
if res["prompt_config"]["system"].find("{%s}" % p["key"]) < 0:
return get_data_error_result(retmsg="Parameter '{}' is not used".format(p["key"]))
if "llm_setting" in req:
res["llm_setting"].update(req["llm_setting"])
req["prompt_config"] = res["prompt_config"]
req["llm_setting"] = res["llm_setting"]
# avatar
if "avatar" in req:
req["icon"] = req.pop("avatar")
assistant_id = req.pop("id")
if "knowledgebases" in req:
req.pop("knowledgebases")
if not DialogService.update_by_id(assistant_id, req):
return get_data_error_result(retmsg="Assistant not found!")
return get_json_result(data=True)
@manager.route('/delete', methods=['DELETE'])
@token_required
def delete(tenant_id):
req = request.args
if "id" not in req:
return get_data_error_result(retmsg="id is required")
id = req['id']
if not DialogService.query(tenant_id=tenant_id, id=id, status=StatusEnum.VALID.value):
return get_json_result(data=False, retmsg='you do not own the assistant.', retcode=RetCode.OPERATING_ERROR)
temp_dict = {"status": StatusEnum.INVALID.value}
DialogService.update_by_id(req["id"], temp_dict)
return get_json_result(data=True)
@manager.route('/get', methods=['GET'])
@token_required
def get(tenant_id):
req = request.args
if "id" in req:
id = req["id"]
ass = DialogService.query(tenant_id=tenant_id, id=id, status=StatusEnum.VALID.value)
if not ass:
return get_json_result(data=False, retmsg='You do not own the assistant.', retcode=RetCode.OPERATING_ERROR)
if "name" in req:
name = req["name"]
if ass[0].name != name:
return get_json_result(data=False, retmsg='name does not match id.', retcode=RetCode.OPERATING_ERROR)
res = ass[0].to_json()
else:
if "name" in req:
name = req["name"]
ass = DialogService.query(name=name, tenant_id=tenant_id, status=StatusEnum.VALID.value)
if not ass:
return get_json_result(data=False, retmsg='You do not own the assistant.',
retcode=RetCode.OPERATING_ERROR)
res = ass[0].to_json()
else:
return get_data_error_result(retmsg="At least one of `id` or `name` must be provided.")
renamed_dict = {}
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
kb_list = []
for kb_id in res["kb_ids"]:
kb = KnowledgebaseService.query(id=kb_id)
kb_list.append(kb[0].to_json())
del res["kb_ids"]
res["knowledgebases"] = kb_list
res["avatar"] = res.pop("icon")
return get_json_result(data=res)
@manager.route('/list', methods=['GET'])
@token_required
def list_assistants(tenant_id):
assts = DialogService.query(
tenant_id=tenant_id,
status=StatusEnum.VALID.value,
reverse=True,
order_by=DialogService.model.create_time)
assts = [d.to_dict() for d in assts]
list_assts = []
renamed_dict = {}
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
for res in assts:
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
kb_list = []
for kb_id in res["kb_ids"]:
kb = KnowledgebaseService.query(id=kb_id)
kb_list.append(kb[0].to_json())
del res["kb_ids"]
res["knowledgebases"] = kb_list
res["avatar"] = res.pop("icon")
list_assts.append(res)
return get_json_result(data=list_assts)

api/apps/sdk/chat.py (new file, 313 lines)
View File

@ -0,0 +1,313 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request
from api import settings
from api.db import StatusEnum
from api.db.services.dialog_service import DialogService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import TenantLLMService
from api.db.services.user_service import TenantService
from api.utils import get_uuid
from api.utils.api_utils import get_error_data_result, token_required
from api.utils.api_utils import get_result
@manager.route('/chats', methods=['POST'])
@token_required
def create(tenant_id):
req=request.json
ids= req.get("dataset_ids")
if not ids:
return get_error_data_result(message="`dataset_ids` is required")
for kb_id in ids:
kbs = KnowledgebaseService.accessible(kb_id=kb_id,user_id=tenant_id)
if not kbs:
return get_error_data_result(f"You don't own the dataset {kb_id}")
kbs = KnowledgebaseService.query(id=kb_id)
kb = kbs[0]
if kb.chunk_num == 0:
return get_error_data_result(f"The dataset {kb_id} doesn't own parsed file")
kbs = KnowledgebaseService.get_by_ids(ids)
embd_count = list(set([kb.embd_id for kb in kbs]))
if len(embd_count) != 1:
return get_result(message='Datasets use different embedding models.', code=settings.RetCode.AUTHENTICATION_ERROR)
req["kb_ids"] = ids
# llm
llm = req.get("llm")
if llm:
if "model_name" in llm:
req["llm_id"] = llm.pop("model_name")
if not TenantLLMService.query(tenant_id=tenant_id,llm_name=req["llm_id"],model_type="chat"):
return get_error_data_result(f"`model_name` {req.get('llm_id')} doesn't exist")
req["llm_setting"] = req.pop("llm")
e, tenant = TenantService.get_by_id(tenant_id)
if not e:
return get_error_data_result(message="Tenant not found!")
# prompt
prompt = req.get("prompt")
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
if prompt:
for new_key, old_key in key_mapping.items():
if old_key in prompt:
prompt[new_key] = prompt.pop(old_key)
for key in key_list:
if key in prompt:
req[key] = prompt.pop(key)
req["prompt_config"] = req.pop("prompt")
# init
req["id"] = get_uuid()
req["description"] = req.get("description", "A helpful Assistant")
req["icon"] = req.get("avatar", "")
req["top_n"] = req.get("top_n", 6)
req["top_k"] = req.get("top_k", 1024)
req["rerank_id"] = req.get("rerank_id", "")
if req.get("rerank_id"):
if not TenantLLMService.query(tenant_id=tenant_id,llm_name=req.get("rerank_id"),model_type="rerank"):
return get_error_data_result(f"`rerank_model` {req.get('rerank_id')} doesn't exist")
if not req.get("llm_id"):
req["llm_id"] = tenant.llm_id
if not req.get("name"):
return get_error_data_result(message="`name` is required.")
if DialogService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_error_data_result(message="Duplicated chat name in creating chat.")
# tenant_id
if req.get("tenant_id"):
return get_error_data_result(message="`tenant_id` must not be provided.")
req["tenant_id"] = tenant_id
# prompt more parameter
default_prompt = {
"system": """You are an intelligent assistant. Please summarize the content of the knowledge base to answer the question. Please list the data in the knowledge base and answer in detail. When all knowledge base content is irrelevant to the question, your answer must include the sentence "The answer you are looking for is not found in the knowledge base!" Answers need to consider chat history.
Here is the knowledge base:
{knowledge}
The above is the knowledge base.""",
"prologue": "Hi! I'm your assistant, what can I do for you?",
"parameters": [
{"key": "knowledge", "optional": False}
],
"empty_response": "Sorry! No relevant content was found in the knowledge base!"
}
key_list_2 = ["system", "prologue", "parameters", "empty_response"]
if "prompt_config" not in req:
req['prompt_config'] = {}
for key in key_list_2:
temp = req['prompt_config'].get(key)
if (not temp and key == 'system') or (key not in req["prompt_config"]):
req['prompt_config'][key] = default_prompt[key]
for p in req['prompt_config']["parameters"]:
if p["optional"]:
continue
if req['prompt_config']["system"].find("{%s}" % p["key"]) < 0:
return get_error_data_result(
message="Parameter '{}' is not used".format(p["key"]))
# save
if not DialogService.save(**req):
return get_error_data_result(message="Fail to new a chat!")
# response
e, res = DialogService.get_by_id(req["id"])
if not e:
return get_error_data_result(message="Fail to new a chat!")
res = res.to_json()
renamed_dict = {}
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
del res["kb_ids"]
res["dataset_ids"] = req["dataset_ids"]
res["avatar"] = res.pop("icon")
return get_result(data=res)
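`create` above is the token-authenticated SDK counterpart of the old session-based `/save` route: `dataset_ids` replaces `knowledgebases`, and every dataset must be owned, already parsed, and share a single embedding model. A hedged call sketch; the mount point and API key are placeholders, not taken from this diff:

```python
import requests

resp = requests.post(
    "http://localhost:9380/api/v1/chats",        # assumed mount point
    headers={"Authorization": "Bearer <RAGFLOW_API_KEY>"},
    json={
        "name": "support-bot",
        "dataset_ids": ["<dataset-id>"],         # datasets must be parsed
        "llm": {"model_name": "gpt-4o"},         # mapped to llm_id/llm_setting
        "prompt": {"opener": "Hi! How can I help?"}  # mapped to prologue
    })
print(resp.json())
```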
@manager.route('/chats/<chat_id>', methods=['PUT'])
@token_required
def update(tenant_id,chat_id):
if not DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value):
return get_error_data_result(message='You do not own the chat')
req =request.json
ids = req.get("dataset_ids")
if "show_quotation" in req:
req["do_refer"]=req.pop("show_quotation")
if "dataset_ids" in req:
if not ids:
return get_error_data_result("`datasets` can't be empty")
if ids:
for kb_id in ids:
kbs = KnowledgebaseService.accessible(kb_id=kb_id, user_id=tenant_id)
if not kbs:
return get_error_data_result(f"You don't own the dataset {kb_id}")
kbs = KnowledgebaseService.query(id=kb_id)
kb = kbs[0]
if kb.chunk_num == 0:
return get_error_data_result(f"The dataset {kb_id} doesn't own parsed file")
kbs = KnowledgebaseService.get_by_ids(ids)
embd_count=list(set([kb.embd_id for kb in kbs]))
if len(embd_count) != 1 :
return get_result(
message='Datasets use different embedding models."',
code=settings.RetCode.AUTHENTICATION_ERROR)
req["kb_ids"] = ids
llm = req.get("llm")
if llm:
if "model_name" in llm:
req["llm_id"] = llm.pop("model_name")
if not TenantLLMService.query(tenant_id=tenant_id,llm_name=req["llm_id"],model_type="chat"):
return get_error_data_result(f"`model_name` {req.get('llm_id')} doesn't exist")
req["llm_setting"] = req.pop("llm")
e, tenant = TenantService.get_by_id(tenant_id)
if not e:
return get_error_data_result(message="Tenant not found!")
if req.get("rerank_model"):
if not TenantLLMService.query(tenant_id=tenant_id,llm_name=req.get("rerank_model"),model_type="rerank"):
return get_error_data_result(f"`rerank_model` {req.get('rerank_model')} doesn't exist")
# prompt
prompt = req.get("prompt")
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
if prompt:
for new_key, old_key in key_mapping.items():
if old_key in prompt:
prompt[new_key] = prompt.pop(old_key)
for key in key_list:
if key in prompt:
req[key] = prompt.pop(key)
req["prompt_config"] = req.pop("prompt")
e, res = DialogService.get_by_id(chat_id)
res = res.to_json()
if "name" in req:
if not req.get("name"):
return get_error_data_result(message="`name` is not empty.")
if req["name"].lower() != res["name"].lower() \
and len(
DialogService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value)) > 0:
return get_error_data_result(message="Duplicated chat name in updating dataset.")
if "prompt_config" in req:
res["prompt_config"].update(req["prompt_config"])
for p in res["prompt_config"]["parameters"]:
if p["optional"]:
continue
if res["prompt_config"]["system"].find("{%s}" % p["key"]) < 0:
return get_error_data_result(message="Parameter '{}' is not used".format(p["key"]))
if "llm_setting" in req:
res["llm_setting"].update(req["llm_setting"])
req["prompt_config"] = res["prompt_config"]
req["llm_setting"] = res["llm_setting"]
# avatar
if "avatar" in req:
req["icon"] = req.pop("avatar")
if "dataset_ids" in req:
req.pop("dataset_ids")
if not DialogService.update_by_id(chat_id, req):
return get_error_data_result(message="Chat not found!")
return get_result()
@manager.route('/chats', methods=['DELETE'])
@token_required
def delete(tenant_id):
req = request.json
if not req:
ids=None
else:
ids=req.get("ids")
if not ids:
id_list = []
dias=DialogService.query(tenant_id=tenant_id,status=StatusEnum.VALID.value)
for dia in dias:
id_list.append(dia.id)
else:
id_list=ids
for id in id_list:
if not DialogService.query(tenant_id=tenant_id, id=id, status=StatusEnum.VALID.value):
return get_error_data_result(message=f"You don't own the chat {id}")
temp_dict = {"status": StatusEnum.INVALID.value}
DialogService.update_by_id(id, temp_dict)
return get_result()
@manager.route('/chats', methods=['GET'])
@token_required
def list_chat(tenant_id):
id = request.args.get("id")
name = request.args.get("name")
chat = DialogService.query(id=id,name=name,status=StatusEnum.VALID.value,tenant_id=tenant_id)
if not chat:
return get_error_data_result(message="The chat doesn't exist")
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 30))
orderby = request.args.get("orderby", "create_time")
if request.args.get("desc") == "False" or request.args.get("desc") == "false":
desc = False
else:
desc = True
chats = DialogService.get_list(tenant_id,page_number,items_per_page,orderby,desc,id,name)
if not chats:
return get_result(data=[])
list_assts = []
key_mapping = {"parameters": "variables",
"prologue": "opener",
"quote": "show_quote",
"system": "prompt",
"rerank_id": "rerank_model",
"vector_similarity_weight": "keywords_similarity_weight",
"do_refer": "show_quotation"}
key_list = ["similarity_threshold", "vector_similarity_weight", "top_n", "rerank_id"]
for res in chats:
renamed_dict = {}  # re-initialize per chat so renamed keys don't leak across items
for key, value in res["prompt_config"].items():
new_key = key_mapping.get(key, key)
renamed_dict[new_key] = value
res["prompt"] = renamed_dict
del res["prompt_config"]
new_dict = {"similarity_threshold": res["similarity_threshold"],
"keywords_similarity_weight": res["vector_similarity_weight"],
"top_n": res["top_n"],
"rerank_model": res['rerank_id']}
res["prompt"].update(new_dict)
for key in key_list:
del res[key]
res["llm"] = res.pop("llm_setting")
res["llm"]["model_name"] = res.pop("llm_id")
kb_list = []
for kb_id in res["kb_ids"]:
kb = KnowledgebaseService.query(id=kb_id)
if not kb:
return get_error_data_result(message=f"The dataset {kb_id} doesn't exist.")
kb_list.append(kb[0].to_json())
del res["kb_ids"]
res["datasets"] = kb_list
res["avatar"] = res.pop("icon")
list_assts.append(res)
return get_result(data=list_assts)
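# --- Usage sketch (not part of the diff) ---
# Listing chats via the GET route above; `page`, `page_size`, `orderby`, and
# `desc` mirror the query parameters read by list_chat. The base URL and API
# key are illustrative assumptions.
import requests

def list_chats(name=None):
    resp = requests.get(
        "http://localhost:9380/api/v1/chats",  # assumed mount point
        params={"page": 1, "page_size": 30, "orderby": "create_time", "desc": "true", "name": name},
        headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
    )
    resp.raise_for_status()
    return resp.json().get("data", [])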


@@ -1,224 +1,531 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request
from api.db import StatusEnum, FileSource
from api.db.db_models import File
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_json_result, token_required, get_data_error_result
@manager.route('/save', methods=['POST'])
@token_required
def save(tenant_id):
req = request.json
e, t = TenantService.get_by_id(tenant_id)
if "id" not in req:
if "tenant_id" in req or "embedding_model" in req:
return get_data_error_result(
retmsg="Tenant_id or embedding_model must not be provided")
if "name" not in req:
return get_data_error_result(
retmsg="Name is not empty!")
req['id'] = get_uuid()
req["name"] = req["name"].strip()
if req["name"] == "":
return get_data_error_result(
retmsg="Name is not empty string!")
if KnowledgebaseService.query(name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(
retmsg="Duplicated knowledgebase name in creating dataset.")
req["tenant_id"] = req['created_by'] = tenant_id
req['embedding_model'] = t.embd_id
key_mapping = {
"chunk_num": "chunk_count",
"doc_num": "document_count",
"parser_id": "parse_method",
"embd_id": "embedding_model"
}
mapped_keys = {new_key: req[old_key] for new_key, old_key in key_mapping.items() if old_key in req}
req.update(mapped_keys)
if not KnowledgebaseService.save(**req):
return get_data_error_result(retmsg="Create dataset error.(Database error)")
renamed_data = {}
e, k = KnowledgebaseService.get_by_id(req["id"])
for key, value in k.to_dict().items():
new_key = key_mapping.get(key, key)
renamed_data[new_key] = value
return get_json_result(data=renamed_data)
else:
invalid_keys = {"embd_id", "chunk_num", "doc_num", "parser_id"}
if any(key in req for key in invalid_keys):
return get_data_error_result(retmsg="The input parameters are invalid.")
if "tenant_id" in req:
if req["tenant_id"] != tenant_id:
return get_data_error_result(
retmsg="Can't change tenant_id.")
if "embedding_model" in req:
if req["embedding_model"] != t.embd_id:
return get_data_error_result(
retmsg="Can't change embedding_model.")
req.pop("embedding_model")
if not KnowledgebaseService.query(
created_by=tenant_id, id=req["id"]):
return get_json_result(
data=False, retmsg='You do not own the dataset.',
retcode=RetCode.OPERATING_ERROR)
if not req["id"]:
return get_data_error_result(
retmsg="id can not be empty.")
e, kb = KnowledgebaseService.get_by_id(req["id"])
if "chunk_count" in req:
if req["chunk_count"] != kb.chunk_num:
return get_data_error_result(
retmsg="Can't change chunk_count.")
req.pop("chunk_count")
if "document_count" in req:
if req['document_count'] != kb.doc_num:
return get_data_error_result(
retmsg="Can't change document_count.")
req.pop("document_count")
if "parse_method" in req:
if kb.chunk_num != 0 and req['parse_method'] != kb.parser_id:
return get_data_error_result(
retmsg="If chunk count is not 0, parse method is not changable.")
req['parser_id'] = req.pop('parse_method')
if "name" in req:
req["name"] = req["name"].strip()
if req["name"].lower() != kb.name.lower() \
and len(KnowledgebaseService.query(name=req["name"], tenant_id=tenant_id,
status=StatusEnum.VALID.value)) > 0:
return get_data_error_result(
retmsg="Duplicated knowledgebase name in updating dataset.")
del req["id"]
if not KnowledgebaseService.update_by_id(kb.id, req):
return get_data_error_result(retmsg="Update dataset error.(Database error)")
return get_json_result(data=True)
@manager.route('/delete', methods=['DELETE'])
@token_required
def delete(tenant_id):
req = request.args
if "id" not in req:
return get_data_error_result(
retmsg="id is required")
kbs = KnowledgebaseService.query(
created_by=tenant_id, id=req["id"])
if not kbs:
return get_json_result(
data=False, retmsg='You do not own the dataset',
retcode=RetCode.OPERATING_ERROR)
for doc in DocumentService.query(kb_id=req["id"]):
if not DocumentService.remove_document(doc, kbs[0].tenant_id):
return get_data_error_result(
retmsg="Remove document error.(Database error)")
f2d = File2DocumentService.get_by_document_id(doc.id)
FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
File2DocumentService.delete_by_document_id(doc.id)
if not KnowledgebaseService.delete_by_id(req["id"]):
return get_data_error_result(
retmsg="Delete dataset error.(Database serror)")
return get_json_result(data=True)
@manager.route('/list', methods=['GET'])
@token_required
def list_datasets(tenant_id):
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 1024))
orderby = request.args.get("orderby", "create_time")
desc = request.args.get("desc", "true").lower() != "false"  # bool() on a non-empty string is always True
tenants = TenantService.get_joined_tenants_by_user_id(tenant_id)
kbs = KnowledgebaseService.get_by_tenant_ids(
[m["tenant_id"] for m in tenants], tenant_id, page_number, items_per_page, orderby, desc)
renamed_list = []
for kb in kbs:
key_mapping = {
"chunk_num": "chunk_count",
"doc_num": "document_count",
"parser_id": "parse_method",
"embd_id": "embedding_model"
}
renamed_data = {}
for key, value in kb.items():
new_key = key_mapping.get(key, key)
renamed_data[new_key] = value
renamed_list.append(renamed_data)
return get_json_result(data=renamed_list)
@manager.route('/detail', methods=['GET'])
@token_required
def detail(tenant_id):
req = request.args
key_mapping = {
"chunk_num": "chunk_count",
"doc_num": "document_count",
"parser_id": "parse_method",
"embd_id": "embedding_model"
}
renamed_data = {}
if "id" in req:
id = req["id"]
kb = KnowledgebaseService.query(created_by=tenant_id, id=req["id"])
if not kb:
return get_json_result(
data=False, retmsg='You do not own the dataset.',
retcode=RetCode.OPERATING_ERROR)
if "name" in req:
name = req["name"]
if kb[0].name != name:
return get_json_result(
data=False, retmsg='You do not own the dataset.',
retcode=RetCode.OPERATING_ERROR)
e, k = KnowledgebaseService.get_by_id(id)
for key, value in k.to_dict().items():
new_key = key_mapping.get(key, key)
renamed_data[new_key] = value
return get_json_result(data=renamed_data)
else:
if "name" in req:
name = req["name"]
e, k = KnowledgebaseService.get_by_name(kb_name=name, tenant_id=tenant_id)
if not e:
return get_json_result(
data=False, retmsg='You do not own the dataset.',
retcode=RetCode.OPERATING_ERROR)
for key, value in k.to_dict().items():
new_key = key_mapping.get(key, key)
renamed_data[new_key] = value
return get_json_result(data=renamed_data)
else:
return get_data_error_result(
retmsg="At least one of `id` or `name` must be provided.")
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request
from api.db import StatusEnum, FileSource
from api.db.db_models import File
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import TenantLLMService, LLMService
from api.db.services.user_service import TenantService
from api import settings
from api.utils import get_uuid
from api.utils.api_utils import (
get_result,
token_required,
get_error_data_result,
valid,
get_parser_config,
)
@manager.route("/datasets", methods=["POST"])
@token_required
def create(tenant_id):
"""
Create a new dataset.
---
tags:
- Datasets
security:
- ApiKeyAuth: []
parameters:
- in: header
name: Authorization
type: string
required: true
description: Bearer token for authentication.
- in: body
name: body
description: Dataset creation parameters.
required: true
schema:
type: object
required:
- name
properties:
name:
type: string
description: Name of the dataset.
permission:
type: string
enum: ['me', 'team']
description: Dataset permission.
language:
type: string
enum: ['Chinese', 'English']
description: Language of the dataset.
chunk_method:
type: string
enum: ["naive", "manual", "qa", "table", "paper", "book", "laws",
"presentation", "picture", "one", "knowledge_graph", "email"]
description: Chunking method.
parser_config:
type: object
description: Parser configuration.
responses:
200:
description: Successful operation.
schema:
type: object
properties:
data:
type: object
"""
req = request.json
e, t = TenantService.get_by_id(tenant_id)
permission = req.get("permission")
language = req.get("language")
chunk_method = req.get("chunk_method")
parser_config = req.get("parser_config")
valid_permission = ["me", "team"]
valid_language = ["Chinese", "English"]
valid_chunk_method = [
"naive",
"manual",
"qa",
"table",
"paper",
"book",
"laws",
"presentation",
"picture",
"one",
"knowledge_graph",
"email",
]
check_validation = valid(
permission,
valid_permission,
language,
valid_language,
chunk_method,
valid_chunk_method,
)
if check_validation:
return check_validation
req["parser_config"] = get_parser_config(chunk_method, parser_config)
if "tenant_id" in req:
return get_error_data_result(message="`tenant_id` must not be provided")
if "chunk_count" in req or "document_count" in req:
return get_error_data_result(
message="`chunk_count` or `document_count` must not be provided"
)
if "name" not in req:
return get_error_data_result(message="`name` is not empty!")
req["id"] = get_uuid()
req["name"] = req["name"].strip()
if req["name"] == "":
return get_error_data_result(message="`name` is not empty string!")
if KnowledgebaseService.query(
name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value
):
return get_error_data_result(
message="Duplicated dataset name in creating dataset."
)
req["tenant_id"] = req["created_by"] = tenant_id
if not req.get("embedding_model"):
req["embedding_model"] = t.embd_id
else:
valid_embedding_models = [
"BAAI/bge-large-zh-v1.5",
"BAAI/bge-base-en-v1.5",
"BAAI/bge-large-en-v1.5",
"BAAI/bge-small-en-v1.5",
"BAAI/bge-small-zh-v1.5",
"jinaai/jina-embeddings-v2-base-en",
"jinaai/jina-embeddings-v2-small-en",
"nomic-ai/nomic-embed-text-v1.5",
"sentence-transformers/all-MiniLM-L6-v2",
"text-embedding-v2",
"text-embedding-v3",
"maidalun1020/bce-embedding-base_v1",
]
embd_model = LLMService.query(
llm_name=req["embedding_model"], model_type="embedding"
)
if embd_model:
if req["embedding_model"] not in valid_embedding_models and not TenantLLMService.query(tenant_id=tenant_id, model_type="embedding", llm_name=req.get("embedding_model")):
return get_error_data_result(f"`embedding_model` {req.get('embedding_model')} doesn't exist")
if not embd_model:
embd_model = TenantLLMService.query(tenant_id=tenant_id, model_type="embedding", llm_name=req.get("embedding_model"))
if not embd_model:
return get_error_data_result(
f"`embedding_model` {req.get('embedding_model')} doesn't exist"
)
key_mapping = {
"chunk_num": "chunk_count",
"doc_num": "document_count",
"parser_id": "chunk_method",
"embd_id": "embedding_model",
}
mapped_keys = {
new_key: req[old_key]
for new_key, old_key in key_mapping.items()
if old_key in req
}
req.update(mapped_keys)
if not KnowledgebaseService.save(**req):
return get_error_data_result(message="Create dataset error.(Database error)")
renamed_data = {}
e, k = KnowledgebaseService.get_by_id(req["id"])
for key, value in k.to_dict().items():
new_key = key_mapping.get(key, key)
renamed_data[new_key] = value
return get_result(data=renamed_data)
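# --- Usage sketch (not part of the diff) ---
# Creating a dataset through the POST /datasets route above. Only `name` is
# required; the other fields must come from the enums validated in create().
# The base URL and API key are illustrative assumptions.
import requests

payload = {
    "name": "product-manuals",
    "permission": "me",       # one of: me, team
    "language": "English",    # one of: Chinese, English
    "chunk_method": "naive",  # one of the valid_chunk_method values
}
resp = requests.post(
    "http://localhost:9380/api/v1/datasets",  # assumed mount point
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
)
resp.raise_for_status()
dataset = resp.json()["data"]  # fields renamed by key_mapping: chunk_count, document_count, ...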
@manager.route("/datasets", methods=["DELETE"])
@token_required
def delete(tenant_id):
"""
Delete datasets.
---
tags:
- Datasets
security:
- ApiKeyAuth: []
parameters:
- in: header
name: Authorization
type: string
required: true
description: Bearer token for authentication.
- in: body
name: body
description: Dataset deletion parameters.
required: true
schema:
type: object
properties:
ids:
type: array
items:
type: string
description: List of dataset IDs to delete.
responses:
200:
description: Successful operation.
schema:
type: object
"""
req = request.json
if not req:
ids = None
else:
ids = req.get("ids")
if not ids:
id_list = []
kbs = KnowledgebaseService.query(tenant_id=tenant_id)
for kb in kbs:
id_list.append(kb.id)
else:
id_list = ids
for id in id_list:
kbs = KnowledgebaseService.query(id=id, tenant_id=tenant_id)
if not kbs:
return get_error_data_result(message=f"You don't own the dataset {id}")
for doc in DocumentService.query(kb_id=id):
if not DocumentService.remove_document(doc, tenant_id):
return get_error_data_result(
message="Remove document error.(Database error)"
)
f2d = File2DocumentService.get_by_document_id(doc.id)
FileService.filter_delete(
[
File.source_type == FileSource.KNOWLEDGEBASE,
File.id == f2d[0].file_id,
]
)
FileService.filter_delete(
[File.source_type == FileSource.KNOWLEDGEBASE, File.type == "folder", File.name == kbs[0].name])
File2DocumentService.delete_by_document_id(doc.id)
if not KnowledgebaseService.delete_by_id(id):
return get_error_data_result(message="Delete dataset error.(Database error)")
return get_result(code=settings.RetCode.SUCCESS)
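# --- Usage sketch (not part of the diff) ---
# Deleting datasets via the DELETE /datasets route above. Passing {"ids": [...]}
# removes specific datasets; an empty body removes every dataset the tenant
# owns, matching the fall-through in delete(). Base URL and key are assumptions.
import requests

resp = requests.delete(
    "http://localhost:9380/api/v1/datasets",  # assumed mount point
    json={"ids": ["<dataset_id_1>", "<dataset_id_2>"]},
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
)
resp.raise_for_status()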
@manager.route("/datasets/<dataset_id>", methods=["PUT"])
@token_required
def update(tenant_id, dataset_id):
"""
Update a dataset.
---
tags:
- Datasets
security:
- ApiKeyAuth: []
parameters:
- in: path
name: dataset_id
type: string
required: true
description: ID of the dataset to update.
- in: header
name: Authorization
type: string
required: true
description: Bearer token for authentication.
- in: body
name: body
description: Dataset update parameters.
required: true
schema:
type: object
properties:
name:
type: string
description: New name of the dataset.
permission:
type: string
enum: ['me', 'team']
description: Updated permission.
language:
type: string
enum: ['Chinese', 'English']
description: Updated language.
chunk_method:
type: string
enum: ["naive", "manual", "qa", "table", "paper", "book", "laws",
"presentation", "picture", "one", "knowledge_graph", "email"]
description: Updated chunking method.
parser_config:
type: object
description: Updated parser configuration.
responses:
200:
description: Successful operation.
schema:
type: object
"""
if not KnowledgebaseService.query(id=dataset_id, tenant_id=tenant_id):
return get_error_data_result(message="You don't own the dataset")
req = request.json
e, t = TenantService.get_by_id(tenant_id)
invalid_keys = {"id", "embd_id", "chunk_num", "doc_num", "parser_id"}
if any(key in req for key in invalid_keys):
return get_error_data_result(message="The input parameters are invalid.")
permission = req.get("permission")
language = req.get("language")
chunk_method = req.get("chunk_method")
parser_config = req.get("parser_config")
valid_permission = ["me", "team"]
valid_language = ["Chinese", "English"]
valid_chunk_method = [
"naive",
"manual",
"qa",
"table",
"paper",
"book",
"laws",
"presentation",
"picture",
"one",
"knowledge_graph",
"email",
]
check_validation = valid(
permission,
valid_permission,
language,
valid_language,
chunk_method,
valid_chunk_method,
)
if check_validation:
return check_validation
if "tenant_id" in req:
if req["tenant_id"] != tenant_id:
return get_error_data_result(message="Can't change `tenant_id`.")
e, kb = KnowledgebaseService.get_by_id(dataset_id)
if "parser_config" in req:
temp_dict = kb.parser_config
temp_dict.update(req["parser_config"])
req["parser_config"] = temp_dict
if "chunk_count" in req:
if req["chunk_count"] != kb.chunk_num:
return get_error_data_result(message="Can't change `chunk_count`.")
req.pop("chunk_count")
if "document_count" in req:
if req["document_count"] != kb.doc_num:
return get_error_data_result(message="Can't change `document_count`.")
req.pop("document_count")
if "chunk_method" in req:
if kb.chunk_num != 0 and req["chunk_method"] != kb.parser_id:
return get_error_data_result(
message="If `chunk_count` is not 0, `chunk_method` is not changeable."
)
req["parser_id"] = req.pop("chunk_method")
if req["parser_id"] != kb.parser_id:
if not req.get("parser_config"):
req["parser_config"] = get_parser_config(chunk_method, parser_config)
if "embedding_model" in req:
if kb.chunk_num != 0 and req["embedding_model"] != kb.embd_id:
return get_error_data_result(
message="If `chunk_count` is not 0, `embedding_model` is not changeable."
)
if not req.get("embedding_model"):
return get_error_data_result("`embedding_model` can't be empty")
valid_embedding_models = [
"BAAI/bge-large-zh-v1.5",
"BAAI/bge-base-en-v1.5",
"BAAI/bge-large-en-v1.5",
"BAAI/bge-small-en-v1.5",
"BAAI/bge-small-zh-v1.5",
"jinaai/jina-embeddings-v2-base-en",
"jinaai/jina-embeddings-v2-small-en",
"nomic-ai/nomic-embed-text-v1.5",
"sentence-transformers/all-MiniLM-L6-v2",
"text-embedding-v2",
"text-embedding-v3",
"maidalun1020/bce-embedding-base_v1",
]
embd_model = LLMService.query(
llm_name=req["embedding_model"], model_type="embedding"
)
if embd_model:
if req["embedding_model"] not in valid_embedding_models and not TenantLLMService.query(tenant_id=tenant_id, model_type="embedding", llm_name=req.get("embedding_model")):
return get_error_data_result(f"`embedding_model` {req.get('embedding_model')} doesn't exist")
if not embd_model:
embd_model = TenantLLMService.query(tenant_id=tenant_id, model_type="embedding", llm_name=req.get("embedding_model"))
if not embd_model:
return get_error_data_result(
f"`embedding_model` {req.get('embedding_model')} doesn't exist"
)
req["embd_id"] = req.pop("embedding_model")
if "name" in req:
req["name"] = req["name"].strip()
if (
req["name"].lower() != kb.name.lower()
and len(
KnowledgebaseService.query(
name=req["name"], tenant_id=tenant_id, status=StatusEnum.VALID.value
)
)
> 0
):
return get_error_data_result(
message="Duplicated dataset name in updating dataset."
)
if not KnowledgebaseService.update_by_id(kb.id, req):
return get_error_data_result(message="Update dataset error.(Database error)")
return get_result(code=settings.RetCode.SUCCESS)
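# --- Usage sketch (not part of the diff) ---
# Updating a dataset via PUT /datasets/<dataset_id>. Note the guards in update():
# `chunk_method` and `embedding_model` are only changeable while chunk_count is 0.
# Base URL and key are illustrative assumptions.
import requests

resp = requests.put(
    "http://localhost:9380/api/v1/datasets/<dataset_id>",  # assumed mount point
    json={"name": "renamed-dataset", "permission": "team"},
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
)
resp.raise_for_status()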
@manager.route("/datasets", methods=["GET"])
@token_required
def list(tenant_id):
"""
List datasets.
---
tags:
- Datasets
security:
- ApiKeyAuth: []
parameters:
- in: query
name: id
type: string
required: false
description: Dataset ID to filter.
- in: query
name: name
type: string
required: false
description: Dataset name to filter.
- in: query
name: page
type: integer
required: false
default: 1
description: Page number.
- in: query
name: page_size
type: integer
required: false
default: 1024
description: Number of items per page.
- in: query
name: orderby
type: string
required: false
default: "create_time"
description: Field to order by.
- in: query
name: desc
type: boolean
required: false
default: true
description: Sort in descending order.
- in: header
name: Authorization
type: string
required: true
description: Bearer token for authentication.
responses:
200:
description: Successful operation.
schema:
type: array
items:
type: object
"""
id = request.args.get("id")
name = request.args.get("name")
if id:
kbs = KnowledgebaseService.get_kb_by_id(id, tenant_id)
if not kbs:
return get_error_data_result(f"You don't own the dataset {id}")
if name:
kbs = KnowledgebaseService.get_kb_by_name(name, tenant_id)
if not kbs:
return get_error_data_result(f"You don't own the dataset {name}")
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 30))
orderby = request.args.get("orderby", "create_time")
if request.args.get("desc") == "False" or request.args.get("desc") == "false":
desc = False
else:
desc = True
tenants = TenantService.get_joined_tenants_by_user_id(tenant_id)
kbs = KnowledgebaseService.get_list(
[m["tenant_id"] for m in tenants],
tenant_id,
page_number,
items_per_page,
orderby,
desc,
id,
name,
)
renamed_list = []
for kb in kbs:
key_mapping = {
"chunk_num": "chunk_count",
"doc_num": "document_count",
"parser_id": "chunk_method",
"embd_id": "embedding_model",
}
renamed_data = {}
for key, value in kb.items():
new_key = key_mapping.get(key, key)
renamed_data[new_key] = value
renamed_list.append(renamed_data)
return get_result(data=renamed_list)
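# --- Usage sketch (not part of the diff) ---
# Paging through datasets with the GET /datasets route above; `id` and `name`
# act as optional filters before pagination. Base URL and key are assumptions.
import requests

resp = requests.get(
    "http://localhost:9380/api/v1/datasets",  # assumed mount point
    params={"page": 1, "page_size": 30, "orderby": "create_time", "desc": "true"},
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
)
resp.raise_for_status()
for ds in resp.json().get("data", []):
    print(ds["name"], ds["chunk_count"], ds["document_count"])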


@@ -0,0 +1,76 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from flask import request, jsonify
from api.db import LLMType, ParserType
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMBundle
from api import settings
from api.utils.api_utils import validate_request, build_error_result, apikey_required
@manager.route('/dify/retrieval', methods=['POST'])
@apikey_required
@validate_request("knowledge_id", "query")
def retrieval(tenant_id):
req = request.json
question = req["query"]
kb_id = req["knowledge_id"]
retrieval_setting = req.get("retrieval_setting", {})
similarity_threshold = float(retrieval_setting.get("score_threshold", 0.0))
top = int(retrieval_setting.get("top_k", 1024))
try:
e, kb = KnowledgebaseService.get_by_id(kb_id)
if not e:
return build_error_result(message="Knowledgebase not found!", code=settings.RetCode.NOT_FOUND)
if kb.tenant_id != tenant_id:
return build_error_result(message="Knowledgebase not found!", code=settings.RetCode.NOT_FOUND)
embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
retr = settings.retrievaler if kb.parser_id != ParserType.KG else settings.kg_retrievaler
ranks = retr.retrieval(
question,
embd_mdl,
kb.tenant_id,
[kb_id],
page=1,
page_size=top,
similarity_threshold=similarity_threshold,
vector_similarity_weight=0.3,
top=top
)
records = []
for c in ranks["chunks"]:
c.pop("vector", None)
records.append({
"content": c["content_ltks"],
"score": c["similarity"],
"title": c["docnm_kwd"],
"metadata": {}
})
return jsonify({"records": records})
except Exception as e:
if str(e).find("not_found") > 0:
return build_error_result(
message='No chunk found! Check the chunk status please!',
code=settings.RetCode.NOT_FOUND
)
return build_error_result(message=str(e), code=settings.RetCode.SERVER_ERROR)
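# --- Usage sketch (not part of the diff) ---
# The request shape the Dify-compatible retrieval route above expects:
# `knowledge_id` and `query` are required, `retrieval_setting` is optional.
# The base URL and the Authorization header are illustrative assumptions.
import requests

resp = requests.post(
    "http://localhost:9380/api/v1/dify/retrieval",  # assumed mount point
    json={
        "knowledge_id": "<dataset_id>",
        "query": "How do I reset the device?",
        "retrieval_setting": {"score_threshold": 0.2, "top_k": 8},
    },
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["score"], record["title"])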

File diff suppressed because it is too large


@@ -1,266 +1,531 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
from uuid import uuid4
from flask import request, Response
from api.db import StatusEnum
from api.db.services.dialog_service import DialogService, ConversationService, chat
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import get_data_error_result
from api.utils.api_utils import get_json_result, token_required
@manager.route('/save', methods=['POST'])
@token_required
def set_conversation(tenant_id):
req = request.json
conv_id = req.get("id")
if "assistant_id" in req:
req["dialog_id"] = req.pop("assistant_id")
if "id" in req:
del req["id"]
conv = ConversationService.query(id=conv_id)
if not conv:
return get_data_error_result(retmsg="Session does not exist")
if not DialogService.query(id=conv[0].dialog_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="You do not own the session")
if req.get("dialog_id"):
dia = DialogService.query(tenant_id=tenant_id, id=req["dialog_id"], status=StatusEnum.VALID.value)
if not dia:
return get_data_error_result(retmsg="You do not own the assistant")
if "dialog_id" in req and not req.get("dialog_id"):
return get_data_error_result(retmsg="assistant_id can not be empty.")
if "message" in req:
return get_data_error_result(retmsg="message can not be change")
if "reference" in req:
return get_data_error_result(retmsg="reference can not be change")
if "name" in req and not req.get("name"):
return get_data_error_result(retmsg="name can not be empty.")
if not ConversationService.update_by_id(conv_id, req):
return get_data_error_result(retmsg="Session updates error")
return get_json_result(data=True)
if not req.get("dialog_id"):
return get_data_error_result(retmsg="assistant_id is required.")
dia = DialogService.query(tenant_id=tenant_id, id=req["dialog_id"], status=StatusEnum.VALID.value)
if not dia:
return get_data_error_result(retmsg="You do not own the assistant")
conv = {
"id": get_uuid(),
"dialog_id": req["dialog_id"],
"name": req.get("name", "New session"),
"message": [{"role": "assistant", "content": "Hi! I am your assistantcan I help you?"}]
}
if not conv.get("name"):
return get_data_error_result(retmsg="name can not be empty.")
ConversationService.save(**conv)
e, conv = ConversationService.get_by_id(conv["id"])
if not e:
return get_data_error_result(retmsg="Fail to new session!")
conv = conv.to_dict()
conv['messages'] = conv.pop("message")
conv["assistant_id"] = conv.pop("dialog_id")
del conv["reference"]
return get_json_result(data=conv)
@manager.route('/completion', methods=['POST'])
@token_required
def completion(tenant_id):
req = request.json
# req = {"conversation_id": "9aaaca4c11d311efa461fa163e197198", "messages": [
# {"role": "user", "content": "上海有吗?"}
# ]}
if "session_id" not in req:
return get_data_error_result(retmsg="session_id is required")
conv = ConversationService.query(id=req["session_id"])
if not conv:
return get_data_error_result(retmsg="Session does not exist")
conv = conv[0]
if not DialogService.query(id=conv.dialog_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="You do not own the session")
msg = []
question = {
"content": req.get("question"),
"role": "user",
"id": str(uuid4())
}
conv.message.append(question)
for m in conv.message:
if m["role"] == "system": continue
if m["role"] == "assistant" and not msg: continue
msg.append(m)
message_id = msg[-1].get("id")
e, dia = DialogService.get_by_id(conv.dialog_id)
del req["session_id"]
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
def fillin_conv(ans):
nonlocal conv, message_id
if not conv.reference:
conv.reference.append(ans["reference"])
else:
conv.reference[-1] = ans["reference"]
conv.message[-1] = {"role": "assistant", "content": ans["answer"],
"id": message_id, "prompt": ans.get("prompt", "")}
ans["id"] = message_id
def stream():
nonlocal dia, msg, req, conv
try:
for ans in chat(dia, msg, **req):
fillin_conv(ans)
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
ConversationService.update_by_id(conv.id, conv.to_dict())
except Exception as e:
yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"
if req.get("stream", True):
resp = Response(stream(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
else:
answer = None
for ans in chat(dia, msg, **req):
answer = ans
fillin_conv(ans)
ConversationService.update_by_id(conv.id, conv.to_dict())
break
return get_json_result(data=answer)
@manager.route('/get', methods=['GET'])
@token_required
def get(tenant_id):
req = request.args
if "id" not in req:
return get_data_error_result(retmsg="id is required")
conv_id = req["id"]
conv = ConversationService.query(id=conv_id)
if not conv:
return get_data_error_result(retmsg="Session does not exist")
if not DialogService.query(id=conv[0].dialog_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="You do not own the session")
if "assistant_id" in req:
if req["assistant_id"] != conv[0].dialog_id:
return get_data_error_result(retmsg="The session doesn't belong to the assistant")
conv = conv[0].to_dict()
conv['messages'] = conv.pop("message")
conv["assistant_id"] = conv.pop("dialog_id")
if conv["reference"]:
messages = conv["messages"]
message_num = 0
chunk_num = 0
while message_num < len(messages):
if message_num != 0 and messages[message_num]["role"] != "user":
chunk_list = []
if "chunks" in conv["reference"][chunk_num]:
chunks = conv["reference"][chunk_num]["chunks"]
for chunk in chunks:
new_chunk = {
"id": chunk["chunk_id"],
"content": chunk["content_with_weight"],
"document_id": chunk["doc_id"],
"document_name": chunk["docnm_kwd"],
"knowledgebase_id": chunk["kb_id"],
"image_id": chunk["img_id"],
"similarity": chunk["similarity"],
"vector_similarity": chunk["vector_similarity"],
"term_similarity": chunk["term_similarity"],
"positions": chunk["positions"],
}
chunk_list.append(new_chunk)
chunk_num += 1
messages[message_num]["reference"] = chunk_list
message_num += 1
del conv["reference"]
return get_json_result(data=conv)
@manager.route('/list', methods=["GET"])
@token_required
def list(tenant_id):
assistant_id = request.args["assistant_id"]
if not DialogService.query(tenant_id=tenant_id, id=assistant_id, status=StatusEnum.VALID.value):
return get_json_result(
data=False, retmsg=f"You don't own the assistant.",
retcode=RetCode.OPERATING_ERROR)
convs = ConversationService.query(
dialog_id=assistant_id,
order_by=ConversationService.model.create_time,
reverse=True)
convs = [d.to_dict() for d in convs]
for conv in convs:
conv['messages'] = conv.pop("message")
conv["assistant_id"] = conv.pop("dialog_id")
if conv["reference"]:
messages = conv["messages"]
message_num = 0
chunk_num = 0
while message_num < len(messages):
if message_num != 0 and messages[message_num]["role"] != "user":
chunk_list = []
if "chunks" in conv["reference"][chunk_num]:
chunks = conv["reference"][chunk_num]["chunks"]
for chunk in chunks:
new_chunk = {
"id": chunk["chunk_id"],
"content": chunk["content_with_weight"],
"document_id": chunk["doc_id"],
"document_name": chunk["docnm_kwd"],
"knowledgebase_id": chunk["kb_id"],
"image_id": chunk["img_id"],
"similarity": chunk["similarity"],
"vector_similarity": chunk["vector_similarity"],
"term_similarity": chunk["term_similarity"],
"positions": chunk["positions"],
}
chunk_list.append(new_chunk)
chunk_num += 1
messages[message_num]["reference"] = chunk_list
message_num += 1
del conv["reference"]
return get_json_result(data=convs)
@manager.route('/delete', methods=["DELETE"])
@token_required
def delete(tenant_id):
id = request.args.get("id")
if not id:
return get_data_error_result(retmsg="`id` is required in deleting operation")
conv = ConversationService.query(id=id)
if not conv:
return get_data_error_result(retmsg="Session doesn't exist")
conv = conv[0]
if not DialogService.query(id=conv.dialog_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_data_error_result(retmsg="You don't own the session")
ConversationService.delete_by_id(id)
return get_json_result(data=True)
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import re
import json
from copy import deepcopy
from uuid import uuid4
from api.db import LLMType
from flask import request, Response
from api.db.services.dialog_service import ask
from agent.canvas import Canvas
from api.db import StatusEnum
from api.db.db_models import API4Conversation
from api.db.services.api_service import API4ConversationService
from api.db.services.canvas_service import UserCanvasService
from api.db.services.dialog_service import DialogService, ConversationService, chat
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils import get_uuid
from api.utils.api_utils import get_error_data_result
from api.utils.api_utils import get_result, token_required
from api.db.services.llm_service import LLMBundle
@manager.route('/chats/<chat_id>/sessions', methods=['POST'])
@token_required
def create(tenant_id, chat_id):
req = request.json
req["dialog_id"] = chat_id
dia = DialogService.query(tenant_id=tenant_id, id=req["dialog_id"], status=StatusEnum.VALID.value)
if not dia:
return get_error_data_result(message="You do not own the assistant.")
conv = {
"id": get_uuid(),
"dialog_id": req["dialog_id"],
"name": req.get("name", "New session"),
"message": [{"role": "assistant", "content": "Hi! I am your assistantcan I help you?"}]
}
if not conv.get("name"):
return get_error_data_result(message="`name` can not be empty.")
ConversationService.save(**conv)
e, conv = ConversationService.get_by_id(conv["id"])
if not e:
return get_error_data_result(message="Fail to create a session!")
conv = conv.to_dict()
conv['messages'] = conv.pop("message")
conv["chat_id"] = conv.pop("dialog_id")
del conv["reference"]
return get_result(data=conv)
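# --- Usage sketch (not part of the diff) ---
# Creating a session under an existing chat via POST /chats/<chat_id>/sessions.
# Base URL and key are illustrative assumptions.
import requests

resp = requests.post(
    "http://localhost:9380/api/v1/chats/<chat_id>/sessions",  # assumed mount point
    json={"name": "First conversation"},
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
)
resp.raise_for_status()
session_id = resp.json()["data"]["id"]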
@manager.route('/agents/<agent_id>/sessions', methods=['POST'])
@token_required
def create_agent_session(tenant_id, agent_id):
req = request.json
e, cvs = UserCanvasService.get_by_id(agent_id)
if not e:
return get_error_data_result("Agent not found.")
if cvs.user_id != tenant_id:
return get_error_data_result(message="You do not own the agent.")
if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
canvas = Canvas(cvs.dsl, tenant_id)
conv = {
"id": get_uuid(),
"dialog_id": cvs.id,
"user_id": req.get("usr_id","") if isinstance(req, dict) else "",
"message": [{"role": "assistant", "content": canvas.get_prologue()}],
"source": "agent"
}
API4ConversationService.save(**conv)
conv["agent_id"] = conv.pop("dialog_id")
return get_result(data=conv)
@manager.route('/chats/<chat_id>/sessions/<session_id>', methods=['PUT'])
@token_required
def update(tenant_id, chat_id, session_id):
req = request.json
req["dialog_id"] = chat_id
conv_id = session_id
conv = ConversationService.query(id=conv_id, dialog_id=chat_id)
if not conv:
return get_error_data_result(message="Session does not exist")
if not DialogService.query(id=chat_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_error_data_result(message="You do not own the session")
if "message" in req or "messages" in req:
return get_error_data_result(message="`message` can not be change")
if "reference" in req:
return get_error_data_result(message="`reference` can not be change")
if "name" in req and not req.get("name"):
return get_error_data_result(message="`name` can not be empty.")
if not ConversationService.update_by_id(conv_id, req):
return get_error_data_result(message="Session updates error")
return get_result()
@manager.route('/chats/<chat_id>/completions', methods=['POST'])
@token_required
def completion(tenant_id, chat_id):
req = request.json
if not req.get("session_id"):
conv = {
"id": get_uuid(),
"dialog_id": chat_id,
"name": req.get("name", "New session"),
"message": [{"role": "assistant", "content": "Hi! I am your assistantcan I help you?"}]
}
if not conv.get("name"):
return get_error_data_result(message="`name` can not be empty.")
ConversationService.save(**conv)
e, conv = ConversationService.get_by_id(conv["id"])
session_id=conv.id
else:
session_id = req.get("session_id")
if not req.get("question"):
return get_error_data_result(message="Please input your question.")
conv = ConversationService.query(id=session_id, dialog_id=chat_id)
if not conv:
return get_error_data_result(message="Session does not exist")
conv = conv[0]
if not DialogService.query(id=chat_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_error_data_result(message="You do not own the chat")
msg = []
question = {
"content": req.get("question"),
"role": "user",
"id": str(uuid4())
}
conv.message.append(question)
for m in conv.message:
if m["role"] == "system": continue
if m["role"] == "assistant" and not msg: continue
msg.append(m)
message_id = msg[-1].get("id")
e, dia = DialogService.get_by_id(conv.dialog_id)
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
def fillin_conv(ans):
reference = ans["reference"]
temp_reference = deepcopy(ans["reference"])
nonlocal conv, message_id
if not conv.reference:
conv.reference.append(temp_reference)
else:
conv.reference[-1] = temp_reference
conv.message[-1] = {"role": "assistant", "content": ans["answer"],
"id": message_id, "prompt": ans.get("prompt", "")}
if "chunks" in reference:
chunks = reference.get("chunks")
chunk_list = []
for chunk in chunks:
new_chunk = {
"id": chunk["chunk_id"],
"content": chunk["content_with_weight"],
"document_id": chunk["doc_id"],
"document_name": chunk["docnm_kwd"],
"dataset_id": chunk["kb_id"],
"image_id": chunk.get("image_id", ""),
"similarity": chunk["similarity"],
"vector_similarity": chunk["vector_similarity"],
"term_similarity": chunk["term_similarity"],
"positions": chunk.get("positions", []),
}
chunk_list.append(new_chunk)
reference["chunks"] = chunk_list
ans["id"] = message_id
ans["session_id"]=session_id
def stream():
nonlocal dia, msg, req, conv
try:
for ans in chat(dia, msg, **req):
fillin_conv(ans)
yield "data:" + json.dumps({"code": 0, "data": ans}, ensure_ascii=False) + "\n\n"
ConversationService.update_by_id(conv.id, conv.to_dict())
except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e),"reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "data": True}, ensure_ascii=False) + "\n\n"
if req.get("stream", True):
resp = Response(stream(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
else:
answer = None
for ans in chat(dia, msg, **req):
answer = ans
fillin_conv(ans)
ConversationService.update_by_id(conv.id, conv.to_dict())
break
return get_result(data=answer)
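# --- Usage sketch (not part of the diff) ---
# Consuming the server-sent event stream produced by completion() above. Each
# frame is a "data:" line carrying JSON; a final frame with data == True marks
# the end of the stream. Base URL and key are illustrative assumptions.
import json
import requests

def stream_answer(chat_id, session_id, question):
    resp = requests.post(
        f"http://localhost:9380/api/v1/chats/{chat_id}/completions",  # assumed mount point
        json={"question": question, "session_id": session_id, "stream": True},
        headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
        stream=True,
    )
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue
        frame = json.loads(line[len("data:"):])
        data = frame.get("data")
        if data is True:  # terminal sentinel emitted by stream()
            break
        if isinstance(data, dict):
            yield data.get("answer", "")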
@manager.route('/agents/<agent_id>/completions', methods=['POST'])
@token_required
def agent_completion(tenant_id, agent_id):
req = request.json
e, cvs = UserCanvasService.get_by_id(agent_id)
if not e:
return get_error_data_result("Agent not found.")
if cvs.user_id != tenant_id:
return get_error_data_result(message="You do not own the agent.")
if not isinstance(cvs.dsl, str):
cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)
canvas = Canvas(cvs.dsl, tenant_id)
if not req.get("session_id"):
session_id = get_uuid()
conv = {
"id": session_id,
"dialog_id": cvs.id,
"user_id": req.get("user_id",""),
"message": [{"role": "assistant", "content": canvas.get_prologue()}],
"source": "agent"
}
API4ConversationService.save(**conv)
conv = API4Conversation(**conv)
else:
session_id = req.get("session_id")
e, conv = API4ConversationService.get_by_id(req["session_id"])
if not e:
return get_error_data_result(message="Session not found!")
messages = conv.message
question = req.get("question")
if not question:
return get_error_data_result("`question` is required.")
question = {
"role": "user",
"content": question,
"id": str(uuid4())
}
messages.append(question)
msg = []
for m in messages:
if m["role"] == "system":
continue
if m["role"] == "assistant" and not msg:
continue
msg.append(m)
if not msg[-1].get("id"): msg[-1]["id"] = get_uuid()
message_id = msg[-1]["id"]
if "quote" not in req: req["quote"] = False
stream = req.get("stream", True)
def fillin_conv(ans):
reference = ans["reference"]
temp_reference = deepcopy(ans["reference"])
nonlocal conv, message_id
if not conv.reference:
conv.reference.append(temp_reference)
else:
conv.reference[-1] = temp_reference
conv.message[-1] = {"role": "assistant", "content": ans["answer"], "id": message_id}
if "chunks" in reference:
chunks = reference.get("chunks")
chunk_list = []
for chunk in chunks:
new_chunk = {
"id": chunk["chunk_id"],
"content": chunk["content_with_weight"],
"document_id": chunk["doc_id"],
"document_name": chunk["docnm_kwd"],
"dataset_id": chunk["kb_id"],
"image_id": chunk["image_id"],
"similarity": chunk["similarity"],
"vector_similarity": chunk["vector_similarity"],
"term_similarity": chunk["term_similarity"],
"positions": chunk["positions"],
}
chunk_list.append(new_chunk)
reference["chunks"] = chunk_list
ans["id"] = message_id
ans["session_id"] = session_id
def rename_field(ans):
reference = ans['reference']
if not isinstance(reference, dict):
return
for chunk_i in reference.get('chunks', []):
if 'docnm_kwd' in chunk_i:
chunk_i['doc_name'] = chunk_i['docnm_kwd']
chunk_i.pop('docnm_kwd')
conv.message.append(msg[-1])
if not conv.reference:
conv.reference = []
conv.message.append({"role": "assistant", "content": "", "id": message_id})
conv.reference.append({"chunks": [], "doc_aggs": []})
final_ans = {"reference": [], "content": ""}
canvas.messages.append(msg[-1])
canvas.add_user_input(msg[-1]["content"])
if stream:
def sse():
nonlocal answer, cvs
try:
for ans in canvas.run(stream=True):
if ans.get("running_status"):
yield "data:" + json.dumps({"code": 0, "message": "",
"data": {"answer": ans["content"],
"running_status": True}},
ensure_ascii=False) + "\n\n"
continue
for k in ans.keys():
final_ans[k] = ans[k]
ans = {"answer": ans["content"], "reference": ans.get("reference", [])}
fillin_conv(ans)
rename_field(ans)
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans},
ensure_ascii=False) + "\n\n"
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
canvas.history.append(("assistant", final_ans["content"]))
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
API4ConversationService.append_message(conv.id, conv.to_dict())
except Exception as e:
cvs.dsl = json.loads(str(canvas))
API4ConversationService.append_message(conv.id, conv.to_dict())
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
resp = Response(sse(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
for answer in canvas.run(stream=False):
if answer.get("running_status"): continue
final_ans["content"] = "\n".join(answer["content"]) if "content" in answer else ""
canvas.messages.append({"role": "assistant", "content": final_ans["content"], "id": message_id})
if final_ans.get("reference"):
canvas.reference.append(final_ans["reference"])
cvs.dsl = json.loads(str(canvas))
result = {"answer": final_ans["content"], "reference": final_ans.get("reference", [])}
fillin_conv(result)
API4ConversationService.append_message(conv.id, conv.to_dict())
rename_field(result)
return get_result(data=result)
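# --- Usage sketch (not part of the diff) ---
# Non-streaming call to the agent completion route above; with "stream": False
# the handler runs the canvas once and returns a single result. Base URL and
# key are illustrative assumptions.
import requests

resp = requests.post(
    "http://localhost:9380/api/v1/agents/<agent_id>/completions",  # assumed mount point
    json={"question": "Summarize the latest report", "stream": False},
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
)
resp.raise_for_status()
print(resp.json()["data"]["answer"])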
@manager.route('/chats/<chat_id>/sessions', methods=['GET'])
@token_required
def list_session(chat_id, tenant_id):
if not DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value):
return get_error_data_result(message=f"You don't own the assistant {chat_id}.")
id = request.args.get("id")
name = request.args.get("name")
page_number = int(request.args.get("page", 1))
items_per_page = int(request.args.get("page_size", 30))
orderby = request.args.get("orderby", "create_time")
if request.args.get("desc") == "False" or request.args.get("desc") == "false":
desc = False
else:
desc = True
convs = ConversationService.get_list(chat_id, page_number, items_per_page, orderby, desc, id, name)
if not convs:
return get_result(data=[])
for conv in convs:
conv['messages'] = conv.pop("message")
infos = conv["messages"]
for info in infos:
if "prompt" in info:
info.pop("prompt")
conv["chat_id"] = conv.pop("dialog_id")
if conv["reference"]:
messages = conv["messages"]
message_num = 0
chunk_num = 0
while message_num < len(messages):
if message_num != 0 and messages[message_num]["role"] != "user":
chunk_list = []
if "chunks" in conv["reference"][chunk_num]:
chunks = conv["reference"][chunk_num]["chunks"]
for chunk in chunks:
new_chunk = {
"id": chunk["chunk_id"],
"content": chunk["content_with_weight"],
"document_id": chunk["doc_id"],
"document_name": chunk["docnm_kwd"],
"dataset_id": chunk["kb_id"],
"image_id": chunk["image_id"],
"similarity": chunk["similarity"],
"vector_similarity": chunk["vector_similarity"],
"term_similarity": chunk["term_similarity"],
"positions": chunk["positions"],
}
chunk_list.append(new_chunk)
chunk_num += 1
messages[message_num]["reference"] = chunk_list
message_num += 1
del conv["reference"]
return get_result(data=convs)
@manager.route('/chats/<chat_id>/sessions', methods=["DELETE"])
@token_required
def delete(tenant_id, chat_id):
if not DialogService.query(id=chat_id, tenant_id=tenant_id, status=StatusEnum.VALID.value):
return get_error_data_result(message="You don't own the chat")
req = request.json
convs = ConversationService.query(dialog_id=chat_id)
if not req:
ids = None
else:
ids = req.get("ids")
if not ids:
conv_list = []
for conv in convs:
conv_list.append(conv.id)
else:
conv_list = ids
for id in conv_list:
conv = ConversationService.query(id=id, dialog_id=chat_id)
if not conv:
return get_error_data_result(message="The chat doesn't own the session")
ConversationService.delete_by_id(id)
return get_result()
@manager.route('/sessions/ask', methods=['POST'])
@token_required
def ask_about(tenant_id):
req = request.json
if not req.get("question"):
return get_error_data_result("`question` is required.")
if not req.get("dataset_ids"):
return get_error_data_result("`dataset_ids` is required.")
if not isinstance(req.get("dataset_ids"),list):
return get_error_data_result("`dataset_ids` should be a list.")
req["kb_ids"]=req.pop("dataset_ids")
for kb_id in req["kb_ids"]:
if not KnowledgebaseService.accessible(kb_id, tenant_id):
return get_error_data_result(f"You don't own the dataset {kb_id}.")
kbs = KnowledgebaseService.query(id=kb_id)
kb = kbs[0]
if kb.chunk_num == 0:
return get_error_data_result(f"The dataset {kb_id} doesn't own parsed file")
uid = tenant_id
def stream():
nonlocal req, uid
try:
for ans in ask(req["question"], req["kb_ids"], uid):
yield "data:" + json.dumps({"code": 0, "message": "", "data": ans}, ensure_ascii=False) + "\n\n"
except Exception as e:
yield "data:" + json.dumps({"code": 500, "message": str(e),
"data": {"answer": "**ERROR**: " + str(e), "reference": []}},
ensure_ascii=False) + "\n\n"
yield "data:" + json.dumps({"code": 0, "message": "", "data": True}, ensure_ascii=False) + "\n\n"
resp = Response(stream(), mimetype="text/event-stream")
resp.headers.add_header("Cache-control", "no-cache")
resp.headers.add_header("Connection", "keep-alive")
resp.headers.add_header("X-Accel-Buffering", "no")
resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
return resp
@manager.route('/sessions/related_questions', methods=['POST'])
@token_required
def related_questions(tenant_id):
req = request.json
if not req.get("question"):
return get_error_data_result("`question` is required.")
question = req["question"]
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT)
prompt = """
Objective: To generate search terms related to the user's search keywords, helping users find more valuable information.
Instructions:
- Based on the keywords provided by the user, generate 5-10 related search terms.
- Each search term should be directly or indirectly related to the keyword, guiding the user to find more valuable information.
- Use common, general terms as much as possible, avoiding obscure words or technical jargon.
- Keep the term length between 2-4 words, concise and clear.
- DO NOT translate, use the language of the original keywords.
### Example:
Keywords: Chinese football
Related search terms:
1. Current status of Chinese football
2. Reform of Chinese football
3. Youth training of Chinese football
4. Chinese football in the Asian Cup
5. Chinese football in the World Cup
Reason:
- When searching, users often only use one or two keywords, making it difficult to fully express their information needs.
- Generating related search terms can help users dig deeper into relevant information and improve search efficiency.
- At the same time, related terms can also help search engines better understand user needs and return more accurate search results.
"""
ans = chat_mdl.chat(prompt, [{"role": "user", "content": f"""
Keywords: {question}
Related search terms:
"""}], {"temperature": 0.9})
return get_result(data=[re.sub(r"^[0-9]\. ", "", a) for a in ans.split("\n") if re.match(r"^[0-9]\. ", a)])
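# --- Usage sketch (not part of the diff) ---
# Calling the related_questions route above; the handler strips the "N. " list
# numbering before returning the suggestions. Base URL and key are assumptions.
import requests

resp = requests.post(
    "http://localhost:9380/api/v1/sessions/related_questions",  # assumed mount point
    json={"question": "Chinese football"},
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
)
resp.raise_for_status()
for term in resp.json()["data"]:
    print(term)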


@@ -13,78 +13,281 @@
# See the License for the specific language governing permissions and
# limitations under the License
#
import logging
from datetime import datetime
import json
from flask_login import login_required
from flask_login import login_required, current_user
from api.db.db_models import APIToken
from api.db.services.api_service import APITokenService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.settings import DATABASE_TYPE
from api.utils.api_utils import get_json_result
from api.versions import get_rag_version
from rag.settings import SVR_QUEUE_NAME
from rag.utils.es_conn import ELASTICSEARCH
from api.db.services.user_service import UserTenantService
from api import settings
from api.utils import current_timestamp, datetime_format
from api.utils.api_utils import (
get_json_result,
get_data_error_result,
server_error_response,
generate_confirmation_token,
)
from api.versions import get_ragflow_version
from rag.utils.storage_factory import STORAGE_IMPL, STORAGE_IMPL_TYPE
from timeit import default_timer as timer
from rag.utils.redis_conn import REDIS_CONN
@manager.route('/version', methods=['GET'])
@manager.route("/version", methods=["GET"])
@login_required
def version():
return get_json_result(data=get_rag_version())
"""
Get the current version of the application.
---
tags:
- System
security:
- ApiKeyAuth: []
responses:
200:
description: Version retrieved successfully.
schema:
type: object
properties:
version:
type: string
description: Version number.
"""
return get_json_result(data=get_ragflow_version())
@manager.route('/status', methods=['GET'])
@manager.route("/status", methods=["GET"])
@login_required
def status():
"""
Get the system status.
---
tags:
- System
security:
- ApiKeyAuth: []
responses:
200:
description: System is operational.
schema:
type: object
properties:
es:
type: object
description: Elasticsearch status.
storage:
type: object
description: Storage status.
database:
type: object
description: Database status.
503:
description: Service unavailable.
schema:
type: object
properties:
error:
type: string
description: Error message.
"""
res = {}
st = timer()
try:
res["es"] = ELASTICSEARCH.health()
res["es"]["elapsed"] = "{:.1f}".format((timer() - st)*1000.)
res["doc_engine"] = settings.docStoreConn.health()
res["doc_engine"]["elapsed"] = "{:.1f}".format((timer() - st) * 1000.0)
except Exception as e:
res["es"] = {"status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
res["doc_engine"] = {
"type": "unknown",
"status": "red",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
"error": str(e),
}
st = timer()
try:
STORAGE_IMPL.health()
res["storage"] = {"storage": STORAGE_IMPL_TYPE.lower(), "status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
res["storage"] = {
"storage": STORAGE_IMPL_TYPE.lower(),
"status": "green",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
}
except Exception as e:
res["storage"] = {"storage": STORAGE_IMPL_TYPE.lower(), "status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
res["storage"] = {
"storage": STORAGE_IMPL_TYPE.lower(),
"status": "red",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
"error": str(e),
}
st = timer()
try:
KnowledgebaseService.get_by_id("x")
res["database"] = {"database": DATABASE_TYPE.lower(), "status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
res["database"] = {
"database": settings.DATABASE_TYPE.lower(),
"status": "green",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
}
except Exception as e:
res["database"] = {"database": DATABASE_TYPE.lower(), "status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
res["database"] = {
"database": settings.DATABASE_TYPE.lower(),
"status": "red",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
"error": str(e),
}
st = timer()
try:
if not REDIS_CONN.health():
raise Exception("Lost connection!")
res["redis"] = {"status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
res["redis"] = {
"status": "green",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
}
except Exception as e:
res["redis"] = {"status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}
res["redis"] = {
"status": "red",
"elapsed": "{:.1f}".format((timer() - st) * 1000.0),
"error": str(e),
}
task_executor_heartbeats = {}
try:
v = REDIS_CONN.get("TASKEXE")
if not v:
raise Exception("No task executor running!")
obj = json.loads(v)
color = "green"
for id in obj.keys():
arr = obj[id]
if len(arr) == 1:
obj[id] = [0]
else:
obj[id] = [arr[i+1]-arr[i] for i in range(len(arr)-1)]
elapsed = max(obj[id])
if elapsed > 50: color = "yellow"
if elapsed > 120: color = "red"
res["task_executor"] = {"status": color, "elapsed": obj}
except Exception as e:
res["task_executor"] = {"status": "red", "error": str(e)}
task_executors = REDIS_CONN.smembers("TASKEXE")
now = datetime.now().timestamp()
for task_executor_id in task_executors:
heartbeats = REDIS_CONN.zrangebyscore(task_executor_id, now - 60*30, now)
heartbeats = [json.loads(heartbeat) for heartbeat in heartbeats]
task_executor_heartbeats[task_executor_id] = heartbeats
except Exception:
logging.exception("get task executor heartbeats failed!")
res["task_executor_heartbeats"] = task_executor_heartbeats
return get_json_result(data=res)
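# --- Worked example of the heartbeat coloring above (illustrative only) ---
# Each task executor reports a list of heartbeat timestamps; the handler
# turns them into gaps between consecutive beats and colors the status by
# the largest gap (assuming seconds: > 50 => yellow, > 120 => red).
def executor_color(timestamps):
    if len(timestamps) == 1:
        gaps = [0]
    else:
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    worst = max(gaps)
    if worst > 120:
        return "red"
    if worst > 50:
        return "yellow"
    return "green"

assert executor_color([100.0]) == "green"              # single beat -> gap of 0
assert executor_color([0.0, 40.0, 100.0]) == "yellow"  # worst gap is 60
assert executor_color([0.0, 130.0]) == "red"           # worst gap is 130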
@manager.route("/new_token", methods=["POST"])
@login_required
def new_token():
"""
Generate a new API token.
---
tags:
- API Tokens
security:
- ApiKeyAuth: []
parameters:
- in: query
name: name
type: string
required: false
description: Name of the token.
responses:
200:
description: Token generated successfully.
schema:
type: object
properties:
token:
type: string
description: The generated API token.
"""
try:
tenants = UserTenantService.query(user_id=current_user.id)
if not tenants:
return get_data_error_result(message="Tenant not found!")
tenant_id = tenants[0].tenant_id
obj = {
"tenant_id": tenant_id,
"token": generate_confirmation_token(tenant_id),
"create_time": current_timestamp(),
"create_date": datetime_format(datetime.now()),
"update_time": None,
"update_date": None,
}
if not APITokenService.save(**obj):
return get_data_error_result(message="Fail to new a dialog!")
return get_json_result(data=obj)
except Exception as e:
return server_error_response(e)
@manager.route("/token_list", methods=["GET"])
@login_required
def token_list():
"""
List all API tokens for the current user.
---
tags:
- API Tokens
security:
- ApiKeyAuth: []
responses:
200:
description: List of API tokens.
schema:
type: object
properties:
tokens:
type: array
items:
type: object
properties:
token:
type: string
description: The API token.
name:
type: string
description: Name of the token.
create_time:
type: string
description: Token creation time.
"""
try:
tenants = UserTenantService.query(user_id=current_user.id)
if not tenants:
return get_data_error_result(message="Tenant not found!")
objs = APITokenService.query(tenant_id=tenants[0].tenant_id)
return get_json_result(data=[o.to_dict() for o in objs])
except Exception as e:
return server_error_response(e)
@manager.route("/token/<token>", methods=["DELETE"])
@login_required
def rm(token):
"""
Remove an API token.
---
tags:
- API Tokens
security:
- ApiKeyAuth: []
parameters:
- in: path
name: token
type: string
required: true
description: The API token to remove.
responses:
200:
description: Token removed successfully.
schema:
type: object
properties:
success:
type: boolean
description: Deletion status.
"""
APITokenService.filter_delete(
[APIToken.tenant_id == current_user.id, APIToken.token == token]
)
return get_json_result(data=True)
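# --- Illustrative token lifecycle (not part of the diff) ---
# A hedged sketch driving the three endpoints above from an already
# logged-in session; the base URL and "/v1/system" prefix are assumptions,
# and the response envelope is assumed to nest the payload under "data".
import requests

BASE_URL = "http://127.0.0.1:9380"   # hypothetical
s = requests.Session()               # assumed to carry login cookies already

token = s.post(f"{BASE_URL}/v1/system/new_token").json()["data"]["token"]
print(s.get(f"{BASE_URL}/v1/system/token_list").json()["data"])  # includes the new token
s.delete(f"{BASE_URL}/v1/system/token/{token}")                  # revoke it again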

View File

@ -15,32 +15,30 @@
#
from flask import request
from flask_login import current_user, login_required
from flask_login import login_required, current_user
from api import settings
from api.db import UserTenantRole, StatusEnum
from api.db.db_models import UserTenant
from api.db.services.user_service import TenantService, UserTenantService
from api.settings import RetCode
from api.db.services.user_service import UserTenantService, UserService
from api.utils import get_uuid
from api.utils.api_utils import get_json_result, validate_request, server_error_response
@manager.route("/list", methods=["GET"])
@login_required
def tenant_list():
try:
tenants = TenantService.get_by_user_id(current_user.id)
return get_json_result(data=tenants)
except Exception as e:
return server_error_response(e)
from api.utils import get_uuid, delta_seconds
from api.utils.api_utils import get_json_result, validate_request, server_error_response, get_data_error_result
@manager.route("/<tenant_id>/user/list", methods=["GET"])
@login_required
def user_list(tenant_id):
if current_user.id != tenant_id:
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR)
try:
users = UserTenantService.get_by_tenant_id(tenant_id)
for u in users:
u["delta_seconds"] = delta_seconds(str(u["update_date"]))
return get_json_result(data=users)
except Exception as e:
return server_error_response(e)
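# --- Hypothetical sketch of delta_seconds (assumption, not shown in this diff) ---
# The import above pulls delta_seconds from api.utils without showing it.
# Judging by its use on str(u["update_date"]), it plausibly returns the
# seconds elapsed since a datetime string; the format string is a guess.
from datetime import datetime

def delta_seconds(date_string: str) -> float:
    then = datetime.strptime(date_string, "%Y-%m-%d %H:%M:%S")
    return (datetime.now() - then).total_seconds()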
@ -48,38 +46,73 @@ def user_list(tenant_id):
@manager.route('/<tenant_id>/user', methods=['POST'])
@login_required
@validate_request("user_id")
@validate_request("email")
def create(tenant_id):
user_id = request.json.get("user_id")
if not user_id:
if current_user.id != tenant_id:
return get_json_result(
data=False, retmsg='Lack of "USER ID"', retcode=RetCode.ARGUMENT_ERROR)
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR)
try:
user_tenants = UserTenantService.query(user_id=user_id, tenant_id=tenant_id)
if user_tenants:
uuid = user_tenants[0].id
return get_json_result(data={"id": uuid})
req = request.json
usrs = UserService.query(email=req["email"])
if not usrs:
return get_data_error_result(message="User not found.")
uuid = get_uuid()
UserTenantService.save(
id = uuid,
user_id = user_id,
tenant_id = tenant_id,
role = UserTenantRole.NORMAL.value,
status = StatusEnum.VALID.value)
user_id = usrs[0].id
user_tenants = UserTenantService.query(user_id=user_id, tenant_id=tenant_id)
if user_tenants:
if user_tenants[0].status == UserTenantRole.NORMAL.value:
return get_data_error_result(message="This user is in the team already.")
return get_data_error_result(message="Invitation notification is sent.")
return get_json_result(data={"id": uuid})
except Exception as e:
return server_error_response(e)
UserTenantService.save(
id=get_uuid(),
user_id=user_id,
tenant_id=tenant_id,
invited_by=current_user.id,
role=UserTenantRole.INVITE,
status=StatusEnum.VALID.value)
usr = usrs[0].to_dict()
usr = {k: v for k, v in usr.items() if k in ["id", "avatar", "email", "nickname"]}
return get_json_result(data=usr)
@manager.route('/<tenant_id>/user/<user_id>', methods=['DELETE'])
@login_required
def rm(tenant_id, user_id):
if current_user.id != tenant_id and current_user.id != user_id:
return get_json_result(
data=False,
message='No authorization.',
code=settings.RetCode.AUTHENTICATION_ERROR)
try:
UserTenantService.filter_delete([UserTenant.tenant_id == tenant_id, UserTenant.user_id == user_id])
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
@manager.route("/list", methods=["GET"])
@login_required
def tenant_list():
try:
users = UserTenantService.get_tenants_by_user_id(current_user.id)
for u in users:
u["delta_seconds"] = delta_seconds(str(u["update_date"]))
return get_json_result(data=users)
except Exception as e:
return server_error_response(e)
@manager.route("/agree/<tenant_id>", methods=["PUT"])
@login_required
def agree(tenant_id):
try:
UserTenantService.filter_update([UserTenant.tenant_id == tenant_id, UserTenant.user_id == current_user.id], {"role": UserTenantRole.NORMAL})
return get_json_result(data=True)
except Exception as e:
return server_error_response(e)
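# --- Illustrative invitation flow (not part of the diff) ---
# Putting the endpoints above together: the owner invites by email, which
# stores a UserTenant row with role=INVITE; the invitee accepts via
# /agree/<tenant_id>, flipping the role to NORMAL; either side can remove
# the membership. The base URL and "/v1/tenant" prefix are assumptions.
import requests

BASE_URL = "http://127.0.0.1:9380"   # hypothetical
owner = requests.Session()           # assumed logged in as the tenant owner
invitee = requests.Session()         # assumed logged in as the invited user

tenant_id = "owner-tenant-id"        # placeholder
invited = owner.post(f"{BASE_URL}/v1/tenant/{tenant_id}/user",
                     json={"email": "bob@example.com"}).json()["data"]
invitee.put(f"{BASE_URL}/v1/tenant/agree/{tenant_id}")                  # INVITE -> NORMAL
owner.delete(f"{BASE_URL}/v1/tenant/{tenant_id}/user/{invited['id']}")  # remove again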

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import json
import re
from datetime import datetime
@ -23,65 +24,127 @@ from flask_login import login_required, current_user, login_user, logout_user
from api.db.db_models import TenantLLM
from api.db.services.llm_service import TenantLLMService, LLMService
from api.utils.api_utils import server_error_response, validate_request
from api.utils import get_uuid, get_format_time, decrypt, download_img, current_timestamp, datetime_format
from api.db import UserTenantRole, LLMType, FileType
from api.settings import RetCode, GITHUB_OAUTH, FEISHU_OAUTH, CHAT_MDL, EMBEDDING_MDL, ASR_MDL, IMAGE2TEXT_MDL, PARSERS, \
API_KEY, \
LLM_FACTORY, LLM_BASE_URL, RERANK_MDL
from api.utils.api_utils import (
server_error_response,
validate_request,
get_data_error_result,
)
from api.utils import (
get_uuid,
get_format_time,
decrypt,
download_img,
current_timestamp,
datetime_format,
)
from api.db import UserTenantRole, FileType
from api import settings
from api.db.services.user_service import UserService, TenantService, UserTenantService
from api.db.services.file_service import FileService
from api.settings import stat_logger
from api.utils.api_utils import get_json_result, construct_response
@manager.route('/login', methods=['POST', 'GET'])
@manager.route("/login", methods=["POST", "GET"])
def login():
"""
User login endpoint.
---
tags:
- User
parameters:
- in: body
name: body
description: Login credentials.
required: true
schema:
type: object
properties:
email:
type: string
description: User email.
password:
type: string
description: User password.
responses:
200:
description: Login successful.
schema:
type: object
401:
description: Authentication failed.
schema:
type: object
"""
if not request.json:
return get_json_result(data=False,
retcode=RetCode.AUTHENTICATION_ERROR,
retmsg='Unauthorized!')
return get_json_result(
data=False, code=settings.RetCode.AUTHENTICATION_ERROR, message="Unauthorized!"
)
email = request.json.get('email', "")
email = request.json.get("email", "")
users = UserService.query(email=email)
if not users:
return get_json_result(data=False,
retcode=RetCode.AUTHENTICATION_ERROR,
retmsg=f'Email: {email} is not registered!')
return get_json_result(
data=False,
code=settings.RetCode.AUTHENTICATION_ERROR,
message=f"Email: {email} is not registered!",
)
password = request.json.get('password')
password = request.json.get("password")
try:
password = decrypt(password)
except BaseException:
return get_json_result(data=False,
retcode=RetCode.SERVER_ERROR,
retmsg='Fail to crypt password')
return get_json_result(
data=False, code=settings.RetCode.SERVER_ERROR, message="Failed to decrypt password"
)
user = UserService.query_user(email, password)
if user:
response_data = user.to_json()
user.access_token = get_uuid()
login_user(user)
user.update_time = current_timestamp(),
user.update_date = datetime_format(datetime.now()),
user.update_time = (current_timestamp(),)
user.update_date = (datetime_format(datetime.now()),)
user.save()
msg = "Welcome back!"
return construct_response(data=response_data, auth=user.get_id(), retmsg=msg)
return construct_response(data=response_data, auth=user.get_id(), message=msg)
else:
return get_json_result(data=False,
retcode=RetCode.AUTHENTICATION_ERROR,
retmsg='Email and password do not match!')
return get_json_result(
data=False,
code=settings.RetCode.AUTHENTICATION_ERROR,
message="Email and password do not match!",
)
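# --- Illustrative login call (not part of the diff) ---
# The handler above runs decrypt() on the submitted password, so clients
# are expected to send it pre-encrypted; the exact scheme is not shown in
# this diff, so the encrypted_password argument is left to the caller.
import requests

def login(email: str, encrypted_password: str) -> dict:
    resp = requests.post(
        "http://127.0.0.1:9380/v1/user/login",   # hypothetical URL
        json={"email": email, "password": encrypted_password},
    )
    return resp.json()   # {"message": "Welcome back!", ...} on success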
@manager.route('/github_callback', methods=['GET'])
@manager.route("/github_callback", methods=["GET"])
def github_callback():
"""
GitHub OAuth callback endpoint.
---
tags:
- OAuth
parameters:
- in: query
name: code
type: string
required: true
description: Authorization code from GitHub.
responses:
200:
description: Authentication successful.
schema:
type: object
"""
import requests
res = requests.post(GITHUB_OAUTH.get("url"),
data={
"client_id": GITHUB_OAUTH.get("client_id"),
"client_secret": GITHUB_OAUTH.get("secret_key"),
"code": request.args.get('code')},
headers={"Accept": "application/json"})
res = requests.post(
settings.GITHUB_OAUTH.get("url"),
data={
"client_id": settings.GITHUB_OAUTH.get("client_id"),
"client_secret": settings.GITHUB_OAUTH.get("secret_key"),
"code": request.args.get("code"),
},
headers={"Accept": "application/json"},
)
res = res.json()
if "error" in res:
return redirect("/?error=%s" % res["error_description"])
@ -101,21 +164,24 @@ def github_callback():
try:
avatar = download_img(user_info["avatar_url"])
except Exception as e:
stat_logger.exception(e)
logging.exception(e)
avatar = ""
users = user_register(user_id, {
"access_token": session["access_token"],
"email": email_address,
"avatar": avatar,
"nickname": user_info["login"],
"login_channel": "github",
"last_login_time": get_format_time(),
"is_superuser": False,
})
users = user_register(
user_id,
{
"access_token": session["access_token"],
"email": email_address,
"avatar": avatar,
"nickname": user_info["login"],
"login_channel": "github",
"last_login_time": get_format_time(),
"is_superuser": False,
},
)
if not users:
raise Exception(f'Fail to register {email_address}.')
raise Exception(f"Fail to register {email_address}.")
if len(users) > 1:
raise Exception(f'Same email: {email_address} exists!')
raise Exception(f"Same email: {email_address} exists!")
# Try to log in
user = users[0]
@ -123,7 +189,7 @@ def github_callback():
return redirect("/?auth=%s" % user.get_id())
except Exception as e:
rollback_user_registration(user_id)
stat_logger.exception(e)
logging.exception(e)
return redirect("/?error=%s" % str(e))
# User has already registered, try to log in
@ -134,30 +200,56 @@ def github_callback():
return redirect("/?auth=%s" % user.get_id())
@manager.route('/feishu_callback', methods=['GET'])
@manager.route("/feishu_callback", methods=["GET"])
def feishu_callback():
"""
Feishu OAuth callback endpoint.
---
tags:
- OAuth
parameters:
- in: query
name: code
type: string
required: true
description: Authorization code from Feishu.
responses:
200:
description: Authentication successful.
schema:
type: object
"""
import requests
app_access_token_res = requests.post(FEISHU_OAUTH.get("app_access_token_url"),
data=json.dumps({
"app_id": FEISHU_OAUTH.get("app_id"),
"app_secret": FEISHU_OAUTH.get("app_secret")
}),
headers={"Content-Type": "application/json; charset=utf-8"})
app_access_token_res = requests.post(
settings.FEISHU_OAUTH.get("app_access_token_url"),
data=json.dumps(
{
"app_id": settings.FEISHU_OAUTH.get("app_id"),
"app_secret": settings.FEISHU_OAUTH.get("app_secret"),
}
),
headers={"Content-Type": "application/json; charset=utf-8"},
)
app_access_token_res = app_access_token_res.json()
if app_access_token_res['code'] != 0:
if app_access_token_res["code"] != 0:
return redirect("/?error=%s" % app_access_token_res)
res = requests.post(FEISHU_OAUTH.get("user_access_token_url"),
data=json.dumps({
"grant_type": FEISHU_OAUTH.get("grant_type"),
"code": request.args.get('code')
}),
headers={
"Content-Type": "application/json; charset=utf-8",
'Authorization': f"Bearer {app_access_token_res['app_access_token']}"
})
res = requests.post(
settings.FEISHU_OAUTH.get("user_access_token_url"),
data=json.dumps(
{
"grant_type": settings.FEISHU_OAUTH.get("grant_type"),
"code": request.args.get("code"),
}
),
headers={
"Content-Type": "application/json; charset=utf-8",
"Authorization": f"Bearer {app_access_token_res['app_access_token']}",
},
)
res = res.json()
if res['code'] != 0:
if res["code"] != 0:
return redirect("/?error=%s" % res["message"])
if "contact:user.email:readonly" not in res["data"]["scope"].split(" "):
@ -174,21 +266,24 @@ def feishu_callback():
try:
avatar = download_img(user_info["avatar_url"])
except Exception as e:
stat_logger.exception(e)
logging.exception(e)
avatar = ""
users = user_register(user_id, {
"access_token": session["access_token"],
"email": email_address,
"avatar": avatar,
"nickname": user_info["en_name"],
"login_channel": "feishu",
"last_login_time": get_format_time(),
"is_superuser": False,
})
users = user_register(
user_id,
{
"access_token": session["access_token"],
"email": email_address,
"avatar": avatar,
"nickname": user_info["en_name"],
"login_channel": "feishu",
"last_login_time": get_format_time(),
"is_superuser": False,
},
)
if not users:
raise Exception(f'Fail to register {email_address}.')
raise Exception(f"Fail to register {email_address}.")
if len(users) > 1:
raise Exception(f'Same email: {email_address} exists!')
raise Exception(f"Same email: {email_address} exists!")
# Try to log in
user = users[0]
@ -196,7 +291,7 @@ def feishu_callback():
return redirect("/?auth=%s" % user.get_id())
except Exception as e:
rollback_user_registration(user_id)
stat_logger.exception(e)
logging.exception(e)
return redirect("/?error=%s" % str(e))
# User has already registered, try to log in
@ -209,11 +304,14 @@ def feishu_callback():
def user_info_from_feishu(access_token):
import requests
headers = {"Content-Type": "application/json; charset=utf-8",
'Authorization': f"Bearer {access_token}"}
headers = {
"Content-Type": "application/json; charset=utf-8",
"Authorization": f"Bearer {access_token}",
}
res = requests.get(
f"https://open.feishu.cn/open-apis/authen/v1/user_info",
headers=headers)
"https://open.feishu.cn/open-apis/authen/v1/user_info", headers=headers
)
user_info = res.json()["data"]
user_info["email"] = None if user_info.get("email") == "" else user_info["email"]
return user_info
@ -221,24 +319,38 @@ def user_info_from_feishu(access_token):
def user_info_from_github(access_token):
import requests
headers = {"Accept": "application/json",
'Authorization': f"token {access_token}"}
headers = {"Accept": "application/json", "Authorization": f"token {access_token}"}
res = requests.get(
f"https://api.github.com/user?access_token={access_token}",
headers=headers)
f"https://api.github.com/user?access_token={access_token}", headers=headers
)
user_info = res.json()
email_info = requests.get(
f"https://api.github.com/user/emails?access_token={access_token}",
headers=headers).json()
headers=headers,
).json()
user_info["email"] = next(
(email for email in email_info if email['primary'] == True),
None)["email"]
(email for email in email_info if email["primary"] == True), None
)["email"]
return user_info
@manager.route("/logout", methods=['GET'])
@manager.route("/logout", methods=["GET"])
@login_required
def log_out():
"""
User logout endpoint.
---
tags:
- User
security:
- ApiKeyAuth: []
responses:
200:
description: Logout successful.
schema:
type: object
"""
current_user.access_token = ""
current_user.save()
logout_user()
@ -248,19 +360,62 @@ def log_out():
@manager.route("/setting", methods=["POST"])
@login_required
def setting_user():
"""
Update user settings.
---
tags:
- User
security:
- ApiKeyAuth: []
parameters:
- in: body
name: body
description: User settings to update.
required: true
schema:
type: object
properties:
nickname:
type: string
description: New nickname.
email:
type: string
description: New email.
responses:
200:
description: Settings updated successfully.
schema:
type: object
"""
update_dict = {}
request_data = request.json
if request_data.get("password"):
new_password = request_data.get("new_password")
if not check_password_hash(
current_user.password, decrypt(request_data["password"])):
return get_json_result(data=False, retcode=RetCode.AUTHENTICATION_ERROR, retmsg='Password error!')
current_user.password, decrypt(request_data["password"])
):
return get_json_result(
data=False,
code=settings.RetCode.AUTHENTICATION_ERROR,
message="Password error!",
)
if new_password:
update_dict["password"] = generate_password_hash(decrypt(new_password))
for k in request_data.keys():
if k in ["password", "new_password"]:
if k in [
"password",
"new_password",
"email",
"status",
"is_superuser",
"login_channel",
"is_anonymous",
"is_active",
"is_authenticated",
"last_login_time",
]:
continue
update_dict[k] = request_data[k]
@ -268,34 +423,59 @@ def setting_user():
UserService.update_by_id(current_user.id, update_dict)
return get_json_result(data=True)
except Exception as e:
stat_logger.exception(e)
return get_json_result(data=False, retmsg='Update failure!', retcode=RetCode.EXCEPTION_ERROR)
logging.exception(e)
return get_json_result(
data=False, message="Update failure!", code=settings.RetCode.EXCEPTION_ERROR
)
@manager.route("/info", methods=["GET"])
@login_required
def user_profile():
"""
Get user profile information.
---
tags:
- User
security:
- ApiKeyAuth: []
responses:
200:
description: User profile retrieved successfully.
schema:
type: object
properties:
id:
type: string
description: User ID.
nickname:
type: string
description: User nickname.
email:
type: string
description: User email.
"""
return get_json_result(data=current_user.to_dict())
def rollback_user_registration(user_id):
try:
UserService.delete_by_id(user_id)
except Exception as e:
except Exception:
pass
try:
TenantService.delete_by_id(user_id)
except Exception as e:
except Exception:
pass
try:
u = UserTenantService.query(tenant_id=user_id)
if u:
UserTenantService.delete_by_id(u[0].id)
except Exception as e:
except Exception:
pass
try:
TenantLLM.delete().where(TenantLLM.tenant_id == user_id).execute()
except Exception as e:
except Exception:
pass
@ -304,18 +484,18 @@ def user_register(user_id, user):
tenant = {
"id": user_id,
"name": user["nickname"] + "s Kingdom",
"llm_id": CHAT_MDL,
"embd_id": EMBEDDING_MDL,
"asr_id": ASR_MDL,
"parser_ids": PARSERS,
"img2txt_id": IMAGE2TEXT_MDL,
"rerank_id": RERANK_MDL
"llm_id": settings.CHAT_MDL,
"embd_id": settings.EMBEDDING_MDL,
"asr_id": settings.ASR_MDL,
"parser_ids": settings.PARSERS,
"img2txt_id": settings.IMAGE2TEXT_MDL,
"rerank_id": settings.RERANK_MDL,
}
usr_tenant = {
"tenant_id": user_id,
"user_id": user_id,
"invited_by": user_id,
"role": UserTenantRole.OWNER
"role": UserTenantRole.OWNER,
}
file_id = get_uuid()
file = {
@ -329,14 +509,18 @@ def user_register(user_id, user):
"location": "",
}
tenant_llm = []
for llm in LLMService.query(fid=LLM_FACTORY):
tenant_llm.append({"tenant_id": user_id,
"llm_factory": LLM_FACTORY,
"llm_name": llm.llm_name,
"model_type": llm.model_type,
"api_key": API_KEY,
"api_base": LLM_BASE_URL
})
for llm in LLMService.query(fid=settings.LLM_FACTORY):
tenant_llm.append(
{
"tenant_id": user_id,
"llm_factory": settings.LLM_FACTORY,
"llm_name": llm.llm_name,
"model_type": llm.model_type,
"api_key": settings.API_KEY,
"api_base": settings.LLM_BASE_URL,
"max_tokens": llm.max_tokens if llm.max_tokens else 8192
}
)
if not UserService.save(**user):
return
@ -350,21 +534,52 @@ def user_register(user_id, user):
@manager.route("/register", methods=["POST"])
@validate_request("nickname", "email", "password")
def user_add():
"""
Register a new user.
---
tags:
- User
parameters:
- in: body
name: body
description: Registration details.
required: true
schema:
type: object
properties:
nickname:
type: string
description: User nickname.
email:
type: string
description: User email.
password:
type: string
description: User password.
responses:
200:
description: Registration successful.
schema:
type: object
"""
req = request.json
email_address = req["email"]
# Validate the email address
if not re.match(r"^[\w\._-]+@([\w_-]+\.)+[\w-]{2,4}$", email_address):
return get_json_result(data=False,
retmsg=f'Invalid email address: {email_address}!',
retcode=RetCode.OPERATING_ERROR)
if not re.match(r"^[\w\._-]+@([\w_-]+\.)+[\w-]{2,5}$", email_address):
return get_json_result(
data=False,
message=f"Invalid email address: {email_address}!",
code=settings.RetCode.OPERATING_ERROR,
)
# Check if the email address is already used
if UserService.query(email=email_address):
return get_json_result(
data=False,
retmsg=f'Email: {email_address} has already registered!',
retcode=RetCode.OPERATING_ERROR)
message=f"Email: {email_address} has already registered!",
code=settings.RetCode.OPERATING_ERROR,
)
# Construct user info data
nickname = req["nickname"]
@ -382,28 +597,60 @@ def user_add():
try:
users = user_register(user_id, user_dict)
if not users:
raise Exception(f'Fail to register {email_address}.')
raise Exception(f"Fail to register {email_address}.")
if len(users) > 1:
raise Exception(f'Same email: {email_address} exists!')
raise Exception(f"Same email: {email_address} exists!")
user = users[0]
login_user(user)
return construct_response(data=user.to_json(),
auth=user.get_id(),
retmsg=f"{nickname}, welcome aboard!")
return construct_response(
data=user.to_json(),
auth=user.get_id(),
message=f"{nickname}, welcome aboard!",
)
except Exception as e:
rollback_user_registration(user_id)
stat_logger.exception(e)
return get_json_result(data=False,
retmsg=f'User registration failure, error: {str(e)}',
retcode=RetCode.EXCEPTION_ERROR)
logging.exception(e)
return get_json_result(
data=False,
message=f"User registration failure, error: {str(e)}",
code=settings.RetCode.EXCEPTION_ERROR,
)
@manager.route("/tenant_info", methods=["GET"])
@login_required
def tenant_info():
"""
Get tenant information.
---
tags:
- Tenant
security:
- ApiKeyAuth: []
responses:
200:
description: Tenant information retrieved successfully.
schema:
type: object
properties:
tenant_id:
type: string
description: Tenant ID.
name:
type: string
description: Tenant name.
llm_id:
type: string
description: LLM ID.
embd_id:
type: string
description: Embedding model ID.
"""
try:
tenants = TenantService.get_by_user_id(current_user.id)[0]
return get_json_result(data=tenants)
tenants = TenantService.get_info_by(current_user.id)
if not tenants:
return get_data_error_result(message="Tenant not found!")
return get_json_result(data=tenants[0])
except Exception as e:
return server_error_response(e)
@ -412,10 +659,45 @@ def tenant_info():
@login_required
@validate_request("tenant_id", "asr_id", "embd_id", "img2txt_id", "llm_id")
def set_tenant_info():
"""
Update tenant information.
---
tags:
- Tenant
security:
- ApiKeyAuth: []
parameters:
- in: body
name: body
description: Tenant information to update.
required: true
schema:
type: object
properties:
tenant_id:
type: string
description: Tenant ID.
llm_id:
type: string
description: LLM ID.
embd_id:
type: string
description: Embedding model ID.
asr_id:
type: string
description: ASR model ID.
img2txt_id:
type: string
description: Image to Text model ID.
responses:
200:
description: Tenant information updated successfully.
schema:
type: object
"""
req = request.json
try:
tid = req["tenant_id"]
del req["tenant_id"]
tid = req.pop("tenant_id")
TenantService.update_by_id(tid, req)
return get_json_result(data=True)
except Exception as e:

View File

@ -12,10 +12,14 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import operator
import time
import typing
from api.utils.log_utils import sql_logger
import peewee
NAME_LENGTH_LIMIT = 2 ** 10
IMG_BASE64_PREFIX = 'data:image/png;base64,'
SERVICE_CONF = "service_conf.yaml"
API_VERSION = "v1"
RAG_FLOW_SERVICE_NAME = "ragflow"
REQUEST_WAIT_SEC = 2
REQUEST_MAX_WAIT_SEC = 300

View File

@ -1,16 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
NAME_LENGTH_LIMIT = 2 ** 10

View File

@ -27,6 +27,7 @@ class UserTenantRole(StrEnum):
OWNER = 'owner'
ADMIN = 'admin'
NORMAL = 'normal'
INVITE = 'invite'
class TenantPermission(StrEnum):

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import inspect
import os
import sys
@ -29,12 +30,11 @@ from peewee import (
Field, Model, Metadata
)
from playhouse.pool import PooledMySQLDatabase, PooledPostgresqlDatabase
from api.db import SerializedType, ParserType
from api.settings import DATABASE, stat_logger, SECRET_KEY, DATABASE_TYPE
from api.utils.log_utils import getLogger
from api import utils
LOGGER = getLogger()
from api.db import SerializedType, ParserType
from api import settings
from api import utils
def singleton(cls, *args, **kw):
@ -65,7 +65,7 @@ class TextFieldType(Enum):
class LongTextField(TextField):
field_type = TextFieldType[DATABASE_TYPE.upper()].value
field_type = TextFieldType[settings.DATABASE_TYPE.upper()].value
class JSONField(LongTextField):
@ -272,6 +272,7 @@ class JsonSerializedField(SerializedField):
super(JsonSerializedField, self).__init__(serialized_type=SerializedType.JSON, object_hook=object_hook,
object_pairs_hook=object_pairs_hook, **kwargs)
class PooledDatabase(Enum):
MYSQL = PooledMySQLDatabase
POSTGRES = PooledPostgresqlDatabase
@ -285,10 +286,11 @@ class DatabaseMigrator(Enum):
@singleton
class BaseDataBase:
def __init__(self):
database_config = DATABASE.copy()
database_config = settings.DATABASE.copy()
db_name = database_config.pop("name")
self.database_connection = PooledDatabase[DATABASE_TYPE.upper()].value(db_name, **database_config)
stat_logger.info('init database on cluster mode successfully')
self.database_connection = PooledDatabase[settings.DATABASE_TYPE.upper()].value(db_name, **database_config)
logging.info('init database on cluster mode successfully')
class PostgresDatabaseLock:
def __init__(self, lock_name, timeout=10, db=None):
@ -334,6 +336,7 @@ class PostgresDatabaseLock:
return magic
class MysqlDatabaseLock:
def __init__(self, lock_name, timeout=10, db=None):
self.lock_name = lock_name
@ -388,7 +391,7 @@ class DatabaseLock(Enum):
DB = BaseDataBase().database_connection
DB.lock = DatabaseLock[DATABASE_TYPE.upper()].value
DB.lock = DatabaseLock[settings.DATABASE_TYPE.upper()].value
def close_connection():
@ -396,7 +399,7 @@ def close_connection():
if DB:
DB.close_stale(age=30)
except Exception as e:
LOGGER.exception(e)
logging.exception(e)
class DataBaseModel(BaseModel):
@ -412,15 +415,15 @@ def init_database_tables(alter_fields=[]):
for name, obj in members:
if obj != DataBaseModel and issubclass(obj, DataBaseModel):
table_objs.append(obj)
LOGGER.info(f"start create table {obj.__name__}")
logging.debug(f"start create table {obj.__name__}")
try:
obj.create_table()
LOGGER.info(f"create table success: {obj.__name__}")
logging.debug(f"create table success: {obj.__name__}")
except Exception as e:
LOGGER.exception(e)
logging.exception(e)
create_failed_list.append(obj.__name__)
if create_failed_list:
LOGGER.info(f"create tables failed: {create_failed_list}")
logging.error(f"create tables failed: {create_failed_list}")
raise Exception(f"create tables failed: {create_failed_list}")
migrate_db()
@ -470,7 +473,7 @@ class User(DataBaseModel, UserMixin):
status = CharField(
max_length=1,
null=True,
help_text="is it validate(0: wasted1: validate)",
help_text="is it validate(0: wasted, 1: validate)",
default="1",
index=True)
is_superuser = BooleanField(null=True, help_text="is root", default=False, index=True)
@ -479,7 +482,7 @@ class User(DataBaseModel, UserMixin):
return self.email
def get_id(self):
jwt = Serializer(secret_key=SECRET_KEY)
jwt = Serializer(secret_key=settings.SECRET_KEY)
return jwt.dumps(str(self.access_token))
class Meta:
@ -525,7 +528,7 @@ class Tenant(DataBaseModel):
status = CharField(
max_length=1,
null=True,
help_text="is it validate(0: wasted1: validate)",
help_text="is it validate(0: wasted, 1: validate)",
default="1",
index=True)
@ -542,7 +545,7 @@ class UserTenant(DataBaseModel):
status = CharField(
max_length=1,
null=True,
help_text="is it validate(0: wasted1: validate)",
help_text="is it validate(0: wasted, 1: validate)",
default="1",
index=True)
@ -559,7 +562,7 @@ class InvitationCode(DataBaseModel):
status = CharField(
max_length=1,
null=True,
help_text="is it validate(0: wasted1: validate)",
help_text="is it validate(0: wasted, 1: validate)",
default="1",
index=True)
@ -582,7 +585,7 @@ class LLMFactories(DataBaseModel):
status = CharField(
max_length=1,
null=True,
help_text="is it validate(0: wasted1: validate)",
help_text="is it validate(0: wasted, 1: validate)",
default="1",
index=True)
@ -616,7 +619,7 @@ class LLM(DataBaseModel):
status = CharField(
max_length=1,
null=True,
help_text="is it validate(0: wasted1: validate)",
help_text="is it validate(0: wasted, 1: validate)",
default="1",
index=True)
@ -648,7 +651,7 @@ class TenantLLM(DataBaseModel):
index=True)
api_key = CharField(max_length=1024, null=True, help_text="API KEY", index=True)
api_base = CharField(max_length=255, null=True, help_text="API Base")
max_tokens = IntegerField(default=8192, index=True)
used_tokens = IntegerField(default=0, index=True)
def __str__(self):
@ -703,7 +706,7 @@ class Knowledgebase(DataBaseModel):
status = CharField(
max_length=1,
null=True,
help_text="is it validate(0: wasted1: validate)",
help_text="is it validate(0: wasted, 1: validate)",
default="1",
index=True)
@ -767,7 +770,7 @@ class Document(DataBaseModel):
status = CharField(
max_length=1,
null=True,
help_text="is it validate(0: wasted1: validate)",
help_text="is it validate(0: wasted, 1: validate)",
default="1",
index=True)
@ -840,7 +843,7 @@ class Task(DataBaseModel):
doc_id = CharField(max_length=32, null=False, index=True)
from_page = IntegerField(default=0)
to_page = IntegerField(default=-1)
to_page = IntegerField(default=100000000)
begin_at = DateTimeField(null=True, index=True)
process_duation = FloatField(default=0)
@ -879,8 +882,10 @@ class Dialog(DataBaseModel):
default="simple",
help_text="simple|advanced",
index=True)
prompt_config = JSONField(null=False, default={"system": "", "prologue": "您好！我是您的助手小樱，长得可爱又善良，can I help you?",
"parameters": [], "empty_response": "Sorry! 知识库中未找到相关内容！"})
prompt_config = JSONField(null=False,
default={"system": "", "prologue": "Hi! I'm your assistant, what can I do for you?",
"parameters": [],
"empty_response": "Sorry! No relevant content was found in the knowledge base!"})
similarity_threshold = FloatField(default=0.2)
vector_similarity_weight = FloatField(default=0.3)
@ -894,7 +899,7 @@ class Dialog(DataBaseModel):
null=False,
default="1",
help_text="it needs to insert reference index into answer or not")
rerank_id = CharField(
max_length=128,
null=False,
@ -904,7 +909,7 @@ class Dialog(DataBaseModel):
status = CharField(
max_length=1,
null=True,
help_text="is it validate(0: wasted1: validate)",
help_text="is it validate(0: wasted, 1: validate)",
default="1",
index=True)
@ -980,14 +985,14 @@ class CanvasTemplate(DataBaseModel):
def migrate_db():
with DB.transaction():
migrator = DatabaseMigrator[DATABASE_TYPE.upper()].value(DB)
migrator = DatabaseMigrator[settings.DATABASE_TYPE.upper()].value(DB)
try:
migrate(
migrator.add_column('file', 'source_type', CharField(max_length=128, null=False, default="",
help_text="where dose this document come from",
index=True))
)
except Exception as e:
except Exception:
pass
try:
migrate(
@ -996,7 +1001,7 @@ def migrate_db():
help_text="default rerank model ID"))
)
except Exception as e:
except Exception:
pass
try:
migrate(
@ -1004,52 +1009,64 @@ def migrate_db():
help_text="default rerank model ID"))
)
except Exception as e:
except Exception:
pass
try:
migrate(
migrator.add_column('dialog', 'top_k', IntegerField(default=1024))
)
except Exception as e:
except Exception:
pass
try:
migrate(
migrator.alter_column_type('tenant_llm', 'api_key',
CharField(max_length=1024, null=True, help_text="API KEY", index=True))
)
except Exception as e:
except Exception:
pass
try:
migrate(
migrator.add_column('api_token', 'source',
CharField(max_length=16, null=True, help_text="none|agent|dialog", index=True))
)
except Exception as e:
except Exception:
pass
try:
migrate(
migrator.add_column("tenant","tts_id",
CharField(max_length=256,null=True,help_text="default tts model ID",index=True))
migrator.add_column("tenant", "tts_id",
CharField(max_length=256, null=True, help_text="default tts model ID", index=True))
)
except Exception as e:
except Exception:
pass
try:
migrate(
migrator.add_column('api_4_conversation', 'source',
CharField(max_length=16, null=True, help_text="none|agent|dialog", index=True))
)
except Exception as e:
except Exception:
pass
try:
DB.execute_sql('ALTER TABLE llm DROP PRIMARY KEY;')
DB.execute_sql('ALTER TABLE llm ADD PRIMARY KEY (llm_name,fid);')
except Exception as e:
except Exception:
pass
try:
migrate(
migrator.add_column('task', 'retry_count', IntegerField(default=0))
)
except Exception as e:
except Exception:
pass
try:
migrate(
migrator.alter_column_type('api_token', 'dialog_id',
CharField(max_length=32, null=True, index=True))
)
except Exception:
pass
try:
migrate(
migrator.add_column("tenant_llm","max_tokens",IntegerField(default=8192,index=True))
)
except Exception:
pass
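# --- Design note (illustrative, not part of the diff) ---
# Every migration above uses the same idempotent try/except-pass pattern,
# so re-running migrate_db() against an already-migrated schema is
# harmless. A hypothetical helper expressing the same idea:
def apply_if_missing(*operations):
    """Run schema operations, swallowing 'already applied' failures."""
    try:
        migrate(*operations)
    except Exception:
        pass  # column/index already exists; nothing to do

# e.g. apply_if_missing(migrator.add_column('task', 'retry_count', IntegerField(default=0)))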

View File

@ -15,19 +15,12 @@
#
import operator
from functools import reduce
from typing import Dict, Type, Union
from playhouse.pool import PooledMySQLDatabase
from api.utils import current_timestamp, timestamp_to_date
from api.db.db_models import DB, DataBaseModel
from api.db.runtime_config import RuntimeConfig
from api.utils.log_utils import getLogger
from enum import Enum
LOGGER = getLogger()
@DB.connection_context()
@ -93,7 +86,7 @@ supported_operators = {
def query_dict2expression(
model: Type[DataBaseModel], query: Dict[str, Union[bool, int, str, list, tuple]]):
model: type[DataBaseModel], query: dict[str, bool | int | str | list | tuple]):
expression = []
for field, value in query.items():
@ -111,8 +104,8 @@ def query_dict2expression(
return reduce(operator.iand, expression)
def query_db(model: Type[DataBaseModel], limit: int = 0, offset: int = 0,
query: dict = None, order_by: Union[str, list, tuple] = None):
def query_db(model: type[DataBaseModel], limit: int = 0, offset: int = 0,
query: dict = None, order_by: str | list | tuple | None = None):
data = model.select()
if query:
data = data.where(query_dict2expression(model, query))

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import base64
import json
import os
@ -28,7 +29,7 @@ from api.db.services.document_service import DocumentService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMFactoriesService, LLMService, TenantLLMService, LLMBundle
from api.db.services.user_service import TenantService, UserTenantService
from api.settings import CHAT_MDL, EMBEDDING_MDL, ASR_MDL, IMAGE2TEXT_MDL, PARSERS, LLM_FACTORY, API_KEY, LLM_BASE_URL
from api import settings
from api.utils.file_utils import get_project_base_directory
@ -50,11 +51,11 @@ def init_superuser():
tenant = {
"id": user_info["id"],
"name": user_info["nickname"] + "s Kingdom",
"llm_id": CHAT_MDL,
"embd_id": EMBEDDING_MDL,
"asr_id": ASR_MDL,
"parser_ids": PARSERS,
"img2txt_id": IMAGE2TEXT_MDL
"llm_id": settings.CHAT_MDL,
"embd_id": settings.EMBEDDING_MDL,
"asr_id": settings.ASR_MDL,
"parser_ids": settings.PARSERS,
"img2txt_id": settings.IMAGE2TEXT_MDL
}
usr_tenant = {
"tenant_id": user_info["id"],
@ -63,42 +64,43 @@ def init_superuser():
"role": UserTenantRole.OWNER
}
tenant_llm = []
for llm in LLMService.query(fid=LLM_FACTORY):
for llm in LLMService.query(fid=settings.LLM_FACTORY):
tenant_llm.append(
{"tenant_id": user_info["id"], "llm_factory": LLM_FACTORY, "llm_name": llm.llm_name, "model_type": llm.model_type,
"api_key": API_KEY, "api_base": LLM_BASE_URL})
{"tenant_id": user_info["id"], "llm_factory": settings.LLM_FACTORY, "llm_name": llm.llm_name,
"model_type": llm.model_type,
"api_key": settings.API_KEY, "api_base": settings.LLM_BASE_URL})
if not UserService.save(**user_info):
print("\033[93m【ERROR】\033[0mcan't init admin.")
logging.error("can't init admin.")
return
TenantService.insert(**tenant)
UserTenantService.insert(**usr_tenant)
TenantLLMService.insert_many(tenant_llm)
print(
"【INFO】Super user initialized. \033[93memail: admin@ragflow.io, password: admin\033[0m. Changing the password after logining is strongly recomanded.")
logging.info(
"Super user initialized. email: admin@ragflow.io, password: admin. Changing the password after login is strongly recommended.")
chat_mdl = LLMBundle(tenant["id"], LLMType.CHAT, tenant["llm_id"])
msg = chat_mdl.chat(system="", history=[
{"role": "user", "content": "Hello!"}], gen_conf={})
{"role": "user", "content": "Hello!"}], gen_conf={})
if msg.find("ERROR: ") == 0:
print(
"\33[91m【ERROR】\33[0m: ",
logging.error(
"'{}' dosen't work. {}".format(
tenant["llm_id"],
msg))
embd_mdl = LLMBundle(tenant["id"], LLMType.EMBEDDING, tenant["embd_id"])
v, c = embd_mdl.encode(["Hello!"])
if c == 0:
print(
"\33[91m【ERROR】\33[0m:",
" '{}' dosen't work!".format(
logging.error(
"'{}' dosen't work!".format(
tenant["embd_id"]))
def init_llm_factory():
try:
LLMService.filter_delete([(LLM.fid == "MiniMax" or LLM.fid == "Minimax")])
except Exception as e:
LLMService.filter_delete([(LLM.fid == "cohere")])
LLMFactoriesService.filter_delete([LLMFactories.name == "cohere"])
except Exception:
pass
factory_llm_infos = json.load(
@ -111,14 +113,14 @@ def init_llm_factory():
llm_infos = factory_llm_info.pop("llm")
try:
LLMFactoriesService.save(**factory_llm_info)
except Exception as e:
except Exception:
pass
LLMService.filter_delete([LLM.fid == factory_llm_info["name"]])
for llm_info in llm_infos:
llm_info["fid"] = factory_llm_info["name"]
try:
LLMService.save(**llm_info)
except Exception as e:
except Exception:
pass
LLMFactoriesService.filter_delete([LLMFactories.name == "Local"])
@ -129,10 +131,11 @@ def init_llm_factory():
LLMFactoriesService.filter_delete([LLMFactoriesService.model.name == "QAnything"])
LLMService.filter_delete([LLMService.model.fid == "QAnything"])
TenantLLMService.filter_update([TenantLLMService.model.llm_factory == "QAnything"], {"llm_factory": "Youdao"})
TenantLLMService.filter_update([TenantLLMService.model.llm_factory == "cohere"], {"llm_factory": "Cohere"})
TenantService.filter_update([1 == 1], {
"parser_ids": "naive:General,qa:Q&A,resume:Resume,manual:Manual,table:Table,paper:Paper,book:Book,laws:Laws,presentation:Presentation,picture:Picture,one:One,audio:Audio,knowledge_graph:Knowledge Graph,email:Email"})
## insert two OpenAI embedding models for the current OpenAI user.
print("Start to insert 2 OpenAI embedding models...")
# print("Start to insert 2 OpenAI embedding models...")
tenant_ids = set([row["tenant_id"] for row in TenantLLMService.get_openai_models()])
for tid in tenant_ids:
for row in TenantLLMService.query(llm_factory="OpenAI", tenant_id=tid):
@ -145,7 +148,7 @@ def init_llm_factory():
row = deepcopy(row)
row["llm_name"] = "text-embedding-3-large"
TenantLLMService.save(**row)
except Exception as e:
except Exception:
pass
break
for kb_id in KnowledgebaseService.get_all_ids():
@ -169,20 +172,19 @@ def add_graph_templates():
CanvasTemplateService.save(**cnvs)
except:
CanvasTemplateService.update_by_id(cnvs["id"], cnvs)
except Exception as e:
print("Add graph templates error: ", e)
print("------------", flush=True)
except Exception:
logging.exception("Add graph templates error: ")
def init_web_data():
start_time = time.time()
init_llm_factory()
#if not UserService.get_all().count():
# if not UserService.get_all().count():
# init_superuser()
add_graph_templates()
print("init web data success:{}".format(time.time() - start_time))
logging.info("init web data success:{}".format(time.time() - start_time))
if __name__ == '__main__':

View File

@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from api.versions import get_versions
from api.versions import get_ragflow_version
from .reload_config_base import ReloadConfigBase
@ -35,7 +35,7 @@ class RuntimeConfig(ReloadConfigBase):
@classmethod
def init_env(cls):
cls.ENV.update(get_versions())
cls.ENV.update({"version": get_ragflow_version()})
@classmethod
def load_config_manager(cls):

View File

@ -22,5 +22,6 @@ from api.db.services.common_service import CommonService
class CanvasTemplateService(CommonService):
model = CanvasTemplate
class UserCanvasService(CommonService):
model = UserCanvas

View File

@ -13,20 +13,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import binascii
import os
import json
import re
from copy import deepcopy
from timeit import default_timer as timer
from api.db import LLMType, ParserType
from api.db.db_models import Dialog, Conversation
import datetime
from datetime import timedelta
from api.db import LLMType, ParserType, StatusEnum
from api.db.db_models import Dialog, Conversation, DB
from api.db.services.common_service import CommonService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import LLMService, TenantLLMService, LLMBundle
from api.settings import chat_logger, retrievaler, kg_retrievaler
from api import settings
from rag.app.resume import forbidden_select_fields4resume
from rag.nlp import keyword_extraction
from rag.nlp.search import index_name
from rag.utils import rmSpace, num_tokens_from_string, encoder
from api.utils.file_utils import get_project_base_directory
@ -35,10 +37,49 @@ from api.utils.file_utils import get_project_base_directory
class DialogService(CommonService):
model = Dialog
@classmethod
@DB.connection_context()
def get_list(cls, tenant_id,
page_number, items_per_page, orderby, desc, id, name):
chats = cls.model.select()
if id:
chats = chats.where(cls.model.id == id)
if name:
chats = chats.where(cls.model.name == name)
chats = chats.where(
(cls.model.tenant_id == tenant_id)
& (cls.model.status == StatusEnum.VALID.value)
)
if desc:
chats = chats.order_by(cls.model.getter_by(orderby).desc())
else:
chats = chats.order_by(cls.model.getter_by(orderby).asc())
chats = chats.paginate(page_number, items_per_page)
return list(chats.dicts())
class ConversationService(CommonService):
model = Conversation
@classmethod
@DB.connection_context()
def get_list(cls, dialog_id, page_number, items_per_page, orderby, desc, id, name):
sessions = cls.model.select().where(cls.model.dialog_id == dialog_id)
if id:
sessions = sessions.where(cls.model.id == id)
if name:
sessions = sessions.where(cls.model.name == name)
if desc:
sessions = sessions.order_by(cls.model.getter_by(orderby).desc())
else:
sessions = sessions.order_by(cls.model.getter_by(orderby).asc())
sessions = sessions.paginate(page_number, items_per_page)
return list(sessions.dicts())
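# --- Illustrative usage of the two get_list helpers above (sketch) ---
# Both return one page as a plain list of dicts; the ids below are
# placeholders, and passing id/name as None skips those filters.
page = DialogService.get_list(
    "tenant-id", page_number=1, items_per_page=30,
    orderby="create_time", desc=True, id=None, name=None)
for chat in page:
    sessions = ConversationService.get_list(
        chat["id"], 1, 30, "create_time", True, None, None)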
def message_fit_in(msg, max_length=4000):
def count():
@ -57,7 +98,8 @@ def message_fit_in(msg, max_length=4000):
return c, msg
msg_ = [m for m in msg[:-1] if m["role"] == "system"]
msg_.append(msg[-1])
if len(msg) > 1:
msg_.append(msg[-1])
msg = msg_
c = count()
if c < max_length:
@ -85,7 +127,7 @@ def llm_id2llm_type(llm_id):
for llm in llm_factory["llm"]:
if llm_id == llm["llm_name"]:
return llm["model_type"].strip(",")[-1]
def chat(dialog, messages, stream=True, **kwargs):
assert messages[-1]["role"] == "user", "The last content of this conversation is not from user."
@ -111,7 +153,7 @@ def chat(dialog, messages, stream=True, **kwargs):
return {"answer": "**ERROR**: Knowledge bases use different embedding models.", "reference": []}
is_kg = all([kb.parser_id == ParserType.KG for kb in kbs])
retr = retrievaler if not is_kg else kg_retrievaler
retr = settings.retrievaler if not is_kg else settings.kg_retrievaler
questions = [m["content"] for m in messages if m["role"] == "user"][-3:]
attachments = kwargs["doc_ids"].split(",") if "doc_ids" in kwargs else None
@ -122,6 +164,9 @@ def chat(dialog, messages, stream=True, **kwargs):
attachments.extend(m["doc_ids"])
embd_mdl = LLMBundle(dialog.tenant_id, LLMType.EMBEDDING, embd_nms[0])
if not embd_mdl:
raise LookupError("Embedding model(%s) not found" % embd_nms[0])
if llm_id2llm_type(dialog.llm_id) == "image2text":
chat_mdl = LLMBundle(dialog.tenant_id, LLMType.IMAGE2TEXT, dialog.llm_id)
else:
@ -134,7 +179,7 @@ def chat(dialog, messages, stream=True, **kwargs):
tts_mdl = LLMBundle(dialog.tenant_id, LLMType.TTS)
# try to use sql if field mapping is good to go
if field_map:
chat_logger.info("Use SQL to retrieval:{}".format(questions[-1]))
logging.debug("Use SQL to retrieval:{}".format(questions[-1]))
ans = use_sql(questions[-1], field_map, dialog.tenant_id, chat_mdl, prompt_config.get("quote", True))
if ans:
yield ans
@ -153,6 +198,8 @@ def chat(dialog, messages, stream=True, **kwargs):
questions = [full_question(dialog.tenant_id, dialog.llm_id, messages)]
else:
questions = questions[-1:]
refineQ_tm = timer()
keyword_tm = timer()
rerank_mdl = None
if dialog.rerank_id:
@ -165,13 +212,16 @@ def chat(dialog, messages, stream=True, **kwargs):
else:
if prompt_config.get("keyword", False):
questions[-1] += keyword_extraction(chat_mdl, questions[-1])
kbinfos = retr.retrieval(" ".join(questions), embd_mdl, dialog.tenant_id, dialog.kb_ids, 1, dialog.top_n,
keyword_tm = timer()
tenant_ids = list(set([kb.tenant_id for kb in kbs]))
kbinfos = retr.retrieval(" ".join(questions), embd_mdl, tenant_ids, dialog.kb_ids, 1, dialog.top_n,
dialog.similarity_threshold,
dialog.vector_similarity_weight,
doc_ids=attachments,
top=dialog.top_k, aggs=False, rerank_mdl=rerank_mdl)
knowledges = [ck["content_with_weight"] for ck in kbinfos["chunks"]]
chat_logger.info(
logging.debug(
"{}->{}".format(" ".join(questions), "\n->".join(knowledges)))
retrieval_tm = timer()
@ -189,6 +239,7 @@ def chat(dialog, messages, stream=True, **kwargs):
used_token_count, msg = message_fit_in(msg, int(max_tokens * 0.97))
assert len(msg) >= 2, f"message_fit_in has bug: {msg}"
prompt = msg[0]["content"]
prompt += "\n\n### Query:\n%s" % " ".join(questions)
if "max_tokens" in gen_conf:
gen_conf["max_tokens"] = min(
@ -221,7 +272,9 @@ def chat(dialog, messages, stream=True, **kwargs):
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
answer += " Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'"
done_tm = timer()
prompt += "\n\n### Elapsed\n - Retrieval: %.1f ms\n - LLM: %.1f ms"%((retrieval_tm-st)*1000, (done_tm-st)*1000)
prompt += "\n\n### Elapsed\n - Refine Question: %.1f ms\n - Keywords: %.1f ms\n - Retrieval: %.1f ms\n - LLM: %.1f ms" % (
(refineQ_tm - st) * 1000, (keyword_tm - refineQ_tm) * 1000, (retrieval_tm - keyword_tm) * 1000,
(done_tm - retrieval_tm) * 1000)
return {"answer": answer, "reference": refs, "prompt": prompt}
if stream:
@ -240,7 +293,7 @@ def chat(dialog, messages, stream=True, **kwargs):
yield decorate_answer(answer)
else:
answer = chat_mdl.chat(prompt, msg[1:], gen_conf)
chat_logger.info("User: {}|Assistant: {}".format(
logging.debug("User: {}|Assistant: {}".format(
msg[-1]["content"], answer))
res = decorate_answer(answer)
res["audio_binary"] = tts(tts_mdl, answer)
@ -268,8 +321,7 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
nonlocal sys_prompt, user_promt, question, tried_times
sql = chat_mdl.chat(sys_prompt, [{"role": "user", "content": user_promt}], {
"temperature": 0.06})
print(user_promt, sql)
chat_logger.info(f"{question}”==>{user_promt} get SQL: {sql}")
logging.debug(f"{question} ==> {user_promt} get SQL: {sql}")
sql = re.sub(r"[\r\n]+", " ", sql.lower())
sql = re.sub(r".*select ", "select ", sql.lower())
sql = re.sub(r" +", " ", sql)
@ -289,11 +341,9 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
flds.append(k)
sql = "select doc_id,docnm_kwd," + ",".join(flds) + sql[8:]
print(f"{question} get SQL(refined): {sql}")
chat_logger.info(f"{question}” get SQL(refined): {sql}")
logging.debug(f"{question} get SQL(refined): {sql}")
tried_times += 1
return retrievaler.sql_retrieval(sql, format="json"), sql
return settings.retrievaler.sql_retrieval(sql, format="json"), sql
tbl, sql = get_table()
if tbl is None:
@ -320,10 +370,9 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
question, sql, tbl["error"]
)
tbl, sql = get_table()
chat_logger.info("TRY it again: {}".format(sql))
logging.debug("TRY it again: {}".format(sql))
chat_logger.info("GET table: {}".format(tbl))
print(tbl)
logging.debug("GET table: {}".format(tbl))
if tbl.get("error") or len(tbl["rows"]) == 0:
return None
@ -345,6 +394,7 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
rows = ["|" +
"|".join([rmSpace(str(r[i])) for i in clmn_idx]).replace("None", " ") +
"|" for r in tbl["rows"]]
rows = [r for r in rows if re.sub(r"[ |]+", "", r)]
if quota:
rows = "\n".join([r + f" ##{ii}$$ |" for ii, r in enumerate(rows)])
else:
@ -352,7 +402,7 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
rows = re.sub(r"T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+Z)?\|", "|", rows)
if not docid_idx or not docnm_idx:
chat_logger.warning("SQL missing field: " + sql)
logging.warning("SQL missing field: " + sql)
return {
"answer": "\n".join([clmns, line, rows]),
"reference": {"chunks": [], "doc_aggs": []},
@ -415,6 +465,58 @@ def rewrite(tenant_id, llm_id, question):
return ans
def keyword_extraction(chat_mdl, content, topn=3):
prompt = f"""
Role: You're a text analyzer.
Task: extract the most important keywords/phrases of a given piece of text content.
Requirements:
- Summarize the text content, and give top {topn} important keywords/phrases.
- The keywords MUST be in the language of the given piece of text content.
- The keywords are delimited by ENGLISH COMMA.
- Keywords ONLY in output.
### Text Content
{content}
"""
msg = [
{"role": "system", "content": prompt},
{"role": "user", "content": "Output: "}
]
_, msg = message_fit_in(msg, chat_mdl.max_length)
kwd = chat_mdl.chat(prompt, msg[1:], {"temperature": 0.2})
if isinstance(kwd, tuple): kwd = kwd[0]
if kwd.find("**ERROR**") >=0: return ""
return kwd
def question_proposal(chat_mdl, content, topn=3):
prompt = f"""
Role: You're a text analyzer.
Task: propose {topn} questions about a given piece of text content.
Requirements:
- Understand and summarize the text content, and propose top {topn} important questions.
- The questions SHOULD NOT have overlapping meanings.
- The questions SHOULD cover the main content of the text as much as possible.
- The questions MUST be in the language of the given piece of text content.
- One question per line.
- Question ONLY in output.
### Text Content
{content}
"""
msg = [
{"role": "system", "content": prompt},
{"role": "user", "content": "Output: "}
]
_, msg = message_fit_in(msg, chat_mdl.max_length)
kwd = chat_mdl.chat(prompt, msg[1:], {"temperature": 0.2})
if isinstance(kwd, tuple): kwd = kwd[0]
if kwd.find("**ERROR**") >= 0: return ""
return kwd
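# --- Illustrative usage of the two helpers above (sketch) ---
# chat_mdl is an LLMBundle as used elsewhere in this file; the tenant id
# and text are placeholders. Both helpers return "" if the model errors.
chat_mdl = LLMBundle("tenant-id", LLMType.CHAT)
text = "RAGFlow is an open-source RAG engine based on deep document understanding."
print(keyword_extraction(chat_mdl, text, topn=3))   # e.g. "RAGFlow, RAG engine, document understanding"
print(question_proposal(chat_mdl, text, topn=2))    # two questions, one per line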
def full_question(tenant_id, llm_id, messages):
if llm_id2llm_type(llm_id) == "image2text":
chat_mdl = LLMBundle(tenant_id, LLMType.IMAGE2TEXT, llm_id)
@ -425,9 +527,16 @@ def full_question(tenant_id, llm_id, messages):
if m["role"] not in ["user", "assistant"]: continue
conv.append("{}: {}".format(m["role"].upper(), m["content"]))
conv = "\n".join(conv)
today = datetime.date.today().isoformat()
yesterday = (datetime.date.today() - timedelta(days=1)).isoformat()
tomorrow = (datetime.date.today() + timedelta(days=1)).isoformat()
prompt = f"""
Role: A helpful assistant
Task: Generate a full user question that would follow the conversation.
Task and steps:
1. Generate a full user question that would follow the conversation.
2. If the user's question involves relative date, you need to convert it into absolute date based on the current date, which is {today}. For example: 'yesterday' would be converted to {yesterday}.
Requirements & Restrictions:
- Text generated MUST be in the same language as the original user's question.
- If the user's latest question is completely, don't do anything, just return the original question.
@ -456,6 +565,14 @@ User: What's her full name?
###############
Output: What's the full name of Donald Trump's mother Mary Trump?
------------
# Example 3
## Conversation
USER: What's the weather today in London?
ASSISTANT: Cloudy.
USER: What's about tomorrow in Rochester?
###############
Output: What's the weather in Rochester on {tomorrow}?
######################
# Real Data
@ -477,16 +594,17 @@ def tts(tts_mdl, text):
def ask(question, kb_ids, tenant_id):
kbs = KnowledgebaseService.get_by_ids(kb_ids)
tenant_ids = [kb.tenant_id for kb in kbs]
embd_nms = list(set([kb.embd_id for kb in kbs]))
is_kg = all([kb.parser_id == ParserType.KG for kb in kbs])
retr = retrievaler if not is_kg else kg_retrievaler
retr = settings.retrievaler if not is_kg else settings.kg_retrievaler
embd_mdl = LLMBundle(tenant_id, LLMType.EMBEDDING, embd_nms[0])
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT)
max_tokens = chat_mdl.max_length
kbinfos = retr.retrieval(question, embd_mdl, tenant_id, kb_ids, 1, 12, 0.1, 0.3, aggs=False)
kbinfos = retr.retrieval(question, embd_mdl, tenant_ids, kb_ids, 1, 12, 0.1, 0.3, aggs=False)
knowledges = [ck["content_with_weight"] for ck in kbinfos["chunks"]]
used_token_count = 0

View File

@ -13,32 +13,28 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import hashlib
import json
import os
import random
import re
import traceback
from concurrent.futures import ThreadPoolExecutor
from copy import deepcopy
from datetime import datetime
from io import BytesIO
from elasticsearch_dsl import Q
from peewee import fn
from api.db.db_utils import bulk_insert_into_db
from api.settings import stat_logger
from api import settings
from api.utils import current_timestamp, get_format_time, get_uuid
from api.utils.file_utils import get_project_base_directory
from graphrag.mind_map_extractor import MindMapExtractor
from rag.settings import SVR_QUEUE_NAME
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.storage_factory import STORAGE_IMPL
from rag.nlp import search, rag_tokenizer
from api.db import FileType, TaskStatus, ParserType, LLMType
from api.db.db_models import DB, Knowledgebase, Tenant, Task
from api.db.db_models import DB, Knowledgebase, Tenant, Task, UserTenant
from api.db.db_models import Document
from api.db.services.common_service import CommonService
from api.db.services.knowledgebase_service import KnowledgebaseService
@ -49,6 +45,31 @@ from rag.utils.redis_conn import REDIS_CONN
class DocumentService(CommonService):
model = Document
@classmethod
@DB.connection_context()
def get_list(cls, kb_id, page_number, items_per_page,
orderby, desc, keywords, id, name):
docs = cls.model.select().where(cls.model.kb_id == kb_id)
if id:
docs = docs.where(
cls.model.id == id)
if name:
docs = docs.where(
cls.model.name == name
)
if keywords:
docs = docs.where(
fn.LOWER(cls.model.name).contains(keywords.lower())
)
if desc:
docs = docs.order_by(cls.model.getter_by(orderby).desc())
else:
docs = docs.order_by(cls.model.getter_by(orderby).asc())
docs = docs.paginate(page_number, items_per_page)
count = docs.count()
return list(docs.dicts()), count
@classmethod
@DB.connection_context()
def get_by_kb_id(cls, kb_id, page_number, items_per_page,
@ -70,35 +91,6 @@ class DocumentService(CommonService):
return list(docs.dicts()), count
@classmethod
@DB.connection_context()
def list_documents_in_dataset(cls, dataset_id, offset, count, order_by, descend, keywords):
if keywords:
docs = cls.model.select().where(
(cls.model.kb_id == dataset_id),
(fn.LOWER(cls.model.name).contains(keywords.lower()))
)
else:
docs = cls.model.select().where(cls.model.kb_id == dataset_id)
total = docs.count()
if descend == 'True':
docs = docs.order_by(cls.model.getter_by(order_by).desc())
if descend == 'False':
docs = docs.order_by(cls.model.getter_by(order_by).asc())
docs = list(docs.dicts())
docs_length = len(docs)
if offset < 0 or offset > docs_length:
raise IndexError("Offset is out of the valid range.")
if count == -1:
return docs[offset:], total
return docs[offset:offset + count], total
@classmethod
@DB.connection_context()
def insert(cls, doc):
@ -116,8 +108,7 @@ class DocumentService(CommonService):
@classmethod
@DB.connection_context()
def remove_document(cls, doc, tenant_id):
ELASTICSEARCH.deleteByQuery(
Q("match", doc_id=doc.id), idxnm=search.index_name(tenant_id))
settings.docStoreConn.delete({"doc_id": doc.id}, search.index_name(tenant_id), doc.kb_id)
cls.clear_chunk_num(doc.id)
return cls.delete_by_id(doc.id)
@ -140,26 +131,27 @@ class DocumentService(CommonService):
cls.model.update_time]
docs = cls.model.select(*fields) \
.join(Knowledgebase, on=(cls.model.kb_id == Knowledgebase.id)) \
.join(Tenant, on=(Knowledgebase.tenant_id == Tenant.id))\
.join(Tenant, on=(Knowledgebase.tenant_id == Tenant.id)) \
.where(
cls.model.status == StatusEnum.VALID.value,
~(cls.model.type == FileType.VIRTUAL.value),
cls.model.progress == 0,
cls.model.update_time >= current_timestamp() - 1000 * 600,
cls.model.run == TaskStatus.RUNNING.value)\
cls.model.status == StatusEnum.VALID.value,
~(cls.model.type == FileType.VIRTUAL.value),
cls.model.progress == 0,
cls.model.update_time >= current_timestamp() - 1000 * 600,
cls.model.run == TaskStatus.RUNNING.value) \
.order_by(cls.model.update_time.asc())
return list(docs.dicts())
@classmethod
@DB.connection_context()
def get_unfinished_docs(cls):
fields = [cls.model.id, cls.model.process_begin_at, cls.model.parser_config, cls.model.progress_msg, cls.model.run]
fields = [cls.model.id, cls.model.process_begin_at, cls.model.parser_config, cls.model.progress_msg,
cls.model.run]
docs = cls.model.select(*fields) \
.where(
cls.model.status == StatusEnum.VALID.value,
~(cls.model.type == FileType.VIRTUAL.value),
cls.model.progress < 1,
cls.model.progress > 0)
cls.model.status == StatusEnum.VALID.value,
~(cls.model.type == FileType.VIRTUAL.value),
cls.model.progress < 1,
cls.model.progress > 0)
return list(docs.dicts())
@classmethod
@ -174,12 +166,12 @@ class DocumentService(CommonService):
"Document not found which is supposed to be there")
num = Knowledgebase.update(
token_num=Knowledgebase.token_num +
token_num,
token_num,
chunk_num=Knowledgebase.chunk_num +
chunk_num).where(
chunk_num).where(
Knowledgebase.id == kb_id).execute()
return num
@classmethod
@DB.connection_context()
def decrement_chunk_num(cls, doc_id, kb_id, token_num, chunk_num, duation):
@ -192,13 +184,13 @@ class DocumentService(CommonService):
"Document not found which is supposed to be there")
num = Knowledgebase.update(
token_num=Knowledgebase.token_num -
token_num,
token_num,
chunk_num=Knowledgebase.chunk_num -
chunk_num
chunk_num
).where(
Knowledgebase.id == kb_id).execute()
return num
@classmethod
@DB.connection_context()
def clear_chunk_num(cls, doc_id):
@ -207,10 +199,10 @@ class DocumentService(CommonService):
num = Knowledgebase.update(
token_num=Knowledgebase.token_num -
doc.token_num,
doc.token_num,
chunk_num=Knowledgebase.chunk_num -
doc.chunk_num,
doc_num=Knowledgebase.doc_num-1
doc.chunk_num,
doc_num=Knowledgebase.doc_num - 1
).where(
Knowledgebase.id == doc.kb_id).execute()
return num
@ -221,13 +213,22 @@ class DocumentService(CommonService):
docs = cls.model.select(
Knowledgebase.tenant_id).join(
Knowledgebase, on=(
Knowledgebase.id == cls.model.kb_id)).where(
cls.model.id == doc_id, Knowledgebase.status == StatusEnum.VALID.value)
Knowledgebase.id == cls.model.kb_id)).where(
cls.model.id == doc_id, Knowledgebase.status == StatusEnum.VALID.value)
docs = docs.dicts()
if not docs:
return
return docs[0]["tenant_id"]
@classmethod
@DB.connection_context()
def get_knowledgebase_id(cls, doc_id):
docs = cls.model.select(cls.model.kb_id).where(cls.model.id == doc_id)
docs = docs.dicts()
if not docs:
return
return docs[0]["kb_id"]
@classmethod
@DB.connection_context()
def get_tenant_id_by_name(cls, name):
@ -241,19 +242,46 @@ class DocumentService(CommonService):
return
return docs[0]["tenant_id"]
@classmethod
@DB.connection_context()
def accessible(cls, doc_id, user_id):
docs = cls.model.select(
cls.model.id).join(
Knowledgebase, on=(
Knowledgebase.id == cls.model.kb_id)
).join(UserTenant, on=(UserTenant.tenant_id == Knowledgebase.tenant_id)
).where(cls.model.id == doc_id, UserTenant.user_id == user_id).paginate(0, 1)
docs = docs.dicts()
if not docs:
return False
return True
@classmethod
@DB.connection_context()
def accessible4deletion(cls, doc_id, user_id):
docs = cls.model.select(
cls.model.id).join(
Knowledgebase, on=(
Knowledgebase.id == cls.model.kb_id)
).where(cls.model.id == doc_id, Knowledgebase.created_by == user_id).paginate(0, 1)
docs = docs.dicts()
if not docs:
return False
return True
@classmethod
@DB.connection_context()
def get_embd_id(cls, doc_id):
docs = cls.model.select(
Knowledgebase.embd_id).join(
Knowledgebase, on=(
Knowledgebase.id == cls.model.kb_id)).where(
cls.model.id == doc_id, Knowledgebase.status == StatusEnum.VALID.value)
Knowledgebase.id == cls.model.kb_id)).where(
cls.model.id == doc_id, Knowledgebase.status == StatusEnum.VALID.value)
docs = docs.dicts()
if not docs:
return
return docs[0]["embd_id"]
@classmethod
@DB.connection_context()
def get_doc_id_by_doc_name(cls, doc_name):
@ -268,7 +296,7 @@ class DocumentService(CommonService):
@classmethod
@DB.connection_context()
def get_thumbnails(cls, docids):
fields = [cls.model.id, cls.model.thumbnail]
fields = [cls.model.id, cls.model.kb_id, cls.model.thumbnail]
return list(cls.model.select(
*fields).where(cls.model.id.in_(docids)).dicts())
@ -289,6 +317,7 @@ class DocumentService(CommonService):
dfs_update(old[k], v)
else:
old[k] = v
dfs_update(d.parser_config, config)
cls.update_by_id(id, {"parser_config": d.parser_config})
@ -305,7 +334,7 @@ class DocumentService(CommonService):
def begin2parse(cls, docid):
cls.update_by_id(
docid, {"progress": random.random() * 1 / 100.,
"progress_msg": "Task dispatched...",
"progress_msg": "Task is queued...",
"process_begin_at": get_format_time()
})
@ -323,7 +352,7 @@ class DocumentService(CommonService):
finished = True
bad = 0
e, doc = DocumentService.get_by_id(d["id"])
status = doc.run#TaskStatus.RUNNING.value
status = doc.run # TaskStatus.RUNNING.value
for t in tsks:
if 0 <= t.progress < 1:
finished = False
@ -337,9 +366,10 @@ class DocumentService(CommonService):
prg = -1
status = TaskStatus.FAIL.value
elif finished:
if d["parser_config"].get("raptor", {}).get("use_raptor") and d["progress_msg"].lower().find(" raptor")<0:
if d["parser_config"].get("raptor", {}).get("use_raptor") and d["progress_msg"].lower().find(
" raptor") < 0:
queue_raptor_tasks(d)
prg *= 0.98
prg = 0.98 * len(tsks) / (len(tsks) + 1)
msg.append("------ RAPTOR -------")
else:
status = TaskStatus.DONE.value
@ -356,7 +386,8 @@ class DocumentService(CommonService):
info["progress_msg"] = msg
cls.update_by_id(d["id"], info)
except Exception as e:
stat_logger.error("fetch task exception:" + str(e))
if str(e).find("'0'") < 0:
logging.exception("fetch task exception")
@classmethod
@DB.connection_context()
@ -364,14 +395,13 @@ class DocumentService(CommonService):
return len(cls.model.select(cls.model.id).where(
cls.model.kb_id == kb_id).dicts())
@classmethod
@DB.connection_context()
def do_cancel(cls, doc_id):
try:
_, doc = DocumentService.get_by_id(doc_id)
return doc.run == TaskStatus.CANCEL.value or doc.progress < 0
except Exception as e:
except Exception:
pass
return False
@ -384,7 +414,7 @@ def queue_raptor_tasks(doc):
"doc_id": doc["id"],
"from_page": 0,
"to_page": -1,
"progress_msg": "Start to do RAPTOR (Recursive Abstractive Processing For Tree-Organized Retrieval)."
"progress_msg": "Start to do RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval)."
}
task = new_task()
@ -412,11 +442,6 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
if not e:
raise LookupError("Can't find this knowledgebase!")
idxnm = search.index_name(kb.tenant_id)
if not ELASTICSEARCH.indexExist(idxnm):
ELASTICSEARCH.createIdx(idxnm, json.load(
open(os.path.join(get_project_base_directory(), "conf", "mapping.json"), "r")))
embd_mdl = LLMBundle(kb.tenant_id, LLMType.EMBEDDING, llm_name=kb.embd_id, lang=kb.language)
err, files = FileService.upload_document(kb, file_objs, user_id)
@ -460,7 +485,7 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
md5 = hashlib.md5()
md5.update((ck["content_with_weight"] +
str(d["doc_id"])).encode("utf-8"))
d["_id"] = md5.hexdigest()
d["id"] = md5.hexdigest()
d["create_time"] = str(datetime.now()).replace("T", " ")[:19]
d["create_timestamp_flt"] = datetime.now().timestamp()
if not d.get("image"):
@ -473,9 +498,9 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
else:
d["image"].save(output_buffer, format='JPEG')
STORAGE_IMPL.put(kb.id, d["_id"], output_buffer.getvalue())
d["img_id"] = "{}-{}".format(kb.id, d["_id"])
del d["image"]
STORAGE_IMPL.put(kb.id, d["id"], output_buffer.getvalue())
d["img_id"] = "{}-{}".format(kb.id, d["id"])
d.pop("image", None)
docs.append(d)
parser_ids = {d["id"]: d["parser_id"] for d, _ in files}
@ -494,6 +519,9 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
token_counts[doc_id] += c
return vects
idxnm = search.index_name(kb.tenant_id)
try_create_idx = True
_, tenant = TenantService.get_by_id(kb.tenant_id)
llm_bdl = LLMBundle(kb.tenant_id, LLMType.CHAT, tenant.llm_id)
for doc_id in docids:
@ -516,7 +544,7 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
"knowledge_graph_kwd": "mind_map"
})
except Exception as e:
stat_logger.error("Mind map generation error:", traceback.format_exc())
logging.exception("Mind map generation error")
vects = embedding(doc_id, [c["content_with_weight"] for c in cks])
assert len(cks) == len(vects)
@ -524,9 +552,13 @@ def doc_upload_and_parse(conversation_id, file_objs, user_id):
v = vects[i]
d["q_%d_vec" % len(v)] = v
for b in range(0, len(cks), es_bulk_size):
ELASTICSEARCH.bulk(cks[b:b + es_bulk_size], idxnm)
if try_create_idx:
if not settings.docStoreConn.indexExist(idxnm, kb_id):
settings.docStoreConn.createIdx(idxnm, kb_id, len(vects[0]))
try_create_idx = False
settings.docStoreConn.insert(cks[b:b + es_bulk_size], idxnm, kb_id)
DocumentService.increment_chunk_num(
doc_id, kb.id, token_counts[doc_id], chunk_counts[doc_id], 0)
return [d["id"] for d,_ in files]
return [d["id"] for d, _ in files]

View File

@ -13,8 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import re
import os
from concurrent.futures import ThreadPoolExecutor
from flask_login import current_user
from peewee import fn
@ -26,7 +29,7 @@ from api.db.services.common_service import CommonService
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.utils import get_uuid
from api.utils.file_utils import filename_type, thumbnail
from api.utils.file_utils import filename_type, thumbnail_img
from rag.utils.storage_factory import STORAGE_IMPL
@ -272,8 +275,8 @@ class FileService(CommonService):
cls.delete_folder_by_pf_id(user_id, file.id)
return cls.model.delete().where((cls.model.tenant_id == user_id)
& (cls.model.id == folder_id)).execute(),
except Exception as e:
print(e)
except Exception:
logging.exception("delete_folder_by_pf_id")
raise RuntimeError("Database error (File retrieval)!")
@classmethod
@ -321,8 +324,8 @@ class FileService(CommonService):
def move_file(cls, file_ids, folder_id):
try:
cls.filter_update((cls.model.id << file_ids, ), { 'parent_id': folder_id })
except Exception as e:
print(e)
except Exception:
logging.exception("move_file")
raise RuntimeError("Database error (File move)!")
@classmethod
@ -354,8 +357,17 @@ class FileService(CommonService):
location += "_"
blob = file.read()
STORAGE_IMPL.put(kb.id, location, blob)
doc_id = get_uuid()
img = thumbnail_img(filename, blob)
thumbnail_location = ''
if img is not None:
thumbnail_location = f'thumbnail_{doc_id}.png'
STORAGE_IMPL.put(kb.id, thumbnail_location, img)
doc = {
"id": get_uuid(),
"id": doc_id,
"kb_id": kb.id,
"parser_id": self.get_parser(filetype, filename, kb.parser_id),
"parser_config": kb.parser_config,
@ -364,7 +376,7 @@ class FileService(CommonService):
"name": filename,
"location": location,
"size": len(blob),
"thumbnail": thumbnail(filename, blob)
"thumbnail": thumbnail_location
}
DocumentService.insert(doc)
@ -375,6 +387,41 @@ class FileService(CommonService):
return err, files
@staticmethod
def parse_docs(file_objs, user_id):
from rag.app import presentation, picture, naive, audio, email
def dummy(prog=None, msg=""):
pass
FACTORY = {
ParserType.PRESENTATION.value: presentation,
ParserType.PICTURE.value: picture,
ParserType.AUDIO.value: audio,
ParserType.EMAIL.value: email
}
parser_config = {"chunk_token_num": 16096, "delimiter": "\n!?;。;!?", "layout_recognize": False}
exe = ThreadPoolExecutor(max_workers=12)
threads = []
for file in file_objs:
kwargs = {
"lang": "English",
"callback": dummy,
"parser_config": parser_config,
"from_page": 0,
"to_page": 100000,
"tenant_id": user_id
}
filetype = filename_type(file.filename)
blob = file.read()
threads.append(exe.submit(FACTORY.get(FileService.get_parser(filetype, file.filename, ""), naive).chunk, file.filename, blob, **kwargs))
res = []
for th in threads:
res.append("\n".join([ck["content_with_weight"] for ck in th.result()]))
return "\n\n".join(res)
@staticmethod
def get_parser(doc_type, filename, default):
if doc_type == FileType.VISUAL:

View File

@ -14,21 +14,47 @@
# limitations under the License.
#
from api.db import StatusEnum, TenantPermission
from api.db.db_models import Knowledgebase, DB, Tenant
from api.db.db_models import Knowledgebase, DB, Tenant, User, UserTenant,Document
from api.db.services.common_service import CommonService
class KnowledgebaseService(CommonService):
model = Knowledgebase
@classmethod
@DB.connection_context()
def list_documents_by_ids(cls,kb_ids):
doc_ids=cls.model.select(Document.id.alias("document_id")).join(Document,on=(cls.model.id == Document.kb_id)).where(
cls.model.id.in_(kb_ids)
)
doc_ids =list(doc_ids.dicts())
doc_ids = [doc["document_id"] for doc in doc_ids]
return doc_ids
@classmethod
@DB.connection_context()
def get_by_tenant_ids(cls, joined_tenant_ids, user_id,
page_number, items_per_page, orderby, desc):
kbs = cls.model.select().where(
fields = [
cls.model.id,
cls.model.avatar,
cls.model.name,
cls.model.language,
cls.model.description,
cls.model.permission,
cls.model.doc_num,
cls.model.token_num,
cls.model.chunk_num,
cls.model.parser_id,
cls.model.embd_id,
User.nickname,
User.avatar.alias('tenant_avatar'),
cls.model.update_time
]
kbs = cls.model.select(*fields).join(User, on=(cls.model.tenant_id == User.id)).where(
((cls.model.tenant_id.in_(joined_tenant_ids) & (cls.model.permission ==
TenantPermission.TEAM.value)) | (
cls.model.tenant_id == user_id))
cls.model.tenant_id == user_id))
& (cls.model.status == StatusEnum.VALID.value)
)
if desc:
@ -42,35 +68,20 @@ class KnowledgebaseService(CommonService):
@classmethod
@DB.connection_context()
def get_by_tenant_ids_by_offset(cls, joined_tenant_ids, user_id, offset, count, orderby, desc):
kbs = cls.model.select().where(
((cls.model.tenant_id.in_(joined_tenant_ids) & (cls.model.permission ==
TenantPermission.TEAM.value)) | (
cls.model.tenant_id == user_id))
& (cls.model.status == StatusEnum.VALID.value)
)
if desc:
kbs = kbs.order_by(cls.model.getter_by(orderby).desc())
else:
kbs = kbs.order_by(cls.model.getter_by(orderby).asc())
kbs = list(kbs.dicts())
kbs_length = len(kbs)
if offset < 0 or offset > kbs_length:
raise IndexError("Offset is out of the valid range.")
if count == -1:
return kbs[offset:]
return kbs[offset:offset+count]
def get_kb_ids(cls, tenant_id):
fields = [
cls.model.id,
]
kbs = cls.model.select(*fields).where(cls.model.tenant_id == tenant_id)
kb_ids = [kb.id for kb in kbs]
return kb_ids
@classmethod
@DB.connection_context()
def get_detail(cls, kb_id):
fields = [
cls.model.id,
#Tenant.embd_id,
# Tenant.embd_id,
cls.model.embd_id,
cls.model.avatar,
cls.model.name,
@ -83,14 +94,14 @@ class KnowledgebaseService(CommonService):
cls.model.parser_id,
cls.model.parser_config]
kbs = cls.model.select(*fields).join(Tenant, on=(
(Tenant.id == cls.model.tenant_id) & (Tenant.status == StatusEnum.VALID.value))).where(
(Tenant.id == cls.model.tenant_id) & (Tenant.status == StatusEnum.VALID.value))).where(
(cls.model.id == kb_id),
(cls.model.status == StatusEnum.VALID.value)
)
if not kbs:
return
d = kbs[0].to_dict()
#d["embd_id"] = kbs[0].tenant.embd_id
# d["embd_id"] = kbs[0].tenant.embd_id
return d
@classmethod
@ -142,3 +153,65 @@ class KnowledgebaseService(CommonService):
@DB.connection_context()
def get_all_ids(cls):
return [m["id"] for m in cls.model.select(cls.model.id).dicts()]
@classmethod
@DB.connection_context()
def get_list(cls, joined_tenant_ids, user_id,
page_number, items_per_page, orderby, desc, id, name):
kbs = cls.model.select()
if id:
kbs = kbs.where(cls.model.id == id)
if name:
kbs = kbs.where(cls.model.name == name)
kbs = kbs.where(
((cls.model.tenant_id.in_(joined_tenant_ids) & (cls.model.permission ==
TenantPermission.TEAM.value)) | (
cls.model.tenant_id == user_id))
& (cls.model.status == StatusEnum.VALID.value)
)
if desc:
kbs = kbs.order_by(cls.model.getter_by(orderby).desc())
else:
kbs = kbs.order_by(cls.model.getter_by(orderby).asc())
kbs = kbs.paginate(page_number, items_per_page)
return list(kbs.dicts())
@classmethod
@DB.connection_context()
def accessible(cls, kb_id, user_id):
docs = cls.model.select(
cls.model.id).join(UserTenant, on=(UserTenant.tenant_id == Knowledgebase.tenant_id)
).where(cls.model.id == kb_id, UserTenant.user_id == user_id).paginate(0, 1)
docs = docs.dicts()
if not docs:
return False
return True
@classmethod
@DB.connection_context()
def get_kb_by_id(cls, kb_id, user_id):
kbs = cls.model.select().join(UserTenant, on=(UserTenant.tenant_id == Knowledgebase.tenant_id)
).where(cls.model.id == kb_id, UserTenant.user_id == user_id).paginate(0, 1)
kbs = kbs.dicts()
return list(kbs)
@classmethod
@DB.connection_context()
def get_kb_by_name(cls, kb_name, user_id):
kbs = cls.model.select().join(UserTenant, on=(UserTenant.tenant_id == Knowledgebase.tenant_id)
).where(cls.model.name == kb_name, UserTenant.user_id == user_id).paginate(0, 1)
kbs = kbs.dicts()
return list(kbs)
@classmethod
@DB.connection_context()
def accessible4deletion(cls, kb_id, user_id):
docs = cls.model.select(
cls.model.id).where(cls.model.id == kb_id, cls.model.created_by == user_id).paginate(0, 1)
docs = docs.dicts()
if not docs:
return False
return True
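The two new checks split read access from delete rights: `accessible` joins through UserTenant, so any team member passes, while `accessible4deletion` matches `created_by`, so only the creator passes. A sketch of how a handler might combine them (function name and error wording are illustrative):

from api.db.services.knowledgebase_service import KnowledgebaseService

def delete_dataset(kb_id, user_id):
    if not KnowledgebaseService.accessible(kb_id, user_id):
        raise PermissionError("No access to this knowledge base.")
    if not KnowledgebaseService.accessible4deletion(kb_id, user_id):
        raise PermissionError("Only the creator may delete this knowledge base.")
    # ... proceed with deletion ...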

View File

@ -13,8 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
from api.db.services.user_service import TenantService
from api.settings import database_logger
from rag.llm import EmbeddingModel, CvModel, ChatModel, RerankModel, Seq2txtModel, TTSModel
from api.db import LLMType
from api.db.db_models import DB
@ -133,7 +133,8 @@ class TenantLLMService(CommonService):
if model_config["llm_factory"] not in Seq2txtModel:
return
return Seq2txtModel[model_config["llm_factory"]](
model_config["api_key"], model_config["llm_name"], lang,
key=model_config["api_key"], model_name=model_config["llm_name"],
lang=lang,
base_url=model_config["api_base"]
)
if llm_type == LLMType.TTS:
@ -167,11 +168,13 @@ class TenantLLMService(CommonService):
else:
assert False, "LLM type error"
llm_name = mdlnm.split("@")[0] if "@" in mdlnm else mdlnm
num = 0
try:
for u in cls.query(tenant_id=tenant_id, llm_name=mdlnm):
for u in cls.query(tenant_id=tenant_id, llm_name=llm_name):
num += cls.model.update(used_tokens=u.used_tokens + used_tokens)\
.where(cls.model.tenant_id == tenant_id, cls.model.llm_name == mdlnm)\
.where(cls.model.tenant_id == tenant_id, cls.model.llm_name == llm_name)\
.execute()
except Exception as e:
pass
@ -195,7 +198,7 @@ class LLMBundle(object):
self.llm_name = llm_name
self.mdl = TenantLLMService.model_instance(
tenant_id, llm_type, llm_name, lang=lang)
assert self.mdl, "Can't find mole for {}/{}/{}".format(
assert self.mdl, "Can't find model for {}/{}/{}".format(
tenant_id, llm_type, llm_name)
self.max_length = 8192
for lm in LLMService.query(llm_name=llm_name):
@ -206,40 +209,40 @@ class LLMBundle(object):
emd, used_tokens = self.mdl.encode(texts, batch_size)
if not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, used_tokens):
database_logger.error(
"Can't update token usage for {}/EMBEDDING".format(self.tenant_id))
logging.error(
"LLMBundle.encode can't update token usage for {}/EMBEDDING used_tokens: {}".format(self.tenant_id, used_tokens))
return emd, used_tokens
def encode_queries(self, query: str):
emd, used_tokens = self.mdl.encode_queries(query)
if not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, used_tokens):
database_logger.error(
"Can't update token usage for {}/EMBEDDING".format(self.tenant_id))
logging.error(
"LLMBundle.encode_queries can't update token usage for {}/EMBEDDING used_tokens: {}".format(self.tenant_id, used_tokens))
return emd, used_tokens
def similarity(self, query: str, texts: list):
sim, used_tokens = self.mdl.similarity(query, texts)
if not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, used_tokens):
database_logger.error(
"Can't update token usage for {}/RERANK".format(self.tenant_id))
logging.error(
"LLMBundle.similarity can't update token usage for {}/RERANK used_tokens: {}".format(self.tenant_id, used_tokens))
return sim, used_tokens
def describe(self, image, max_tokens=300):
txt, used_tokens = self.mdl.describe(image, max_tokens)
if not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, used_tokens):
database_logger.error(
"Can't update token usage for {}/IMAGE2TEXT".format(self.tenant_id))
logging.error(
"LLMBundle.describe can't update token usage for {}/IMAGE2TEXT used_tokens: {}".format(self.tenant_id, used_tokens))
return txt
def transcription(self, audio):
txt, used_tokens = self.mdl.transcription(audio)
if not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, used_tokens):
database_logger.error(
"Can't update token usage for {}/SEQUENCE2TXT".format(self.tenant_id))
logging.error(
"LLMBundle.transcription can't update token usage for {}/SEQUENCE2TXT used_tokens: {}".format(self.tenant_id, used_tokens))
return txt
def tts(self, text):
@ -247,17 +250,17 @@ class LLMBundle(object):
if isinstance(chunk,int):
if not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, chunk, self.llm_name):
database_logger.error(
"Can't update token usage for {}/TTS".format(self.tenant_id))
logging.error(
"LLMBundle.tts can't update token usage for {}/TTS".format(self.tenant_id))
return
yield chunk
def chat(self, system, history, gen_conf):
txt, used_tokens = self.mdl.chat(system, history, gen_conf)
if not TenantLLMService.increase_usage(
if isinstance(txt, int) and not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, used_tokens, self.llm_name):
database_logger.error(
"Can't update token usage for {}/CHAT".format(self.tenant_id))
logging.error(
"LLMBundle.chat can't update token usage for {}/CHAT llm_name: {}, used_tokens: {}".format(self.tenant_id, self.llm_name, used_tokens))
return txt
def chat_streamly(self, system, history, gen_conf):
@ -265,7 +268,7 @@ class LLMBundle(object):
if isinstance(txt, int):
if not TenantLLMService.increase_usage(
self.tenant_id, self.llm_type, txt, self.llm_name):
database_logger.error(
"Can't update token usage for {}/CHAT".format(self.tenant_id))
logging.error(
"LLMBundle.chat_streamly can't update token usage for {}/CHAT llm_name: {}, content: {}".format(self.tenant_id, self.llm_name, txt))
return
yield txt

View File

@ -36,7 +36,7 @@ class TaskService(CommonService):
@classmethod
@DB.connection_context()
def get_tasks(cls, task_id):
def get_task(cls, task_id):
fields = [
cls.model.id,
cls.model.doc_id,
@ -63,7 +63,7 @@ class TaskService(CommonService):
.join(Tenant, on=(Knowledgebase.tenant_id == Tenant.id)) \
.where(cls.model.id == task_id)
docs = list(docs.dicts())
if not docs: return []
if not docs: return None
msg = "\nTask has been received."
prog = random.random() / 10.
@ -77,9 +77,9 @@ class TaskService(CommonService):
).where(
cls.model.id == docs[0]["id"]).execute()
if docs[0]["retry_count"] >= 3: return []
if docs[0]["retry_count"] >= 3: return None
return docs
return docs[0]
@classmethod
@DB.connection_context()
@ -108,7 +108,7 @@ class TaskService(CommonService):
task = cls.model.get_by_id(id)
_, doc = DocumentService.get_by_id(task.doc_id)
return doc.run == TaskStatus.CANCEL.value or doc.progress < 0
except Exception as e:
except Exception:
pass
return False
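With `get_tasks` renamed to `get_task`, a caller now receives one task dict, or `None` when the task is missing or its retry budget is exhausted, instead of a list. A sketch of the caller-side change (`do_handle_task` is a hypothetical executor entry point):

from api.db.services.task_service import TaskService

def fetch_and_run(task_id):
    task = TaskService.get_task(task_id)
    if task is None:
        return  # missing task, or retry_count >= 3
    do_handle_task(task)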

View File

@ -87,7 +87,7 @@ class TenantService(CommonService):
@classmethod
@DB.connection_context()
def get_by_user_id(cls, user_id):
def get_info_by(cls, user_id):
fields = [
cls.model.id.alias("tenant_id"),
cls.model.name,
@ -100,7 +100,7 @@ class TenantService(CommonService):
cls.model.parser_ids,
UserTenant.role]
return list(cls.model.select(*fields)
.join(UserTenant, on=((cls.model.id == UserTenant.tenant_id) & (UserTenant.user_id == user_id) & (UserTenant.status == StatusEnum.VALID.value)))
.join(UserTenant, on=((cls.model.id == UserTenant.tenant_id) & (UserTenant.user_id == user_id) & (UserTenant.status == StatusEnum.VALID.value) & (UserTenant.role == UserTenantRole.OWNER)))
.where(cls.model.status == StatusEnum.VALID.value).dicts())
@classmethod
@ -115,7 +115,7 @@ class TenantService(CommonService):
cls.model.img2txt_id,
UserTenant.role]
return list(cls.model.select(*fields)
.join(UserTenant, on=((cls.model.id == UserTenant.tenant_id) & (UserTenant.user_id == user_id) & (UserTenant.status == StatusEnum.VALID.value) & (UserTenant.role == UserTenantRole.NORMAL.value)))
.join(UserTenant, on=((cls.model.id == UserTenant.tenant_id) & (UserTenant.user_id == user_id) & (UserTenant.status == StatusEnum.VALID.value) & (UserTenant.role == UserTenantRole.NORMAL)))
.where(cls.model.status == StatusEnum.VALID.value).dicts())
@classmethod
@ -143,9 +143,8 @@ class UserTenantService(CommonService):
def get_by_tenant_id(cls, tenant_id):
fields = [
cls.model.user_id,
cls.model.tenant_id,
cls.model.role,
cls.model.status,
cls.model.role,
User.nickname,
User.email,
User.avatar,
@ -153,8 +152,24 @@ class UserTenantService(CommonService):
User.is_active,
User.is_anonymous,
User.status,
User.update_date,
User.is_superuser]
return list(cls.model.select(*fields)
.join(User, on=((cls.model.user_id == User.id) & (cls.model.status == StatusEnum.VALID.value)))
.join(User, on=((cls.model.user_id == User.id) & (cls.model.status == StatusEnum.VALID.value) & (cls.model.role != UserTenantRole.OWNER)))
.where(cls.model.tenant_id == tenant_id)
.dicts())
.dicts())
@classmethod
@DB.connection_context()
def get_tenants_by_user_id(cls, user_id):
fields = [
cls.model.tenant_id,
cls.model.role,
User.nickname,
User.email,
User.avatar,
User.update_date
]
return list(cls.model.select(*fields)
.join(User, on=((cls.model.tenant_id == User.id) & (UserTenant.user_id == user_id) & (UserTenant.status == StatusEnum.VALID.value)))
.where(cls.model.status == StatusEnum.VALID.value).dicts())

View File

@ -14,7 +14,21 @@
# limitations under the License.
#
# from beartype import BeartypeConf
# from beartype.claw import beartype_all # <-- you didn't sign up for this
# beartype_all(conf=BeartypeConf(violation_type=UserWarning)) # <-- emit warnings from all code
import logging
from api.utils.log_utils import initRootLogger
initRootLogger("ragflow_server")
for module in ["pdfminer"]:
module_logger = logging.getLogger(module)
module_logger.setLevel(logging.WARNING)
for module in ["peewee"]:
module_logger = logging.getLogger(module)
module_logger.handlers.clear()
module_logger.propagate = True
import os
import signal
import sys
@ -23,77 +37,84 @@ import traceback
from concurrent.futures import ThreadPoolExecutor
from werkzeug.serving import run_simple
from api import settings
from api.apps import app
from api.db.runtime_config import RuntimeConfig
from api.db.services.document_service import DocumentService
from api.settings import (
HOST, HTTP_PORT, access_logger, database_logger, stat_logger,
)
from api import utils
from api.db.db_models import init_database_tables as init_web_db
from api.db.init_data import init_web_data
from api.versions import get_versions
from api.versions import get_ragflow_version
from api.utils import show_configs
def update_progress():
while True:
time.sleep(1)
time.sleep(3)
try:
DocumentService.update_progress()
except Exception as e:
stat_logger.error("update_progress exception:" + str(e))
except Exception:
logging.exception("update_progress exception")
if __name__ == '__main__':
print(r"""
logging.info(r"""
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
""", flush=True)
stat_logger.info(
""")
logging.info(
f'RAGFlow version: {get_ragflow_version()}'
)
logging.info(
f'project base: {utils.file_utils.get_project_base_directory()}'
)
show_configs()
settings.init_settings()
# init db
init_web_db()
init_web_data()
# init runtime config
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--version', default=False, help="rag flow version", action='store_true')
parser.add_argument('--debug', default=False, help="debug mode", action='store_true')
parser.add_argument(
"--version", default=False, help="RAGFlow version", action="store_true"
)
parser.add_argument(
"--debug", default=False, help="debug mode", action="store_true"
)
args = parser.parse_args()
if args.version:
print(get_versions())
print(get_ragflow_version())
sys.exit(0)
RuntimeConfig.DEBUG = args.debug
if RuntimeConfig.DEBUG:
stat_logger.info("run on debug mode")
logging.info("run on debug mode")
RuntimeConfig.init_env()
RuntimeConfig.init_config(JOB_SERVER_HOST=HOST, HTTP_PORT=HTTP_PORT)
RuntimeConfig.init_config(JOB_SERVER_HOST=settings.HOST_IP, HTTP_PORT=settings.HOST_PORT)
peewee_logger = logging.getLogger('peewee')
peewee_logger.propagate = False
# rag_arch.common.log.ROpenHandler
peewee_logger.addHandler(database_logger.handlers[0])
peewee_logger.setLevel(database_logger.level)
thr = ThreadPoolExecutor(max_workers=1)
thr.submit(update_progress)
thread = ThreadPoolExecutor(max_workers=1)
thread.submit(update_progress)
# start http server
try:
stat_logger.info("RAG Flow http server start...")
werkzeug_logger = logging.getLogger("werkzeug")
for h in access_logger.handlers:
werkzeug_logger.addHandler(h)
run_simple(hostname=HOST, port=HTTP_PORT, application=app, threaded=True, use_reloader=RuntimeConfig.DEBUG, use_debugger=RuntimeConfig.DEBUG)
logging.info("RAGFlow HTTP server start...")
run_simple(
hostname=settings.HOST_IP,
port=settings.HOST_PORT,
application=app,
threaded=True,
use_reloader=RuntimeConfig.DEBUG,
use_debugger=RuntimeConfig.DEBUG,
)
except Exception:
traceback.print_exc()
os.kill(os.getpid(), signal.SIGKILL)
os.kill(os.getpid(), signal.SIGKILL)

View File

@ -14,199 +14,164 @@
# limitations under the License.
#
import os
from datetime import date
from enum import IntEnum, Enum
from api.utils.file_utils import get_project_base_directory
from api.utils.log_utils import LoggerFactory, getLogger
import rag.utils.es_conn
import rag.utils.infinity_conn
# Logger
LoggerFactory.set_directory(
os.path.join(
get_project_base_directory(),
"logs",
"api"))
# {CRITICAL: 50, FATAL:50, ERROR:40, WARNING:30, WARN:30, INFO:20, DEBUG:10, NOTSET:0}
LoggerFactory.LEVEL = 30
stat_logger = getLogger("stat")
access_logger = getLogger("access")
database_logger = getLogger("database")
chat_logger = getLogger("chat")
from rag.utils.es_conn import ELASTICSEARCH
import rag.utils
from rag.nlp import search
from graphrag import search as kg_search
from api.utils import get_base_config, decrypt_database_config
from api.constants import RAG_FLOW_SERVICE_NAME
API_VERSION = "v1"
RAG_FLOW_SERVICE_NAME = "ragflow"
SERVER_MODULE = "rag_flow_server.py"
TEMP_DIRECTORY = os.path.join(get_project_base_directory(), "temp")
RAG_FLOW_CONF_PATH = os.path.join(get_project_base_directory(), "conf")
LIGHTEN = os.environ.get('LIGHTEN')
LIGHTEN = int(os.environ.get('LIGHTEN', "0"))
SUBPROCESS_STD_LOG_NAME = "std.log"
ERROR_REPORT = True
ERROR_REPORT_WITH_PATH = False
MAX_TIMESTAMP_INTERVAL = 60
SESSION_VALID_PERIOD = 7 * 24 * 60 * 60
REQUEST_TRY_TIMES = 3
REQUEST_WAIT_SEC = 2
REQUEST_MAX_WAIT_SEC = 300
USE_REGISTRY = get_base_config("use_registry")
LLM = get_base_config("user_default_llm", {})
LLM_FACTORY = LLM.get("factory", "Tongyi-Qianwen")
LLM_BASE_URL = LLM.get("base_url")
if not LIGHTEN:
default_llm = {
"Tongyi-Qianwen": {
"chat_model": "qwen-plus",
"embedding_model": "text-embedding-v2",
"image2text_model": "qwen-vl-max",
"asr_model": "paraformer-realtime-8k-v1",
},
"OpenAI": {
"chat_model": "gpt-3.5-turbo",
"embedding_model": "text-embedding-ada-002",
"image2text_model": "gpt-4-vision-preview",
"asr_model": "whisper-1",
},
"Azure-OpenAI": {
"chat_model": "gpt-35-turbo",
"embedding_model": "text-embedding-ada-002",
"image2text_model": "gpt-4-vision-preview",
"asr_model": "whisper-1",
},
"ZHIPU-AI": {
"chat_model": "glm-3-turbo",
"embedding_model": "embedding-2",
"image2text_model": "glm-4v",
"asr_model": "",
},
"Ollama": {
"chat_model": "qwen-14B-chat",
"embedding_model": "flag-embedding",
"image2text_model": "",
"asr_model": "",
},
"Moonshot": {
"chat_model": "moonshot-v1-8k",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"DeepSeek": {
"chat_model": "deepseek-chat",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"VolcEngine": {
"chat_model": "",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"BAAI": {
"chat_model": "",
"embedding_model": "BAAI/bge-large-zh-v1.5",
"image2text_model": "",
"asr_model": "",
"rerank_model": "BAAI/bge-reranker-v2-m3",
}
}
CHAT_MDL = default_llm[LLM_FACTORY]["chat_model"]
EMBEDDING_MDL = default_llm["BAAI"]["embedding_model"]
RERANK_MDL = default_llm["BAAI"]["rerank_model"] if not LIGHTEN else ""
ASR_MDL = default_llm[LLM_FACTORY]["asr_model"]
IMAGE2TEXT_MDL = default_llm[LLM_FACTORY]["image2text_model"]
else:
CHAT_MDL = EMBEDDING_MDL = RERANK_MDL = ASR_MDL = IMAGE2TEXT_MDL = ""
API_KEY = LLM.get("api_key", "")
PARSERS = LLM.get(
"parsers",
"naive:General,qa:Q&A,resume:Resume,manual:Manual,table:Table,paper:Paper,book:Book,laws:Laws,presentation:Presentation,picture:Picture,one:One,audio:Audio,knowledge_graph:Knowledge Graph,email:Email")
# distribution
DEPENDENT_DISTRIBUTION = get_base_config("dependent_distribution", False)
RAG_FLOW_UPDATE_CHECK = False
HOST = get_base_config(RAG_FLOW_SERVICE_NAME, {}).get("host", "127.0.0.1")
HTTP_PORT = get_base_config(RAG_FLOW_SERVICE_NAME, {}).get("http_port")
SECRET_KEY = get_base_config(
RAG_FLOW_SERVICE_NAME,
{}).get(
"secret_key",
"infiniflow")
TOKEN_EXPIRE_IN = get_base_config(
RAG_FLOW_SERVICE_NAME, {}).get(
"token_expires_in", 3600)
NGINX_HOST = get_base_config(
RAG_FLOW_SERVICE_NAME, {}).get(
"nginx", {}).get("host") or HOST
NGINX_HTTP_PORT = get_base_config(
RAG_FLOW_SERVICE_NAME, {}).get(
"nginx", {}).get("http_port") or HTTP_PORT
RANDOM_INSTANCE_ID = get_base_config(
RAG_FLOW_SERVICE_NAME, {}).get(
"random_instance_id", False)
PROXY = get_base_config(RAG_FLOW_SERVICE_NAME, {}).get("proxy")
PROXY_PROTOCOL = get_base_config(RAG_FLOW_SERVICE_NAME, {}).get("protocol")
LLM = None
LLM_FACTORY = None
LLM_BASE_URL = None
CHAT_MDL = ""
EMBEDDING_MDL = ""
RERANK_MDL = ""
ASR_MDL = ""
IMAGE2TEXT_MDL = ""
API_KEY = None
PARSERS = None
HOST_IP = None
HOST_PORT = None
SECRET_KEY = None
DATABASE_TYPE = os.getenv("DB_TYPE", 'mysql')
DATABASE = decrypt_database_config(name=DATABASE_TYPE)
# Switch
# upload
UPLOAD_DATA_FROM_CLIENT = True
# authentication
AUTHENTICATION_CONF = get_base_config("authentication", {})
AUTHENTICATION_CONF = None
# client
CLIENT_AUTHENTICATION = AUTHENTICATION_CONF.get(
"client", {}).get(
CLIENT_AUTHENTICATION = None
HTTP_APP_KEY = None
GITHUB_OAUTH = None
FEISHU_OAUTH = None
DOC_ENGINE = None
docStoreConn = None
retrievaler = None
kg_retrievaler = None
def init_settings():
global LLM, LLM_FACTORY, LLM_BASE_URL, LIGHTEN, DATABASE_TYPE, DATABASE
LIGHTEN = int(os.environ.get('LIGHTEN', "0"))
DATABASE_TYPE = os.getenv("DB_TYPE", 'mysql')
DATABASE = decrypt_database_config(name=DATABASE_TYPE)
LLM = get_base_config("user_default_llm", {})
LLM_FACTORY = LLM.get("factory", "Tongyi-Qianwen")
LLM_BASE_URL = LLM.get("base_url")
global CHAT_MDL, EMBEDDING_MDL, RERANK_MDL, ASR_MDL, IMAGE2TEXT_MDL
if not LIGHTEN:
default_llm = {
"Tongyi-Qianwen": {
"chat_model": "qwen-plus",
"embedding_model": "text-embedding-v2",
"image2text_model": "qwen-vl-max",
"asr_model": "paraformer-realtime-8k-v1",
},
"OpenAI": {
"chat_model": "gpt-3.5-turbo",
"embedding_model": "text-embedding-ada-002",
"image2text_model": "gpt-4-vision-preview",
"asr_model": "whisper-1",
},
"Azure-OpenAI": {
"chat_model": "gpt-35-turbo",
"embedding_model": "text-embedding-ada-002",
"image2text_model": "gpt-4-vision-preview",
"asr_model": "whisper-1",
},
"ZHIPU-AI": {
"chat_model": "glm-3-turbo",
"embedding_model": "embedding-2",
"image2text_model": "glm-4v",
"asr_model": "",
},
"Ollama": {
"chat_model": "qwen-14B-chat",
"embedding_model": "flag-embedding",
"image2text_model": "",
"asr_model": "",
},
"Moonshot": {
"chat_model": "moonshot-v1-8k",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"DeepSeek": {
"chat_model": "deepseek-chat",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"VolcEngine": {
"chat_model": "",
"embedding_model": "",
"image2text_model": "",
"asr_model": "",
},
"BAAI": {
"chat_model": "",
"embedding_model": "BAAI/bge-large-zh-v1.5",
"image2text_model": "",
"asr_model": "",
"rerank_model": "BAAI/bge-reranker-v2-m3",
}
}
if LLM_FACTORY:
CHAT_MDL = default_llm[LLM_FACTORY]["chat_model"] + f"@{LLM_FACTORY}"
ASR_MDL = default_llm[LLM_FACTORY]["asr_model"] + f"@{LLM_FACTORY}"
IMAGE2TEXT_MDL = default_llm[LLM_FACTORY]["image2text_model"] + f"@{LLM_FACTORY}"
EMBEDDING_MDL = default_llm["BAAI"]["embedding_model"] + "@BAAI"
RERANK_MDL = default_llm["BAAI"]["rerank_model"] + "@BAAI"
global API_KEY, PARSERS, HOST_IP, HOST_PORT, SECRET_KEY
API_KEY = LLM.get("api_key", "")
PARSERS = LLM.get(
"parsers",
"naive:General,qa:Q&A,resume:Resume,manual:Manual,table:Table,paper:Paper,book:Book,laws:Laws,presentation:Presentation,picture:Picture,one:One,audio:Audio,knowledge_graph:Knowledge Graph,email:Email")
HOST_IP = get_base_config(RAG_FLOW_SERVICE_NAME, {}).get("host", "127.0.0.1")
HOST_PORT = get_base_config(RAG_FLOW_SERVICE_NAME, {}).get("http_port")
SECRET_KEY = get_base_config(
RAG_FLOW_SERVICE_NAME,
{}).get("secret_key", str(date.today()))
global AUTHENTICATION_CONF, CLIENT_AUTHENTICATION, HTTP_APP_KEY, GITHUB_OAUTH, FEISHU_OAUTH
# authentication
AUTHENTICATION_CONF = get_base_config("authentication", {})
# client
CLIENT_AUTHENTICATION = AUTHENTICATION_CONF.get(
"client", {}).get(
"switch", False)
HTTP_APP_KEY = AUTHENTICATION_CONF.get("client", {}).get("http_app_key")
GITHUB_OAUTH = get_base_config("oauth", {}).get("github")
FEISHU_OAUTH = get_base_config("oauth", {}).get("feishu")
WECHAT_OAUTH = get_base_config("oauth", {}).get("wechat")
HTTP_APP_KEY = AUTHENTICATION_CONF.get("client", {}).get("http_app_key")
GITHUB_OAUTH = get_base_config("oauth", {}).get("github")
FEISHU_OAUTH = get_base_config("oauth", {}).get("feishu")
# site
SITE_AUTHENTICATION = AUTHENTICATION_CONF.get("site", {}).get("switch", False)
global DOC_ENGINE, docStoreConn, retrievaler, kg_retrievaler
DOC_ENGINE = os.environ.get('DOC_ENGINE', "elasticsearch")
if DOC_ENGINE == "elasticsearch":
docStoreConn = rag.utils.es_conn.ESConnection()
elif DOC_ENGINE == "infinity":
docStoreConn = rag.utils.infinity_conn.InfinityConnection()
else:
raise Exception(f"Not supported doc engine: {DOC_ENGINE}")
# permission
PERMISSION_CONF = get_base_config("permission", {})
PERMISSION_SWITCH = PERMISSION_CONF.get("switch")
COMPONENT_PERMISSION = PERMISSION_CONF.get("component")
DATASET_PERMISSION = PERMISSION_CONF.get("dataset")
HOOK_MODULE = get_base_config("hook_module")
HOOK_SERVER_NAME = get_base_config("hook_server_name")
ENABLE_MODEL_STORE = get_base_config('enable_model_store', False)
# authentication
USE_AUTHENTICATION = False
USE_DATA_AUTHENTICATION = False
AUTOMATIC_AUTHORIZATION_OUTPUT_DATA = True
USE_DEFAULT_TIMEOUT = False
AUTHENTICATION_DEFAULT_TIMEOUT = 7 * 24 * 60 * 60 # s
PRIVILEGE_COMMAND_WHITELIST = []
CHECK_NODES_IDENTITY = False
retrievaler = search.Dealer(ELASTICSEARCH)
kg_retrievaler = kg_search.KGSearch(ELASTICSEARCH)
retrievaler = search.Dealer(docStoreConn)
kg_retrievaler = kg_search.KGSearch(docStoreConn)
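Settings are no longer materialized at import time; every entry point must call `init_settings()` before touching `retrievaler`, `kg_retrievaler`, or `docStoreConn`. A minimal sketch of the new startup order (the environment value is an example):

import os
from api import settings

os.environ.setdefault("DOC_ENGINE", "elasticsearch")  # or "infinity"
settings.init_settings()
assert settings.docStoreConn is not None  # ESConnection or InfinityConnection
dealer = settings.retrievaler  # search.Dealer bound to the chosen engine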
class CustomEnum(Enum):
@ -227,16 +192,6 @@ class CustomEnum(Enum):
return [member.name for member in cls.__members__.values()]
class PythonDependenceName(CustomEnum):
Rag_Source_Code = "python"
Python_Env = "miniconda"
class ModelStorage(CustomEnum):
REDIS = "redis"
MYSQL = "mysql"
class RetCode(IntEnum, CustomEnum):
SUCCESS = 0
NOT_EFFECTIVE = 10
@ -250,3 +205,5 @@ class RetCode(IntEnum, CustomEnum):
AUTHENTICATION_ERROR = 109
UNAUTHORIZED = 401
SERVER_ERROR = 500
FORBIDDEN = 403
NOT_FOUND = 404

View File

@ -23,56 +23,64 @@ import socket
import time
import uuid
import requests
import logging
from enum import Enum, IntEnum
import importlib
from Cryptodome.PublicKey import RSA
from Cryptodome.Cipher import PKCS1_v1_5 as Cipher_pkcs1_v1_5
from filelock import FileLock
from api.constants import SERVICE_CONF
from . import file_utils
SERVICE_CONF = "service_conf.yaml"
def conf_realpath(conf_name):
conf_path = f"conf/{conf_name}"
return os.path.join(file_utils.get_project_base_directory(), conf_path)
def get_base_config(key, default=None, conf_name=SERVICE_CONF) -> dict:
def read_config(conf_name=SERVICE_CONF):
local_config = {}
local_path = conf_realpath(f'local.{conf_name}')
if default is None:
default = os.environ.get(key.upper())
# load local config file
if os.path.exists(local_path):
local_config = file_utils.load_yaml_conf(local_path)
if not isinstance(local_config, dict):
raise ValueError(f'Invalid config file: "{local_path}".')
if key is not None and key in local_config:
return local_config[key]
global_config_path = conf_realpath(conf_name)
global_config = file_utils.load_yaml_conf(global_config_path)
config_path = conf_realpath(conf_name)
config = file_utils.load_yaml_conf(config_path)
if not isinstance(global_config, dict):
raise ValueError(f'Invalid config file: "{global_config_path}".')
if not isinstance(config, dict):
raise ValueError(f'Invalid config file: "{config_path}".')
global_config.update(local_config)
return global_config
config.update(local_config)
return config.get(key, default) if key is not None else config
CONFIGS = read_config()
def show_configs():
msg = f"Current configs, from {conf_realpath(SERVICE_CONF)}:"
for k, v in CONFIGS.items():
msg += f"\n\t{k}: {v}"
logging.info(msg)
def get_base_config(key, default=None):
if key is None:
return None
if default is None:
default = os.environ.get(key.upper())
return CONFIGS.get(key, default)
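The rewritten loader reads the YAML once into `CONFIGS`, with `local.service_conf.yaml` overriding `service_conf.yaml` and environment variables filling in when a key is absent from both. A self-contained illustration of that precedence using plain dicts:

import os

def resolve(key, global_cfg, local_cfg, default=None):
    cfg = dict(global_cfg)
    cfg.update(local_cfg)  # the local file wins over the global file
    if default is None:
        default = os.environ.get(key.upper())  # env var as a last resort
    return cfg.get(key, default)

print(resolve("http_port", {"http_port": 9380}, {"http_port": 9381}))  # 9381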
use_deserialize_safe_module = get_base_config(
'use_deserialize_safe_module', False)
class CoordinationCommunicationProtocol(object):
HTTP = "http"
GRPC = "grpc"
class BaseType:
def to_dict(self):
return dict([(k.lstrip("_"), v) for k, v in self.__dict__.items()])
@ -98,6 +106,7 @@ class BaseType:
data = obj
return {"type": obj.__class__.__name__,
"data": data, "module": module}
return _dict(self)
@ -245,7 +254,7 @@ def get_lan_ip():
try:
ip = get_interface_ip(ifname)
break
except IOError as e:
except IOError:
pass
return ip or ''
@ -342,5 +351,10 @@ def download_img(url):
return ""
response = requests.get(url)
return "data:" + \
response.headers.get('Content-Type', 'image/jpg') + ";" + \
"base64," + base64.b64encode(response.content).decode("utf-8")
response.headers.get('Content-Type', 'image/jpg') + ";" + \
"base64," + base64.b64encode(response.content).decode("utf-8")
def delta_seconds(date_string: str):
dt = datetime.datetime.strptime(date_string, "%Y-%m-%d %H:%M:%S")
return (datetime.datetime.now() - dt).total_seconds()

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import functools
import json
import random
@ -29,16 +30,16 @@ from flask import (
Response, jsonify, send_file, make_response,
request as flask_request,
)
from itsdangerous import URLSafeTimedSerializer
from werkzeug.http import HTTP_STATUS_CODES
from api.db.db_models import APIToken
from api.settings import (
REQUEST_MAX_WAIT_SEC, REQUEST_WAIT_SEC,
stat_logger, CLIENT_AUTHENTICATION, HTTP_APP_KEY, SECRET_KEY
)
from api.settings import RetCode
from api.utils import CustomJSONEncoder
from api import settings
from api import settings
from api.utils import CustomJSONEncoder, get_uuid
from api.utils import json_dumps
from api.constants import REQUEST_WAIT_SEC, REQUEST_MAX_WAIT_SEC
requests.models.complexjson.dumps = functools.partial(
json.dumps, cls=CustomJSONEncoder)
@ -52,18 +53,18 @@ def request(**kwargs):
k.replace(
'_',
'-').upper(): v for k,
v in kwargs.get(
v in kwargs.get(
'headers',
{}).items()}
prepped = requests.Request(**kwargs).prepare()
if CLIENT_AUTHENTICATION and HTTP_APP_KEY and SECRET_KEY:
if settings.CLIENT_AUTHENTICATION and settings.HTTP_APP_KEY and settings.SECRET_KEY:
timestamp = str(round(time() * 1000))
nonce = str(uuid1())
signature = b64encode(HMAC(SECRET_KEY.encode('ascii'), b'\n'.join([
signature = b64encode(HMAC(settings.SECRET_KEY.encode('ascii'), b'\n'.join([
timestamp.encode('ascii'),
nonce.encode('ascii'),
HTTP_APP_KEY.encode('ascii'),
settings.HTTP_APP_KEY.encode('ascii'),
prepped.path_url.encode('ascii'),
prepped.body if kwargs.get('json') else b'',
urlencode(
@ -77,7 +78,7 @@ def request(**kwargs):
prepped.headers.update({
'TIMESTAMP': timestamp,
'NONCE': nonce,
'APP-KEY': HTTP_APP_KEY,
'APP-KEY': settings.HTTP_APP_KEY,
'SIGNATURE': signature,
})
@ -96,39 +97,19 @@ def get_exponential_backoff_interval(retries, full_jitter=False):
return max(0, countdown)
def get_json_result(retcode=RetCode.SUCCESS, retmsg='success',
data=None, job_id=None, meta=None):
result_dict = {
"retcode": retcode,
"retmsg": retmsg,
# "retmsg": re.sub(r"rag", "seceum", retmsg, flags=re.IGNORECASE),
"data": data,
"jobId": job_id,
"meta": meta,
}
response = {}
for key, value in result_dict.items():
if value is None and key != "retcode":
continue
else:
response[key] = value
return jsonify(response)
def get_data_error_result(retcode=RetCode.DATA_ERROR,
retmsg='Sorry! Data missing!'):
def get_data_error_result(code=settings.RetCode.DATA_ERROR,
message='Sorry! Data missing!'):
import re
result_dict = {
"retcode": retcode,
"retmsg": re.sub(
"code": code,
"message": re.sub(
r"rag",
"seceum",
retmsg,
message,
flags=re.IGNORECASE)}
response = {}
for key, value in result_dict.items():
if value is None and key != "retcode":
if value is None and key != "code":
continue
else:
response[key] = value
@ -136,29 +117,25 @@ def get_data_error_result(retcode=RetCode.DATA_ERROR,
def server_error_response(e):
stat_logger.exception(e)
logging.exception(e)
try:
if e.code == 401:
return get_json_result(retcode=401, retmsg=repr(e))
return get_json_result(code=401, message=repr(e))
except BaseException:
pass
if len(e.args) > 1:
return get_json_result(
retcode=RetCode.EXCEPTION_ERROR, retmsg=repr(e.args[0]), data=e.args[1])
if repr(e).find("index_not_found_exception") >= 0:
return get_json_result(retcode=RetCode.EXCEPTION_ERROR,
retmsg="No chunk found, please upload file and parse it.")
return get_json_result(retcode=RetCode.EXCEPTION_ERROR, retmsg=repr(e))
code=settings.RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=e.args[1])
return get_json_result(code=settings.RetCode.EXCEPTION_ERROR, message=repr(e))
def error_response(response_code, retmsg=None):
if retmsg is None:
retmsg = HTTP_STATUS_CODES.get(response_code, 'Unknown Error')
def error_response(response_code, message=None):
if message is None:
message = HTTP_STATUS_CODES.get(response_code, 'Unknown Error')
return Response(json.dumps({
'retmsg': retmsg,
'retcode': response_code,
'message': message,
'code': response_code,
}), status=response_code, mimetype='application/json')
@ -190,7 +167,7 @@ def validate_request(*args, **kwargs):
error_string += "required argument values: {}".format(
",".join(["{}={}".format(a[0], a[1]) for a in error_arguments]))
return get_json_result(
retcode=RetCode.ARGUMENT_ERROR, retmsg=error_string)
code=settings.RetCode.ARGUMENT_ERROR, message=error_string)
return func(*_args, **_kwargs)
return decorated_function
@ -215,17 +192,38 @@ def send_file_in_mem(data, filename):
return send_file(f, as_attachment=True, attachment_filename=filename)
def get_json_result(retcode=RetCode.SUCCESS, retmsg='success', data=None):
response = {"retcode": retcode, "retmsg": retmsg, "data": data}
def get_json_result(code=settings.RetCode.SUCCESS, message='success', data=None):
response = {"code": code, "message": message, "data": data}
return jsonify(response)
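The response envelope migrates from {"retcode", "retmsg"} to {"code", "message"} across these helpers. A client written against the old schema needs a small shim during the transition; an illustrative one:

def unwrap(payload: dict):
    # Accept both the old and the new envelope.
    code = payload.get("code", payload.get("retcode"))
    message = payload.get("message", payload.get("retmsg", ""))
    if code != 0:
        raise RuntimeError(f"API error {code}: {message}")
    return payload.get("data")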
def apikey_required(func):
@wraps(func)
def decorated_function(*args, **kwargs):
token = flask_request.headers.get('Authorization').split()[1]
objs = APIToken.query(token=token)
if not objs:
return build_error_result(
message='API-KEY is invalid!', code=settings.RetCode.FORBIDDEN
)
kwargs['tenant_id'] = objs[0].tenant_id
return func(*args, **kwargs)
def construct_response(retcode=RetCode.SUCCESS,
retmsg='success', data=None, auth=None):
result_dict = {"retcode": retcode, "retmsg": retmsg, "data": data}
return decorated_function
def build_error_result(code=settings.RetCode.FORBIDDEN, message='success'):
response = {"code": code, "message": message}
response = jsonify(response)
response.status_code = code
return response
def construct_response(code=settings.RetCode.SUCCESS,
message='success', data=None, auth=None):
result_dict = {"code": code, "message": message, "data": data}
response_dict = {}
for key, value in result_dict.items():
if value is None and key != "retcode":
if value is None and key != "code":
continue
else:
response_dict[key] = value
@ -240,7 +238,7 @@ def construct_response(retcode=RetCode.SUCCESS,
return response
def construct_result(code=RetCode.DATA_ERROR, message='data is missing'):
def construct_result(code=settings.RetCode.DATA_ERROR, message='data is missing'):
import re
result_dict = {"code": code, "message": re.sub(r"rag", "seceum", message, flags=re.IGNORECASE)}
response = {}
@ -252,7 +250,7 @@ def construct_result(code=RetCode.DATA_ERROR, message='data is missing'):
return jsonify(response)
def construct_json_result(code=RetCode.SUCCESS, message='success', data=None):
def construct_json_result(code=settings.RetCode.SUCCESS, message='success', data=None):
if data is None:
return jsonify({"code": code, "message": message})
else:
@ -260,31 +258,99 @@ def construct_json_result(code=RetCode.SUCCESS, message='success', data=None):
def construct_error_response(e):
stat_logger.exception(e)
logging.exception(e)
try:
if e.code == 401:
return construct_json_result(code=RetCode.UNAUTHORIZED, message=repr(e))
return construct_json_result(code=settings.RetCode.UNAUTHORIZED, message=repr(e))
except BaseException:
pass
if len(e.args) > 1:
return construct_json_result(code=RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=e.args[1])
if repr(e).find("index_not_found_exception") >= 0:
return construct_json_result(code=RetCode.EXCEPTION_ERROR,
message="No chunk found, please upload file and parse it.")
return construct_json_result(code=RetCode.EXCEPTION_ERROR, message=repr(e))
return construct_json_result(code=settings.RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=e.args[1])
return construct_json_result(code=settings.RetCode.EXCEPTION_ERROR, message=repr(e))
def token_required(func):
@wraps(func)
def decorated_function(*args, **kwargs):
token = flask_request.headers.get('Authorization').split()[1]
authorization_list=flask_request.headers.get('Authorization').split()
if len(authorization_list) < 2:
return get_json_result(data=False,message="Please check your authorization format.")
token = authorization_list[1]
objs = APIToken.query(token=token)
if not objs:
return get_json_result(
data=False, retmsg='Token is not valid!', retcode=RetCode.AUTHENTICATION_ERROR
data=False, message='Token is not valid!', code=settings.RetCode.AUTHENTICATION_ERROR
)
kwargs['tenant_id'] = objs[0].tenant_id
return func(*args, **kwargs)
return decorated_function
def get_result(code=settings.RetCode.SUCCESS, message="", data=None):
if code == 0:
if data is not None:
response = {"code": code, "data": data}
else:
response = {"code": code}
else:
response = {"code": code, "message": message}
return jsonify(response)
def get_error_data_result(message='Sorry! Data missing!', code=settings.RetCode.DATA_ERROR,
):
import re
result_dict = {
"code": code,
"message": re.sub(
r"rag",
"seceum",
message,
flags=re.IGNORECASE)}
response = {}
for key, value in result_dict.items():
if value is None and key != "code":
continue
else:
response[key] = value
return jsonify(response)
def generate_confirmation_token(tenent_id):
serializer = URLSafeTimedSerializer(tenent_id)
return "ragflow-" + serializer.dumps(get_uuid(), salt=tenent_id)[2:34]
def valid(permission,valid_permission,language,valid_language,chunk_method,valid_chunk_method):
if valid_parameter(permission,valid_permission):
return valid_parameter(permission,valid_permission)
if valid_parameter(language,valid_language):
return valid_parameter(language,valid_language)
if valid_parameter(chunk_method,valid_chunk_method):
return valid_parameter(chunk_method,valid_chunk_method)
def valid_parameter(parameter,valid_values):
if parameter and parameter not in valid_values:
return get_error_data_result(f"'{parameter}' is not in {valid_values}")
def get_parser_config(chunk_method,parser_config):
if parser_config:
return parser_config
if not chunk_method:
chunk_method = "naive"
key_mapping={"naive":{"chunk_token_num": 128, "delimiter": "\\n!?;。;!?", "html4excel": False,"layout_recognize": True, "raptor": {"use_raptor": False}},
"qa":{"raptor":{"use_raptor":False}},
"resume":None,
"manual":{"raptor":{"use_raptor":False}},
"table":None,
"paper":{"raptor":{"use_raptor":False}},
"book":{"raptor":{"use_raptor":False}},
"laws":{"raptor":{"use_raptor":False}},
"presentation":{"raptor":{"use_raptor":False}},
"one":None,
"knowledge_graph":{"chunk_token_num":8192,"delimiter":"\\n!?;。;!?","entity_types":["organization","person","location","event","time"]},
"email":None,
"picture":None}
parser_config=key_mapping[chunk_method]
return parser_config
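`get_parser_config` returns the caller's config unchanged when one is supplied and otherwise falls back to the per-method defaults in `key_mapping`, with `naive` as the default chunk method. An illustrative call, assuming the function above is importable:

cfg = get_parser_config(None, None)  # falls back to the "naive" defaults
assert cfg["chunk_token_num"] == 128
assert cfg["raptor"] == {"use_raptor": False}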

View File

@ -25,6 +25,7 @@ from cachetools import LRUCache, cached
from ruamel.yaml import YAML
from api.db import FileType
from api.constants import IMG_BASE64_PREFIX
PROJECT_BASE = os.getenv("RAG_PROJECT_BASE") or os.getenv("RAG_DEPLOY_BASE")
RAG_BASE = os.getenv("RAG_BASE")
@@ -70,7 +71,7 @@ def get_home_cache_dir():
    dir = os.path.join(os.path.expanduser('~'), ".ragflow")
    try:
        os.mkdir(dir)
-   except OSError as error:
+   except OSError:
        pass
    return dir
@@ -168,23 +169,20 @@ def filename_type(filename):
return FileType.OTHER.value
-def thumbnail(filename, blob):
+def thumbnail_img(filename, blob):
    filename = filename.lower()
    if re.match(r".*\.pdf$", filename):
        pdf = pdfplumber.open(BytesIO(blob))
        buffered = BytesIO()
        pdf.pages[0].to_image(resolution=32).annotated.save(buffered, format="png")
-       return "data:image/png;base64," + \
-           base64.b64encode(buffered.getvalue()).decode("utf-8")
+       return buffered.getvalue()

    if re.match(r".*\.(jpg|jpeg|png|tif|gif|icon|ico|webp)$", filename):
        image = Image.open(BytesIO(blob))
        image.thumbnail((30, 30))
        buffered = BytesIO()
        image.save(buffered, format="png")
-       return "data:image/png;base64," + \
-           base64.b64encode(buffered.getvalue()).decode("utf-8")
+       return buffered.getvalue()

    if re.match(r".*\.(ppt|pptx)$", filename):
        import aspose.slides as slides
@@ -194,10 +192,19 @@ def thumbnail(filename, blob):
            buffered = BytesIO()
            presentation.slides[0].get_thumbnail(0.03, 0.03).save(
                buffered, drawing.imaging.ImageFormat.png)
-           return "data:image/png;base64," + \
-               base64.b64encode(buffered.getvalue()).decode("utf-8")
-       except Exception as e:
+           return buffered.getvalue()
+       except Exception:
            pass
    return None
def thumbnail(filename, blob):
    img = thumbnail_img(filename, blob)
    if img is not None:
        return IMG_BASE64_PREFIX + \
            base64.b64encode(img).decode("utf-8")
    else:
        return ''
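The refactor separates raw-thumbnail generation from data-URI encoding; a sketch of the resulting contract, with a placeholder input file:

```python
# Illustrative only: assumes the two functions above and that
# IMG_BASE64_PREFIX is the "data:image/png;base64," constant.
with open("sample.pdf", "rb") as f:  # placeholder file
    blob = f.read()

raw = thumbnail_img("sample.pdf", blob)  # PNG bytes, or None on failure
uri = thumbnail("sample.pdf", blob)      # data-URI string, or "" on failure
```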
def traversal_files(base):


@@ -14,300 +14,41 @@
# limitations under the License.
#
import os
-import typing
-import traceback
+import os.path
import logging
-import inspect
-from logging.handlers import TimedRotatingFileHandler
-from threading import RLock
+from logging.handlers import RotatingFileHandler
-from api.utils import file_utils
+def get_project_base_directory():
+    PROJECT_BASE = os.path.abspath(
+        os.path.join(
+            os.path.dirname(os.path.realpath(__file__)),
+            os.pardir,
+            os.pardir,
+        )
+    )
+    return PROJECT_BASE
+def initRootLogger(logfile_basename: str, log_level: int = logging.INFO, log_format: str = "%(asctime)-15s %(levelname)-8s %(process)d %(message)s"):
+    logger = logging.getLogger()
+    if logger.hasHandlers():
+        return
-class LoggerFactory(object):
-    TYPE = "FILE"
-    LOG_FORMAT = "[%(levelname)s] [%(asctime)s] [%(module)s.%(funcName)s] [line:%(lineno)d]: %(message)s"
-    logging.basicConfig(format=LOG_FORMAT)
-    LEVEL = logging.DEBUG
-    logger_dict = {}
-    global_handler_dict = {}
+    log_path = os.path.abspath(os.path.join(get_project_base_directory(), "logs", f"{logfile_basename}.log"))
-    LOG_DIR = None
-    PARENT_LOG_DIR = None
-    log_share = True
+    os.makedirs(os.path.dirname(log_path), exist_ok=True)
+    logger.setLevel(log_level)
+    formatter = logging.Formatter(log_format)
-    append_to_parent_log = None
+    handler1 = RotatingFileHandler(log_path, maxBytes=10*1024*1024, backupCount=5)
+    handler1.setLevel(log_level)
+    handler1.setFormatter(formatter)
+    logger.addHandler(handler1)
-    lock = RLock()
-    # CRITICAL = 50
-    # FATAL = CRITICAL
-    # ERROR = 40
-    # WARNING = 30
-    # WARN = WARNING
-    # INFO = 20
-    # DEBUG = 10
-    # NOTSET = 0
-    levels = (10, 20, 30, 40)
-    schedule_logger_dict = {}
+    handler2 = logging.StreamHandler()
+    handler2.setLevel(log_level)
+    handler2.setFormatter(formatter)
+    logger.addHandler(handler2)
@staticmethod
def set_directory(directory=None, parent_log_dir=None,
append_to_parent_log=None, force=False):
if parent_log_dir:
LoggerFactory.PARENT_LOG_DIR = parent_log_dir
if append_to_parent_log:
LoggerFactory.append_to_parent_log = append_to_parent_log
with LoggerFactory.lock:
if not directory:
directory = file_utils.get_project_base_directory("logs")
if not LoggerFactory.LOG_DIR or force:
LoggerFactory.LOG_DIR = directory
if LoggerFactory.log_share:
oldmask = os.umask(000)
os.makedirs(LoggerFactory.LOG_DIR, exist_ok=True)
os.umask(oldmask)
else:
os.makedirs(LoggerFactory.LOG_DIR, exist_ok=True)
for loggerName, ghandler in LoggerFactory.global_handler_dict.items():
for className, (logger,
handler) in LoggerFactory.logger_dict.items():
logger.removeHandler(ghandler)
ghandler.close()
LoggerFactory.global_handler_dict = {}
for className, (logger,
handler) in LoggerFactory.logger_dict.items():
logger.removeHandler(handler)
_handler = None
if handler:
handler.close()
if className != "default":
_handler = LoggerFactory.get_handler(className)
logger.addHandler(_handler)
LoggerFactory.assemble_global_handler(logger)
LoggerFactory.logger_dict[className] = logger, _handler
@staticmethod
def new_logger(name):
logger = logging.getLogger(name)
logger.propagate = False
logger.setLevel(LoggerFactory.LEVEL)
return logger
@staticmethod
def get_logger(class_name=None):
with LoggerFactory.lock:
if class_name in LoggerFactory.logger_dict.keys():
logger, handler = LoggerFactory.logger_dict[class_name]
if not logger:
logger, handler = LoggerFactory.init_logger(class_name)
else:
logger, handler = LoggerFactory.init_logger(class_name)
return logger
@staticmethod
def get_global_handler(logger_name, level=None, log_dir=None):
if not LoggerFactory.LOG_DIR:
return logging.StreamHandler()
if log_dir:
logger_name_key = logger_name + "_" + log_dir
else:
logger_name_key = logger_name + "_" + LoggerFactory.LOG_DIR
# if loggerName not in LoggerFactory.globalHandlerDict:
if logger_name_key not in LoggerFactory.global_handler_dict:
with LoggerFactory.lock:
if logger_name_key not in LoggerFactory.global_handler_dict:
handler = LoggerFactory.get_handler(
logger_name, level, log_dir)
LoggerFactory.global_handler_dict[logger_name_key] = handler
return LoggerFactory.global_handler_dict[logger_name_key]
@staticmethod
def get_handler(class_name, level=None, log_dir=None,
log_type=None, job_id=None):
if not log_type:
if not LoggerFactory.LOG_DIR or not class_name:
return logging.StreamHandler()
# return Diy_StreamHandler()
if not log_dir:
log_file = os.path.join(
LoggerFactory.LOG_DIR,
"{}.log".format(class_name))
else:
log_file = os.path.join(log_dir, "{}.log".format(class_name))
else:
log_file = os.path.join(log_dir, "rag_flow_{}.log".format(
log_type) if level == LoggerFactory.LEVEL else 'rag_flow_{}_error.log'.format(log_type))
os.makedirs(os.path.dirname(log_file), exist_ok=True)
if LoggerFactory.log_share:
handler = ROpenHandler(log_file,
when='D',
interval=1,
backupCount=14,
delay=True)
else:
handler = TimedRotatingFileHandler(log_file,
when='D',
interval=1,
backupCount=14,
delay=True)
if level:
handler.level = level
return handler
@staticmethod
def init_logger(class_name):
with LoggerFactory.lock:
logger = LoggerFactory.new_logger(class_name)
handler = None
if class_name:
handler = LoggerFactory.get_handler(class_name)
logger.addHandler(handler)
LoggerFactory.logger_dict[class_name] = logger, handler
else:
LoggerFactory.logger_dict["default"] = logger, handler
LoggerFactory.assemble_global_handler(logger)
return logger, handler
@staticmethod
def assemble_global_handler(logger):
if LoggerFactory.LOG_DIR:
for level in LoggerFactory.levels:
if level >= LoggerFactory.LEVEL:
level_logger_name = logging._levelToName[level]
logger.addHandler(
LoggerFactory.get_global_handler(
level_logger_name, level))
if LoggerFactory.append_to_parent_log and LoggerFactory.PARENT_LOG_DIR:
for level in LoggerFactory.levels:
if level >= LoggerFactory.LEVEL:
level_logger_name = logging._levelToName[level]
logger.addHandler(
LoggerFactory.get_global_handler(level_logger_name, level, LoggerFactory.PARENT_LOG_DIR))
def setDirectory(directory=None):
LoggerFactory.set_directory(directory)
def setLevel(level):
LoggerFactory.LEVEL = level
def getLogger(className=None, useLevelFile=False):
if className is None:
frame = inspect.stack()[1]
module = inspect.getmodule(frame[0])
className = 'stat'
return LoggerFactory.get_logger(className)
def exception_to_trace_string(ex):
return "".join(traceback.TracebackException.from_exception(ex).format())
class ROpenHandler(TimedRotatingFileHandler):
def _open(self):
prevumask = os.umask(000)
rtv = TimedRotatingFileHandler._open(self)
os.umask(prevumask)
return rtv
def sql_logger(job_id='', log_type='sql'):
key = job_id + log_type
if key in LoggerFactory.schedule_logger_dict.keys():
return LoggerFactory.schedule_logger_dict[key]
return get_job_logger(job_id=job_id, log_type=log_type)
def ready_log(msg, job=None, task=None, role=None, party_id=None, detail=None):
prefix, suffix = base_msg(job, task, role, party_id, detail)
return f"{prefix}{msg} ready{suffix}"
def start_log(msg, job=None, task=None, role=None, party_id=None, detail=None):
prefix, suffix = base_msg(job, task, role, party_id, detail)
return f"{prefix}start to {msg}{suffix}"
def successful_log(msg, job=None, task=None, role=None,
party_id=None, detail=None):
prefix, suffix = base_msg(job, task, role, party_id, detail)
return f"{prefix}{msg} successfully{suffix}"
def warning_log(msg, job=None, task=None, role=None,
party_id=None, detail=None):
prefix, suffix = base_msg(job, task, role, party_id, detail)
return f"{prefix}{msg} is not effective{suffix}"
def failed_log(msg, job=None, task=None, role=None,
party_id=None, detail=None):
prefix, suffix = base_msg(job, task, role, party_id, detail)
return f"{prefix}failed to {msg}{suffix}"
def base_msg(job=None, task=None, role: str = None,
party_id: typing.Union[str, int] = None, detail=None):
if detail:
detail_msg = f" detail: \n{detail}"
else:
detail_msg = ""
if task is not None:
return f"task {task.f_task_id} {task.f_task_version} ", f" on {task.f_role} {task.f_party_id}{detail_msg}"
elif job is not None:
return "", f" on {job.f_role} {job.f_party_id}{detail_msg}"
elif role and party_id:
return "", f" on {role} {party_id}{detail_msg}"
else:
return "", f"{detail_msg}"
def exception_to_trace_string(ex):
return "".join(traceback.TracebackException.from_exception(ex).format())
def get_logger_base_dir():
job_log_dir = file_utils.get_rag_flow_directory('logs')
return job_log_dir
def get_job_logger(job_id, log_type):
rag_flow_log_dir = file_utils.get_rag_flow_directory('logs', 'rag_flow')
job_log_dir = file_utils.get_rag_flow_directory('logs', job_id)
if not job_id:
log_dirs = [rag_flow_log_dir]
else:
if log_type == 'audit':
log_dirs = [job_log_dir, rag_flow_log_dir]
else:
log_dirs = [job_log_dir]
if LoggerFactory.log_share:
oldmask = os.umask(000)
os.makedirs(job_log_dir, exist_ok=True)
os.makedirs(rag_flow_log_dir, exist_ok=True)
os.umask(oldmask)
else:
os.makedirs(job_log_dir, exist_ok=True)
os.makedirs(rag_flow_log_dir, exist_ok=True)
logger = LoggerFactory.new_logger(f"{job_id}_{log_type}")
for job_log_dir in log_dirs:
handler = LoggerFactory.get_handler(class_name=None, level=LoggerFactory.LEVEL,
log_dir=job_log_dir, log_type=log_type, job_id=job_id)
error_handler = LoggerFactory.get_handler(
class_name=None,
level=logging.ERROR,
log_dir=job_log_dir,
log_type=log_type,
job_id=job_id)
logger.addHandler(handler)
logger.addHandler(error_handler)
with LoggerFactory.lock:
LoggerFactory.schedule_logger_dict[job_id + log_type] = logger
return logger
+    logging.captureWarnings(True)
+    msg = f"{logfile_basename} log path: {log_path}"
+    logger.info(msg)
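A hedged sketch of booting with the new root-logger setup that replaces LoggerFactory; the basename is illustrative:

```python
# Illustrative only: "ragflow_server" is a made-up logfile basename.
import logging
from api.utils.log_utils import initRootLogger

initRootLogger("ragflow_server")   # writes logs/ragflow_server.log and stderr
logging.info("server starting")    # routed through both root handlers
initRootLogger("ragflow_server")   # no-op: the root logger already has handlers
```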


@@ -77,4 +77,4 @@ def __get_pdf_from_html(
def is_valid_url(url: str) -> bool:
-   return bool(re.match(r"(https?|ftp|file)://[-A-Za-z0-9+&@#/%?=~_|!:,.;]+[-A-Za-z0-9+&@#/%=~_|]", url))
+   return bool(re.match(r"(https?)://[-A-Za-z0-9+&@#/%?=~_|!:,.;]+[-A-Za-z0-9+&@#/%=~_|]", url))

api/validation.py Normal file

@@ -0,0 +1,49 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import sys
def python_version_validation():
    # Check python version
    required_python_version = (3, 10)
    if sys.version_info < required_python_version:
        logging.info(
            f"Required Python: >= {required_python_version[0]}.{required_python_version[1]}. Current Python version: {sys.version_info[0]}.{sys.version_info[1]}."
        )
        sys.exit(1)
    else:
        logging.info(f"Python version: {sys.version_info[0]}.{sys.version_info[1]}")

python_version_validation()

# Download nltk data
def download_nltk_data():
    import nltk
    nltk.download('wordnet', halt_on_error=False, quiet=True)
    nltk.download('punkt_tab', halt_on_error=False, quiet=True)

try:
    from multiprocessing import Pool
    pool = Pool(processes=1)
    thread = pool.apply_async(download_nltk_data)
    binary = thread.get(timeout=60)
except Exception as e:
    print('\x1b[6;37;41m WARNING \x1b[0m' + "Downloading NLTK data failure.", flush=True)
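The timeout guard used here is a general pattern: run a flaky download in a worker process and abandon it rather than hang start-up. Isolated as a sketch with illustrative names:

```python
# Same pattern in isolation; fetch_data is a made-up task name.
from multiprocessing import Pool

def fetch_data():
    import nltk
    nltk.download("wordnet", quiet=True)

if __name__ == "__main__":
    with Pool(processes=1) as pool:
        try:
            pool.apply_async(fetch_data).get(timeout=60)
        except Exception:
            print("WARNING: download failed or timed out; continuing without it.")
```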


@@ -13,14 +13,57 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-import dotenv
-import typing
import os
+import subprocess
+RAGFLOW_VERSION_INFO = "unknown"
-def get_versions() -> typing.Mapping[str, typing.Any]:
-    dotenv.load_dotenv(dotenv.find_dotenv())
-    return dotenv.dotenv_values()
+def get_ragflow_version() -> str:
+    global RAGFLOW_VERSION_INFO
+    if RAGFLOW_VERSION_INFO != "unknown":
+        return RAGFLOW_VERSION_INFO
+    version_path = os.path.abspath(
+        os.path.join(
+            os.path.dirname(os.path.realpath(__file__)), os.pardir, "VERSION"
+        )
+    )
+    if os.path.exists(version_path):
+        with open(version_path, "r") as f:
+            RAGFLOW_VERSION_INFO = f.read().strip()
+    else:
+        RAGFLOW_VERSION_INFO = get_closest_tag_and_count()
+    LIGHTEN = int(os.environ.get("LIGHTEN", "0"))
+    RAGFLOW_VERSION_INFO += " slim" if LIGHTEN == 1 else " full"
+    return RAGFLOW_VERSION_INFO
-def get_rag_version() -> typing.Optional[str]:
-    return get_versions().get("RAGFLOW_IMAGE", "infiniflow/ragflow:dev").split(":")[-1]
def get_closest_tag_and_count():
try:
# Get the current commit hash
commit_id = (
subprocess.check_output(["git", "rev-parse", "--short", "HEAD"])
.strip()
.decode("utf-8")
)
# Get the closest tag
closest_tag = (
subprocess.check_output(["git", "describe", "--tags", "--abbrev=0"])
.strip()
.decode("utf-8")
)
# Get the commit count since the closest tag
process = subprocess.Popen(
["git", "rev-list", "--count", f"{closest_tag}..HEAD"],
stdout=subprocess.PIPE,
)
commits_count, _ = process.communicate()
commits_count = int(commits_count.strip())
if commits_count == 0:
return closest_tag
else:
return f"{commit_id}({closest_tag}~{commits_count})"
except Exception:
return "unknown"


@@ -0,0 +1,26 @@
{
"id": {"type": "varchar", "default": ""},
"doc_id": {"type": "varchar", "default": ""},
"kb_id": {"type": "varchar", "default": ""},
"create_time": {"type": "varchar", "default": ""},
"create_timestamp_flt": {"type": "float", "default": 0.0},
"img_id": {"type": "varchar", "default": ""},
"docnm_kwd": {"type": "varchar", "default": ""},
"title_tks": {"type": "varchar", "default": ""},
"title_sm_tks": {"type": "varchar", "default": ""},
"name_kwd": {"type": "varchar", "default": ""},
"important_kwd": {"type": "varchar", "default": ""},
"important_tks": {"type": "varchar", "default": ""},
"content_with_weight": {"type": "varchar", "default": ""},
"content_ltks": {"type": "varchar", "default": ""},
"content_sm_ltks": {"type": "varchar", "default": ""},
"page_num_list": {"type": "varchar", "default": ""},
"top_list": {"type": "varchar", "default": ""},
"position_list": {"type": "varchar", "default": ""},
"weight_int": {"type": "integer", "default": 0},
"weight_flt": {"type": "float", "default": 0.0},
"rank_int": {"type": "integer", "default": 0},
"available_int": {"type": "integer", "default": 1},
"knowledge_graph_kwd": {"type": "varchar", "default": ""},
"entities_kwd": {"type": "varchar", "default": ""}
}
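A small hedged sketch of consuming this column mapping; the path assumes the file lands at conf/infinity_mapping.json, and the loop is illustrative rather than RAGFlow's actual loader:

```python
# Illustrative reader for the schema above.
import json

with open("conf/infinity_mapping.json") as f:
    schema = json.load(f)

for column, spec in schema.items():
    print(f"{column}: {spec['type']} (default={spec['default']!r})")
```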


@@ -3,7 +3,7 @@
    {
        "name": "OpenAI",
        "logo": "",
-       "tags": "LLM,TEXT EMBEDDING,SPEECH2TEXT,MODERATION",
+       "tags": "LLM,TEXT EMBEDDING,TTS,TEXT RE-RANK,SPEECH2TEXT,MODERATION",
"status": "1",
"llm": [
{
@@ -89,9 +89,15 @@
    {
        "name": "Tongyi-Qianwen",
        "logo": "",
-       "tags": "LLM,TEXT EMBEDDING,SPEECH2TEXT,MODERATION",
+       "tags": "LLM,TEXT EMBEDDING,TEXT RE-RANK,TTS,SPEECH2TEXT,MODERATION",
"status": "1",
"llm": [
{
"llm_name": "qwen-long",
"tags": "LLM,CHAT,10000K",
"max_tokens": 1000000,
"model_type": "chat"
},
{
"llm_name": "qwen-turbo",
"tags": "LLM,CHAT,8K",
@@ -139,6 +145,12 @@
"tags": "LLM,CHAT,IMAGE2TEXT",
"max_tokens": 765,
"model_type": "image2text"
},
{
"llm_name": "gte-rerank",
"tags": "RE-RANK,4k",
"max_tokens": 4000,
"model_type": "rerank"
}
]
},
@@ -148,6 +160,18 @@
"tags": "LLM,TEXT EMBEDDING,SPEECH2TEXT,MODERATION",
"status": "1",
"llm": [
{
"llm_name": "glm-4-plus",
"tags": "LLM,CHAT,",
"max_tokens": 128000,
"model_type": "chat"
},
{
"llm_name": "glm-4-0520",
"tags": "LLM,CHAT,",
"max_tokens": 128000,
"model_type": "chat"
},
{
"llm_name": "glm-4",
"tags": "LLM,CHAT,",
@@ -172,6 +196,12 @@
"max_tokens": 128000,
"model_type": "chat"
},
{
"llm_name": "glm-4-flashx",
"tags": "LLM,CHAT,",
"max_tokens": 128000,
"model_type": "chat"
},
{
"llm_name": "glm-4-long",
"tags": "LLM,CHAT,",
@@ -190,6 +220,12 @@
"max_tokens": 2000,
"model_type": "image2text"
},
{
"llm_name": "glm-4-9b",
"tags": "LLM,CHAT,",
"max_tokens": 8192,
"model_type": "chat"
},
{
"llm_name": "embedding-2",
"tags": "TEXT EMBEDDING",
@@ -248,6 +284,12 @@
"tags": "LLM,CHAT",
"max_tokens": 128000,
"model_type": "chat"
},
{
"llm_name": "moonshot-v1-auto",
"tags": "LLM,CHAT,",
"max_tokens": 128000,
"model_type": "chat"
}
]
},
@@ -310,7 +352,7 @@
    {
        "name": "Xinference",
        "logo": "",
-       "tags": "LLM,TEXT EMBEDDING,SPEECH2TEXT,MODERATION,TEXT RE-RANK",
+       "tags": "LLM,TEXT EMBEDDING,TTS,SPEECH2TEXT,MODERATION,TEXT RE-RANK",
"status": "1",
"llm": []
},
@@ -616,16 +658,16 @@
        "llm_name": "gpt-4o",
        "tags": "LLM,CHAT,128K",
        "max_tokens": 128000,
-       "model_type": "chat,image2text"
+       "model_type": "image2text"
    },
    {
-       "llm_name": "gpt-35-turbo",
+       "llm_name": "gpt-3.5-turbo",
        "tags": "LLM,CHAT,4K",
        "max_tokens": 4096,
        "model_type": "chat"
    },
    {
-       "llm_name": "gpt-35-turbo-16k",
+       "llm_name": "gpt-3.5-turbo-16k",
"tags": "LLM,CHAT,16k",
"max_tokens": 16385,
"model_type": "chat"
@@ -1275,7 +1317,7 @@
        "llm": []
    },
    {
-       "name": "cohere",
+       "name": "Cohere",
"logo": "",
"tags": "LLM,TEXT EMBEDDING, TEXT RE-RANK",
"status": "1",
@@ -1993,6 +2035,60 @@
"max_tokens": 32768,
"model_type": "chat"
},
{
"llm_name": "Qwen/Qwen2.5-72B-Instruct-128K",
"tags": "LLM,CHAT,128k",
"max_tokens": 131072,
"model_type": "chat"
},
{
"llm_name": "Qwen/Qwen2.5-72B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 32768,
"model_type": "chat"
},
{
"llm_name": "Qwen/Qwen2.5-7B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 32768,
"model_type": "chat"
},
{
"llm_name": "Qwen/Qwen2.5-14B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 32768,
"model_type": "chat"
},
{
"llm_name": "Qwen/Qwen2.5-32B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 32768,
"model_type": "chat"
},
{
"llm_name": "Qwen/Qwen2.5-Math-72B-Instruct",
"tags": "LLM,CHAT,Math,4k",
"max_tokens": 4096,
"model_type": "chat"
},
{
"llm_name": "Qwen/Qwen2.5-Coder-7B-Instruct",
"tags": "LLM,CHAT,FIM,Coder,32k",
"max_tokens": 32768,
"model_type": "chat"
},
{
"llm_name": "Pro/Qwen/Qwen2.5-7B-Instruct",
"tags": "LLM,CHAT,32k",
"max_tokens": 32768,
"model_type": "chat"
},
{
"llm_name": "Pro/Qwen/Qwen2.5-Coder-7B-Instruct",
"tags": "LLM,CHAT,FIM,Coder,32k",
"max_tokens": 32768,
"model_type": "chat"
},
{
"llm_name": "01-ai/Yi-1.5-34B-Chat-16K",
"tags": "LLM,CHAT,16k",
@@ -2097,6 +2193,12 @@
"tags": "LLM,IMAGE2TEXT",
"status": "1",
"llm": [
{
"llm_name": "yi-lightning",
"tags": "LLM,CHAT,16k",
"max_tokens": 16384,
"model_type": "chat"
},
{
"llm_name": "yi-large",
"tags": "LLM,CHAT,32k",
@@ -2201,7 +2303,7 @@
    {
        "name": "XunFei Spark",
        "logo": "",
-       "tags": "LLM",
+       "tags": "LLM,TTS",
"status": "1",
"llm": []
},
@@ -2238,6 +2340,12 @@
"max_tokens": 204800,
"model_type": "chat"
},
{
"llm_name": "claude-3-5-sonnet-20241022",
"tags": "LLM,CHAT,200k",
"max_tokens": 204800,
"model_type": "chat"
},
{
"llm_name": "claude-3-opus-20240229",
"tags": "LLM,CHAT,200k",
@@ -2269,8 +2377,8 @@
                "model_type": "chat"
            },
            {
-               "llm_name": "claude-instant-1.2",
-               "tags": "LLM,CHAT,100k",
+               "llm_name": "claude-3-5-sonnet-20241022",
+               "tags": "LLM,CHAT,200k",
"max_tokens": 102400,
"model_type": "chat"
}
@@ -2346,11 +2454,11 @@
        "llm": []
    },
    {
-           "name": "HuggingFace",
-           "logo": "",
-           "tags": "TEXT EMBEDDING",
-           "status": "1",
-           "llm": []
-       }
+       "name": "HuggingFace",
+       "logo": "",
+       "tags": "TEXT EMBEDDING",
+       "status": "1",
+       "llm": []
+   }
]
}


@@ -1,200 +1,203 @@
{
{
"settings": {
"index": {
"number_of_shards": 2,
"number_of_replicas": 0,
"refresh_interval" : "1000ms"
"refresh_interval": "1000ms"
},
"similarity": {
"scripted_sim": {
"type": "scripted",
"script": {
"source": "double idf = Math.log(1+(field.docCount-term.docFreq+0.5)/(term.docFreq + 0.5))/Math.log(1+((field.docCount-0.5)/1.5)); return query.boost * idf * Math.min(doc.freq, 1);"
}
"scripted_sim": {
"type": "scripted",
"script": {
"source": "double idf = Math.log(1+(field.docCount-term.docFreq+0.5)/(term.docFreq + 0.5))/Math.log(1+((field.docCount-0.5)/1.5)); return query.boost * idf * Math.min(doc.freq, 1);"
}
}
}
},
"mappings": {
"properties": {
"lat_lon": {"type": "geo_point", "store":"true"}
},
"date_detection": "true",
"dynamic_templates": [
{
"int": {
"match": "*_int",
"mapping": {
"type": "integer",
"store": "true"
}
"properties": {
"lat_lon": {
"type": "geo_point",
"store": "true"
}
},
"date_detection": "true",
"dynamic_templates": [
{
"int": {
"match": "*_int",
"mapping": {
"type": "integer",
"store": "true"
}
},
{
"ulong": {
"match": "*_ulong",
"mapping": {
"type": "unsigned_long",
"store": "true"
}
}
},
{
"long": {
"match": "*_long",
"mapping": {
"type": "long",
"store": "true"
}
}
},
{
"short": {
"match": "*_short",
"mapping": {
"type": "short",
"store": "true"
}
}
},
{
"numeric": {
"match": "*_flt",
"mapping": {
"type": "float",
"store": true
}
}
},
{
"tks": {
"match": "*_tks",
"mapping": {
"type": "text",
"similarity": "scripted_sim",
"analyzer": "whitespace",
"store": true
}
}
},
{
"ltks":{
"match": "*_ltks",
"mapping": {
"type": "text",
"analyzer": "whitespace",
"store": true
}
}
},
{
"kwd": {
"match_pattern": "regex",
"match": "^(.*_(kwd|id|ids|uid|uids)|uid)$",
"mapping": {
"type": "keyword",
"similarity": "boolean",
"store": true
}
}
},
{
"dt": {
"match_pattern": "regex",
"match": "^.*(_dt|_time|_at)$",
"mapping": {
"type": "date",
"format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||yyyy-MM-dd_HH:mm:ss",
"store": true
}
}
},
{
"nested": {
"match": "*_nst",
"mapping": {
"type": "nested"
}
}
},
{
"object": {
"match": "*_obj",
"mapping": {
"type": "object",
"dynamic": "true"
}
}
},
{
"string": {
"match": "*_with_weight",
"mapping": {
"type": "text",
"index": "false",
"store": true
}
}
},
{
"string": {
"match": "*_fea",
"mapping": {
"type": "rank_feature"
}
}
},
{
"dense_vector": {
"match": "*_512_vec",
"mapping": {
"type": "dense_vector",
"index": true,
"similarity": "cosine",
"dims": 512
}
}
},
{
"dense_vector": {
"match": "*_768_vec",
"mapping": {
"type": "dense_vector",
"index": true,
"similarity": "cosine",
"dims": 768
}
}
},
{
"dense_vector": {
"match": "*_1024_vec",
"mapping": {
"type": "dense_vector",
"index": true,
"similarity": "cosine",
"dims": 1024
}
}
},
{
"dense_vector": {
"match": "*_1536_vec",
"mapping": {
"type": "dense_vector",
"index": true,
"similarity": "cosine",
"dims": 1536
}
}
},
{
"binary": {
"match": "*_bin",
"mapping": {
"type": "binary"
}
}
}
]
}
}
},
{
"ulong": {
"match": "*_ulong",
"mapping": {
"type": "unsigned_long",
"store": "true"
}
}
},
{
"long": {
"match": "*_long",
"mapping": {
"type": "long",
"store": "true"
}
}
},
{
"short": {
"match": "*_short",
"mapping": {
"type": "short",
"store": "true"
}
}
},
{
"numeric": {
"match": "*_flt",
"mapping": {
"type": "float",
"store": true
}
}
},
{
"tks": {
"match": "*_tks",
"mapping": {
"type": "text",
"similarity": "scripted_sim",
"analyzer": "whitespace",
"store": true
}
}
},
{
"ltks": {
"match": "*_ltks",
"mapping": {
"type": "text",
"analyzer": "whitespace",
"store": true
}
}
},
{
"kwd": {
"match_pattern": "regex",
"match": "^(.*_(kwd|id|ids|uid|uids)|uid)$",
"mapping": {
"type": "keyword",
"similarity": "boolean",
"store": true
}
}
},
{
"dt": {
"match_pattern": "regex",
"match": "^.*(_dt|_time|_at)$",
"mapping": {
"type": "date",
"format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||yyyy-MM-dd_HH:mm:ss",
"store": true
}
}
},
{
"nested": {
"match": "*_nst",
"mapping": {
"type": "nested"
}
}
},
{
"object": {
"match": "*_obj",
"mapping": {
"type": "object",
"dynamic": "true"
}
}
},
{
"string": {
"match": "*_(with_weight|list)$",
"mapping": {
"type": "text",
"index": "false",
"store": true
}
}
},
{
"string": {
"match": "*_fea",
"mapping": {
"type": "rank_feature"
}
}
},
{
"dense_vector": {
"match": "*_512_vec",
"mapping": {
"type": "dense_vector",
"index": true,
"similarity": "cosine",
"dims": 512
}
}
},
{
"dense_vector": {
"match": "*_768_vec",
"mapping": {
"type": "dense_vector",
"index": true,
"similarity": "cosine",
"dims": 768
}
}
},
{
"dense_vector": {
"match": "*_1024_vec",
"mapping": {
"type": "dense_vector",
"index": true,
"similarity": "cosine",
"dims": 1024
}
}
},
{
"dense_vector": {
"match": "*_1536_vec",
"mapping": {
"type": "dense_vector",
"index": true,
"similarity": "cosine",
"dims": 1536
}
}
},
{
"binary": {
"match": "*_bin",
"mapping": {
"type": "binary"
}
}
}
]
}
}


@@ -1 +0,0 @@
-../docker/service_conf.yaml

conf/service_conf.yaml Normal file

@@ -0,0 +1,77 @@
ragflow:
  host: 0.0.0.0
  http_port: 9380
mysql:
  name: 'rag_flow'
  user: 'root'
  password: 'infini_rag_flow'
  host: 'mysql'
  port: 5455
  max_connections: 100
  stale_timeout: 30
minio:
  user: 'rag_flow'
  password: 'infini_rag_flow'
  host: 'minio:9000'
es:
  hosts: 'http://es01:1200'
  username: 'elastic'
  password: 'infini_rag_flow'
infinity:
  uri: 'infinity:23817'
  db_name: 'default_db'
redis:
  db: 1
  password: 'infini_rag_flow'
  host: 'redis:6379'
# postgres:
#   name: 'rag_flow'
#   user: 'rag_flow'
#   password: 'infini_rag_flow'
#   host: 'postgres'
#   port: 5432
#   max_connections: 100
#   stale_timeout: 30
# s3:
#   endpoint: 'endpoint'
#   access_key: 'access_key'
#   secret_key: 'secret_key'
#   region: 'region'
# azure:
#   auth_type: 'sas'
#   container_url: 'container_url'
#   sas_token: 'sas_token'
# azure:
#   auth_type: 'spn'
#   account_url: 'account_url'
#   client_id: 'client_id'
#   secret: 'secret'
#   tenant_id: 'tenant_id'
#   container_name: 'container_name'
# user_default_llm:
#   factory: 'Tongyi-Qianwen'
#   api_key: 'sk-xxxxxxxxxxxxx'
#   base_url: ''
# oauth:
#   github:
#     client_id: xxxxxxxxxxxxxxxxxxxxxxxxx
#     secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
#     url: https://github.com/login/oauth/access_token
#   feishu:
#     app_id: cli_xxxxxxxxxxxxxxxxxxx
#     app_secret: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
#     app_access_token_url: https://open.feishu.cn/open-apis/auth/v3/app_access_token/internal
#     user_access_token_url: https://open.feishu.cn/open-apis/authen/v1/oidc/access_token
#     grant_type: 'authorization_code'
# authentication:
#   client:
#     switch: false
#     http_app_key:
#     http_secret_key:
#   site:
#     switch: false
# permission:
#   switch: false
#   component: false
#   dataset: false
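A hedged sketch of reading this file with plain PyYAML; RAGFlow's own loader may differ:

```python
# Illustrative only: direct read of conf/service_conf.yaml.
import yaml

with open("conf/service_conf.yaml") as f:
    conf = yaml.safe_load(f)

print(conf["ragflow"]["http_port"])  # 9380
print(conf["es"]["hosts"])           # http://es01:1200
print(conf["infinity"]["db_name"])   # default_db
```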


@@ -110,7 +110,7 @@ class RAGFlowDocxParser:
return lines
return ["\n".join(lines)]
-   def __call__(self, fnm, from_page=0, to_page=100000):
+   def __call__(self, fnm, from_page=0, to_page=100000000):
self.doc = Document(fnm) if isinstance(
fnm, str) else Document(BytesIO(fnm))
pn = 0 # parsed page
@@ -130,7 +130,7 @@ class RAGFlowDocxParser:
                    if 'lastRenderedPageBreak' in run._element.xml:
                        pn += 1
-           secs.append(("".join(runs_within_single_paragraph), p.style.name)) # then concat run.text as part of the paragraph
+           secs.append(("".join(runs_within_single_paragraph), p.style.name if hasattr(p.style, 'name') else '')) # then concat run.text as part of the paragraph
tbls = [self.__extract_table_content(tb) for tb in self.doc.tables]
return secs, tbls


@@ -16,11 +16,13 @@ import readability
import html_text
import chardet
def get_encoding(file):
    with open(file, 'rb') as f:
        tmp = chardet.detect(f.read())
        return tmp['encoding']
class RAGFlowHtmlParser:
def __call__(self, fnm, binary=None):
txt = ""


@@ -3,12 +3,11 @@
# from https://github.com/langchain-ai/langchain/blob/master/libs/text-splitters/langchain_text_splitters/json.py
import json
-from typing import Any, Dict, List, Optional
+from typing import Any
from rag.nlp import find_codec
class RAGFlowJsonParser:
def __init__(
-       self, max_chunk_size: int = 2000, min_chunk_size: Optional[int] = None
+       self, max_chunk_size: int = 2000, min_chunk_size: int | None = None
):
super().__init__()
self.max_chunk_size = max_chunk_size * 2
@@ -27,12 +26,12 @@ class RAGFlowJsonParser:
return sections
@staticmethod
-   def _json_size(data: Dict) -> int:
+   def _json_size(data: dict) -> int:
"""Calculate the size of the serialized JSON object."""
return len(json.dumps(data, ensure_ascii=False))
@staticmethod
-   def _set_nested_dict(d: Dict, path: List[str], value: Any) -> None:
+   def _set_nested_dict(d: dict, path: list[str], value: Any) -> None:
"""Set a value in a nested dictionary based on the given path."""
for key in path[:-1]:
d = d.setdefault(key, {})
@@ -54,10 +53,10 @@ class RAGFlowJsonParser:
def _json_split(
self,
-       data: Dict[str, Any],
-       current_path: Optional[List[str]] = None,
-       chunks: Optional[List[Dict]] = None,
-   ) -> List[Dict]:
+       data: dict[str, Any],
+       current_path: list[str] | None,
+       chunks: list[dict] | None,
+   ) -> list[dict]:
"""
Split json into maximum size dictionaries while preserving structure.
"""
@@ -87,9 +86,9 @@ class RAGFlowJsonParser:
    def split_json(
        self,
-       json_data: Dict[str, Any],
+       json_data: dict[str, Any],
        convert_lists: bool = False,
-   ) -> List[Dict]:
+   ) -> list[dict]:
"""Splits JSON into a list of JSON chunks"""
if convert_lists:
@@ -104,10 +103,10 @@ class RAGFlowJsonParser:
    def split_text(
        self,
-       json_data: Dict[str, Any],
+       json_data: dict[str, Any],
        convert_lists: bool = False,
        ensure_ascii: bool = True,
-   ) -> List[str]:
+   ) -> list[str]:
"""Splits JSON into a list of JSON formatted strings"""
chunks = self.split_json(json_data=json_data, convert_lists=convert_lists)

Some files were not shown because too many files have changed in this diff.