Compare commits

...

15 Commits

Author SHA1 Message Date
1f4a17863f Feat: read web api testcases (#12383)
### What problem does this PR solve?

Web API test cases for list_messages and get_recent_message.

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2026-01-01 12:52:40 +08:00
4d3a3a97ef Update HELP command of ADMIN CLI (#12387)
### What problem does this PR solve?

As title.

### Type of change

- [x] Refactoring

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2026-01-01 12:52:13 +08:00
ff1020ccfb ADMIN CLI: support grant/revoke user admin authorization (#12381)
### What problem does this PR solve?

```
admin> grant admin 'aaa@aaa1.com';
Fail to grant aaa@aaa1.com admin authorization, code: 404, message: User 'aaa@aaa1.com' not found
admin> grant admin 'aaa@aaa.com';
Grant successfully!
admin> revoke admin 'aaa1@aaa.com';
Fail to revoke aaa1@aaa.com admin authorization, code: 404, message: User 'aaa1@aaa.com' not found
admin> revoke admin 'aaa@aaa.com';
Revoke successfully!
admin> revoke admin 'aaa@aaa.com';
aaa@aaa.com isn't superuser, yet!
admin> grant admin 'aaa@aaa.com';
Grant successfully!
admin> grant admin 'aaa@aaa.com';
aaa@aaa.com is already superuser!
admin> revoke admin 'aaa@aaa.com';
Revoke successfully!

```

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2026-01-01 12:49:34 +08:00
ca3bd2cf9f Update README (#12386)
### Type of change

- [x] Documentation Update
2025-12-31 20:07:40 +08:00
eb661c028d Fix Tika version mismatch in Dockerfile.deps (3.0.0 → 3.2.3) (#12267)
Fixes #12266 

Dockerfile.deps still referenced `tika-server-standard-3.0.0.jar` even after the project moved to Tika 3.2.3 for security reasons.

This caused Docker builds to fail due to a version mismatch and a missing artifact.

Changes:
- Update Dockerfile.deps to consistently use Tika 3.2.3

No functional changes beyond dependency alignment.

Co-authored-by: Liu An <asiro@qq.com>
2025-12-31 19:55:39 +08:00
10c28c5ecd Feat: Refactoring the documentation page using shadcn. #10427 (#12376)
### What problem does this PR solve?

Feat: Refactoring the documentation page using shadcn. #10427

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-12-31 19:00:37 +08:00
96810b7d97 Fix: webdav connector (#12380)
### What problem does this PR solve?

fix webdav #11422

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-31 19:00:00 +08:00
365f9b01ae Fix: metadata data synchronization issues; add memory tab in home page (#12368)
### What problem does this PR solve?

fix: metadata data synchronization issues; add memory tab in home page

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-31 17:19:04 +08:00
7d4d687dde Feat: Bitbucket connector (#12332)
### What problem does this PR solve?

Feat: Bitbucket connector NOT READY TO MERGE

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-12-31 17:18:30 +08:00
6a664fea3b Docs: Updated v0.23.0 release notes (#12374)
### What problem does this PR solve?


### Type of change


- [x] Documentation Update
2025-12-31 17:10:15 +08:00
dcdc1b0ec7 Fix urls for basic docs (#12372)
### Type of change

- [x] Documentation Update
2025-12-31 17:02:34 +08:00
4af4c36e60 Docs: Added v0.23.1 release notes (#12371)
### What problem does this PR solve?


### Type of change

- [x] Documentation Update
2025-12-31 16:43:56 +08:00
05e5244d94 Refactor docs of RAGFlow admin (#12361)
### What problem does this PR solve?

as title

### Type of change

- [x] Documentation Update

Signed-off-by: Jin Hai <haijin.chn@gmail.com>
2025-12-31 14:42:53 +08:00
c2ee2bf7fe Feat: add Zendesk data source integration with configuration and sync capabilities (#12344)
### What problem does this PR solve?
issue: #12313
change: add Zendesk data source integration with configuration and sync capabilities

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2025-12-31 14:40:49 +08:00
461c81e14a Fix: KG search issue. (#12364)
### What problem does this PR solve?

Close #12347

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-12-31 14:40:27 +08:00
77 changed files with 3251 additions and 1166 deletions

View File

@ -19,17 +19,17 @@ RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co
# This is the only way to run python-tika without internet access. Without this set, the default is to check the tika version and pull latest every time from Apache.
RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/,target=/deps \
cp -r /deps/nltk_data /root/ && \
cp /deps/tika-server-standard-3.0.0.jar /deps/tika-server-standard-3.0.0.jar.md5 /ragflow/ && \
cp /deps/tika-server-standard-3.2.3.jar /deps/tika-server-standard-3.2.3.jar.md5 /ragflow/ && \
cp /deps/cl100k_base.tiktoken /ragflow/9b5ad71b2ce5302211f9c61530b329a4922fc6a4
ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard-3.0.0.jar"
ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard-3.2.3.jar"
ENV DEBIAN_FRONTEND=noninteractive
# Setup apt
# Python package and implicit dependencies:
# opencv-python: libglib2.0-0 libglx-mesa0 libgl1
# aspose-slides: pkg-config libicu-dev libgdiplus libssl1.1_1.1.1f-1ubuntu2_amd64.deb
# python-pptx: default-jdk tika-server-standard-3.0.0.jar
# python-pptx: default-jdk tika-server-standard-3.2.3.jar
# selenium: libatk-bridge2.0-0 chrome-linux64-121-0-6167-85
# Building C extensions: libpython3-dev libgtk-4-1 libnss3 xdg-utils libgbm-dev
RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \

View File

@ -3,7 +3,7 @@
FROM scratch
# Copy resources downloaded via download_deps.py
COPY chromedriver-linux64-121-0-6167-85 chrome-linux64-121-0-6167-85 cl100k_base.tiktoken libssl1.1_1.1.1f-1ubuntu2_amd64.deb libssl1.1_1.1.1f-1ubuntu2_arm64.deb tika-server-standard-3.0.0.jar tika-server-standard-3.0.0.jar.md5 libssl*.deb uv-x86_64-unknown-linux-gnu.tar.gz /
COPY chromedriver-linux64-121-0-6167-85 chrome-linux64-121-0-6167-85 cl100k_base.tiktoken libssl1.1_1.1.1f-1ubuntu2_amd64.deb libssl1.1_1.1.1f-1ubuntu2_arm64.deb tika-server-standard-3.2.3.jar tika-server-standard-3.2.3.jar.md5 libssl*.deb uv-x86_64-unknown-linux-gnu.tar.gz /
COPY nltk_data /nltk_data

View File

@ -72,7 +72,7 @@
## 💡 What is RAGFlow?
[RAGFlow](https://ragflow.io/) is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs. It offers a streamlined RAG workflow adaptable to enterprises of any scale. Powered by a converged context engine and pre-built agent templates, RAGFlow enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.
[RAGFlow](https://ragflow.io/) is a leading open-source Retrieval-Augmented Generation ([RAG](https://ragflow.io/basics/what-is-rag)) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs. It offers a streamlined RAG workflow adaptable to enterprises of any scale. Powered by a converged [context engine](https://ragflow.io/basics/what-is-agent-context-engine) and pre-built agent templates, RAGFlow enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.
## 🎮 Demo

View File

@ -72,7 +72,7 @@
## 💡 Apa Itu RAGFlow?
[RAGFlow](https://ragflow.io/) adalah mesin RAG (Retrieval-Augmented Generation) open-source terkemuka yang mengintegrasikan teknologi RAG mutakhir dengan kemampuan Agent untuk menciptakan lapisan kontekstual superior bagi LLM. Menyediakan alur kerja RAG yang efisien dan dapat diadaptasi untuk perusahaan segala skala. Didukung oleh mesin konteks terkonvergensi dan template Agent yang telah dipra-bangun, RAGFlow memungkinkan pengembang mengubah data kompleks menjadi sistem AI kesetiaan-tinggi dan siap-produksi dengan efisiensi dan presisi yang luar biasa.
[RAGFlow](https://ragflow.io/) adalah mesin [RAG](https://ragflow.io/basics/what-is-rag) (Retrieval-Augmented Generation) open-source terkemuka yang mengintegrasikan teknologi RAG mutakhir dengan kemampuan Agent untuk menciptakan lapisan kontekstual superior bagi LLM. Menyediakan alur kerja RAG yang efisien dan dapat diadaptasi untuk perusahaan segala skala. Didukung oleh mesin konteks terkonvergensi dan template Agent yang telah dipra-bangun, RAGFlow memungkinkan pengembang mengubah data kompleks menjadi sistem AI kesetiaan-tinggi dan siap-produksi dengan efisiensi dan presisi yang luar biasa.
## 🎮 Demo

View File

@ -53,7 +53,7 @@
## 💡 RAGFlow とは?
[RAGFlow](https://ragflow.io/) は、先進的なRAG(Retrieval-Augmented Generation)技術と Agent 機能を融合し、大規模言語モデル(LLM)に優れたコンテキスト層を構築する最先端のオープンソース RAG エンジンです。あらゆる規模の企業に対応可能な合理化された RAG ワークフローを提供し、統合型コンテキストエンジンと事前構築されたAgentテンプレートにより、開発者が複雑なデータを驚異的な効率性と精度で高精細なプロダクションレディAIシステムへ変換することを可能にします。
[RAGFlow](https://ragflow.io/) は、先進的な[RAG](https://ragflow.io/basics/what-is-rag)(Retrieval-Augmented Generation)技術と Agent 機能を融合し、大規模言語モデル(LLM)に優れたコンテキスト層を構築する最先端のオープンソース RAG エンジンです。あらゆる規模の企業に対応可能な合理化された RAG ワークフローを提供し、統合型[コンテキストエンジン](https://ragflow.io/basics/what-is-agent-context-engine)と事前構築されたAgentテンプレートにより、開発者が複雑なデータを驚異的な効率性と精度で高精細なプロダクションレディAIシステムへ変換することを可能にします。
## 🎮 Demo

View File

@ -54,7 +54,7 @@
## 💡 RAGFlow란?
[RAGFlow](https://ragflow.io/) 는 최첨단 RAG(Retrieval-Augmented Generation)와 Agent 기능을 융합하여 대규모 언어 모델(LLM)을 위한 우수한 컨텍스트 계층을 생성하는 선도적인 오픈소스 RAG 엔진입니다. 모든 규모의 기업에 적용 가능한 효율적인 RAG 워크플로를 제공하며, 통합 컨텍스트 엔진과 사전 구축된 Agent 템플릿을 통해 개발자들이 복잡한 데이터를 예외적인 효율성과 정밀도로 고급 구현도의 프로덕션 준비 완료 AI 시스템으로 변환할 수 있도록 지원합니다.
[RAGFlow](https://ragflow.io/) 는 최첨단 [RAG](https://ragflow.io/basics/what-is-rag)(Retrieval-Augmented Generation)와 Agent 기능을 융합하여 대규모 언어 모델(LLM)을 위한 우수한 컨텍스트 계층을 생성하는 선도적인 오픈소스 RAG 엔진입니다. 모든 규모의 기업에 적용 가능한 효율적인 RAG 워크플로를 제공하며, 통합 [컨텍스트 엔진](https://ragflow.io/basics/what-is-agent-context-engine)과 사전 구축된 Agent 템플릿을 통해 개발자들이 복잡한 데이터를 예외적인 효율성과 정밀도로 고급 구현도의 프로덕션 준비 완료 AI 시스템으로 변환할 수 있도록 지원합니다.
## 🎮 데모

View File

@ -73,7 +73,7 @@
## 💡 O que é o RAGFlow?
[RAGFlow](https://ragflow.io/) é um mecanismo de RAG (Retrieval-Augmented Generation) open-source líder que fusiona tecnologias RAG de ponta com funcionalidades Agent para criar uma camada contextual superior para LLMs. Oferece um fluxo de trabalho RAG otimizado adaptável a empresas de qualquer escala. Alimentado por um motor de contexto convergente e modelos Agent pré-construídos, o RAGFlow permite que desenvolvedores transformem dados complexos em sistemas de IA de alta fidelidade e pronto para produção com excepcional eficiência e precisão.
[RAGFlow](https://ragflow.io/) é um mecanismo de [RAG](https://ragflow.io/basics/what-is-rag) (Retrieval-Augmented Generation) open-source líder que fusiona tecnologias RAG de ponta com funcionalidades Agent para criar uma camada contextual superior para LLMs. Oferece um fluxo de trabalho RAG otimizado adaptável a empresas de qualquer escala. Alimentado por [um motor de contexto](https://ragflow.io/basics/what-is-agent-context-engine) convergente e modelos Agent pré-construídos, o RAGFlow permite que desenvolvedores transformem dados complexos em sistemas de IA de alta fidelidade e pronto para produção com excepcional eficiência e precisão.
## 🎮 Demo

View File

@ -72,7 +72,7 @@
## 💡 RAGFlow 是什麼?
[RAGFlow](https://ragflow.io/) 是一款領先的開源 RAG(Retrieval-Augmented Generation)引擎,通過融合前沿的 RAG 技術與 Agent 能力,為大型語言模型提供卓越的上下文層。它提供可適配任意規模企業的端到端 RAG 工作流,憑藉融合式上下文引擎與預置的 Agent 模板,助力開發者以極致效率與精度將複雜數據轉化為高可信、生產級的人工智能系統。
[RAGFlow](https://ragflow.io/) 是一款領先的開源 [RAG](https://ragflow.io/basics/what-is-rag)(Retrieval-Augmented Generation)引擎,通過融合前沿的 RAG 技術與 Agent 能力,為大型語言模型提供卓越的上下文層。它提供可適配任意規模企業的端到端 RAG 工作流,憑藉融合式[上下文引擎](https://ragflow.io/basics/what-is-agent-context-engine)與預置的 Agent 模板,助力開發者以極致效率與精度將複雜數據轉化為高可信、生產級的人工智能系統。
## 🎮 Demo 試用

View File

@ -72,7 +72,7 @@
## 💡 RAGFlow 是什么?
[RAGFlow](https://ragflow.io/) 是一款领先的开源检索增强生成(RAG)引擎,通过融合前沿的 RAG 技术与 Agent 能力,为大型语言模型提供卓越的上下文层。它提供可适配任意规模企业的端到端 RAG 工作流,凭借融合式上下文引擎与预置的 Agent 模板,助力开发者以极致效率与精度将复杂数据转化为高可信、生产级的人工智能系统。
[RAGFlow](https://ragflow.io/) 是一款领先的开源检索增强生成([RAG](https://ragflow.io/basics/what-is-rag))引擎,通过融合前沿的 RAG 技术与 Agent 能力,为大型语言模型提供卓越的上下文层。它提供可适配任意规模企业的端到端 RAG 工作流,凭借融合式[上下文引擎](https://ragflow.io/basics/what-is-agent-context-engine)与预置的 Agent 模板,助力开发者以极致效率与精度将复杂数据转化为高可信、生产级的人工智能系统。
## 🎮 Demo 试用

View File

@ -53,6 +53,8 @@ sql_command: list_services
| alter_user_role
| show_user_permission
| show_version
| grant_admin
| revoke_admin
// meta command definition
meta_command: "\\" meta_command_name [meta_args]
@ -77,6 +79,7 @@ DROP: "DROP"i
USER: "USER"i
ALTER: "ALTER"i
ACTIVE: "ACTIVE"i
ADMIN: "ADMIN"i
PASSWORD: "PASSWORD"i
DATASETS: "DATASETS"i
OF: "OF"i
@ -123,6 +126,9 @@ revoke_permission: REVOKE action_list ON identifier FROM ROLE identifier ";"
alter_user_role: ALTER USER quoted_string SET ROLE identifier ";"
show_user_permission: SHOW USER PERMISSION quoted_string ";"
grant_admin: GRANT ADMIN quoted_string ";"
revoke_admin: REVOKE ADMIN quoted_string ";"
show_version: SHOW VERSION ";"
action_list: identifier ("," identifier)*
@ -249,6 +255,14 @@ class AdminTransformer(Transformer):
def show_version(self, items):
return {"type": "show_version"}
def grant_admin(self, items):
user_name = items[2]
return {"type": "grant_admin", "user_name": user_name}
def revoke_admin(self, items):
user_name = items[2]
return {"type": "revoke_admin", "user_name": user_name}
def action_list(self, items):
return items
@ -286,6 +300,43 @@ def encode_to_base64(input_string):
return base64_encoded.decode("utf-8")
def show_help():
"""Help info"""
help_text = """
Commands:
LIST SERVICES
SHOW SERVICE <service>
STARTUP SERVICE <service>
SHUTDOWN SERVICE <service>
RESTART SERVICE <service>
LIST USERS
SHOW USER <user>
DROP USER <user>
CREATE USER <user> <password>
ALTER USER PASSWORD <user> <new_password>
ALTER USER ACTIVE <user> <on/off>
LIST DATASETS OF <user>
LIST AGENTS OF <user>
CREATE ROLE <role>
DROP ROLE <role>
ALTER ROLE <role> SET DESCRIPTION <description>
LIST ROLES
SHOW ROLE <role>
GRANT <action_list> ON <function> TO ROLE <role>
REVOKE <action_list> ON <function> FROM ROLE <role>
ALTER USER <user> SET ROLE <role>
SHOW USER PERMISSION <user>
SHOW VERSION
GRANT ADMIN <user>
REVOKE ADMIN <user>
Meta Commands:
\\?, \\h, \\help Show this help
\\q, \\quit, \\exit Quit the CLI
"""
print(help_text)
class AdminCLI(Cmd):
def __init__(self):
super().__init__()
@ -566,6 +617,10 @@ class AdminCLI(Cmd):
self._show_user_permission(command_dict)
case "show_version":
self._show_version(command_dict)
case "grant_admin":
self._grant_admin(command_dict)
case "revoke_admin":
self._revoke_admin(command_dict)
case "meta":
self._handle_meta_command(command_dict)
case _:
@ -698,6 +753,33 @@ class AdminCLI(Cmd):
else:
print(f"Unknown activate status: {activate_status}.")
def _grant_admin(self, command):
user_name_tree: Tree = command["user_name"]
user_name: str = user_name_tree.children[0].strip("'\"")
url = f"http://{self.host}:{self.port}/api/v1/admin/users/{user_name}/admin"
# print(f"Grant admin: {url}")
# return
response = self.session.put(url)
res_json = response.json()
if response.status_code == 200:
print(res_json["message"])
else:
print(f"Fail to grant {user_name} admin authorization, code: {res_json['code']}, message: {res_json['message']}")
def _revoke_admin(self, command):
user_name_tree: Tree = command["user_name"]
user_name: str = user_name_tree.children[0].strip("'\"")
url = f"http://{self.host}:{self.port}/api/v1/admin/users/{user_name}/admin"
# print(f"Revoke admin: {url}")
# return
response = self.session.delete(url)
res_json = response.json()
if response.status_code == 200:
print(res_json["message"])
else:
print(f"Fail to revoke {user_name} admin authorization, code: {res_json['code']}, message: {res_json['message']}")
def _handle_list_datasets(self, command):
username_tree: Tree = command["user_name"]
user_name: str = username_tree.children[0].strip("'\"")
@ -873,36 +955,12 @@ class AdminCLI(Cmd):
args = command.get("args", [])
if meta_command in ["?", "h", "help"]:
self.show_help()
show_help()
elif meta_command in ["q", "quit", "exit"]:
print("Goodbye!")
else:
print(f"Meta command '{meta_command}' with args {args}")
def show_help(self):
"""Help info"""
help_text = """
Commands:
LIST SERVICES
SHOW SERVICE <service>
STARTUP SERVICE <service>
SHUTDOWN SERVICE <service>
RESTART SERVICE <service>
LIST USERS
SHOW USER <user>
DROP USER <user>
CREATE USER <user> <password>
ALTER USER PASSWORD <user> <new_password>
ALTER USER ACTIVE <user> <on/off>
LIST DATASETS OF <user>
LIST AGENTS OF <user>
Meta Commands:
\\?, \\h, \\help Show this help
\\q, \\quit, \\exit Quit the CLI
"""
print(help_text)
def main():
import sys

View File

@ -158,6 +158,36 @@ def alter_user_activate_status(username):
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/admin', methods=['PUT'])
@login_required
@check_admin_auth
def grant_admin(username):
try:
if current_user.email == username:
return error_response(f"can't grant current user: {username}", 409)
msg = UserMgr.grant_admin(username)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>/admin', methods=['DELETE'])
@login_required
@check_admin_auth
def revoke_admin(username):
try:
if current_user.email == username:
return error_response(f"can't grant current user: {username}", 409)
msg = UserMgr.revoke_admin(username)
return success_response(None, msg)
except AdminException as e:
return error_response(e.message, e.code)
except Exception as e:
return error_response(str(e), 500)
@admin_bp.route('/users/<username>', methods=['GET'])
@login_required
@check_admin_auth

View File

@ -137,6 +137,38 @@ class UserMgr:
UserService.update_user(usr.id, {"is_active": target_status})
return f"Turn {_activate_status} user activate status successfully!"
@staticmethod
def grant_admin(username: str):
# Use email to look up the user; it must exist and be unique.
user_list = UserService.query_user_by_email(username)
if not user_list:
raise UserNotFoundError(username)
elif len(user_list) > 1:
raise AdminException(f"Exist more than 1 user: {username}!")
# check activate status different from new
usr = user_list[0]
if usr.is_superuser:
return f"{usr} is already superuser!"
# Update is_superuser
UserService.update_user(usr.id, {"is_superuser": True})
return "Grant successfully!"
@staticmethod
def revoke_admin(username: str):
# Use email to look up the user; it must exist and be unique.
user_list = UserService.query_user_by_email(username)
if not user_list:
raise UserNotFoundError(username)
elif len(user_list) > 1:
raise AdminException(f"Exist more than 1 user: {username}!")
# check activate status different from new
usr = user_list[0]
if not usr.is_superuser:
return f"{usr} isn't superuser, yet!"
# Update is_superuser
UserService.update_user(usr.id, {"is_superuser": False})
return "Revoke successfully!"
class UserServiceMgr:
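
The whole grant/revoke path is driven by `UserMgr` above. A minimal pytest-style sketch of exercising `UserMgr.grant_admin`, assuming the module lives at `admin.user_mgr` and that `UserService` can be monkeypatched (the import path and test scaffolding are assumptions, not part of this diff):

```
# Hypothetical test sketch; admin.user_mgr is an assumed import path.
class FakeUser:
    id = "u1"
    is_superuser = False

    def __str__(self):
        # Matches the CLI transcript, which prints the user's email
        return "aaa@aaa.com"


def test_grant_admin(monkeypatch):
    from admin import user_mgr  # assumed module path
    monkeypatch.setattr(user_mgr.UserService, "query_user_by_email", lambda email: [FakeUser()])
    monkeypatch.setattr(user_mgr.UserService, "update_user", lambda uid, fields: None)

    assert user_mgr.UserMgr.grant_admin("aaa@aaa.com") == "Grant successfully!"

    FakeUser.is_superuser = True
    assert "already superuser" in user_mgr.UserMgr.grant_admin("aaa@aaa.com")
```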

View File

@ -202,7 +202,7 @@ class Retrieval(ToolBase, ABC):
kbinfos["chunks"] = settings.retriever.retrieval_by_children(kbinfos["chunks"],
[kb.tenant_id for kb in kbs])
if self._param.use_kg:
ck = settings.kg_retriever.retrieval(query,
ck = await settings.kg_retriever.retrieval(query,
[kb.tenant_id for kb in kbs],
kb_ids,
embd_mdl,
@ -215,7 +215,7 @@ class Retrieval(ToolBase, ABC):
kbinfos = {"chunks": [], "doc_aggs": []}
if self._param.use_kg and kbs:
ck = settings.kg_retriever.retrieval(query, [kb.tenant_id for kb in kbs], filtered_kb_ids, embd_mdl,
ck = await settings.kg_retriever.retrieval(query, [kb.tenant_id for kb in kbs], filtered_kb_ids, embd_mdl,
LLMBundle(kbs[0].tenant_id, LLMType.CHAT))
if self.check_if_canceled("Retrieval processing"):
return
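
The same sync-to-async fix repeats across the hunks below: `kg_retriever.retrieval` became a coroutine, so every call site must `await` it. Without `await`, the call returns a coroutine object rather than the result dict, and the later `ck["content_with_weight"]` access would fail. A minimal, self-contained illustration (the retriever here is hypothetical):

```
import asyncio

async def retrieval(q: str) -> dict:
    return {"content_with_weight": q}

async def main() -> None:
    coro = retrieval("kg")          # no await: a coroutine object, not a dict
    print(type(coro).__name__)      # -> coroutine
    coro.close()                    # avoid a "never awaited" warning
    result = await retrieval("kg")  # with await: the actual result
    print(result["content_with_weight"])  # -> kg

asyncio.run(main())
```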

View File

@ -381,7 +381,7 @@ async def retrieval_test():
rank_feature=labels
)
if use_kg:
ck = settings.kg_retriever.retrieval(_question,
ck = await settings.kg_retriever.retrieval(_question,
tenant_ids,
kb_ids,
embd_mdl,

View File

@ -150,7 +150,7 @@ async def retrieval(tenant_id):
)
if use_kg:
ck = settings.kg_retriever.retrieval(question,
ck = await settings.kg_retriever.retrieval(question,
[tenant_id],
[kb_id],
embd_mdl,

View File

@ -1579,7 +1579,7 @@ async def retrieval_test(tenant_id):
if cks:
ranks["chunks"] = cks
if use_kg:
ck = settings.kg_retriever.retrieval(question, [k.tenant_id for k in kbs], kb_ids, embd_mdl, LLMBundle(kb.tenant_id, LLMType.CHAT))
ck = await settings.kg_retriever.retrieval(question, [k.tenant_id for k in kbs], kb_ids, embd_mdl, LLMBundle(kb.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]:
ranks["chunks"].insert(0, ck)

View File

@ -1116,7 +1116,7 @@ async def retrieval_test_embedded():
local_doc_ids, rerank_mdl=rerank_mdl, highlight=req.get("highlight"), rank_feature=labels
)
if use_kg:
ck = settings.kg_retriever.retrieval(_question, tenant_ids, kb_ids, embd_mdl,
ck = await settings.kg_retriever.retrieval(_question, tenant_ids, kb_ids, embd_mdl,
LLMBundle(kb.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]:
ranks["chunks"].insert(0, ck)

View File

@ -421,7 +421,7 @@ async def async_chat(dialog, messages, stream=True, **kwargs):
kbinfos["chunks"].extend(tav_res["chunks"])
kbinfos["doc_aggs"].extend(tav_res["doc_aggs"])
if prompt_config.get("use_kg"):
ck = settings.kg_retriever.retrieval(" ".join(questions), tenant_ids, dialog.kb_ids, embd_mdl,
ck = await settings.kg_retriever.retrieval(" ".join(questions), tenant_ids, dialog.kb_ids, embd_mdl,
LLMBundle(dialog.tenant_id, LLMType.CHAT))
if ck["content_with_weight"]:
kbinfos["chunks"].insert(0, ck)

View File

@ -133,6 +133,8 @@ class FileSource(StrEnum):
GITHUB = "github"
GITLAB = "gitlab"
IMAP = "imap"
BITBUCKET = "bitbucket"
ZENDESK = "zendesk"
class PipelineTaskType(StrEnum):
PARSE = "Parse"

View File

@ -34,11 +34,11 @@ from .google_drive.connector import GoogleDriveConnector
from .jira.connector import JiraConnector
from .sharepoint_connector import SharePointConnector
from .teams_connector import TeamsConnector
from .webdav_connector import WebDAVConnector
from .moodle_connector import MoodleConnector
from .airtable_connector import AirtableConnector
from .asana_connector import AsanaConnector
from .imap_connector import ImapConnector
from .zendesk_connector import ZendeskConnector
from .config import BlobType, DocumentSource
from .models import Document, TextSection, ImageSection, BasicExpertInfo
from .exceptions import (
@ -61,7 +61,6 @@ __all__ = [
"JiraConnector",
"SharePointConnector",
"TeamsConnector",
"WebDAVConnector",
"MoodleConnector",
"BlobType",
"DocumentSource",
@ -76,5 +75,6 @@ __all__ = [
"UnexpectedValidationError",
"AirtableConnector",
"AsanaConnector",
"ImapConnector"
"ImapConnector",
"ZendeskConnector",
]

View File

@ -0,0 +1,388 @@
from __future__ import annotations
import copy
from collections.abc import Callable
from collections.abc import Iterator
from datetime import datetime
from datetime import timezone
from typing import Any
from typing import TYPE_CHECKING
from typing_extensions import override
from common.data_source.config import INDEX_BATCH_SIZE
from common.data_source.config import DocumentSource
from common.data_source.config import REQUEST_TIMEOUT_SECONDS
from common.data_source.exceptions import (
ConnectorMissingCredentialError,
CredentialExpiredError,
InsufficientPermissionsError,
UnexpectedValidationError,
)
from common.data_source.interfaces import CheckpointedConnector
from common.data_source.interfaces import CheckpointOutput
from common.data_source.interfaces import IndexingHeartbeatInterface
from common.data_source.interfaces import SecondsSinceUnixEpoch
from common.data_source.interfaces import SlimConnectorWithPermSync
from common.data_source.models import ConnectorCheckpoint
from common.data_source.models import ConnectorFailure
from common.data_source.models import DocumentFailure
from common.data_source.models import SlimDocument
from common.data_source.bitbucket.utils import (
build_auth_client,
list_repositories,
map_pr_to_document,
paginate,
PR_LIST_RESPONSE_FIELDS,
SLIM_PR_LIST_RESPONSE_FIELDS,
)
if TYPE_CHECKING:
import httpx
class BitbucketConnectorCheckpoint(ConnectorCheckpoint):
"""Checkpoint state for resumable Bitbucket PR indexing.
Fields:
repos_queue: Materialized list of repository slugs to process.
current_repo_index: Index of the repository currently being processed.
next_url: Bitbucket "next" URL for continuing pagination within the current repo.
"""
repos_queue: list[str] = []
current_repo_index: int = 0
next_url: str | None = None
class BitbucketConnector(
CheckpointedConnector[BitbucketConnectorCheckpoint],
SlimConnectorWithPermSync,
):
"""Connector for indexing Bitbucket Cloud pull requests.
Args:
workspace: Bitbucket workspace ID.
repositories: Comma-separated list of repository slugs to index.
projects: Comma-separated list of project keys to index all repositories within.
batch_size: Max number of documents to yield per batch.
"""
def __init__(
self,
workspace: str,
repositories: str | None = None,
projects: str | None = None,
batch_size: int = INDEX_BATCH_SIZE,
) -> None:
self.workspace = workspace
self._repositories = (
[s.strip() for s in repositories.split(",") if s.strip()]
if repositories
else None
)
self._projects: list[str] | None = (
[s.strip() for s in projects.split(",") if s.strip()] if projects else None
)
self.batch_size = batch_size
self.email: str | None = None
self.api_token: str | None = None
def load_credentials(self, credentials: dict[str, Any]) -> dict[str, Any] | None:
"""Load API token-based credentials.
Expects a dict with keys: `bitbucket_email`, `bitbucket_api_token`.
"""
self.email = credentials.get("bitbucket_email")
self.api_token = credentials.get("bitbucket_api_token")
if not self.email or not self.api_token:
raise ConnectorMissingCredentialError("Bitbucket")
return None
def _client(self) -> httpx.Client:
"""Build an authenticated HTTP client or raise if credentials missing."""
if not self.email or not self.api_token:
raise ConnectorMissingCredentialError("Bitbucket")
return build_auth_client(self.email, self.api_token)
def _iter_pull_requests_for_repo(
self,
client: httpx.Client,
repo_slug: str,
params: dict[str, Any] | None = None,
start_url: str | None = None,
on_page: Callable[[str | None], None] | None = None,
) -> Iterator[dict[str, Any]]:
base = f"https://api.bitbucket.org/2.0/repositories/{self.workspace}/{repo_slug}/pullrequests"
yield from paginate(
client,
base,
params,
start_url=start_url,
on_page=on_page,
)
def _build_params(
self,
fields: str = PR_LIST_RESPONSE_FIELDS,
start: SecondsSinceUnixEpoch | None = None,
end: SecondsSinceUnixEpoch | None = None,
) -> dict[str, Any]:
"""Build Bitbucket fetch params.
Always include OPEN, MERGED, and DECLINED PRs. If both ``start`` and
``end`` are provided, apply a single updated_on time window.
"""
def _iso(ts: SecondsSinceUnixEpoch) -> str:
return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
def _tc_epoch(
lower_epoch: SecondsSinceUnixEpoch | None,
upper_epoch: SecondsSinceUnixEpoch | None,
) -> str | None:
if lower_epoch is not None and upper_epoch is not None:
lower_iso = _iso(lower_epoch)
upper_iso = _iso(upper_epoch)
return f'(updated_on > "{lower_iso}" AND updated_on <= "{upper_iso}")'
return None
params: dict[str, Any] = {"fields": fields, "pagelen": 50}
time_clause = _tc_epoch(start, end)
q = '(state = "OPEN" OR state = "MERGED" OR state = "DECLINED")'
if time_clause:
q = f"{q} AND {time_clause}"
params["q"] = q
return params
def _iter_target_repositories(self, client: httpx.Client) -> Iterator[str]:
"""Yield repository slugs based on configuration.
Priority:
- repositories list
- projects list (list repos by project key)
- workspace (all repos)
"""
if self._repositories:
for slug in self._repositories:
yield slug
return
if self._projects:
for project_key in self._projects:
for repo in list_repositories(client, self.workspace, project_key):
slug_val = repo.get("slug")
if isinstance(slug_val, str) and slug_val:
yield slug_val
return
for repo in list_repositories(client, self.workspace, None):
slug_val = repo.get("slug")
if isinstance(slug_val, str) and slug_val:
yield slug_val
@override
def load_from_checkpoint(
self,
start: SecondsSinceUnixEpoch,
end: SecondsSinceUnixEpoch,
checkpoint: BitbucketConnectorCheckpoint,
) -> CheckpointOutput[BitbucketConnectorCheckpoint]:
"""Resumable PR ingestion across repos and pages within a time window.
Yields Documents (or ConnectorFailure for per-PR mapping failures) and returns
an updated checkpoint that records repo position and next page URL.
"""
new_checkpoint = copy.deepcopy(checkpoint)
with self._client() as client:
# Materialize target repositories once
if not new_checkpoint.repos_queue:
# Preserve explicit order; otherwise ensure deterministic ordering
repos_list = list(self._iter_target_repositories(client))
new_checkpoint.repos_queue = sorted(set(repos_list))
new_checkpoint.current_repo_index = 0
new_checkpoint.next_url = None
repos = new_checkpoint.repos_queue
if not repos or new_checkpoint.current_repo_index >= len(repos):
new_checkpoint.has_more = False
return new_checkpoint
repo_slug = repos[new_checkpoint.current_repo_index]
first_page_params = self._build_params(
fields=PR_LIST_RESPONSE_FIELDS,
start=start,
end=end,
)
def _on_page(next_url: str | None) -> None:
new_checkpoint.next_url = next_url
for pr in self._iter_pull_requests_for_repo(
client,
repo_slug,
params=first_page_params,
start_url=new_checkpoint.next_url,
on_page=_on_page,
):
try:
document = map_pr_to_document(pr, self.workspace, repo_slug)
yield document
except Exception as e:
pr_id = pr.get("id")
pr_link = (
f"https://bitbucket.org/{self.workspace}/{repo_slug}/pull-requests/{pr_id}"
if pr_id is not None
else None
)
yield ConnectorFailure(
failed_document=DocumentFailure(
document_id=(
f"{DocumentSource.BITBUCKET.value}:{self.workspace}:{repo_slug}:pr:{pr_id}"
if pr_id is not None
else f"{DocumentSource.BITBUCKET.value}:{self.workspace}:{repo_slug}:pr:unknown"
),
document_link=pr_link,
),
failure_message=f"Failed to process Bitbucket PR: {e}",
exception=e,
)
# Advance to next repository (if any) and set has_more accordingly
new_checkpoint.current_repo_index += 1
new_checkpoint.next_url = None
new_checkpoint.has_more = new_checkpoint.current_repo_index < len(repos)
return new_checkpoint
@override
def build_dummy_checkpoint(self) -> BitbucketConnectorCheckpoint:
"""Create an initial checkpoint with work remaining."""
return BitbucketConnectorCheckpoint(has_more=True)
@override
def validate_checkpoint_json(
self, checkpoint_json: str
) -> BitbucketConnectorCheckpoint:
"""Validate and deserialize a checkpoint instance from JSON."""
return BitbucketConnectorCheckpoint.model_validate_json(checkpoint_json)
def retrieve_all_slim_docs_perm_sync(
self,
start: SecondsSinceUnixEpoch | None = None,
end: SecondsSinceUnixEpoch | None = None,
callback: IndexingHeartbeatInterface | None = None,
) -> Iterator[list[SlimDocument]]:
"""Return only document IDs for all existing pull requests."""
batch: list[SlimDocument] = []
params = self._build_params(
fields=SLIM_PR_LIST_RESPONSE_FIELDS,
start=start,
end=end,
)
with self._client() as client:
for slug in self._iter_target_repositories(client):
for pr in self._iter_pull_requests_for_repo(
client, slug, params=params
):
pr_id = pr["id"]
doc_id = f"{DocumentSource.BITBUCKET.value}:{self.workspace}:{slug}:pr:{pr_id}"
batch.append(SlimDocument(id=doc_id))
if len(batch) >= self.batch_size:
yield batch
batch = []
if callback:
if callback.should_stop():
# Note: this is not actually used for permission sync yet, just pruning
raise RuntimeError(
"bitbucket_pr_sync: Stop signal detected"
)
callback.progress("bitbucket_pr_sync", len(batch))
if batch:
yield batch
def validate_connector_settings(self) -> None:
"""Validate Bitbucket credentials and workspace access by probing a lightweight endpoint.
Raises:
CredentialExpiredError: on HTTP 401
InsufficientPermissionsError: on HTTP 403
UnexpectedValidationError: on any other failure
"""
try:
with self._client() as client:
url = f"https://api.bitbucket.org/2.0/repositories/{self.workspace}"
resp = client.get(
url,
params={"pagelen": 1, "fields": "pagelen"},
timeout=REQUEST_TIMEOUT_SECONDS,
)
if resp.status_code == 401:
raise CredentialExpiredError(
"Invalid or expired Bitbucket credentials (HTTP 401)."
)
if resp.status_code == 403:
raise InsufficientPermissionsError(
"Insufficient permissions to access Bitbucket workspace (HTTP 403)."
)
if resp.status_code < 200 or resp.status_code >= 300:
raise UnexpectedValidationError(
f"Unexpected Bitbucket error (status={resp.status_code})."
)
except Exception as e:
# Network or other unexpected errors
if isinstance(
e,
(
CredentialExpiredError,
InsufficientPermissionsError,
UnexpectedValidationError,
ConnectorMissingCredentialError,
),
):
raise
raise UnexpectedValidationError(
f"Unexpected error while validating Bitbucket settings: {e}"
)
if __name__ == "__main__":
bitbucket = BitbucketConnector(
workspace="<YOUR_WORKSPACE>"
)
bitbucket.load_credentials({
"bitbucket_email": "<YOUR_EMAIL>",
"bitbucket_api_token": "<YOUR_API_TOKEN>",
})
bitbucket.validate_connector_settings()
print("Credentials validated successfully.")
start_time = datetime.fromtimestamp(0, tz=timezone.utc)
end_time = datetime.now(timezone.utc)
for doc_batch in bitbucket.retrieve_all_slim_docs_perm_sync(
start=start_time.timestamp(),
end=end_time.timestamp(),
):
for doc in doc_batch:
print(doc)
bitbucket_checkpoint = bitbucket.build_dummy_checkpoint()
while bitbucket_checkpoint.has_more:
gen = bitbucket.load_from_checkpoint(
start=start_time.timestamp(),
end=end_time.timestamp(),
checkpoint=bitbucket_checkpoint,
)
while True:
try:
doc = next(gen)
print(doc)
except StopIteration as e:
bitbucket_checkpoint = e.value
break

View File

@ -0,0 +1,288 @@
from __future__ import annotations
import time
from collections.abc import Callable
from collections.abc import Iterator
from datetime import datetime
from datetime import timezone
from typing import Any
import httpx
from common.data_source.config import REQUEST_TIMEOUT_SECONDS, DocumentSource
from common.data_source.cross_connector_utils.rate_limit_wrapper import (
rate_limit_builder,
)
from common.data_source.utils import sanitize_filename
from common.data_source.models import BasicExpertInfo, Document
from common.data_source.cross_connector_utils.retry_wrapper import retry_builder
# Fields requested from Bitbucket PR list endpoint to ensure rich PR data
PR_LIST_RESPONSE_FIELDS: str = ",".join(
[
"next",
"page",
"pagelen",
"values.author",
"values.close_source_branch",
"values.closed_by",
"values.comment_count",
"values.created_on",
"values.description",
"values.destination",
"values.draft",
"values.id",
"values.links",
"values.merge_commit",
"values.participants",
"values.reason",
"values.rendered",
"values.reviewers",
"values.source",
"values.state",
"values.summary",
"values.task_count",
"values.title",
"values.type",
"values.updated_on",
]
)
# Minimal fields for slim retrieval (IDs only)
SLIM_PR_LIST_RESPONSE_FIELDS: str = ",".join(
[
"next",
"page",
"pagelen",
"values.id",
]
)
# Minimal fields for repository list calls
REPO_LIST_RESPONSE_FIELDS: str = ",".join(
[
"next",
"page",
"pagelen",
"values.slug",
"values.full_name",
"values.project.key",
]
)
class BitbucketRetriableError(Exception):
"""Raised for retriable Bitbucket conditions (429, 5xx)."""
class BitbucketNonRetriableError(Exception):
"""Raised for non-retriable Bitbucket client errors (4xx except 429)."""
@retry_builder(
tries=6,
delay=1,
backoff=2,
max_delay=30,
exceptions=(BitbucketRetriableError, httpx.RequestError),
)
@rate_limit_builder(max_calls=60, period=60)
def bitbucket_get(
client: httpx.Client, url: str, params: dict[str, Any] | None = None
) -> httpx.Response:
"""Perform a GET against Bitbucket with retry and rate limiting.
Retries on 429 and 5xx responses, and on transport errors. Honors
`Retry-After` header for 429 when present by sleeping before retrying.
"""
try:
response = client.get(url, params=params, timeout=REQUEST_TIMEOUT_SECONDS)
except httpx.RequestError:
# Allow retry_builder to handle retries of transport errors
raise
try:
response.raise_for_status()
except httpx.HTTPStatusError as e:
status = e.response.status_code if e.response is not None else None
if status == 429:
retry_after = e.response.headers.get("Retry-After") if e.response else None
if retry_after is not None:
try:
time.sleep(int(retry_after))
except (TypeError, ValueError):
pass
raise BitbucketRetriableError("Bitbucket rate limit exceeded (429)") from e
if status is not None and 500 <= status < 600:
raise BitbucketRetriableError(f"Bitbucket server error: {status}") from e
if status is not None and 400 <= status < 500:
raise BitbucketNonRetriableError(f"Bitbucket client error: {status}") from e
# Unknown status, propagate
raise
return response
def build_auth_client(email: str, api_token: str) -> httpx.Client:
"""Create an authenticated httpx client for Bitbucket Cloud API."""
return httpx.Client(auth=(email, api_token), http2=True)
def paginate(
client: httpx.Client,
url: str,
params: dict[str, Any] | None = None,
start_url: str | None = None,
on_page: Callable[[str | None], None] | None = None,
) -> Iterator[dict[str, Any]]:
"""Iterate over paginated Bitbucket API responses yielding individual values.
Args:
client: Authenticated HTTP client.
url: Base collection URL (first page when start_url is None).
params: Query params for the first page.
start_url: If provided, start from this absolute URL (ignores params).
on_page: Optional callback invoked after each page with the next page URL.
"""
next_url = start_url or url
# If resuming from a next URL, do not pass params again
query = params.copy() if params else None
query = None if start_url else query
while next_url:
resp = bitbucket_get(client, next_url, params=query)
data = resp.json()
values = data.get("values", [])
for item in values:
yield item
next_url = data.get("next")
if on_page is not None:
on_page(next_url)
# only include params on first call, next_url will contain all necessary params
query = None
def list_repositories(
client: httpx.Client, workspace: str, project_key: str | None = None
) -> Iterator[dict[str, Any]]:
"""List repositories in a workspace, optionally filtered by project key."""
base_url = f"https://api.bitbucket.org/2.0/repositories/{workspace}"
params: dict[str, Any] = {
"fields": REPO_LIST_RESPONSE_FIELDS,
"pagelen": 100,
# Ensure deterministic ordering
"sort": "full_name",
}
if project_key:
params["q"] = f'project.key="{project_key}"'
yield from paginate(client, base_url, params)
def map_pr_to_document(pr: dict[str, Any], workspace: str, repo_slug: str) -> Document:
"""Map a Bitbucket pull request JSON to Onyx Document."""
pr_id = pr["id"]
title = pr.get("title") or f"PR {pr_id}"
description = pr.get("description") or ""
state = pr.get("state")
draft = pr.get("draft", False)
author = pr.get("author", {})
reviewers = pr.get("reviewers", [])
participants = pr.get("participants", [])
link = pr.get("links", {}).get("html", {}).get("href") or (
f"https://bitbucket.org/{workspace}/{repo_slug}/pull-requests/{pr_id}"
)
created_on = pr.get("created_on")
updated_on = pr.get("updated_on")
updated_dt = (
datetime.fromisoformat(updated_on.replace("Z", "+00:00")).astimezone(
timezone.utc
)
if isinstance(updated_on, str)
else None
)
source_branch = pr.get("source", {}).get("branch", {}).get("name", "")
destination_branch = pr.get("destination", {}).get("branch", {}).get("name", "")
approved_by = [
_get_user_name(p.get("user", {})) for p in participants if p.get("approved")
]
primary_owner = None
if author:
primary_owner = BasicExpertInfo(
display_name=_get_user_name(author),
)
# secondary_owners = [
# BasicExpertInfo(display_name=_get_user_name(r)) for r in reviewers
# ] or None
reviewer_names = [_get_user_name(r) for r in reviewers]
# Create a concise summary of key PR info
created_date = created_on.split("T")[0] if created_on else "N/A"
updated_date = updated_on.split("T")[0] if updated_on else "N/A"
content_text = (
"Pull Request Information:\n"
f"- Pull Request ID: {pr_id}\n"
f"- Title: {title}\n"
f"- State: {state or 'N/A'} {'(Draft)' if draft else ''}\n"
)
if state == "DECLINED":
content_text += f"- Reason: {pr.get('reason', 'N/A')}\n"
content_text += (
f"- Author: {_get_user_name(author) if author else 'N/A'}\n"
f"- Reviewers: {', '.join(reviewer_names) if reviewer_names else 'N/A'}\n"
f"- Branch: {source_branch} -> {destination_branch}\n"
f"- Created: {created_date}\n"
f"- Updated: {updated_date}"
)
if description:
content_text += f"\n\nDescription:\n{description}"
metadata: dict[str, str | list[str]] = {
"object_type": "PullRequest",
"workspace": workspace,
"repository": repo_slug,
"pr_key": f"{workspace}/{repo_slug}#{pr_id}",
"id": str(pr_id),
"title": title,
"state": state or "",
"draft": str(bool(draft)),
"link": link,
"author": _get_user_name(author) if author else "",
"reviewers": reviewer_names,
"approved_by": approved_by,
"comment_count": str(pr.get("comment_count", "")),
"task_count": str(pr.get("task_count", "")),
"created_on": created_on or "",
"updated_on": updated_on or "",
"source_branch": source_branch,
"destination_branch": destination_branch,
"closed_by": (
_get_user_name(pr.get("closed_by", {})) if pr.get("closed_by") else ""
),
"close_source_branch": str(bool(pr.get("close_source_branch", False))),
}
name = sanitize_filename(title, "md")
return Document(
id=f"{DocumentSource.BITBUCKET.value}:{workspace}:{repo_slug}:pr:{pr_id}",
blob=content_text.encode("utf-8"),
source=DocumentSource.BITBUCKET,
extension=".md",
semantic_identifier=f"#{pr_id}: {name}",
size_bytes=len(content_text.encode("utf-8")),
doc_updated_at=updated_dt,
primary_owners=[primary_owner] if primary_owner else None,
# secondary_owners=secondary_owners,
metadata=metadata,
)
def _get_user_name(user: dict[str, Any]) -> str:
return user.get("display_name") or user.get("nickname") or "unknown"
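
A usage sketch for the helpers above, assuming valid Bitbucket Cloud credentials (the email, token, and workspace values are placeholders):

```
from common.data_source.bitbucket.utils import build_auth_client, list_repositories

client = build_auth_client("me@example.com", "<API_TOKEN>")
try:
    # Repositories are returned sorted by full_name (see the params above)
    for repo in list_repositories(client, "my-workspace"):
        print(repo["slug"], repo.get("project", {}).get("key"))
finally:
    client.close()
```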

View File

@ -13,6 +13,9 @@ def get_current_tz_offset() -> int:
return round(time_diff.total_seconds() / 3600)
# Default request timeout, mostly used by connectors
REQUEST_TIMEOUT_SECONDS = int(os.environ.get("REQUEST_TIMEOUT_SECONDS") or 60)
ONE_MINUTE = 60
ONE_HOUR = 3600
ONE_DAY = ONE_HOUR * 24
@ -58,6 +61,8 @@ class DocumentSource(str, Enum):
GITHUB = "github"
GITLAB = "gitlab"
IMAP = "imap"
BITBUCKET = "bitbucket"
ZENDESK = "zendesk"
class FileOrigin(str, Enum):
@ -271,6 +276,10 @@ IMAP_CONNECTOR_SIZE_THRESHOLD = int(
os.environ.get("IMAP_CONNECTOR_SIZE_THRESHOLD", 10 * 1024 * 1024)
)
ZENDESK_CONNECTOR_SKIP_ARTICLE_LABELS = os.environ.get(
"ZENDESK_CONNECTOR_SKIP_ARTICLE_LABELS", ""
).split(",")
_USER_NOT_FOUND = "Unknown Confluence User"
_COMMENT_EXPANSION_FIELDS = ["body.storage.value"]
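
One subtlety in the `ZENDESK_CONNECTOR_SKIP_ARTICLE_LABELS` parsing above: `"".split(",")` yields `[""]`, so when the variable is unset the list contains one empty string. A minimal sketch of how a consumer might normalize it (the filtering step is an assumption, not code from this diff):

```
import os

raw = os.environ.get("ZENDESK_CONNECTOR_SKIP_ARTICLE_LABELS", "")
labels = [label.strip().lower() for label in raw.split(",") if label.strip()]
print(labels)  # -> [] when the variable is unset or empty
```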

View File

@ -0,0 +1,126 @@
import time
import logging
from collections.abc import Callable
from functools import wraps
from typing import Any
from typing import cast
from typing import TypeVar
import requests
F = TypeVar("F", bound=Callable[..., Any])
class RateLimitTriedTooManyTimesError(Exception):
pass
class _RateLimitDecorator:
"""Builds a generic wrapper/decorator for calls to external APIs that
prevents making more than `max_calls` requests per `period`
Implementation inspired by the `ratelimit` library:
https://github.com/tomasbasham/ratelimit.
NOTE: this is not thread safe.
"""
def __init__(
self,
max_calls: int,
period: float, # in seconds
sleep_time: float = 2, # in seconds
sleep_backoff: float = 2, # applies exponential backoff
max_num_sleep: int = 0,
):
self.max_calls = max_calls
self.period = period
self.sleep_time = sleep_time
self.sleep_backoff = sleep_backoff
self.max_num_sleep = max_num_sleep
self.call_history: list[float] = []
self.curr_calls = 0
def __call__(self, func: F) -> F:
@wraps(func)
def wrapped_func(*args: list, **kwargs: dict[str, Any]) -> Any:
# cleanup calls which are no longer relevant
self._cleanup()
# check if we've exceeded the rate limit
sleep_cnt = 0
while len(self.call_history) == self.max_calls:
sleep_time = self.sleep_time * (self.sleep_backoff**sleep_cnt)
logging.warning(
f"Rate limit exceeded for function {func.__name__}. "
f"Waiting {sleep_time} seconds before retrying."
)
time.sleep(sleep_time)
sleep_cnt += 1
if self.max_num_sleep != 0 and sleep_cnt >= self.max_num_sleep:
raise RateLimitTriedTooManyTimesError(
f"Exceeded '{self.max_num_sleep}' retries for function '{func.__name__}'"
)
self._cleanup()
# add the current call to the call history
self.call_history.append(time.monotonic())
return func(*args, **kwargs)
return cast(F, wrapped_func)
def _cleanup(self) -> None:
curr_time = time.monotonic()
time_to_expire_before = curr_time - self.period
self.call_history = [
call_time
for call_time in self.call_history
if call_time > time_to_expire_before
]
rate_limit_builder = _RateLimitDecorator
"""If you want to allow the external service to tell you when you've hit the rate limit,
use the following instead"""
R = TypeVar("R", bound=Callable[..., requests.Response])
def wrap_request_to_handle_ratelimiting(
request_fn: R, default_wait_time_sec: int = 30, max_waits: int = 30
) -> R:
def wrapped_request(*args: list, **kwargs: dict[str, Any]) -> requests.Response:
for _ in range(max_waits):
response = request_fn(*args, **kwargs)
if response.status_code == 429:
try:
wait_time = int(
response.headers.get("Retry-After", default_wait_time_sec)
)
except ValueError:
wait_time = default_wait_time_sec
time.sleep(wait_time)
continue
return response
raise RateLimitTriedTooManyTimesError(f"Exceeded '{max_waits}' retries")
return cast(R, wrapped_request)
_rate_limited_get = wrap_request_to_handle_ratelimiting(requests.get)
_rate_limited_post = wrap_request_to_handle_ratelimiting(requests.post)
class _RateLimitedRequest:
get = _rate_limited_get
post = _rate_limited_post
rl_requests = _RateLimitedRequest
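
A hypothetical usage of the decorator and the rate-limited `requests` shim defined above: cap a fetch helper at 30 calls per minute, sleeping with exponential backoff whenever the window is full:

```
# Sketch only; rate_limit_builder, rl_requests, and requests come from the module above.
@rate_limit_builder(max_calls=30, period=60)
def fetch_page(url: str) -> requests.Response:
    # rl_requests.get additionally honors HTTP 429 Retry-After responses
    return rl_requests.get(url, timeout=30)
```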

View File

@ -0,0 +1,88 @@
from collections.abc import Callable
import logging
from logging import Logger
from typing import Any
from typing import cast
from typing import TypeVar
import requests
from retry import retry
from common.data_source.config import REQUEST_TIMEOUT_SECONDS
F = TypeVar("F", bound=Callable[..., Any])
logger = logging.getLogger(__name__)
def retry_builder(
tries: int = 20,
delay: float = 0.1,
max_delay: float | None = 60,
backoff: float = 2,
jitter: tuple[float, float] | float = 1,
exceptions: type[Exception] | tuple[type[Exception], ...] = (Exception,),
) -> Callable[[F], F]:
"""Builds a generic wrapper/decorator for calls to external APIs that
may fail due to rate limiting, flakes, or other reasons. Applies exponential
backoff with jitter to retry the call."""
def retry_with_default(func: F) -> F:
@retry(
tries=tries,
delay=delay,
max_delay=max_delay,
backoff=backoff,
jitter=jitter,
logger=cast(Logger, logger),
exceptions=exceptions,
)
def wrapped_func(*args: list, **kwargs: dict[str, Any]) -> Any:
return func(*args, **kwargs)
return cast(F, wrapped_func)
return retry_with_default
def request_with_retries(
method: str,
url: str,
*,
data: dict[str, Any] | None = None,
headers: dict[str, Any] | None = None,
params: dict[str, Any] | None = None,
timeout: int = REQUEST_TIMEOUT_SECONDS,
stream: bool = False,
tries: int = 8,
delay: float = 1,
backoff: float = 2,
) -> requests.Response:
@retry(tries=tries, delay=delay, backoff=backoff, logger=cast(Logger, logger))
def _make_request() -> requests.Response:
response = requests.request(
method=method,
url=url,
data=data,
headers=headers,
params=params,
timeout=timeout,
stream=stream,
)
try:
response.raise_for_status()
except requests.exceptions.HTTPError:
logging.exception(
"Request failed:\n%s",
{
"method": method,
"url": url,
"data": data,
"headers": headers,
"params": params,
"timeout": timeout,
"stream": stream,
},
)
raise
return response
return _make_request()
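
And a hypothetical usage of `retry_builder`: retry a flaky GET up to five times with exponential backoff, but only on `requests` transport errors:

```
import requests

@retry_builder(tries=5, delay=0.5, backoff=2, exceptions=(requests.exceptions.RequestException,))
def ping(url: str) -> int:
    return requests.get(url, timeout=10).status_code
```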

View File

@ -19,7 +19,7 @@ from github.PaginatedList import PaginatedList
from github.PullRequest import PullRequest
from pydantic import BaseModel
from typing_extensions import override
from common.data_source.google_util.util import sanitize_filename
from common.data_source.utils import sanitize_filename
from common.data_source.config import DocumentSource, GITHUB_CONNECTOR_BASE_URL
from common.data_source.exceptions import (
ConnectorMissingCredentialError,

View File

@ -8,10 +8,10 @@ from common.data_source.config import INDEX_BATCH_SIZE, SLIM_BATCH_SIZE, Documen
from common.data_source.google_util.auth import get_google_creds
from common.data_source.google_util.constant import DB_CREDENTIALS_PRIMARY_ADMIN_KEY, MISSING_SCOPES_ERROR_STR, SCOPE_INSTRUCTIONS, USER_FIELDS
from common.data_source.google_util.resource import get_admin_service, get_gmail_service
from common.data_source.google_util.util import _execute_single_retrieval, execute_paginated_retrieval, sanitize_filename, clean_string
from common.data_source.google_util.util import _execute_single_retrieval, execute_paginated_retrieval, clean_string
from common.data_source.interfaces import LoadConnector, PollConnector, SecondsSinceUnixEpoch, SlimConnectorWithPermSync
from common.data_source.models import BasicExpertInfo, Document, ExternalAccess, GenerateDocumentsOutput, GenerateSlimDocumentOutput, SlimDocument, TextSection
from common.data_source.utils import build_time_range_query, clean_email_and_extract_name, get_message_body, is_mail_service_disabled_error, gmail_time_str_to_utc
from common.data_source.utils import build_time_range_query, clean_email_and_extract_name, get_message_body, is_mail_service_disabled_error, gmail_time_str_to_utc, sanitize_filename
# Constants for Gmail API fields
THREAD_LIST_FIELDS = "nextPageToken, threads(id)"

View File

@ -191,42 +191,6 @@ def get_credentials_from_env(email: str, oauth: bool = False, source="drive") ->
DB_CREDENTIALS_AUTHENTICATION_METHOD: "uploaded",
}
def sanitize_filename(name: str, extension: str = "txt") -> str:
"""
Soft sanitize for MinIO/S3:
- Replace only prohibited characters with a space.
- Preserve readability (no ugly underscores).
- Collapse multiple spaces.
"""
if name is None:
return f"file.{extension}"
name = str(name).strip()
# Characters that MUST NOT appear in S3/MinIO object keys
# Replace them with a space (not underscore)
forbidden = r'[\\\?\#\%\*\:\|\<\>"]'
name = re.sub(forbidden, " ", name)
# Replace slashes "/" (S3 interprets as folder) with space
name = name.replace("/", " ")
# Collapse multiple spaces into one
name = re.sub(r"\s+", " ", name)
# Trim both ends
name = name.strip()
# Enforce reasonable max length
if len(name) > 200:
base, ext = os.path.splitext(name)
name = base[:180].rstrip() + ext
if not os.path.splitext(name)[1]:
name += f".{extension}"
return name
def clean_string(text: str | None) -> str | None:
"""

View File

@ -1149,3 +1149,137 @@ def parallel_yield(gens: list[Iterator[R]], max_workers: int = 10) -> Iterator[R
future_to_index[executor.submit(_next_or_none, ind, gens[ind])] = next_ind
next_ind += 1
del future_to_index[future]
def sanitize_filename(name: str, extension: str = "txt") -> str:
"""
Soft sanitize for MinIO/S3:
- Replace only prohibited characters with a space.
- Preserve readability (no ugly underscores).
- Collapse multiple spaces.
"""
if name is None:
return f"file.{extension}"
name = str(name).strip()
# Characters that MUST NOT appear in S3/MinIO object keys
# Replace them with a space (not underscore)
forbidden = r'[\\\?\#\%\*\:\|\<\>"]'
name = re.sub(forbidden, " ", name)
# Replace slashes "/" (S3 interprets as folder) with space
name = name.replace("/", " ")
# Collapse multiple spaces into one
name = re.sub(r"\s+", " ", name)
# Trim both ends
name = name.strip()
# Enforce reasonable max length
if len(name) > 200:
base, ext = os.path.splitext(name)
name = base[:180].rstrip() + ext
if not os.path.splitext(name)[1]:
name += f".{extension}"
return name
F = TypeVar("F", bound=Callable[..., Any])
class _RateLimitDecorator:
"""Builds a generic wrapper/decorator for calls to external APIs that
prevents making more than `max_calls` requests per `period`
Implementation inspired by the `ratelimit` library:
https://github.com/tomasbasham/ratelimit.
NOTE: this is not thread safe.
"""
def __init__(
self,
max_calls: int,
period: float, # in seconds
sleep_time: float = 2, # in seconds
sleep_backoff: float = 2, # applies exponential backoff
max_num_sleep: int = 0,
):
self.max_calls = max_calls
self.period = period
self.sleep_time = sleep_time
self.sleep_backoff = sleep_backoff
self.max_num_sleep = max_num_sleep
self.call_history: list[float] = []
self.curr_calls = 0
def __call__(self, func: F) -> F:
@wraps(func)
def wrapped_func(*args: list, **kwargs: dict[str, Any]) -> Any:
# cleanup calls which are no longer relevant
self._cleanup()
# check if we've exceeded the rate limit
sleep_cnt = 0
while len(self.call_history) == self.max_calls:
sleep_time = self.sleep_time * (self.sleep_backoff**sleep_cnt)
logging.warning(
f"Rate limit exceeded for function {func.__name__}. "
f"Waiting {sleep_time} seconds before retrying."
)
time.sleep(sleep_time)
sleep_cnt += 1
if self.max_num_sleep != 0 and sleep_cnt >= self.max_num_sleep:
raise RateLimitTriedTooManyTimesError(
f"Exceeded '{self.max_num_sleep}' retries for function '{func.__name__}'"
)
self._cleanup()
# add the current call to the call history
self.call_history.append(time.monotonic())
return func(*args, **kwargs)
return cast(F, wrapped_func)
def _cleanup(self) -> None:
curr_time = time.monotonic()
time_to_expire_before = curr_time - self.period
self.call_history = [
call_time
for call_time in self.call_history
if call_time > time_to_expire_before
]
rate_limit_builder = _RateLimitDecorator
def retry_builder(
tries: int = 20,
delay: float = 0.1,
max_delay: float | None = 60,
backoff: float = 2,
jitter: tuple[float, float] | float = 1,
exceptions: type[Exception] | tuple[type[Exception], ...] = (Exception,),
) -> Callable[[F], F]:
"""Builds a generic wrapper/decorator for calls to external APIs that
may fail due to rate limiting, flakes, or other reasons. Applies exponential
backoff with jitter to retry the call."""
def retry_with_default(func: F) -> F:
@retry(
tries=tries,
delay=delay,
max_delay=max_delay,
backoff=backoff,
jitter=jitter,
logger=logging.getLogger(__name__),
exceptions=exceptions,
)
def wrapped_func(*args: list, **kwargs: dict[str, Any]) -> Any:
return func(*args, **kwargs)
return cast(F, wrapped_func)
return retry_with_default
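
For reference, the expected behavior of the relocated `sanitize_filename` (output derived by hand from the rules above, not from running the shipped code):

```
# ':' and '*' are replaced with spaces, '/' becomes a space, runs of
# spaces collapse, and the requested extension is appended when missing.
print(sanitize_filename("Q1/Q2 report: *final*", "md"))
# -> "Q1 Q2 report final.md"
```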

View File

@ -82,10 +82,6 @@ class WebDAVConnector(LoadConnector, PollConnector):
base_url=self.base_url,
auth=(username, password)
)
# Test connection
self.client.exists(self.remote_path)
except Exception as e:
logging.error(f"Failed to connect to WebDAV server: {e}")
raise ConnectorMissingCredentialError(
@ -308,60 +304,79 @@ class WebDAVConnector(LoadConnector, PollConnector):
yield batch
def validate_connector_settings(self) -> None:
"""Validate WebDAV connector settings
"""Validate WebDAV connector settings.
Raises:
ConnectorMissingCredentialError: If credentials are not loaded
ConnectorValidationError: If settings are invalid
Validation should exercise the same code-paths used by the connector
(directory listing / PROPFIND), avoiding exists() which may probe with
methods that differ across servers.
"""
if self.client is None:
raise ConnectorMissingCredentialError(
"WebDAV credentials not loaded."
)
raise ConnectorMissingCredentialError("WebDAV credentials not loaded.")
if not self.base_url:
raise ConnectorValidationError(
"No base URL was provided in connector settings."
)
raise ConnectorValidationError("No base URL was provided in connector settings.")
# Normalize directory path: for collections, many servers behave better with trailing '/'
test_path = self.remote_path or "/"
if not test_path.startswith("/"):
test_path = f"/{test_path}"
if test_path != "/" and not test_path.endswith("/"):
test_path = f"{test_path}/"
try:
if not self.client.exists(self.remote_path):
raise ConnectorValidationError(
f"Remote path '{self.remote_path}' does not exist on WebDAV server."
)
# Use the same behavior as real sync: list directory with details (PROPFIND)
self.client.ls(test_path, detail=True)
except Exception as e:
error_message = str(e)
# Prefer structured status codes if present on the exception/response
status = None
for attr in ("status_code", "code"):
v = getattr(e, attr, None)
if isinstance(v, int):
status = v
break
if status is None:
resp = getattr(e, "response", None)
v = getattr(resp, "status_code", None)
if isinstance(v, int):
status = v
if "401" in error_message or "unauthorized" in error_message.lower():
raise CredentialExpiredError(
"WebDAV credentials appear invalid or expired."
)
if "403" in error_message or "forbidden" in error_message.lower():
# If we can classify by status code, do it
if status == 401:
raise CredentialExpiredError("WebDAV credentials appear invalid or expired.")
if status == 403:
raise InsufficientPermissionsError(
f"Insufficient permissions to access path '{self.remote_path}' on WebDAV server."
)
if "404" in error_message or "not found" in error_message.lower():
if status == 404:
raise ConnectorValidationError(
f"Remote path '{self.remote_path}' does not exist on WebDAV server."
)
# Fallback: avoid brittle substring matching that caused false positives.
# Provide the original exception for diagnosis.
raise ConnectorValidationError(
f"Unexpected WebDAV client error: {e}"
f"WebDAV validation failed for path '{test_path}': {repr(e)}"
)
if __name__ == "__main__":
credentials_dict = {
"username": os.environ.get("WEBDAV_USERNAME"),
"password": os.environ.get("WEBDAV_PASSWORD"),
}
credentials_dict = {
"username": "user",
"password": "pass",
}
connector = WebDAVConnector(
base_url=os.environ.get("WEBDAV_URL") or "https://webdav.example.com",
remote_path=os.environ.get("WEBDAV_PATH") or "/",
base_url="http://172.17.0.1:8080/",
remote_path="/",
)
try:
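As an aside, the trailing-slash normalization used in validate_connector_settings above can be factored into a small helper; a sketch under the same rules (the function name is ours, not the connector's):

```python
def normalize_collection_path(path: str | None) -> str:
    # Collections generally behave better with a leading and trailing '/'.
    p = path or "/"
    if not p.startswith("/"):
        p = "/" + p
    if p != "/" and not p.endswith("/"):
        p += "/"
    return p

assert normalize_collection_path("docs/reports") == "/docs/reports/"
```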

View File

@ -0,0 +1,667 @@
import copy
import logging
import time
from collections.abc import Callable
from collections.abc import Iterator
from typing import Any
import requests
from pydantic import BaseModel
from requests.exceptions import HTTPError
from typing_extensions import override
from common.data_source.config import ZENDESK_CONNECTOR_SKIP_ARTICLE_LABELS, DocumentSource
from common.data_source.exceptions import ConnectorValidationError, CredentialExpiredError, InsufficientPermissionsError
from common.data_source.html_utils import parse_html_page_basic
from common.data_source.interfaces import CheckpointOutput, CheckpointOutputWrapper, CheckpointedConnector, IndexingHeartbeatInterface, SlimConnectorWithPermSync
from common.data_source.models import BasicExpertInfo, ConnectorCheckpoint, ConnectorFailure, Document, DocumentFailure, GenerateSlimDocumentOutput, SecondsSinceUnixEpoch, SlimDocument
from common.data_source.utils import retry_builder, time_str_to_utc, rate_limit_builder
MAX_PAGE_SIZE = 30 # Zendesk API maximum
MAX_AUTHOR_MAP_SIZE = 50_000 # Reset author map cache if it gets too large
_SLIM_BATCH_SIZE = 1000
class ZendeskCredentialsNotSetUpError(PermissionError):
def __init__(self) -> None:
super().__init__(
"Zendesk Credentials are not set up, was load_credentials called?"
)
class ZendeskClient:
def __init__(
self,
subdomain: str,
email: str,
token: str,
calls_per_minute: int | None = None,
):
self.base_url = f"https://{subdomain}.zendesk.com/api/v2"
self.auth = (f"{email}/token", token)
self.make_request = request_with_rate_limit(self, calls_per_minute)
def request_with_rate_limit(
client: ZendeskClient, max_calls_per_minute: int | None = None
) -> Callable[[str, dict[str, Any]], dict[str, Any]]:
@retry_builder()
@(
rate_limit_builder(max_calls=max_calls_per_minute, period=60)
if max_calls_per_minute
else lambda x: x
)
def make_request(endpoint: str, params: dict[str, Any]) -> dict[str, Any]:
response = requests.get(
f"{client.base_url}/{endpoint}", auth=client.auth, params=params
)
if response.status_code == 429:
retry_after = response.headers.get("Retry-After")
if retry_after is not None:
# Sleep for the duration indicated by the Retry-After header
time.sleep(int(retry_after))
elif (
response.status_code == 403
and response.json().get("error") == "SupportProductInactive"
):
return response.json()
response.raise_for_status()
return response.json()
return make_request
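The parenthesized conditional decorator above relies on the relaxed decorator grammar from PEP 614 (Python 3.9+). The same pattern in isolation, as a sketch with illustrative names:

```python
limit: int | None = None  # no throttling unless a limit is configured

@(rate_limit_builder(max_calls=limit, period=60) if limit else (lambda f: f))
def ping() -> str:
    return "pong"
```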
class ZendeskPageResponse(BaseModel):
data: list[dict[str, Any]]
meta: dict[str, Any]
has_more: bool
def _get_content_tag_mapping(client: ZendeskClient) -> dict[str, str]:
content_tags: dict[str, str] = {}
params = {"page[size]": MAX_PAGE_SIZE}
try:
while True:
data = client.make_request("guide/content_tags", params)
for tag in data.get("records", []):
content_tags[tag["id"]] = tag["name"]
# Check if there are more pages
if data.get("meta", {}).get("has_more", False):
params["page[after]"] = data["meta"]["after_cursor"]
else:
break
return content_tags
except Exception as e:
raise Exception(f"Error fetching content tags: {str(e)}")
def _get_articles(
client: ZendeskClient, start_time: int | None = None, page_size: int = MAX_PAGE_SIZE
) -> Iterator[dict[str, Any]]:
params = {"page[size]": page_size, "sort_by": "updated_at", "sort_order": "asc"}
if start_time is not None:
params["start_time"] = start_time
while True:
data = client.make_request("help_center/articles", params)
for article in data["articles"]:
yield article
if not data.get("meta", {}).get("has_more"):
break
params["page[after]"] = data["meta"]["after_cursor"]
def _get_article_page(
client: ZendeskClient,
start_time: int | None = None,
after_cursor: str | None = None,
page_size: int = MAX_PAGE_SIZE,
) -> ZendeskPageResponse:
params = {"page[size]": page_size, "sort_by": "updated_at", "sort_order": "asc"}
if start_time is not None:
params["start_time"] = start_time
if after_cursor is not None:
params["page[after]"] = after_cursor
data = client.make_request("help_center/articles", params)
return ZendeskPageResponse(
data=data["articles"],
meta=data["meta"],
has_more=bool(data["meta"].get("has_more", False)),
)
def _get_tickets(
client: ZendeskClient, start_time: int | None = None
) -> Iterator[dict[str, Any]]:
params = {"start_time": start_time or 0}
while True:
data = client.make_request("incremental/tickets.json", params)
for ticket in data["tickets"]:
yield ticket
if not data.get("end_of_stream", False):
params["start_time"] = data["end_time"]
else:
break
# TODO: maybe these don't need to be their own functions?
def _get_tickets_page(
client: ZendeskClient, start_time: int | None = None
) -> ZendeskPageResponse:
params = {"start_time": start_time or 0}
# NOTE: for some reason zendesk doesn't seem to be respecting the start_time param
# in my local testing with very few tickets. We'll look into it if this becomes an
# issue in larger deployments
data = client.make_request("incremental/tickets.json", params)
if data.get("error") == "SupportProductInactive":
raise ValueError(
"Zendesk Support Product is not active for this account, No tickets to index"
)
return ZendeskPageResponse(
data=data["tickets"],
meta={"end_time": data["end_time"]},
has_more=not bool(data.get("end_of_stream", False)),
)
def _fetch_author(
client: ZendeskClient, author_id: str | int
) -> BasicExpertInfo | None:
# Skip fetching if author_id is invalid
# cast to str to avoid issues with zendesk changing their types
if not author_id or str(author_id) == "-1":
return None
try:
author_data = client.make_request(f"users/{author_id}", {})
user = author_data.get("user")
return (
BasicExpertInfo(display_name=user.get("name"), email=user.get("email"))
if user and user.get("name") and user.get("email")
else None
)
except requests.exceptions.HTTPError:
# Handle any API errors gracefully
return None
def _article_to_document(
article: dict[str, Any],
content_tags: dict[str, str],
author_map: dict[str, BasicExpertInfo],
client: ZendeskClient,
) -> tuple[dict[str, BasicExpertInfo] | None, Document]:
author_id = article.get("author_id")
if not author_id:
author = None
else:
author = (
author_map.get(author_id)
if author_id in author_map
else _fetch_author(client, author_id)
)
new_author_mapping = {author_id: author} if author_id and author else None
updated_at = article.get("updated_at")
update_time = time_str_to_utc(updated_at) if updated_at else None
text = parse_html_page_basic(article.get("body") or "")
blob = text.encode("utf-8", errors="replace")
# Build metadata
metadata: dict[str, str | list[str]] = {
"labels": [str(label) for label in article.get("label_names", []) if label],
"content_tags": [
content_tags[tag_id]
for tag_id in article.get("content_tag_ids", [])
if tag_id in content_tags
],
}
# Remove empty values
metadata = {k: v for k, v in metadata.items() if v}
return new_author_mapping, Document(
id=f"article:{article['id']}",
source=DocumentSource.ZENDESK,
semantic_identifier=article["title"],
extension=".txt",
blob=blob,
size_bytes=len(blob),
doc_updated_at=update_time,
primary_owners=[author] if author else None,
metadata=metadata,
)
def _get_comment_text(
comment: dict[str, Any],
author_map: dict[str, BasicExpertInfo],
client: ZendeskClient,
) -> tuple[dict[str, BasicExpertInfo] | None, str]:
author_id = comment.get("author_id")
if not author_id:
author = None
else:
author = (
author_map.get(author_id)
if author_id in author_map
else _fetch_author(client, author_id)
)
new_author_mapping = {author_id: author} if author_id and author else None
comment_text = f"Comment{' by ' + author.display_name if author and author.display_name else ''}"
comment_text += f"{' at ' + comment['created_at'] if comment.get('created_at') else ''}:\n{comment['body']}"
return new_author_mapping, comment_text
def _ticket_to_document(
ticket: dict[str, Any],
author_map: dict[str, BasicExpertInfo],
client: ZendeskClient,
) -> tuple[dict[str, BasicExpertInfo] | None, Document]:
submitter_id = ticket.get("submitter")
if not submitter_id:
submitter = None
else:
submitter = (
author_map.get(submitter_id)
if submitter_id in author_map
else _fetch_author(client, submitter_id)
)
new_author_mapping = (
{submitter_id: submitter} if submitter_id and submitter else None
)
updated_at = ticket.get("updated_at")
update_time = time_str_to_utc(updated_at) if updated_at else None
metadata: dict[str, str | list[str]] = {}
if status := ticket.get("status"):
metadata["status"] = status
if priority := ticket.get("priority"):
metadata["priority"] = priority
if tags := ticket.get("tags"):
metadata["tags"] = tags
if ticket_type := ticket.get("type"):
metadata["ticket_type"] = ticket_type
# Fetch comments for the ticket
comments_data = client.make_request(f"tickets/{ticket.get('id')}/comments", {})
comments = comments_data.get("comments", [])
comment_texts = []
for comment in comments:
# use a distinct name so the submitter mapping computed above isn't clobbered
comment_author_mapping, comment_text = _get_comment_text(
comment, author_map, client
)
if comment_author_mapping:
author_map.update(comment_author_mapping)
comment_texts.append(comment_text)
comments_text = "\n\n".join(comment_texts)
subject = ticket.get("subject")
full_text = f"Ticket Subject:\n{subject}\n\nComments:\n{comments_text}"
blob = full_text.encode("utf-8", errors="replace")
return new_author_mapping, Document(
id=f"zendesk_ticket_{ticket['id']}",
blob=blob,
extension=".txt",
size_bytes=len(blob),
source=DocumentSource.ZENDESK,
semantic_identifier=f"Ticket #{ticket['id']}: {subject or 'No Subject'}",
doc_updated_at=update_time,
primary_owners=[submitter] if submitter else None,
metadata=metadata,
)
class ZendeskConnectorCheckpoint(ConnectorCheckpoint):
# We use cursor-based paginated retrieval for articles
after_cursor_articles: str | None
# We use timestamp-based paginated retrieval for tickets
next_start_time_tickets: int | None
cached_author_map: dict[str, BasicExpertInfo] | None
cached_content_tags: dict[str, str] | None
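Since validate_checkpoint_json below parses checkpoints with model_validate_json, the checkpoint is presumably a Pydantic v2 model that round-trips through JSON between runs; a minimal sketch of that cycle:

```python
cp = ZendeskConnectorCheckpoint(
    after_cursor_articles=None,
    next_start_time_tickets=None,
    cached_author_map=None,
    cached_content_tags=None,
    has_more=True,
)
blob = cp.model_dump_json()  # persisted by the framework between runs
restored = ZendeskConnectorCheckpoint.model_validate_json(blob)
assert restored.has_more
```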
class ZendeskConnector(
SlimConnectorWithPermSync, CheckpointedConnector[ZendeskConnectorCheckpoint]
):
def __init__(
self,
content_type: str = "articles",
calls_per_minute: int | None = None,
) -> None:
self.content_type = content_type
self.subdomain = ""
# created in load_credentials(); None until then so callers can detect missing credentials
self.client: ZendeskClient | None = None
# Fetch all tags ahead of time
self.content_tags: dict[str, str] = {}
self.calls_per_minute = calls_per_minute
def load_credentials(self, credentials: dict[str, Any]) -> dict[str, Any] | None:
# Subdomain is actually the whole URL
subdomain = (
credentials["zendesk_subdomain"]
.replace("https://", "")
.split(".zendesk.com")[0]
)
self.subdomain = subdomain
self.client = ZendeskClient(
subdomain,
credentials["zendesk_email"],
credentials["zendesk_token"],
calls_per_minute=self.calls_per_minute,
)
return None
@override
def load_from_checkpoint(
self,
start: SecondsSinceUnixEpoch,
end: SecondsSinceUnixEpoch,
checkpoint: ZendeskConnectorCheckpoint,
) -> CheckpointOutput[ZendeskConnectorCheckpoint]:
if self.client is None:
raise ZendeskCredentialsNotSetUpError()
if checkpoint.cached_content_tags is None:
checkpoint.cached_content_tags = _get_content_tag_mapping(self.client)
return checkpoint # save the content tags to the checkpoint
self.content_tags = checkpoint.cached_content_tags
if self.content_type == "articles":
checkpoint = yield from self._retrieve_articles(start, end, checkpoint)
return checkpoint
elif self.content_type == "tickets":
checkpoint = yield from self._retrieve_tickets(start, end, checkpoint)
return checkpoint
else:
raise ValueError(f"Unsupported content_type: {self.content_type}")
def _retrieve_articles(
self,
start: SecondsSinceUnixEpoch | None,
end: SecondsSinceUnixEpoch | None,
checkpoint: ZendeskConnectorCheckpoint,
) -> CheckpointOutput[ZendeskConnectorCheckpoint]:
checkpoint = copy.deepcopy(checkpoint)
# This one is built on the fly as there may be many more authors than tags
author_map: dict[str, BasicExpertInfo] = checkpoint.cached_author_map or {}
after_cursor = checkpoint.after_cursor_articles
doc_batch: list[Document] = []
response = _get_article_page(
self.client,
start_time=int(start) if start else None,
after_cursor=after_cursor,
)
articles = response.data
has_more = response.has_more
after_cursor = response.meta.get("after_cursor")
for article in articles:
if (
article.get("body") is None
or article.get("draft")
or any(
label in ZENDESK_CONNECTOR_SKIP_ARTICLE_LABELS
for label in article.get("label_names", [])
)
):
continue
try:
new_author_map, document = _article_to_document(
article, self.content_tags, author_map, self.client
)
except Exception as e:
logging.error(f"Error processing article {article['id']}: {e}")
yield ConnectorFailure(
failed_document=DocumentFailure(
document_id=f"{article.get('id')}",
document_link=article.get("html_url", ""),
),
failure_message=str(e),
exception=e,
)
continue
if new_author_map:
author_map.update(new_author_map)
updated_at = document.doc_updated_at
updated_ts = updated_at.timestamp() if updated_at else None
if updated_ts is not None:
if start is not None and updated_ts <= start:
continue
if end is not None and updated_ts > end:
continue
doc_batch.append(document)
if not has_more:
yield from doc_batch
checkpoint.has_more = False
return checkpoint
# Sometimes no documents are retrieved, but the cursor
# is still updated so the connector makes progress.
yield from doc_batch
checkpoint.after_cursor_articles = after_cursor
last_doc_updated_at = doc_batch[-1].doc_updated_at if doc_batch else None
checkpoint.has_more = bool(
end is None
or last_doc_updated_at is None
or last_doc_updated_at.timestamp() <= end
)
checkpoint.cached_author_map = (
author_map if len(author_map) <= MAX_AUTHOR_MAP_SIZE else None
)
return checkpoint
def _retrieve_tickets(
self,
start: SecondsSinceUnixEpoch | None,
end: SecondsSinceUnixEpoch | None,
checkpoint: ZendeskConnectorCheckpoint,
) -> CheckpointOutput[ZendeskConnectorCheckpoint]:
checkpoint = copy.deepcopy(checkpoint)
if self.client is None:
raise ZendeskCredentialsNotSetUpError()
author_map: dict[str, BasicExpertInfo] = checkpoint.cached_author_map or {}
doc_batch: list[Document] = []
next_start_time = int(checkpoint.next_start_time_tickets or start or 0)
ticket_response = _get_tickets_page(self.client, start_time=next_start_time)
tickets = ticket_response.data
has_more = ticket_response.has_more
next_start_time = ticket_response.meta["end_time"]
for ticket in tickets:
if ticket.get("status") == "deleted":
continue
try:
new_author_map, document = _ticket_to_document(
ticket=ticket,
author_map=author_map,
client=self.client,
)
except Exception as e:
logging.error(f"Error processing ticket {ticket['id']}: {e}")
yield ConnectorFailure(
failed_document=DocumentFailure(
document_id=f"{ticket.get('id')}",
document_link=ticket.get("url", ""),
),
failure_message=str(e),
exception=e,
)
continue
if new_author_map:
author_map.update(new_author_map)
updated_at = document.doc_updated_at
updated_ts = updated_at.timestamp() if updated_at else None
if updated_ts is not None:
if start is not None and updated_ts <= start:
continue
if end is not None and updated_ts > end:
continue
doc_batch.append(document)
if not has_more:
yield from doc_batch
checkpoint.has_more = False
return checkpoint
yield from doc_batch
checkpoint.next_start_time_tickets = next_start_time
last_doc_updated_at = doc_batch[-1].doc_updated_at if doc_batch else None
checkpoint.has_more = bool(
end is None
or last_doc_updated_at is None
or last_doc_updated_at.timestamp() <= end
)
checkpoint.cached_author_map = (
author_map if len(author_map) <= MAX_AUTHOR_MAP_SIZE else None
)
return checkpoint
def retrieve_all_slim_docs_perm_sync(
self,
start: SecondsSinceUnixEpoch | None = None,
end: SecondsSinceUnixEpoch | None = None,
callback: IndexingHeartbeatInterface | None = None,
) -> GenerateSlimDocumentOutput:
slim_doc_batch: list[SlimDocument] = []
if self.content_type == "articles":
articles = _get_articles(
self.client, start_time=int(start) if start else None
)
for article in articles:
slim_doc_batch.append(
SlimDocument(
id=f"article:{article['id']}",
)
)
if len(slim_doc_batch) >= _SLIM_BATCH_SIZE:
yield slim_doc_batch
slim_doc_batch = []
elif self.content_type == "tickets":
tickets = _get_tickets(
self.client, start_time=int(start) if start else None
)
for ticket in tickets:
slim_doc_batch.append(
SlimDocument(
id=f"zendesk_ticket_{ticket['id']}",
)
)
if len(slim_doc_batch) >= _SLIM_BATCH_SIZE:
yield slim_doc_batch
slim_doc_batch = []
else:
raise ValueError(f"Unsupported content_type: {self.content_type}")
if slim_doc_batch:
yield slim_doc_batch
@override
def validate_connector_settings(self) -> None:
if self.client is None:
raise ZendeskCredentialsNotSetUpError()
try:
_get_article_page(self.client, start_time=0)
except HTTPError as e:
# Check for HTTP status codes
if e.response.status_code == 401:
raise CredentialExpiredError(
"Your Zendesk credentials appear to be invalid or expired (HTTP 401)."
) from e
elif e.response.status_code == 403:
raise InsufficientPermissionsError(
"Your Zendesk token does not have sufficient permissions (HTTP 403)."
) from e
elif e.response.status_code == 404:
raise ConnectorValidationError(
"Zendesk resource not found (HTTP 404)."
) from e
else:
raise ConnectorValidationError(
f"Unexpected Zendesk error (status={e.response.status_code}): {e}"
) from e
@override
def validate_checkpoint_json(
self, checkpoint_json: str
) -> ZendeskConnectorCheckpoint:
return ZendeskConnectorCheckpoint.model_validate_json(checkpoint_json)
@override
def build_dummy_checkpoint(self) -> ZendeskConnectorCheckpoint:
return ZendeskConnectorCheckpoint(
after_cursor_articles=None,
next_start_time_tickets=None,
cached_author_map=None,
cached_content_tags=None,
has_more=True,
)
if __name__ == "__main__":
import os
connector = ZendeskConnector(content_type="articles")
connector.load_credentials(
{
"zendesk_subdomain": os.environ["ZENDESK_SUBDOMAIN"],
"zendesk_email": os.environ["ZENDESK_EMAIL"],
"zendesk_token": os.environ["ZENDESK_TOKEN"],
}
)
current = time.time()
one_day_ago = current - 24 * 60 * 60 # 1 day
checkpoint = connector.build_dummy_checkpoint()
while checkpoint.has_more:
gen = connector.load_from_checkpoint(
one_day_ago, current, checkpoint
)
wrapper = CheckpointOutputWrapper()
any_doc = False
for document, failure, next_checkpoint in wrapper(gen):
if document:
print("got document:", document.id)
any_doc = True
checkpoint = next_checkpoint
if any_doc:
break

View File

@ -1,6 +1,6 @@
---
sidebar_position: 2
slug: /what_is_agent_context_engine
slug: /what-is-agent-context-engine
---
# What is Agent context engine?

View File

@ -1,6 +1,6 @@
---
sidebar_position: 1
slug: /what_is_rag
slug: /what-is-rag
---
# What is Retrieval-Augmented Generation (RAG)?

View File

@ -0,0 +1,8 @@
{
"label": "Administration",
"position": 6,
"link": {
"type": "generated-index",
"description": "RAGFlow administration"
}
}

View File

@ -1,43 +1,11 @@
---
sidebar_position: 6
slug: /manage_users_and_services
sidebar_position: 2
slug: /admin_cli
---
# Admin CLI
# Admin CLI and Admin Service
The Admin CLI and Admin Service form a client-server suite for RAGFlow system administration. The Admin CLI is an interactive command-line interface that sends instructions to the Admin Service and displays execution results in real time. Together they enable real-time monitoring of system status, with visibility into the RAGFlow Server and its dependent components, including MySQL, Elasticsearch, Redis, and MinIO. In administrator mode, they also provide user management: viewing users, creating users, updating passwords, changing activation status, and deleting user data completely, even when the corresponding web interface features are disabled.
## Starting the Admin Service
### Launching from source code
1. Before starting the Admin Service, make sure the RAGFlow system is already running.
2. Launch from source code:
```bash
python admin/server/admin_server.py
```
The service will start and listen for incoming connections from the CLI on the configured port.
### Using docker image
1. Before startup, configure the `docker_compose.yml` file to enable the admin server:
```yaml
command:
- --enable-adminserver
```
2. Start the containers; the service will then listen for incoming connections from the CLI on the configured port.
The RAGFlow Admin CLI is a command-line system administration tool that gives administrators an efficient and flexible way to interact with and control the system. Operating on a client-server architecture, it communicates in real time with the Admin Service, sending administrator commands and returning execution results.
## Using the Admin CLI

View File

@ -0,0 +1,39 @@
---
sidebar_position: 0
slug: /admin_service
---
# Admin Service
The Admin Service is the core backend management service of the RAGFlow system. It exposes centralized API interfaces for administering the entire platform and, following a client-server architecture, can be accessed from both a Web UI and the Admin CLI.
Its core functions include real-time monitoring of the RAGFlow server and its critical dependent components (MySQL, Elasticsearch, Redis, and MinIO), along with full-featured user management. In administrator mode, it supports viewing user information, creating users, updating passwords, changing activation status, and deleting user data completely. These functions remain accessible via the Admin CLI even when the web management interface is disabled, so the system stays under control at all times.
With its unified interface design, the Admin Service combines the convenience of visual administration with the efficiency and stability of command-line operations, forming the foundation for reliable operation and secure management of the RAGFlow system.
## Starting the Admin Service
### Launching from source code
1. Before starting the Admin Service, make sure the RAGFlow system is already running.
2. Launch from source code:
```bash
python admin/server/admin_server.py
```
The service will start and listen for incoming connections from the CLI on the configured port.
### Using docker image
1. Before startup, configure the `docker_compose.yml` file to enable the admin server:
```yaml
command:
- --enable-adminserver
```
2. Start the containers; the service will then listen for incoming connections from the CLI on the configured port.
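For orientation, a fuller compose fragment; the service name and surrounding keys are assumptions, and only the `--enable-adminserver` flag comes from this page:

```yaml
services:
  ragflow:
    command:
      - --enable-adminserver
```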

View File

@ -1,6 +1,6 @@
---
sidebar_position: 7
slug: /accessing_admin_ui
sidebar_position: 1
slug: /admin_ui
---
# Admin UI

View File

@ -12,11 +12,18 @@ Key features, improvements and bug fixes in the latest releases.
Released on December 31, 2025.
### Improvements
- Memory: Enhances the stability of memory extraction when all memory types are selected.
- RAG: Refines the context window extraction strategy for images and tables.
### Fixed issues
- Resolved an issue where the RAGFlow Server would fail to start if an empty memory object existed, and corrected the inability to delete a newly created empty Memory.
- Improved the stability of memory extraction across all memory types after selection.
- Fixed MDX file parsing support.
- Memory:
- The RAGFlow server failed to start if an empty memory object existed.
- Unable to delete a newly created empty Memory.
- RAG: MDX file parsing was not supported.
### Data sources
@ -50,6 +57,7 @@ Released on December 27, 2025.
### Improvements
- RAG: Accelerates GraphRAG generation significantly.
- Bumps RAGFlow's document engine, [Infinity](https://github.com/infiniflow/infinity), to v0.6.15 (backward compatible).
### Data sources

View File

@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
import asyncio
import json
import logging
from collections import defaultdict
@ -32,21 +33,21 @@ from common.doc_store.doc_store_base import OrderByExpr
class KGSearch(Dealer):
def _chat(self, llm_bdl, system, history, gen_conf):
async def _chat(self, llm_bdl, system, history, gen_conf):
response = get_llm_cache(llm_bdl.llm_name, system, history, gen_conf)
if response:
return response
response = llm_bdl.chat(system, history, gen_conf)
response = await llm_bdl.async_chat(system, history, gen_conf)
if response.find("**ERROR**") >= 0:
raise Exception(response)
set_llm_cache(llm_bdl.llm_name, system, response, history, gen_conf)
return response
def query_rewrite(self, llm, question, idxnms, kb_ids):
async def query_rewrite(self, llm, question, idxnms, kb_ids):
ty2ents = get_entity_type2samples(idxnms, kb_ids)
hint_prompt = PROMPTS["minirag_query2kwd"].format(query=question,
TYPE_POOL=json.dumps(ty2ents, ensure_ascii=False, indent=2))
result = self._chat(llm, hint_prompt, [{"role": "user", "content": "Output:"}], {})
result = await self._chat(llm, hint_prompt, [{"role": "user", "content": "Output:"}], {})
try:
keywords_data = json_repair.loads(result)
type_keywords = keywords_data.get("answer_type_keywords", [])
@ -138,7 +139,7 @@ class KGSearch(Dealer):
idxnms, kb_ids)
return self._ent_info_from_(es_res, 0)
def retrieval(self, question: str,
async def retrieval(self, question: str,
tenant_ids: str | list[str],
kb_ids: list[str],
emb_mdl,
@ -158,7 +159,7 @@ class KGSearch(Dealer):
idxnms = [index_name(tid) for tid in tenant_ids]
ty_kwds = []
try:
ty_kwds, ents = self.query_rewrite(llm, qst, [index_name(tid) for tid in tenant_ids], kb_ids)
ty_kwds, ents = await self.query_rewrite(llm, qst, [index_name(tid) for tid in tenant_ids], kb_ids)
logging.info(f"Q: {qst}, Types: {ty_kwds}, Entities: {ents}")
except Exception as e:
logging.exception(e)
@ -334,5 +335,5 @@ if __name__ == "__main__":
embed_bdl = LLMBundle(args.tenant_id, LLMType.EMBEDDING, kb.embd_id)
kg = KGSearch(settings.docStoreConn)
print(kg.retrieval({"question": args.question, "kb_ids": [kb_id]},
search.index_name(kb.tenant_id), [kb_id], embed_bdl, llm_bdl))
print(asyncio.run(kg.retrieval({"question": args.question, "kb_ids": [kb_id]},
search.index_name(kb.tenant_id), [kb_id], embed_bdl, llm_bdl)))
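With retrieval, query_rewrite, and _chat now coroutines, synchronous callers must drive them through an event loop, as the __main__ hunk above shows. A minimal sketch of the same pattern (argument values are illustrative):

```python
import asyncio

async def run_retrieval(kg, question, tenant_ids, kb_ids, embed_bdl, llm_bdl):
    # retrieval() must now be awaited instead of called directly
    return await kg.retrieval(question, tenant_ids, kb_ids, embed_bdl, llm_bdl)

# result = asyncio.run(run_retrieval(kg, "what is RAG?", tenant_id, [kb_id], embed_bdl, llm_bdl))
```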

View File

@ -10542,6 +10542,5 @@
"周五": ["礼拜五", "星期五"],
"周六": ["礼拜六", "星期六"],
"周日": ["礼拜日", "星期日", "星期天", "礼拜天"],
"上班": "办公",
"HELO":"agn"
"上班": "办公"
}

View File

@ -46,18 +46,21 @@ from common.data_source import (
MoodleConnector,
JiraConnector,
DropboxConnector,
WebDAVConnector,
AirtableConnector,
AsanaConnector,
ImapConnector
ImapConnector,
ZendeskConnector,
)
from common.constants import FileSource, TaskStatus
from common.data_source.config import INDEX_BATCH_SIZE
from common.data_source.models import ConnectorFailure
from common.data_source.webdav_connector import WebDAVConnector
from common.data_source.confluence_connector import ConfluenceConnector
from common.data_source.gmail_connector import GmailConnector
from common.data_source.box_connector import BoxConnector
from common.data_source.github.connector import GithubConnector
from common.data_source.gitlab_connector import GitlabConnector
from common.data_source.bitbucket.connector import BitbucketConnector
from common.data_source.interfaces import CheckpointOutputWrapper
from common.log_utils import init_root_logger
from common.signal_utils import start_tracemalloc_and_snapshot, stop_tracemalloc
@ -693,7 +696,12 @@ class WebDAV(SyncBase):
self.conf.get("remote_path", "/"),
begin_info
))
return document_batch_generator
async def async_wrapper():
for document_batch in document_batch_generator:
yield document_batch
return async_wrapper()
class Moodle(SyncBase):
@ -971,6 +979,10 @@ class IMAP(SyncBase):
if pending_docs:
yield pending_docs
async def async_wrapper():
for batch in document_batches():
yield batch
logging.info(
"Connect to IMAP: host(%s) port(%s) user(%s) folder(%s) %s",
self.conf["imap_host"],
@ -979,7 +991,87 @@ class IMAP(SyncBase):
self.conf["imap_mailbox"],
begin_info
)
return document_batches()
return async_wrapper()
class Zendesk(SyncBase):
SOURCE_NAME: str = FileSource.ZENDESK
async def _generate(self, task: dict):
self.connector = ZendeskConnector(content_type=self.conf.get("zendesk_content_type") or "articles")
self.connector.load_credentials(self.conf["credentials"])
end_time = datetime.now(timezone.utc).timestamp()
if task["reindex"] == "1" or not task.get("poll_range_start"):
start_time = 0
begin_info = "totally"
else:
start_time = task["poll_range_start"].timestamp()
begin_info = f"from {task['poll_range_start']}"
raw_batch_size = (
self.conf.get("sync_batch_size")
or self.conf.get("batch_size")
or INDEX_BATCH_SIZE
)
try:
batch_size = int(raw_batch_size)
except (TypeError, ValueError):
batch_size = INDEX_BATCH_SIZE
if batch_size <= 0:
batch_size = INDEX_BATCH_SIZE
def document_batches():
checkpoint = self.connector.build_dummy_checkpoint()
pending_docs = []
iterations = 0
iteration_limit = 100_000
while checkpoint.has_more:
wrapper = CheckpointOutputWrapper()
doc_generator = wrapper(
self.connector.load_from_checkpoint(
start_time, end_time, checkpoint
)
)
for document, failure, next_checkpoint in doc_generator:
if failure is not None:
logging.warning(
"Zendesk connector failure: %s",
getattr(failure, "failure_message", failure),
)
continue
if document is not None:
pending_docs.append(document)
if len(pending_docs) >= batch_size:
yield pending_docs
pending_docs = []
if next_checkpoint is not None:
checkpoint = next_checkpoint
iterations += 1
if iterations > iteration_limit:
raise RuntimeError(
"Too many iterations while loading Zendesk documents."
)
if pending_docs:
yield pending_docs
async def async_wrapper():
for batch in document_batches():
yield batch
logging.info(
"Connect to Zendesk: subdomain(%s) %s",
self.conf['credentials'].get("zendesk_subdomain"),
begin_info,
)
return async_wrapper()
class Gitlab(SyncBase):
@ -1022,6 +1114,67 @@ class Gitlab(SyncBase):
logging.info("Connect to Gitlab: ({}) {}".format(self.conf["project_name"], begin_info))
return document_generator
class Bitbucket(SyncBase):
SOURCE_NAME: str = FileSource.BITBUCKET
async def _generate(self, task: dict):
self.connector = BitbucketConnector(
workspace=self.conf.get("workspace"),
repositories=self.conf.get("repository_slugs"),
projects=self.conf.get("projects"),
)
self.connector.load_credentials(
{
"bitbucket_email": self.conf["credentials"].get("bitbucket_account_email"),
"bitbucket_api_token": self.conf["credentials"].get("bitbucket_api_token"),
}
)
if task["reindex"] == "1" or not task["poll_range_start"]:
start_time = datetime.fromtimestamp(0, tz=timezone.utc)
begin_info = "totally"
else:
start_time = task.get("poll_range_start")
begin_info = f"from {start_time}"
end_time = datetime.now(timezone.utc)
def document_batches():
checkpoint = self.connector.build_dummy_checkpoint()
while checkpoint.has_more:
gen = self.connector.load_from_checkpoint(
start=start_time.timestamp(),
end=end_time.timestamp(),
checkpoint=checkpoint)
while True:
try:
item = next(gen)
if isinstance(item, ConnectorFailure):
logging.error(
"Bitbucket connector failure: %s",
item.failure_message)
break
yield [item]
except StopIteration as e:
checkpoint = e.value
break
async def async_wrapper():
for batch in document_batches():
yield batch
logging.info(
"Connect to Bitbucket: workspace(%s), %s",
self.conf.get("workspace"),
begin_info,
)
return async_wrapper()
func_factory = {
FileSource.S3: S3,
FileSource.R2: R2,
@ -1043,8 +1196,10 @@ func_factory = {
FileSource.AIRTABLE: Airtable,
FileSource.ASANA: Asana,
FileSource.IMAP: IMAP,
FileSource.ZENDESK: Zendesk,
FileSource.GITHUB: Github,
FileSource.GITLAB: Gitlab,
FileSource.BITBUCKET: Bitbucket,
}
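The async_wrapper closures repeated in the WebDAV, IMAP, Zendesk, and Bitbucket classes all do the same job: adapt a synchronous batch generator to the async-generator interface that _generate returns. A generic version, as a sketch (the helper name is ours; note it still blocks the event loop while each batch is produced, so offloading to a thread would be the natural next step):

```python
from collections.abc import AsyncIterator, Iterator
from typing import TypeVar

T = TypeVar("T")

async def as_async_batches(batches: Iterator[T]) -> AsyncIterator[T]:
    # Each next() on the sync generator runs inline on the event loop;
    # wrap the call with asyncio.to_thread if batch production does blocking I/O.
    for batch in batches:
        yield batch
```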

View File

@ -29,6 +29,7 @@ DIALOG_APP_URL = f"/{VERSION}/dialog"
# SESSION_WITH_CHAT_ASSISTANT_API_URL = "/api/v1/chats/{chat_id}/sessions"
# SESSION_WITH_AGENT_API_URL = "/api/v1/agents/{agent_id}/sessions"
MEMORY_API_URL = f"/{VERSION}/memories"
MESSAGE_API_URL = f"/{VERSION}/messages"
# KB APP
@ -299,3 +300,76 @@ def get_memory_config(auth, memory_id:str):
url = f"{HOST_ADDRESS}{MEMORY_API_URL}/{memory_id}/config"
res = requests.get(url=url, headers=HEADERS, auth=auth)
return res.json()
def list_memory_message(auth, memory_id, params=None):
url = f"{HOST_ADDRESS}{MEMORY_API_URL}/{memory_id}"
if params:
query_parts = []
for key, value in params.items():
if isinstance(value, list):
for item in value:
query_parts.append(f"{key}={item}")
else:
query_parts.append(f"{key}={value}")
query_string = "&".join(query_parts)
url = f"{url}?{query_string}"
res = requests.get(url=url, headers=HEADERS, auth=auth)
return res.json()
def add_message(auth, payload=None):
url = f"{HOST_ADDRESS}{MESSAGE_API_URL}"
res = requests.post(url=url, headers=HEADERS, auth=auth, json=payload)
return res.json()
def forget_message(auth, memory_id: str, message_id: int):
url = f"{HOST_ADDRESS}{MESSAGE_API_URL}/{memory_id}:{message_id}"
res = requests.delete(url=url, headers=HEADERS, auth=auth)
return res.json()
def update_message_status(auth, memory_id: str, message_id: int, status: bool):
url = f"{HOST_ADDRESS}{MESSAGE_API_URL}/{memory_id}:{message_id}"
payload = {"status": status}
res = requests.put(url=url, headers=HEADERS, auth=auth, json=payload)
return res.json()
def search_message(auth, params=None):
url = f"{HOST_ADDRESS}{MESSAGE_API_URL}/search"
if params:
query_parts = []
for key, value in params.items():
if isinstance(value, list):
for item in value:
query_parts.append(f"{key}={item}")
else:
query_parts.append(f"{key}={value}")
query_string = "&".join(query_parts)
url = f"{url}?{query_string}"
res = requests.get(url=url, headers=HEADERS, auth=auth)
return res.json()
def get_recent_message(auth, params=None):
url = f"{HOST_ADDRESS}{MESSAGE_API_URL}"
if params:
query_parts = []
for key, value in params.items():
if isinstance(value, list):
for item in value:
query_parts.append(f"{key}={item}")
else:
query_parts.append(f"{key}={value}")
query_string = "&".join(query_parts)
url = f"{url}?{query_string}"
res = requests.get(url=url, headers=HEADERS, auth=auth)
return res.json()
def get_message_content(auth, memory_id: str, message_id: int):
url = f"{HOST_ADDRESS}{MESSAGE_API_URL}/{memory_id}:{message_id}/content"
res = requests.get(url=url, headers=HEADERS, auth=auth)
return res.json()
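The hand-rolled query-string loop repeated in list_memory_message, search_message, and get_recent_message matches what urllib's encoder produces for list values, except that urlencode also percent-escapes keys and values. A sketch of the equivalent (build_query is a hypothetical helper, not part of this file):

```python
from urllib.parse import urlencode

def build_query(params: dict) -> str:
    # doseq=True expands list values into repeated keys: {"a": [1, 2]} -> "a=1&a=2"
    return urlencode(params, doseq=True)
```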

View File

@ -0,0 +1,101 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import time
import uuid
import pytest
import random
from test_web_api.common import create_memory, list_memory, add_message, delete_memory
@pytest.fixture(scope="class")
def add_memory_with_multiple_message_func(request, WebApiAuth):
def cleanup():
memory_list_res = list_memory(WebApiAuth)
exist_memory_ids = [memory["id"] for memory in memory_list_res["data"]["memory_list"]]
for _memory_id in exist_memory_ids:
delete_memory(WebApiAuth, _memory_id)
request.addfinalizer(cleanup)
payload = {
"name": "test_memory_0",
"memory_type": ["raw"] + random.choices(["semantic", "episodic", "procedural"], k=random.randint(1, 3)),
"embd_id": "BAAI/bge-small-en-v1.5@Builtin",
"llm_id": "glm-4-flash@ZHIPU-AI"
}
res = create_memory(WebApiAuth, payload)
memory_id = res["data"]["id"]
agent_id = uuid.uuid4().hex
message_payload = {
"memory_id": [memory_id],
"agent_id": agent_id,
"session_id": uuid.uuid4().hex,
"user_id": "",
"user_input": "what is coriander?",
"agent_response": """
Coriander is a versatile herb with two main edible parts, and its name can refer to both:
1. Leaves and Stems (often called Cilantro or Fresh Coriander): These are the fresh, green, fragrant leaves and tender stems of the plant Coriandrum sativum. They have a bright, citrusy, and sometimes pungent flavor. Cilantro is widely used as a garnish or key ingredient in cuisines like Mexican, Indian, Thai, and Middle Eastern.
2. Seeds (called Coriander Seeds): These are the dried, golden-brown seeds of the same plant. When ground, they become coriander powder. The seeds have a warm, nutty, floral, and slightly citrusy taste, completely different from the fresh leaves. They are a fundamental spice in curries, stews, pickles, and baking.
Key Point of Confusion: The naming differs by region. In North America, "coriander" typically refers to the seeds, while "cilantro" refers to the fresh leaves. In the UK, Europe, and many other parts of the world, "coriander" refers to the fresh herb, and the seeds are called "coriander seeds."
"""
}
add_message(WebApiAuth, message_payload)
request.cls.memory_id = memory_id
request.cls.agent_id = agent_id
return memory_id
@pytest.fixture(scope="class")
def add_memory_with_5_raw_message_func(request, WebApiAuth):
def cleanup():
memory_list_res = list_memory(WebApiAuth)
exist_memory_ids = [memory["id"] for memory in memory_list_res["data"]["memory_list"]]
for _memory_id in exist_memory_ids:
delete_memory(WebApiAuth, _memory_id)
request.addfinalizer(cleanup)
payload = {
"name": "test_memory_1",
"memory_type": ["raw"],
"embd_id": "BAAI/bge-small-en-v1.5@Builtin",
"llm_id": "glm-4-flash@ZHIPU-AI"
}
res = create_memory(WebApiAuth, payload)
memory_id = res["data"]["id"]
agent_ids = [uuid.uuid4().hex for _ in range(2)]
session_ids = [uuid.uuid4().hex for _ in range(5)]
for i in range(5):
message_payload = {
"memory_id": [memory_id],
"agent_id": agent_ids[i % 2],
"session_id": session_ids[i],
"user_id": "",
"user_input": "what is coriander?",
"agent_response": """
Coriander is a versatile herb with two main edible parts, and its name can refer to both:
1. Leaves and Stems (often called Cilantro or Fresh Coriander): These are the fresh, green, fragrant leaves and tender stems of the plant Coriandrum sativum. They have a bright, citrusy, and sometimes pungent flavor. Cilantro is widely used as a garnish or key ingredient in cuisines like Mexican, Indian, Thai, and Middle Eastern.
2. Seeds (called Coriander Seeds): These are the dried, golden-brown seeds of the same plant. When ground, they become coriander powder. The seeds have a warm, nutty, floral, and slightly citrusy taste, completely different from the fresh leaves. They are a fundamental spice in curries, stews, pickles, and baking.
Key Point of Confusion: The naming differs by region. In North America, "coriander" typically refers to the seeds, while "cilantro" refers to the fresh leaves. In the UK, Europe, and many other parts of the world, "coriander" refers to the fresh herb, and the seeds are called "coriander seeds."
"""
}
add_message(WebApiAuth, message_payload)
request.cls.memory_id = memory_id
request.cls.agent_ids = agent_ids
request.cls.session_ids = session_ids
time.sleep(2)  # give the document engine time to refresh the index before searching
return memory_id

View File

@ -0,0 +1,68 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import random
import pytest
from test_web_api.common import get_recent_message
from configs import INVALID_API_TOKEN
from libs.auth import RAGFlowWebApiAuth
class TestAuthorization:
@pytest.mark.p1
@pytest.mark.parametrize(
"invalid_auth, expected_code, expected_message",
[
(None, 401, "<Unauthorized '401: Unauthorized'>"),
(RAGFlowWebApiAuth(INVALID_API_TOKEN), 401, "<Unauthorized '401: Unauthorized'>"),
],
)
def test_auth_invalid(self, invalid_auth, expected_code, expected_message):
res = get_recent_message(invalid_auth)
assert res["code"] == expected_code, res
assert res["message"] == expected_message, res
@pytest.mark.usefixtures("add_memory_with_5_raw_message_func")
class TestGetRecentMessage:
@pytest.mark.p1
def test_get_recent_messages(self, WebApiAuth):
memory_id = self.memory_id
res = get_recent_message(WebApiAuth, params={"memory_id": memory_id})
assert res["code"] == 0, res
assert len(res["data"]) == 5, res
@pytest.mark.p2
def test_filter_recent_messages_by_agent(self, WebApiAuth):
memory_id = self.memory_id
agent_ids = self.agent_ids
agent_id = random.choice(agent_ids)
res = get_recent_message(WebApiAuth, params={"agent_id": agent_id, "memory_id": memory_id})
assert res["code"] == 0, res
for message in res["data"]:
assert message["agent_id"] == agent_id, message
@pytest.mark.p2
def test_filter_recent_messages_by_session(self, WebApiAuth):
memory_id = self.memory_id
session_ids = self.session_ids
session_id = random.choice(session_ids)
res = get_recent_message(WebApiAuth, params={"session_id": session_id, "memory_id": memory_id})
assert res["code"] == 0, res
for message in res["data"]:
assert message["session_id"] == session_id, message

View File

@ -0,0 +1,100 @@
#
# Copyright 2025 The InfiniFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import random
import pytest
from test_web_api.common import list_memory_message
from configs import INVALID_API_TOKEN
from libs.auth import RAGFlowWebApiAuth
class TestAuthorization:
@pytest.mark.p1
@pytest.mark.parametrize(
"invalid_auth, expected_code, expected_message",
[
(None, 401, "<Unauthorized '401: Unauthorized'>"),
(RAGFlowWebApiAuth(INVALID_API_TOKEN), 401, "<Unauthorized '401: Unauthorized'>"),
],
)
def test_auth_invalid(self, invalid_auth, expected_code, expected_message):
res = list_memory_message(invalid_auth, "")
assert res["code"] == expected_code, res
assert res["message"] == expected_message, res
@pytest.mark.usefixtures("add_memory_with_5_raw_message_func")
class TestMessageList:
@pytest.mark.p1
def test_params_unset(self, WebApiAuth):
memory_id = self.memory_id
res = list_memory_message(WebApiAuth, memory_id, params=None)
assert res["code"] == 0, res
assert len(res["data"]["messages"]["message_list"]) == 5, res
@pytest.mark.p1
def test_params_empty(self, WebApiAuth):
memory_id = self.memory_id
res = list_memory_message(WebApiAuth, memory_id, params={})
assert res["code"] == 0, res
assert len(res["data"]["messages"]["message_list"]) == 5, res
@pytest.mark.p1
@pytest.mark.parametrize(
"params, expected_page_size",
[
({"page": 1, "page_size": 10}, 5),
({"page": 2, "page_size": 10}, 0),
({"page": 1, "page_size": 2}, 2),
({"page": 3, "page_size": 2}, 1),
({"page": 5, "page_size": 10}, 0),
],
ids=["normal_first_page", "beyond_max_page", "normal_last_partial_page", "normal_middle_page",
"full_data_single_page"],
)
def test_page_size(self, WebApiAuth, params, expected_page_size):
# the fixture added 5 messages
memory_id = self.memory_id
res = list_memory_message(WebApiAuth, memory_id, params=params)
assert res["code"] == 0, res
assert len(res["data"]["messages"]["message_list"]) == expected_page_size, res
@pytest.mark.p2
def test_filter_agent_id(self, WebApiAuth):
memory_id = self.memory_id
agent_ids = self.agent_ids
agent_id = random.choice(agent_ids)
res = list_memory_message(WebApiAuth, memory_id, params={"agent_id": agent_id})
assert res["code"] == 0, res
for message in res["data"]["messages"]["message_list"]:
assert message["agent_id"] == agent_id, message
@pytest.mark.p2
@pytest.mark.skipif(os.getenv("DOC_ENGINE") == "infinity", reason="Not supported.")
def test_search_keyword(self, WebApiAuth):
memory_id = self.memory_id
session_ids = self.session_ids
session_id = random.choice(session_ids)
slice_start = random.randint(0, len(session_id) - 2)
slice_end = random.randint(slice_start + 1, len(session_id) - 1)
keyword = session_id[slice_start:slice_end]
res = list_memory_message(WebApiAuth, memory_id, params={"keywords": keyword})
assert res["code"] == 0, res
assert len(res["data"]["messages"]["message_list"]) > 0, res
for message in res["data"]["messages"]["message_list"]:
assert keyword in message["session_id"], message

View File

@ -0,0 +1,7 @@
<?xml version="1.0" encoding="utf-8"?><!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg xmlns="http://www.w3.org/2000/svg"
aria-label="Bitbucket" role="img"
viewBox="0 0 512 512"><rect
width="512" height="512"
rx="15%"
fill="#ffffff"/><path fill="#2684ff" d="M422 130a10 10 0 00-9.9-11.7H100.5a10 10 0 00-10 11.7L136 409a10 10 0 009.9 8.4h221c5 0 9.2-3.5 10 -8.4L422 130zM291 316.8h-69.3l-18.7-98h104.8z"/><path fill="url(#a)" d="M59.632 25.2H40.94l-3.1 18.3h-13v18.9H52c1 0 1.7-.7 1.8-1.6l5.8-35.6z" transform="translate(89.8 85) scale(5.3285)"/><linearGradient id="a" x2="1" gradientTransform="rotate(141 22.239 22.239) scale(31.4)" gradientUnits="userSpaceOnUse"><stop offset="0" stop-color="#0052cc"/><stop offset="1" stop-color="#2684ff"/></linearGradient></svg>

After

Width:  |  Height:  |  Size: 803 B

View File

@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg width="800px" height="800px" viewBox="0 -30.5 256 256" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" preserveAspectRatio="xMidYMid">
<g>
<path d="M118.249172,51.2326115 L118.249172,194.005605 L0,194.005605 L118.249172,51.2326115 Z M118.249172,2.84217094e-14 C118.249172,32.6440764 91.7686624,59.124586 59.124586,59.124586 C26.4805096,59.124586 0,32.6440764 0,2.84217094e-14 L118.249172,2.84217094e-14 Z M137.750828,194.005605 C137.750828,161.328917 164.198726,134.881019 196.875414,134.881019 C229.552102,134.881019 256,161.361529 256,194.005605 L137.750828,194.005605 Z M137.750828,142.740382 L137.750828,0 L256,0 L137.750828,142.740382 Z" fill="#03363D">
</path>
</g>

After

Width:  |  Height:  |  Size: 864 B

View File

@ -1,11 +1,23 @@
import CopyToClipboard from '@/components/copy-to-clipboard';
import { Button } from '@/components/ui/button';
import {
Dialog,
DialogContent,
DialogHeader,
DialogTitle,
} from '@/components/ui/dialog';
import {
Table,
TableBody,
TableCell,
TableHead,
TableHeader,
TableRow,
} from '@/components/ui/table';
import { useTranslate } from '@/hooks/common-hooks';
import { IModalProps } from '@/interfaces/common';
import { IToken } from '@/interfaces/database/chat';
import { formatDate } from '@/utils/date';
import { DeleteOutlined } from '@ant-design/icons';
import type { TableProps } from 'antd';
import { Button, Modal, Space, Table } from 'antd';
import { Trash2 } from 'lucide-react';
import { useOperateApiKey } from '../hooks';
const ChatApiKeyModal = ({
@ -17,57 +29,59 @@ const ChatApiKeyModal = ({
useOperateApiKey(idKey, dialogId);
const { t } = useTranslate('chat');
const columns: TableProps<IToken>['columns'] = [
{
title: 'Token',
dataIndex: 'token',
key: 'token',
render: (text) => <a>{text}</a>,
},
{
title: t('created'),
dataIndex: 'create_date',
key: 'create_date',
render: (text) => formatDate(text),
},
{
title: t('action'),
key: 'action',
render: (_, record) => (
<Space size="middle">
<CopyToClipboard text={record.token}></CopyToClipboard>
<DeleteOutlined onClick={() => removeToken(record.token)} />
</Space>
),
},
];
return (
<>
<Modal
title={t('apiKey')}
open
onCancel={hideModal}
cancelButtonProps={{ style: { display: 'none' } }}
style={{ top: 300 }}
onOk={hideModal}
width={'50vw'}
>
<Table
columns={columns}
dataSource={tokenList}
rowKey={'token'}
loading={listLoading}
pagination={false}
/>
<Button
onClick={createToken}
loading={creatingLoading}
disabled={tokenList?.length > 0}
>
{t('createNewKey')}
</Button>
</Modal>
<Dialog open onOpenChange={hideModal}>
<DialogContent className="max-w-[50vw]">
<DialogHeader>
<DialogTitle>{t('apiKey')}</DialogTitle>
</DialogHeader>
<div className="space-y-4">
{listLoading ? (
<div className="flex justify-center py-8">Loading...</div>
) : (
<Table>
<TableHeader>
<TableRow>
<TableHead>Token</TableHead>
<TableHead>{t('created')}</TableHead>
<TableHead>{t('action')}</TableHead>
</TableRow>
</TableHeader>
<TableBody>
{tokenList?.map((tokenItem) => (
<TableRow key={tokenItem.token}>
<TableCell className="font-medium break-all">
{tokenItem.token}
</TableCell>
<TableCell>{formatDate(tokenItem.create_date)}</TableCell>
<TableCell>
<div className="flex items-center gap-2">
<CopyToClipboard text={tokenItem.token} />
<Button
variant="ghost"
size="icon"
onClick={() => removeToken(tokenItem.token)}
>
<Trash2 className="h-4 w-4" />
</Button>
</div>
</TableCell>
</TableRow>
))}
</TableBody>
</Table>
)}
<Button
onClick={createToken}
loading={creatingLoading}
disabled={tokenList?.length > 0}
>
{t('createNewKey')}
</Button>
</div>
</DialogContent>
</Dialog>
</>
);
};

View File

@ -0,0 +1,93 @@
import React, { useSyncExternalStore } from 'react';
export interface AnchorItem {
key: string;
href: string;
title: string;
children?: AnchorItem[];
}
interface SimpleAnchorProps {
items: AnchorItem[];
className?: string;
style?: React.CSSProperties;
}
// Subscribe to URL hash changes
const subscribeHash = (callback: () => void) => {
window.addEventListener('hashchange', callback);
return () => window.removeEventListener('hashchange', callback);
};
const getHash = () => window.location.hash;
const Anchor: React.FC<SimpleAnchorProps> = ({
items,
className = '',
style = {},
}) => {
// Sync with URL hash changes, to highlight the active item
const hash = useSyncExternalStore(subscribeHash, getHash);
// Handle menu item click
const handleClick = (
e: React.MouseEvent<HTMLAnchorElement>,
href: string,
) => {
e.preventDefault();
const targetId = href.replace('#', '');
const targetElement = document.getElementById(targetId);
if (targetElement) {
// Update URL hash (triggers hashchange event)
window.location.hash = href;
// Smooth scroll to target
targetElement.scrollIntoView({ behavior: 'smooth', block: 'start' });
}
};
if (items.length === 0) return null;
return (
<nav className={className} style={style}>
<ul className="list-none p-0 m-0">
{items.map((item) => (
<li key={item.key} className="mb-2">
<a
href={item.href}
onClick={(e) => handleClick(e, item.href)}
className={`block px-3 py-1.5 no-underline rounded cursor-pointer transition-all duration-300 hover:text-accent-primary/70 ${
hash === item.href
? 'text-accent-primary bg-accent-primary-5'
: 'text-text-secondary bg-transparent'
}`}
>
{item.title}
</a>
{item.children && item.children.length > 0 && (
<ul className="list-none p-0 ml-4 mt-1">
{item.children.map((child) => (
<li key={child.key} className="mb-1">
<a
href={child.href}
onClick={(e) => handleClick(e, child.href)}
className={`block px-3 py-1 text-sm no-underline rounded cursor-pointer transition-all duration-300 hover:text-accent-primary/70 ${
hash === child.href
? 'text-accent-primary bg-accent-primary-5'
: 'text-text-secondary bg-transparent'
}`}
>
{child.title}
</a>
</li>
))}
</ul>
)}
</li>
))}
</ul>
</nav>
);
};
export default Anchor;

View File

@ -1,52 +1,26 @@
import { useIsDarkTheme } from '@/components/theme-provider';
import { useSetModalState, useTranslate } from '@/hooks/common-hooks';
import { useSetModalState } from '@/hooks/common-hooks';
import { LangfuseCard } from '@/pages/user-setting/setting-model/langfuse';
import apiDoc from '@parent/docs/references/http_api_reference.md';
import MarkdownPreview from '@uiw/react-markdown-preview';
import { Button, Card, Flex, Space } from 'antd';
import ChatApiKeyModal from '../chat-api-key-modal';
import { usePreviewChat } from '../hooks';
import BackendServiceApi from './backend-service-api';
import MarkdownToc from './markdown-toc';
const ApiContent = ({
id,
idKey,
hideChatPreviewCard = false,
}: {
id?: string;
idKey: string;
hideChatPreviewCard?: boolean;
}) => {
const { t } = useTranslate('chat');
const ApiContent = ({ id, idKey }: { id?: string; idKey: string }) => {
const {
visible: apiKeyVisible,
hideModal: hideApiKeyModal,
showModal: showApiKeyModal,
} = useSetModalState();
// const { embedVisible, hideEmbedModal, showEmbedModal, embedToken } =
// useShowEmbedModal(idKey);
const { handlePreview } = usePreviewChat(idKey);
const isDarkTheme = useIsDarkTheme();
return (
<div className="pb-2">
<Flex vertical gap={'middle'}>
<section className="flex flex-col gap-2 pb-5">
<BackendServiceApi show={showApiKeyModal}></BackendServiceApi>
{!hideChatPreviewCard && (
<Card title={`${name} Web App`}>
<Flex gap={8} vertical>
<Space size={'middle'}>
<Button onClick={handlePreview}>{t('preview')}</Button>
{/* <Button onClick={() => showEmbedModal(id)}>
{t('embedded')}
</Button> */}
</Space>
</Flex>
</Card>
)}
<div style={{ position: 'relative' }}>
<MarkdownToc content={apiDoc} />
</div>
@ -54,7 +28,8 @@ const ApiContent = ({
source={apiDoc}
wrapperElement={{ 'data-color-mode': isDarkTheme ? 'dark' : 'light' }}
></MarkdownPreview>
</Flex>
</section>
<LangfuseCard></LangfuseCard>
{apiKeyVisible && (
<ChatApiKeyModal
hideModal={hideApiKeyModal}
@ -62,14 +37,6 @@ const ApiContent = ({
idKey={idKey}
></ChatApiKeyModal>
)}
{/* {embedVisible && (
<EmbedModal
token={embedToken}
visible={embedVisible}
hideModal={hideEmbedModal}
></EmbedModal>
)} */}
<LangfuseCard></LangfuseCard>
</div>
);
};

View File

@ -1,33 +1,28 @@
import { Button, Card, Flex, Space, Typography } from 'antd';
import { Button } from '@/components/ui/button';
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card';
import { CopyToClipboardWithText } from '@/components/copy-to-clipboard';
import { useTranslate } from '@/hooks/common-hooks';
import styles from './index.less';
const { Paragraph } = Typography;
const BackendServiceApi = ({ show }: { show(): void }) => {
const { t } = useTranslate('chat');
return (
<Card
title={
<Space size={'large'}>
<span>RAGFlow API</span>
<Button onClick={show} type="primary">
{t('apiKey')}
</Button>
</Space>
}
>
<Flex gap={8} align="center">
<b>{t('backendServiceApi')}</b>
<Paragraph
copyable={{ text: `${location.origin}` }}
className={styles.apiLinkText}
>
{location.origin}
</Paragraph>
</Flex>
<Card>
<CardHeader>
<div className="flex items-center gap-4">
<CardTitle>RAGFlow API</CardTitle>
<Button onClick={show}>{t('apiKey')}</Button>
</div>
</CardHeader>
<CardContent>
<div className="flex items-center gap-2">
<b className="font-semibold">{t('backendServiceApi')}</b>
<CopyToClipboardWithText
text={location.origin}
></CopyToClipboardWithText>
</div>
</CardContent>
</Card>
);
};

View File

@ -1,31 +0,0 @@
import { useTranslate } from '@/hooks/common-hooks';
import { IModalProps } from '@/interfaces/common';
import { Modal } from 'antd';
import ApiContent from './api-content';
const ChatOverviewModal = ({
visible,
hideModal,
id,
idKey,
}: IModalProps<any> & { id: string; name?: string; idKey: string }) => {
const { t } = useTranslate('chat');
return (
<>
<Modal
title={t('overview')}
open={visible}
onCancel={hideModal}
cancelButtonProps={{ style: { display: 'none' } }}
onOk={hideModal}
width={'100vw'}
okText={t('close', { keyPrefix: 'common' })}
>
<ApiContent id={id} idKey={idKey}></ApiContent>
</Modal>
</>
);
};
export default ChatOverviewModal;

View File

@ -1,21 +1,27 @@
import { Anchor } from 'antd';
import type { AnchorLinkItemProps } from 'antd/es/anchor/Anchor';
import React, { useEffect, useState } from 'react';
import Anchor, { AnchorItem } from './anchor';
interface MarkdownTocProps {
content: string;
}
const MarkdownToc: React.FC<MarkdownTocProps> = ({ content }) => {
const [items, setItems] = useState<AnchorLinkItemProps[]>([]);
const [items, setItems] = useState<AnchorItem[]>([]);
useEffect(() => {
const generateTocItems = () => {
const headings = document.querySelectorAll(
'.wmde-markdown h2, .wmde-markdown h3',
);
const tocItems: AnchorLinkItemProps[] = [];
let currentH2Item: AnchorLinkItemProps | null = null;
// If headings haven't rendered yet, wait for next frame
if (headings.length === 0) {
requestAnimationFrame(generateTocItems);
return;
}
const tocItems: AnchorItem[] = [];
let currentH2Item: AnchorItem | null = null;
headings.forEach((heading) => {
const title = heading.textContent || '';
@ -23,7 +29,7 @@ const MarkdownToc: React.FC<MarkdownTocProps> = ({ content }) => {
const isH2 = heading.tagName.toLowerCase() === 'h2';
if (id && title) {
const item: AnchorLinkItemProps = {
const item: AnchorItem = {
key: id,
href: `#${id}`,
title,
@ -48,7 +54,10 @@ const MarkdownToc: React.FC<MarkdownTocProps> = ({ content }) => {
setItems(tocItems.slice(1));
};
setTimeout(generateTocItems, 100);
// Use requestAnimationFrame to ensure execution after DOM rendering
requestAnimationFrame(() => {
requestAnimationFrame(generateTocItems);
});
}, [content]);
return (
@ -56,7 +65,7 @@ const MarkdownToc: React.FC<MarkdownTocProps> = ({ content }) => {
className="markdown-toc bg-bg-base text-text-primary shadow shadow-text-secondary"
style={{
position: 'fixed',
right: 20,
right: 30,
top: 100,
bottom: 150,
width: 200,
@ -66,7 +75,7 @@ const MarkdownToc: React.FC<MarkdownTocProps> = ({ content }) => {
zIndex: 1000,
}}
>
<Anchor items={items} affix={false} />
<Anchor items={items} />
</div>
);
};
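The rewritten effect replaces a fixed 100 ms `setTimeout` with a retry on animation frames, so the TOC is built as soon as the markdown headings exist. A condensed, framework-free sketch of that retry loop, with the selector and item shape taken from the component above:

~~~ ts
interface TocEntry {
  key: string;
  href: string;
  title: string;
}

// Retry one animation frame at a time until the rendered markdown
// contains headings, then collect them. Like the component above,
// this keeps retrying for as long as nothing renders.
function collectHeadings(onReady: (items: TocEntry[]) => void) {
  const tick = () => {
    const headings = document.querySelectorAll(
      '.wmde-markdown h2, .wmde-markdown h3',
    );
    if (headings.length === 0) {
      requestAnimationFrame(tick);
      return;
    }
    const items: TocEntry[] = [];
    headings.forEach((h) => {
      if (h.id && h.textContent) {
        items.push({ key: h.id, href: `#${h.id}`, title: h.textContent });
      }
    });
    onReady(items);
  };
  // Double rAF mirrors the component: run after the next paint.
  requestAnimationFrame(() => requestAnimationFrame(tick));
}
~~~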

View File

@ -1,21 +0,0 @@
.codeCard {
.clearCardBody();
}
.codeText {
padding: 10px;
background-color: #ffffff09;
}
.id {
.linkText();
}
.darkBg {
background-color: rgb(69, 68, 68);
}
.darkId {
color: white;
.darkBg();
}

View File

@ -1,170 +0,0 @@
import CopyToClipboard from '@/components/copy-to-clipboard';
import HighLightMarkdown from '@/components/highlight-markdown';
import { SharedFrom } from '@/constants/chat';
import { useTranslate } from '@/hooks/common-hooks';
import { IModalProps } from '@/interfaces/common';
import {
Card,
Checkbox,
Form,
Modal,
Select,
Tabs,
TabsProps,
Typography,
} from 'antd';
import { useMemo, useState } from 'react';
import { useIsDarkTheme } from '@/components/theme-provider';
import {
LanguageAbbreviation,
LanguageAbbreviationMap,
} from '@/constants/common';
import { cn } from '@/lib/utils';
import styles from './index.less';
const { Paragraph, Link } = Typography;
const EmbedModal = ({
visible,
hideModal,
token = '',
form,
beta = '',
isAgent,
}: IModalProps<any> & {
token: string;
form: SharedFrom;
beta: string;
isAgent: boolean;
}) => {
const { t } = useTranslate('chat');
const isDarkTheme = useIsDarkTheme();
const [visibleAvatar, setVisibleAvatar] = useState(false);
const [locale, setLocale] = useState('');
const languageOptions = useMemo(() => {
return Object.values(LanguageAbbreviation).map((x) => ({
label: LanguageAbbreviationMap[x],
value: x,
}));
}, []);
const generateIframeSrc = () => {
let src = `${location.origin}/chat/share?shared_id=${token}&from=${form}&auth=${beta}`;
if (visibleAvatar) {
src += '&visible_avatar=1';
}
if (locale) {
src += `&locale=${locale}`;
}
return src;
};
const iframeSrc = generateIframeSrc();
const text = `
~~~ html
<iframe
src="${iframeSrc}"
style="width: 100%; height: 100%; min-height: 600px"
frameborder="0"
>
</iframe>
~~~
`;
const items: TabsProps['items'] = [
{
key: '1',
label: t('fullScreenTitle'),
children: (
<Card
title={t('fullScreenDescription')}
extra={<CopyToClipboard text={text}></CopyToClipboard>}
className={styles.codeCard}
>
<div className="p-2">
<h2 className="mb-3">Option:</h2>
<Form.Item
label={t('avatarHidden')}
labelCol={{ span: 6 }}
wrapperCol={{ span: 18 }}
>
<Checkbox
checked={visibleAvatar}
onChange={(e) => setVisibleAvatar(e.target.checked)}
></Checkbox>
</Form.Item>
<Form.Item
label={t('locale')}
labelCol={{ span: 6 }}
wrapperCol={{ span: 18 }}
>
<Select
placeholder="Select a locale"
onChange={(value) => setLocale(value)}
options={languageOptions}
style={{ width: '100%' }}
/>
</Form.Item>
</div>
<HighLightMarkdown>{text}</HighLightMarkdown>
</Card>
),
},
{
key: '2',
label: t('partialTitle'),
children: t('comingSoon'),
},
{
key: '3',
label: t('extensionTitle'),
children: t('comingSoon'),
},
];
const onChange = (key: string) => {
console.log(key);
};
return (
<Modal
title={t('embedIntoSite', { keyPrefix: 'common' })}
open={visible}
style={{ top: 300 }}
width={'50vw'}
onOk={hideModal}
onCancel={hideModal}
>
<Tabs defaultActiveKey="1" items={items} onChange={onChange} />
<div className="text-base font-medium mt-4 mb-1">
{t(isAgent ? 'flow' : 'chat', { keyPrefix: 'header' })}
<span className="ml-1 inline-block">ID</span>
</div>
<Paragraph
copyable={{ text: token }}
className={cn(styles.id, {
[styles.darkId]: isDarkTheme,
})}
>
{token}
</Paragraph>
<Link
href={
isAgent
? 'https://ragflow.io/docs/dev/http_api_reference#create-session-with-agent'
: 'https://ragflow.io/docs/dev/http_api_reference#create-session-with-chat-assistant'
}
target="_blank"
>
{t('howUseId', { keyPrefix: isAgent ? 'flow' : 'chat' })}
</Link>
</Modal>
);
};
export default EmbedModal;
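The deleted `generateIframeSrc` builds the share URL by string concatenation, which works here only because the token and flags are URL-safe. A hedged alternative sketch using `URLSearchParams`, with the same parameter names; this is not the PR's code:

~~~ ts
// Same query parameters as generateIframeSrc above, but built with
// URLSearchParams so values are escaped. A sketch, not the PR's code.
function buildShareSrc(opts: {
  token: string;
  from: string;
  beta: string;
  visibleAvatar?: boolean;
  locale?: string;
}): string {
  const params = new URLSearchParams({
    shared_id: opts.token,
    from: opts.from,
    auth: opts.beta,
  });
  if (opts.visibleAvatar) params.set('visible_avatar', '1');
  if (opts.locale) params.set('locale', opts.locale);
  return `${location.origin}/chat/share?${params.toString()}`;
}
~~~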

View File

@ -1,4 +1,3 @@
import { SharedFrom } from '@/constants/chat';
import {
useSetModalState,
useShowDeleteConfirm,
@ -80,11 +79,6 @@ export const useShowBetaEmptyError = () => {
return { showBetaEmptyError };
};
const getUrlWithToken = (token: string, from: string = 'chat') => {
const { protocol, host } = window.location;
return `${protocol}//${host}/chat/share?shared_id=${token}&from=${from}`;
};
const useFetchTokenListBeforeOtherStep = () => {
const { showTokenEmptyError } = useShowTokenEmptyError();
const { showBetaEmptyError } = useShowBetaEmptyError();
@ -149,31 +143,3 @@ export const useShowEmbedModal = () => {
beta,
};
};
export const usePreviewChat = (idKey: string) => {
const { handleOperate } = useFetchTokenListBeforeOtherStep();
const open = useCallback(
(t: string) => {
window.open(
getUrlWithToken(
t,
idKey === 'canvasId' ? SharedFrom.Agent : SharedFrom.Chat,
),
'_blank',
);
},
[idKey],
);
const handlePreview = useCallback(async () => {
const token = await handleOperate();
if (token) {
open(token);
}
}, [handleOperate, open]);
return {
handlePreview,
};
};
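The removed `usePreviewChat` resolves an API token first and opens the share page only when one exists. A framework-free sketch of that flow; `fetchToken` is a hypothetical stand-in for the hook's `handleOperate`:

~~~ ts
// Resolve a token, then open the share URL in a new tab; a missing
// token aborts silently, matching the hook's guard above.
async function previewChat(
  fetchToken: () => Promise<string | undefined>, // stand-in for handleOperate
  from: 'chat' | 'agent',
): Promise<void> {
  const token = await fetchToken();
  if (!token) return;
  const { protocol, host } = window.location;
  window.open(
    `${protocol}//${host}/chat/share?shared_id=${token}&from=${from}`,
    '_blank',
  );
}
~~~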

View File

@ -947,6 +947,19 @@ Beispiel: Virtual Hosted Style`,
'Laden Sie das OAuth-JSON hoch, das von der Google Console generiert wurde. Wenn es nur Client-Anmeldeinformationen enthält, führen Sie die browserbasierte Überprüfung einmal durch, um langlebige Refresh-Token zu erstellen.',
dropboxDescription:
'Verbinden Sie Ihre Dropbox, um Dateien und Ordner von einem ausgewählten Konto zu synchronisieren.',
bitbucketDescription:
'Bitbucket verbinden, um PR-Inhalte zu synchronisieren.',
zendeskDescription:
'Verbinden Sie Ihr Zendesk, um Tickets, Artikel und andere Inhalte zu synchronisieren.',
bitbucketTopWorkspaceTip:
'Der zu indizierende Bitbucket-Workspace (z. B. "atlassian" aus https://bitbucket.org/atlassian/workspace )',
bitbucketWorkspaceTip:
'Dieser Connector indiziert alle Repositories im Workspace.',
bitbucketProjectsTip: 'Kommagetrennte Projekt-Keys, z. B.: PROJ1,PROJ2',
bitbucketRepositorySlugsTip:
'Kommagetrennte Repository-Slugs, z. B.: repo-one,repo-two',
connectorNameTip:
'Geben Sie einen aussagekräftigen Namen für den Connector an',
boxDescription:
'Verbinden Sie Ihr Box-Laufwerk, um Dateien und Ordner zu synchronisieren.',
githubDescription:

View File

@ -879,6 +879,7 @@ This auto-tagging feature enhances retrieval by adding another layer of domain-s
cropImage: 'Crop image',
selectModelPlaceholder: 'Select model',
configureModelTitle: 'Configure model',
connectorNameTip: 'A descriptive name for the connector',
confluenceIsCloudTip:
'Check if this is a Confluence Cloud instance, uncheck for Confluence Server/Data Center',
confluenceWikiBaseUrlTip:
@ -923,7 +924,9 @@ Example: Virtual Hosted Style`,
google_driveTokenTip:
'Upload the OAuth token JSON generated from the OAuth helper or Google Cloud Console. You may also upload a client_secret JSON from an "installed" or "web" application. If this is your first sync, a browser window will open to complete the OAuth consent. If the JSON already contains a refresh token, it will be reused automatically.',
google_drivePrimaryAdminTip:
'Email address that has access to the Drive content being synced.',
'Email address that has access to the Drive content being synced',
zendeskDescription:
'Connect your Zendesk to sync tickets, articles, and other content.',
google_driveMyDriveEmailsTip:
'Comma-separated emails whose "My Drive" contents should be indexed (include the primary admin).',
google_driveSharedFoldersTip:
@ -934,7 +937,16 @@ Example: Virtual Hosted Style`,
'Upload the OAuth JSON generated from Google Console. If it only contains client credentials, run the browser-based verification once to mint long-lived refresh tokens.',
dropboxDescription:
'Connect your Dropbox to sync files and folders from a chosen account.',
bitbucketDescription: 'Connect Bitbucket to sync PR content.',
bitbucketTopWorkspaceTip:
'The Bitbucket workspace to index (e.g., "atlassian" from https://bitbucket.org/atlassian/workspace ).',
bitbucketRepositorySlugsTip:
'Comma-separated repository slugs. E.g., repo-one,repo-two',
bitbucketProjectsTip: 'Comma-separated project keys. E.g., PROJ1,PROJ2',
bitbucketWorkspaceTip:
'This connector will index all repositories in the workspace.',
boxDescription: 'Connect your Box drive to sync files and folders.',
githubDescription:
'Connect GitHub to sync pull requests and issues for retrieval.',
airtableDescription:

View File

@ -731,6 +731,7 @@ export default {
newDocs: 'Новые документы',
timeStarted: 'Время начала',
log: 'Лог',
connectorNameTip: 'Укажите понятное имя для коннектора',
confluenceDescription:
'Интегрируйте ваше рабочее пространство Confluence для поиска документации.',
s3Description:
@ -747,6 +748,18 @@ export default {
'Синхронизируйте страницы и базы данных из Notion для извлечения знаний.',
boxDescription:
'Подключите ваш диск Box для синхронизации файлов и папок.',
bitbucketDescription:
'Подключите Bitbucket для синхронизации содержимого PR.',
zendeskDescription:
'Подключите Zendesk для синхронизации тикетов, статей и другого контента.',
bitbucketTopWorkspaceTip:
'Рабочее пространство Bitbucket для индексации (например, "atlassian" из https://bitbucket.org/atlassian/workspace )',
bitbucketWorkspaceTip:
'Этот коннектор проиндексирует все репозитории в рабочем пространстве.',
bitbucketProjectsTip:
'Ключи проектов через запятую, например: PROJ1,PROJ2',
bitbucketRepositorySlugsTip:
'Слаги репозиториев через запятую, например: repo-one,repo-two',
githubDescription:
'Подключите GitHub для синхронизации содержимого Pull Request и Issue для поиска.',
airtableDescription:

View File

@ -726,6 +726,16 @@ export default {
view: '查看',
modelsToBeAddedTooltip:
'若您的模型供應商未列於此處,但宣稱與 OpenAI 相容,可透過選擇「OpenAI-API-compatible」卡片來設定相關模型。',
dropboxDescription: '連接 Dropbox,同步指定帳號下的文件與文件夾。',
bitbucketDescription: '連接 Bitbucket,同步 PR 內容。',
zendeskDescription: '連接 Zendesk,同步工單、文章及其他內容。',
bitbucketTopWorkspaceTip:
'要索引的 Bitbucket 工作區,例如:https://bitbucket.org/atlassian/workspace 中的 "atlassian"',
bitbucketWorkspaceTip: '此連接器將索引工作區下的所有倉庫。',
bitbucketRepositorySlugsTip:
'以英文逗號分隔的倉庫 slug,例如:repo-one,repo-two',
bitbucketProjectsTip: '以英文逗號分隔的項目鍵,例如:PROJ1,PROJ2',
connectorNameTip: '為連接器填寫一個有意義的名稱',
},
message: {
registered: '註冊成功',

View File

@ -53,6 +53,7 @@ export default {
noData: '暂无数据',
bedrockCredentialsHint:
'提示:Access Key / Secret Key 可留空,以启用 AWS IAM 自动验证。',
zendeskDescription: '连接 Zendesk,同步工单、文章及其他内容。',
promptPlaceholder: '请输入或使用 / 快速插入变量。',
selected: '已选择',
},
@ -864,6 +865,14 @@ General实体和关系提取提示来自 GitHub - microsoft/graphrag基于
'请上传由 Google Console 生成的 OAuth JSON。如果仅包含 client credentials请通过浏览器授权一次以获取长期有效的刷新 Token。',
dropboxDescription: '连接 Dropbox,同步指定账号下的文件与文件夹。',
boxDescription: '连接你的 Box 云盘以同步文件和文件夹。',
bitbucketDescription: '连接 Bitbucket,同步 PR 内容。',
bitbucketTopWorkspaceTip:
'要索引的 Bitbucket 工作区,例如:https://bitbucket.org/atlassian/workspace 中的 "atlassian"',
bitbucketWorkspaceTip: '该连接器将索引工作区下的所有仓库。',
bitbucketProjectsTip: '用英文逗号分隔的项目 key,例如:PROJ1,PROJ2',
bitbucketRepositorySlugsTip:
'用英文逗号分隔的仓库 slug,例如:repo-one,repo-two',
connectorNameTip: '为连接器命名',
githubDescription:
'连接 GitHub,可同步 Pull Request 与 Issue 内容用于检索。',
airtableDescription: '连接 Airtable,同步指定工作区下指定表格中的文件。',

View File

@ -25,13 +25,14 @@ import { useComposeLlmOptionsByModelTypes } from '@/hooks/use-llm-request';
import { cn } from '@/lib/utils';
import { t } from 'i18next';
import { Settings } from 'lucide-react';
import { useCallback, useEffect, useMemo, useState } from 'react';
import { useCallback, useContext, useEffect, useMemo, useState } from 'react';
import {
ControllerRenderProps,
FieldValues,
useFormContext,
} from 'react-hook-form';
import { useLocation } from 'umi';
import { history, useLocation } from 'umi';
import { DataSetContext } from '..';
import {
MetadataType,
useManageMetadata,
@ -371,6 +372,7 @@ export function AutoMetadata({
// get metadata field
const location = useLocation();
const form = useFormContext();
const datasetContext = useContext(DataSetContext);
const {
manageMetadataVisible,
showManageMetadataModal,
@ -394,13 +396,14 @@ export function AutoMetadata({
const locationState = location.state as
| { openMetadata?: boolean }
| undefined;
if (locationState?.openMetadata) {
if (locationState?.openMetadata && !datasetContext?.loading) {
setTimeout(() => {
handleClickOpenMetadata();
}, 100);
}, 0);
locationState.openMetadata = false;
history.replace({ ...location }, locationState);
}
}, [location, handleClickOpenMetadata]);
}, [location, handleClickOpenMetadata, datasetContext]);
const autoMetadataField: FormFieldConfig = {
name: 'parser_config.enable_metadata',
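The changed effect treats `location.state.openMetadata` as a one-shot flag: it acts only once the dataset has loaded, then clears the flag via `history.replace` so re-renders and back navigation stay inert. A minimal sketch of that consume-once pattern, using umi's `history` as the component does:

~~~ ts
import { history } from 'umi';

// Consume a one-shot flag carried in router state: act on it once,
// then clear it so the modal is not reopened on later renders.
function consumeOpenMetadataFlag(
  location: { state?: { openMetadata?: boolean } },
  ready: boolean, // e.g. !datasetContext.loading
  openModal: () => void,
) {
  if (!location.state?.openMetadata || !ready) return;
  setTimeout(openModal, 0); // defer until after the current render
  history.replace({ ...location }, { ...location.state, openMetadata: false });
}
~~~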

View File

@ -37,7 +37,8 @@ export function useHasParsedDocument(isEdit?: boolean) {
export const useFetchKnowledgeConfigurationOnMount = (
form: UseFormReturn<z.infer<typeof formSchema>, any, undefined>,
) => {
const { data: knowledgeDetails } = useFetchKnowledgeBaseConfiguration();
const { data: knowledgeDetails, loading } =
useFetchKnowledgeBaseConfiguration();
useEffect(() => {
const parser_config = {
@ -71,7 +72,7 @@ export const useFetchKnowledgeConfigurationOnMount = (
form.reset(formValues);
}, [form, knowledgeDetails]);
return knowledgeDetails;
return { knowledgeDetails, loading };
};
export const useSelectKnowledgeDetailsLoading = () =>

View File

@ -7,11 +7,11 @@ import { Form } from '@/components/ui/form';
import { FormLayout } from '@/constants/form';
import { DocumentParserType } from '@/constants/knowledge';
import { PermissionRole } from '@/constants/permission';
import { IConnector } from '@/interfaces/database/knowledge';
import { IConnector, IKnowledge } from '@/interfaces/database/knowledge';
import { useDataSourceInfo } from '@/pages/user-setting/data-source/constant';
import { IDataSourceBase } from '@/pages/user-setting/data-source/interface';
import { zodResolver } from '@hookform/resolvers/zod';
import { useEffect, useState } from 'react';
import { createContext, useEffect, useState } from 'react';
import { useForm, useWatch } from 'react-hook-form';
import { useTranslation } from 'react-i18next';
import { z } from 'zod';
@ -35,6 +35,10 @@ const enum DocumentType {
DeepDOC = 'DeepDOC',
PlainText = 'Plain Text',
}
export const DataSetContext = createContext<{
loading: boolean;
knowledgeDetails: IKnowledge;
}>({ loading: false, knowledgeDetails: {} as IKnowledge });
const initialEntityTypes = [
'organization',
@ -102,7 +106,8 @@ export default function DatasetSettings() {
},
});
const { dataSourceInfo } = useDataSourceInfo();
const knowledgeDetails = useFetchKnowledgeConfigurationOnMount(form);
const { knowledgeDetails, loading: datasetSettingLoading } =
useFetchKnowledgeConfigurationOnMount(form);
// const [pipelineData, setPipelineData] = useState<IDataPipelineNodeProps>();
const [sourceData, setSourceData] = useState<IDataSourceNodeProps[]>();
const [graphRagGenerateData, setGraphRagGenerateData] =
@ -254,81 +259,90 @@ export default function DatasetSettings() {
description={t('knowledgeConfiguration.titleDescription')}
></TopTitle>
<div className="flex gap-14 flex-1 min-h-0">
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)} className="space-y-6 ">
<div className="w-[768px] h-[calc(100vh-240px)] pr-1 overflow-y-auto scrollbar-auto">
<MainContainer className="text-text-secondary">
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.baseInfo')}
</div>
<GeneralForm></GeneralForm>
<DataSetContext.Provider
value={{
loading: datasetSettingLoading,
knowledgeDetails: knowledgeDetails,
}}
>
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)} className="space-y-6 ">
<div className="w-[768px] h-[calc(100vh-240px)] pr-1 overflow-y-auto scrollbar-auto">
<MainContainer className="text-text-secondary">
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.baseInfo')}
</div>
<GeneralForm></GeneralForm>
<Divider />
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.dataPipeline')}
</div>
<ParseTypeItem line={1} />
{parseType === 1 && (
<ChunkMethodItem line={1}></ChunkMethodItem>
)}
{parseType === 2 && (
<DataFlowSelect
isMult={false}
showToDataPipeline={true}
formFieldName="pipeline_id"
layout={FormLayout.Horizontal}
/>
)}
<Divider />
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.dataPipeline')}
</div>
<ParseTypeItem line={1} />
{parseType === 1 && (
<ChunkMethodItem line={1}></ChunkMethodItem>
)}
{parseType === 2 && (
<DataFlowSelect
isMult={false}
showToDataPipeline={true}
formFieldName="pipeline_id"
layout={FormLayout.Horizontal}
/>
)}
{/* <Divider /> */}
{parseType === 1 && <ChunkMethodForm />}
{/* <Divider /> */}
{parseType === 1 && <ChunkMethodForm />}
{/* <LinkDataPipeline
{/* <LinkDataPipeline
data={pipelineData}
handleLinkOrEditSubmit={handleLinkOrEditSubmit}
/> */}
<Divider />
<LinkDataSource
data={sourceData}
handleLinkOrEditSubmit={handleLinkOrEditSubmit}
unbindFunc={unbindFunc}
handleAutoParse={handleAutoParse}
/>
<Divider />
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.globalIndex')}
</div>
<GraphRagItems
className="border-none p-0"
data={graphRagGenerateData as IGenerateLogButtonProps}
onDelete={() =>
handleDeletePipelineTask(GenerateType.KnowledgeGraph)
}
></GraphRagItems>
<Divider />
<RaptorFormFields
data={raptorGenerateData as IGenerateLogButtonProps}
onDelete={() => handleDeletePipelineTask(GenerateType.Raptor)}
></RaptorFormFields>
</MainContainer>
</div>
<div className="text-right items-center flex justify-end gap-3 w-[768px]">
<Button
type="reset"
className="bg-transparent text-color-white hover:bg-transparent border-gray-500 border-[1px]"
onClick={() => {
form.reset();
}}
>
{t('knowledgeConfiguration.cancel')}
</Button>
<SavingButton></SavingButton>
</div>
</form>
</Form>
<div className="flex-1">
{parseType === 1 && <ChunkMethodLearnMore parserId={selectedTag} />}
</div>
<Divider />
<LinkDataSource
data={sourceData}
handleLinkOrEditSubmit={handleLinkOrEditSubmit}
unbindFunc={unbindFunc}
handleAutoParse={handleAutoParse}
/>
<Divider />
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.globalIndex')}
</div>
<GraphRagItems
className="border-none p-0"
data={graphRagGenerateData as IGenerateLogButtonProps}
onDelete={() =>
handleDeletePipelineTask(GenerateType.KnowledgeGraph)
}
></GraphRagItems>
<Divider />
<RaptorFormFields
data={raptorGenerateData as IGenerateLogButtonProps}
onDelete={() =>
handleDeletePipelineTask(GenerateType.Raptor)
}
></RaptorFormFields>
</MainContainer>
</div>
<div className="text-right items-center flex justify-end gap-3 w-[768px]">
<Button
type="reset"
className="bg-transparent text-color-white hover:bg-transparent border-gray-500 border-[1px]"
onClick={() => {
form.reset();
}}
>
{t('knowledgeConfiguration.cancel')}
</Button>
<SavingButton></SavingButton>
</div>
</form>
</Form>
<div className="flex-1">
{parseType === 1 && <ChunkMethodLearnMore parserId={selectedTag} />}
</div>
</DataSetContext.Provider>
</div>
</section>
);
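`DataSetContext` lets deeply nested form fields (like `AutoMetadata`) read the fetch's loading flag without prop drilling. A minimal sketch of the same provider/consumer wiring, with a placeholder type standing in for `IKnowledge`:

~~~ ts
import { createContext, useContext } from 'react';

// Placeholder for IKnowledge; the real interface lives in the repo.
type KnowledgeDetails = Record<string, unknown>;

interface DataSetContextValue {
  loading: boolean;
  knowledgeDetails: KnowledgeDetails;
}

export const DataSetCtx = createContext<DataSetContextValue>({
  loading: false,
  knowledgeDetails: {},
});

// Consumers gate side effects on the shared loading flag:
//   const { loading } = useDataSet();
//   if (!loading) { /* safe to open modals, read details, ... */ }
export function useDataSet(): DataSetContextValue {
  return useContext(DataSetCtx);
}
~~~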

View File

@ -10,18 +10,21 @@ import { useNavigate } from 'umi';
import { Agents } from './agent-list';
import { SeeAllAppCard } from './application-card';
import { ChatList } from './chat-list';
import { MemoryList } from './memory-list';
import { SearchList } from './search-list';
const IconMap = {
[Routes.Chats]: 'chats',
[Routes.Searches]: 'searches',
[Routes.Agents]: 'agents',
[Routes.Memories]: 'memory',
};
const EmptyTypeMap = {
[Routes.Chats]: EmptyCardType.Chat,
[Routes.Searches]: EmptyCardType.Search,
[Routes.Agents]: EmptyCardType.Agent,
[Routes.Memories]: EmptyCardType.Memory,
};
export function Applications() {
@ -47,6 +50,7 @@ export function Applications() {
{ value: Routes.Chats, label: t('chat.chatApps') },
{ value: Routes.Searches, label: t('search.searchApps') },
{ value: Routes.Agents, label: t('header.flow') },
{ value: Routes.Memories, label: t('memories.memory') },
],
[t],
);
@ -96,6 +100,12 @@ export function Applications() {
setLoading={(loading: boolean) => setLoading(loading)}
></SearchList>
)}
{val === Routes.Memories && (
<MemoryList
setListLength={(length: number) => setListLength(length)}
setLoading={(loading: boolean) => setLoading(loading)}
></MemoryList>
)}
{listLength > 0 && (
<SeeAllAppCard
click={() => handleNavigate({ isCreate: false })}

View File

@ -0,0 +1,79 @@
import { HomeCard } from '@/components/home-card';
import { MoreButton } from '@/components/more-button';
import { useNavigatePage } from '@/hooks/logic-hooks/navigate-hooks';
import { useEffect } from 'react';
import { AddOrEditModal } from '../memories/add-or-edit-modal';
import { useFetchMemoryList, useRenameMemory } from '../memories/hooks';
import { ICreateMemoryProps } from '../memories/interface';
import { MemoryDropdown } from '../memories/memory-dropdown';
export function MemoryList({
setListLength,
setLoading,
}: {
setListLength: (length: number) => void;
setLoading?: (loading: boolean) => void;
}) {
const { data, refetch: refetchList, isLoading } = useFetchMemoryList();
const { navigateToMemory } = useNavigatePage();
// const {
// openCreateModal,
// showSearchRenameModal,
// hideSearchRenameModal,
// searchRenameLoading,
// onSearchRenameOk,
// initialSearchName,
// } = useRenameSearch();
const {
openCreateModal,
showMemoryRenameModal,
hideMemoryModal,
searchRenameLoading,
onMemoryRenameOk,
initialMemory,
} = useRenameMemory();
const onMemoryConfirm = (data: ICreateMemoryProps) => {
onMemoryRenameOk(data, () => {
refetchList();
});
};
useEffect(() => {
setListLength(data?.data?.memory_list?.length || 0);
setLoading?.(isLoading || false);
}, [data, setListLength, isLoading, setLoading]);
return (
<>
{data?.data?.memory_list?.slice(0, 10).map((x) => (
<HomeCard
key={x.id}
data={{
name: x?.name,
avatar: x?.avatar,
description: x?.description,
update_time: x?.create_time,
}}
onClick={navigateToMemory(x.id)}
moreDropdown={
<MemoryDropdown
memory={x}
showMemoryRenameModal={showMemoryRenameModal}
>
<MoreButton></MoreButton>
</MemoryDropdown>
}
></HomeCard>
))}
{openCreateModal && (
<AddOrEditModal
initialMemory={initialMemory}
isCreate={false}
open={openCreateModal}
loading={searchRenameLoading}
onClose={hideMemoryModal}
onSubmit={onMemoryConfirm}
/>
)}
</>
);
}

View File

@ -1,247 +0,0 @@
import { useEffect, useMemo, useState } from 'react';
import { useFormContext } from 'react-hook-form';
import { SelectWithSearch } from '@/components/originui/select-with-search';
import { RAGFlowFormItem } from '@/components/ragflow-form';
import { Input } from '@/components/ui/input';
import { Segmented } from '@/components/ui/segmented';
import { t } from 'i18next';
// UI-only auth modes for S3
// access_key: Access Key ID + Secret
// iam_role: only Role ARN
// assume_role: no input fields (uses environment role)
type AuthMode = 'access_key' | 'iam_role' | 'assume_role';
type BlobMode = 's3' | 's3_compatible';
const modeOptions = [
{ label: 'S3', value: 's3' },
{ label: 'S3 Compatible', value: 's3_compatible' },
];
const authOptions = [
{ label: 'Access Key', value: 'access_key' },
{ label: 'IAM Role', value: 'iam_role' },
{ label: 'Assume Role', value: 'assume_role' },
];
const addressingOptions = [
{ label: 'Virtual Hosted Style', value: 'virtual' },
{ label: 'Path Style', value: 'path' },
];
const deriveInitialAuthMode = (credentials: any): AuthMode => {
const authMethod = credentials?.authentication_method;
if (authMethod === 'iam_role') return 'iam_role';
if (authMethod === 'assume_role') return 'assume_role';
if (credentials?.aws_role_arn) return 'iam_role';
if (credentials?.aws_access_key_id || credentials?.aws_secret_access_key)
return 'access_key';
return 'access_key';
};
const deriveInitialMode = (bucketType?: string): BlobMode =>
bucketType === 's3_compatible' ? 's3_compatible' : 's3';
const BlobTokenField = () => {
const form = useFormContext();
const credentials = form.watch('config.credentials');
const watchedBucketType = form.watch('config.bucket_type');
const [mode, setMode] = useState<BlobMode>(
deriveInitialMode(watchedBucketType),
);
const [authMode, setAuthMode] = useState<AuthMode>(() =>
deriveInitialAuthMode(credentials),
);
// Keep bucket_type in sync with UI mode
useEffect(() => {
const nextMode = deriveInitialMode(watchedBucketType);
setMode((prev) => (prev === nextMode ? prev : nextMode));
}, [watchedBucketType]);
useEffect(() => {
form.setValue('config.bucket_type', mode, { shouldDirty: true });
// Default addressing style for compatible mode
if (
mode === 's3_compatible' &&
!form.getValues('config.credentials.addressing_style')
) {
form.setValue('config.credentials.addressing_style', 'virtual', {
shouldDirty: false,
});
}
if (mode === 's3_compatible' && authMode !== 'access_key') {
setAuthMode('access_key');
}
// Persist authentication_method for backend
const nextAuthMethod: AuthMode =
mode === 's3_compatible' ? 'access_key' : authMode;
form.setValue('config.credentials.authentication_method', nextAuthMethod, {
shouldDirty: true,
});
// Clear errors for fields that are not relevant in the current mode/auth selection
const inactiveFields: string[] = [];
if (mode === 's3_compatible') {
inactiveFields.push('config.credentials.aws_role_arn');
} else {
if (authMode === 'iam_role') {
inactiveFields.push('config.credentials.aws_access_key_id');
inactiveFields.push('config.credentials.aws_secret_access_key');
}
if (authMode === 'assume_role') {
inactiveFields.push('config.credentials.aws_access_key_id');
inactiveFields.push('config.credentials.aws_secret_access_key');
inactiveFields.push('config.credentials.aws_role_arn');
}
}
if (inactiveFields.length) {
form.clearErrors(inactiveFields as any);
}
}, [form, mode, authMode]);
const isS3 = mode === 's3';
const requiresAccessKey =
authMode === 'access_key' || mode === 's3_compatible';
const requiresRoleArn = isS3 && authMode === 'iam_role';
// Help text for assume role (no inputs)
const assumeRoleNote = useMemo(
() => t('No credentials required. Uses the default environment role.'),
[t],
);
return (
<div className="flex flex-col gap-4">
<div className="flex flex-col gap-2">
<div className="text-sm text-text-secondary">Mode</div>
<Segmented
options={modeOptions}
value={mode}
onChange={(val) => setMode(val as BlobMode)}
className="w-full"
itemClassName="flex-1 justify-center"
/>
</div>
{isS3 && (
<div className="flex flex-col gap-2">
<div className="text-sm text-text-secondary">Authentication</div>
<Segmented
options={authOptions}
value={authMode}
onChange={(val) => setAuthMode(val as AuthMode)}
className="w-full"
itemClassName="flex-1 justify-center"
/>
</div>
)}
{requiresAccessKey && (
<RAGFlowFormItem
name="config.credentials.aws_access_key_id"
label="AWS Access Key ID"
required={requiresAccessKey}
rules={{
validate: (val) =>
requiresAccessKey
? Boolean(val) || 'Access Key ID is required'
: true,
}}
>
{(field) => (
<Input {...field} placeholder="AKIA..." autoComplete="off" />
)}
</RAGFlowFormItem>
)}
{requiresAccessKey && (
<RAGFlowFormItem
name="config.credentials.aws_secret_access_key"
label="AWS Secret Access Key"
required={requiresAccessKey}
rules={{
validate: (val) =>
requiresAccessKey
? Boolean(val) || 'Secret Access Key is required'
: true,
}}
>
{(field) => (
<Input
{...field}
type="password"
placeholder="****************"
autoComplete="new-password"
/>
)}
</RAGFlowFormItem>
)}
{requiresRoleArn && (
<RAGFlowFormItem
name="config.credentials.aws_role_arn"
label="Role ARN"
required={requiresRoleArn}
tooltip="The role will be assumed by the runtime environment."
rules={{
validate: (val) =>
requiresRoleArn ? Boolean(val) || 'Role ARN is required' : true,
}}
>
{(field) => (
<Input
{...field}
placeholder="arn:aws:iam::123456789012:role/YourRole"
autoComplete="off"
/>
)}
</RAGFlowFormItem>
)}
{isS3 && authMode === 'assume_role' && (
<div className="text-sm text-text-secondary bg-bg-card border border-border-button rounded-md px-3 py-2">
{assumeRoleNote}
</div>
)}
{mode === 's3_compatible' && (
<div className="flex flex-col gap-4">
<RAGFlowFormItem
name="config.credentials.addressing_style"
label="Addressing Style"
tooltip={t('setting.S3CompatibleAddressingStyleTip')}
required={false}
>
{(field) => (
<SelectWithSearch
triggerClassName="!shrink"
options={addressingOptions}
value={field.value || 'virtual'}
onChange={(val) => field.onChange(val)}
/>
)}
</RAGFlowFormItem>
<RAGFlowFormItem
name="config.credentials.endpoint_url"
label="Endpoint URL"
required={false}
tooltip={t('setting.S3CompatibleEndpointUrlTip')}
>
{(field) => (
<Input
{...field}
placeholder="https://fsn1.your-objectstorage.com"
autoComplete="off"
/>
)}
</RAGFlowFormItem>
</div>
)}
</div>
);
};
export default BlobTokenField;
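`deriveInitialAuthMode` prefers an explicit `authentication_method`, then infers the mode from which credential fields are present, and falls back to access keys. A condensed version with worked examples; it is shortened because both remaining branches of the original end in `'access_key'`:

~~~ ts
type AuthMode = 'access_key' | 'iam_role' | 'assume_role';

// Condensed form of deriveInitialAuthMode: explicit method first,
// then inference from present fields, then the access-key default.
function deriveAuthMode(credentials: any): AuthMode {
  const method = credentials?.authentication_method;
  if (method === 'iam_role' || method === 'assume_role') return method;
  if (credentials?.aws_role_arn) return 'iam_role';
  return 'access_key';
}

deriveAuthMode({ authentication_method: 'assume_role' }); // 'assume_role'
deriveAuthMode({ aws_role_arn: 'arn:aws:iam::1:role/x' }); // 'iam_role'
deriveAuthMode({ aws_access_key_id: 'AKIA...' }); // 'access_key'
deriveAuthMode(undefined); // 'access_key'
~~~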

View File

@ -131,7 +131,6 @@ const BoxTokenField = ({ value, onChange }: BoxTokenFieldProps) => {
const finalValue: Record<string, any> = {
...rest,
// Ensure the client config fields are set (prefer the backend-returned value, fall back to the current input)
client_id: rest.client_id ?? clientId.trim(),
client_secret: rest.client_secret ?? clientSecret.trim(),
};
@ -146,8 +145,6 @@ const BoxTokenField = ({ value, onChange }: BoxTokenFieldProps) => {
finalValue.authorization_code = code;
}
// access_token / refresh_token are returned by the backend and already included via ...rest, so no extra state is needed
onChange(JSON.stringify(finalValue));
message.success('Box authorization completed.');
clearWebState();
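The merge rule is unchanged by this diff: fields already returned by the backend (spread in via `...rest`) win, and the user's current input only fills gaps, because `??` treats `null`/`undefined` as missing. A small sketch of that precedence:

~~~ ts
// Backend-provided values win; local input only fills the gaps.
function mergeCredentials(
  rest: { client_secret?: string | null },
  localSecret: string,
) {
  return { ...rest, client_secret: rest.client_secret ?? localSecret.trim() };
}

mergeCredentials({ client_secret: 'from-backend' }, ' typed '); // 'from-backend'
mergeCredentials({}, ' typed '); // 'typed'
~~~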

View File

@ -1,200 +0,0 @@
import { useCallback, useEffect, useMemo, useState } from 'react';
import { ControllerRenderProps, useFormContext } from 'react-hook-form';
import { Checkbox } from '@/components/ui/checkbox';
import { Input } from '@/components/ui/input';
import { cn } from '@/lib/utils';
import { debounce } from 'lodash';
/* ---------------- Token Field ---------------- */
export type ConfluenceTokenFieldProps = ControllerRenderProps & {
fieldType: 'username' | 'token';
placeholder?: string;
disabled?: boolean;
};
const ConfluenceTokenField = ({
fieldType,
value,
onChange,
placeholder,
disabled,
...rest
}: ConfluenceTokenFieldProps) => {
return (
<div className="flex w-full flex-col gap-2">
<Input
className="w-full"
type={fieldType === 'token' ? 'password' : 'text'}
value={value ?? ''}
onChange={(e) => onChange(e.target.value)}
placeholder={
placeholder ||
(fieldType === 'token'
? 'Enter your Confluence access token'
: 'Confluence username or email')
}
disabled={disabled}
{...rest}
/>
</div>
);
};
/* ---------------- Indexing Mode Field ---------------- */
type ConfluenceIndexingMode = 'everything' | 'space' | 'page';
export type ConfluenceIndexingModeFieldProps = ControllerRenderProps;
export const ConfluenceIndexingModeField = (
fieldProps: ControllerRenderProps,
) => {
const { value, onChange, disabled } = fieldProps;
const [mode, setMode] = useState<ConfluenceIndexingMode>(
value || 'everything',
);
const { watch, setValue } = useFormContext();
useEffect(() => setMode(value), [value]);
const spaceValue = watch('config.space');
const pageIdValue = watch('config.page_id');
const indexRecursively = watch('config.index_recursively');
useEffect(() => {
if (!value) onChange('everything');
}, [value, onChange]);
const handleModeChange = useCallback(
(nextMode?: string) => {
let normalized: ConfluenceIndexingMode = 'everything';
if (nextMode) {
normalized = nextMode as ConfluenceIndexingMode;
setMode(normalized);
onChange(normalized);
} else {
setMode(mode);
normalized = mode;
onChange(mode);
// onChange(mode);
}
if (normalized === 'everything') {
setValue('config.space', '');
setValue('config.page_id', '');
setValue('config.index_recursively', false);
} else if (normalized === 'space') {
setValue('config.page_id', '');
setValue('config.index_recursively', false);
} else if (normalized === 'page') {
setValue('config.space', '');
}
},
[mode, onChange, setValue],
);
const debouncedHandleChange = useMemo(
() =>
debounce(() => {
handleModeChange();
}, 300),
[handleModeChange],
);
return (
<div className="w-full rounded-lg border border-border-button bg-bg-card p-4 space-y-4">
<div className="flex items-center gap-2 text-sm font-medium text-text-secondary">
{INDEX_MODE_OPTIONS.map((option) => {
const isActive = option.value === mode;
return (
<button
key={option.value}
type="button"
disabled={disabled}
onClick={() => handleModeChange(option.value)}
className={cn(
'flex-1 rounded-lg border px-3 py-2 transition-all',
'border-transparent bg-transparent text-text-secondary hover:border-border-button hover:bg-bg-card-secondary',
isActive &&
'border-border-button bg-background text-primary shadow-sm',
)}
>
{option.label}
</button>
);
})}
</div>
{mode === 'everything' && (
<p className="text-sm text-text-secondary">
This connector will index all pages the provided credentials have
access to.
</p>
)}
{mode === 'space' && (
<div className="space-y-2">
<div className="text-sm font-semibold text-text-primary">
Space Key
</div>
<Input
className="w-full"
value={spaceValue ?? ''}
onChange={(e) => {
const value = e.target.value;
setValue('config.space', value);
debouncedHandleChange();
}}
placeholder="e.g. KB"
disabled={disabled}
/>
<p className="text-xs text-text-secondary">
The Confluence space key to index.
</p>
</div>
)}
{mode === 'page' && (
<div className="space-y-2">
<div className="text-sm font-semibold text-text-primary">Page ID</div>
<Input
className="w-full"
value={pageIdValue ?? ''}
onChange={(e) => {
setValue('config.page_id', e.target.value);
debouncedHandleChange();
}}
placeholder="e.g. 123456"
disabled={disabled}
/>
<p className="text-xs text-text-secondary">
The Confluence page ID to index.
</p>
<div className="flex items-center gap-2 pt-2">
<Checkbox
checked={Boolean(indexRecursively)}
onCheckedChange={(checked) => {
setValue('config.index_recursively', Boolean(checked));
debouncedHandleChange();
}}
disabled={disabled}
/>
<span className="text-sm text-text-secondary">
Index child pages recursively
</span>
</div>
</div>
)}
</div>
);
};
const INDEX_MODE_OPTIONS = [
{ label: 'Everything', value: 'everything' },
{ label: 'Space', value: 'space' },
{ label: 'Page', value: 'page' },
];
export default ConfluenceTokenField;
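The deleted field debounces `handleModeChange` so typing in Space Key / Page ID updates form state immediately but re-runs the mode logic only after 300 ms of quiet. A standalone sketch of that memoized-debounce pattern, assuming lodash, plus the cancel-on-unmount step the original omits:

~~~ ts
import { useEffect, useMemo } from 'react';
import { debounce } from 'lodash';

// Memoize one debounced wrapper per handler identity and cancel any
// pending invocation on unmount or when the handler changes.
function useDebounced(fn: () => void, waitMs = 300) {
  const debounced = useMemo(() => debounce(fn, waitMs), [fn, waitMs]);
  useEffect(() => () => debounced.cancel(), [debounced]);
  return debounced;
}
~~~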

View File

@ -0,0 +1,83 @@
import { FilterFormField, FormFieldType } from '@/components/dynamic-form';
import { TFunction } from 'i18next';
export const bitbucketConstant = (t: TFunction) => [
{
label: 'Bitbucket Account Email',
name: 'config.credentials.bitbucket_account_email',
type: FormFieldType.Email,
required: true,
},
{
label: 'Bitbucket API Token',
name: 'config.credentials.bitbucket_api_token',
type: FormFieldType.Password,
required: true,
},
{
label: 'Workspace',
name: 'config.workspace',
type: FormFieldType.Text,
required: true,
tooltip: t('setting.bitbucketTopWorkspaceTip'),
},
{
label: 'Index Mode',
name: 'config.index_mode',
type: FormFieldType.Segmented,
options: [
{ label: 'Repositories', value: 'repositories' },
{ label: 'Project(s)', value: 'projects' },
{ label: 'Workspace', value: 'workspace' },
],
},
{
label: 'Repository Slugs',
name: 'config.repository_slugs',
type: FormFieldType.Text,
customValidate: (val: string, formValues: any) => {
const index_mode = formValues?.config?.index_mode;
if (!val && index_mode === 'repositories') {
return 'Repository Slugs is required';
}
return true;
},
shouldRender: (formValues: any) => {
const index_mode = formValues?.config?.index_mode;
return index_mode === 'repositories';
},
tooltip: t('setting.bitbucketRepositorySlugsTip'),
},
{
label: 'Projects',
name: 'config.projects',
type: FormFieldType.Text,
customValidate: (val: string, formValues: any) => {
const index_mode = formValues?.config?.index_mode;
if (!val && index_mode === 'projects') {
return 'Projects is required';
}
return true;
},
shouldRender: (formValues: any) => {
const index_mode = formValues?.config?.index_mode;
return index_mode === 'projects';
},
tooltip: t('setting.bitbucketProjectsTip'),
},
{
name: FilterFormField + '.tip',
label: ' ',
type: FormFieldType.Custom,
shouldRender: (formValues: any) => {
const index_mode = formValues?.config?.index_mode;
return index_mode === 'workspace';
},
render: () => (
<div className="text-sm text-text-secondary bg-bg-card border border-border-button rounded-md px-3 py-2">
{t('setting.bitbucketWorkspaceTip')}
</div>
),
},
];
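Each Bitbucket field couples `shouldRender` with `customValidate`, so both visibility and requiredness follow `config.index_mode`. A sketch of how a dynamic-form runner might evaluate such declarations; the field shape is inferred from this file, not taken from the project's actual `dynamic-form` component:

~~~ ts
// Field shape inferred from the constants above (an assumption).
interface DynamicField {
  name: string;
  shouldRender?: (values: any) => boolean;
  customValidate?: (val: any, values: any) => true | string;
}

// Validate only the fields that would actually render.
function validateVisible(fields: DynamicField[], values: any): string[] {
  const errors: string[] = [];
  for (const f of fields) {
    if (f.shouldRender && !f.shouldRender(values)) continue;
    const val = f.name.split('.').reduce((o, k) => o?.[k], values);
    const result = f.customValidate?.(val, values);
    if (typeof result === 'string') errors.push(`${f.name}: ${result}`);
  }
  return errors;
}

// e.g. with index_mode === 'repositories' and an empty slug field, this
// yields ['config.repository_slugs: Repository Slugs is required'].
~~~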

View File

@ -0,0 +1,121 @@
import { FilterFormField, FormFieldType } from '@/components/dynamic-form';
import { TFunction } from 'i18next';
export const confluenceConstant = (t: TFunction) => [
{
label: 'Confluence Username',
name: 'config.credentials.confluence_username',
type: FormFieldType.Text,
required: true,
tooltip: t('setting.connectorNameTip'),
},
{
label: 'Confluence Access Token',
name: 'config.credentials.confluence_access_token',
type: FormFieldType.Password,
required: true,
},
{
label: 'Wiki Base URL',
name: 'config.wiki_base',
type: FormFieldType.Text,
required: false,
tooltip: t('setting.confluenceWikiBaseUrlTip'),
},
{
label: 'Is Cloud',
name: 'config.is_cloud',
type: FormFieldType.Checkbox,
required: false,
tooltip: t('setting.confluenceIsCloudTip'),
},
{
label: 'Index Mode',
name: 'config.index_mode',
type: FormFieldType.Segmented,
options: [
{ label: 'Everything', value: 'everything' },
{ label: 'Space', value: 'space' },
{ label: 'Page', value: 'page' },
],
},
{
name: 'config.page_id',
label: 'Page ID',
type: FormFieldType.Text,
customValidate: (val: string, formValues: any) => {
const index_mode = formValues?.config?.index_mode;
if (!val && index_mode === 'page') {
return 'Page ID is required';
}
return true;
},
shouldRender: (formValues: any) => {
const index_mode = formValues?.config?.index_mode;
return index_mode === 'page';
},
},
{
name: 'config.space',
label: 'Space Key',
type: FormFieldType.Text,
customValidate: (val: string, formValues: any) => {
const index_mode = formValues?.config?.index_mode;
if (!val && index_mode === 'space') {
return 'Space Key is required';
}
return true;
},
shouldRender: (formValues: any) => {
const index_mode = formValues?.config?.index_mode;
return index_mode === 'space';
},
},
{
name: 'config.index_recursively',
label: 'Index Recursively',
type: FormFieldType.Checkbox,
shouldRender: (formValues: any) => {
const index_mode = formValues?.config?.index_mode;
return index_mode === 'page';
},
},
{
name: FilterFormField + '.tip',
label: ' ',
type: FormFieldType.Custom,
shouldRender: (formValues: any) => {
const index_mode = formValues?.config?.index_mode;
return index_mode === 'everything';
},
render: () => (
<div className="text-sm text-text-secondary bg-bg-card border border-border-button rounded-md px-3 py-2">
{
'This choice will index all pages the provided credentials have access to.'
}
</div>
),
},
{
label: 'Space Key',
name: 'config.space',
type: FormFieldType.Text,
required: false,
hidden: true,
},
{
label: 'Page ID',
name: 'config.page_id',
type: FormFieldType.Text,
required: false,
hidden: true,
},
{
label: 'Index Recursively',
name: 'config.index_recursively',
type: FormFieldType.Checkbox,
required: false,
hidden: true,
},
];

View File

@ -4,11 +4,13 @@ import { t, TFunction } from 'i18next';
import { useEffect, useState } from 'react';
import { useTranslation } from 'react-i18next';
import BoxTokenField from '../component/box-token-field';
import { ConfluenceIndexingModeField } from '../component/confluence-token-field';
import GmailTokenField from '../component/gmail-token-field';
import GoogleDriveTokenField from '../component/google-drive-token-field';
import { IDataSourceInfoMap } from '../interface';
import { bitbucketConstant } from './bitbucket-constant';
import { confluenceConstant } from './confluence-constant';
import { S3Constant } from './s3-constant';
export enum DataSourceKey {
CONFLUENCE = 'confluence',
S3 = 's3',
@ -29,6 +31,8 @@ export enum DataSourceKey {
ASANA = 'asana',
IMAP = 'imap',
GITHUB = 'github',
BITBUCKET = 'bitbucket',
ZENDESK = 'zendesk',
// SHAREPOINT = 'sharepoint',
// SLACK = 'slack',
// TEAMS = 'teams',
@ -133,6 +137,16 @@ export const generateDataSourceInfo = (t: TFunction) => {
description: t(`setting.${DataSourceKey.IMAP}Description`),
icon: <SvgIcon name={'data-source/imap'} width={38} />,
},
[DataSourceKey.BITBUCKET]: {
name: 'Bitbucket',
description: t(`setting.${DataSourceKey.BITBUCKET}Description`),
icon: <SvgIcon name={'data-source/bitbucket'} width={38} />,
},
[DataSourceKey.ZENDESK]: {
name: 'Zendesk',
description: t(`setting.${DataSourceKey.ZENDESK}Description`),
icon: <SvgIcon name={'data-source/zendesk'} width={38} />,
},
};
};
@ -288,67 +302,7 @@ export const DataSourceFormFields = {
},
],
[DataSourceKey.CONFLUENCE]: [
{
label: 'Confluence Username',
name: 'config.credentials.confluence_username',
type: FormFieldType.Text,
required: true,
tooltip: 'A descriptive name for the connector.',
},
{
label: 'Confluence Access Token',
name: 'config.credentials.confluence_access_token',
type: FormFieldType.Password,
required: true,
},
{
label: 'Wiki Base URL',
name: 'config.wiki_base',
type: FormFieldType.Text,
required: false,
tooltip: t('setting.confluenceWikiBaseUrlTip'),
},
{
label: 'Is Cloud',
name: 'config.is_cloud',
type: FormFieldType.Checkbox,
required: false,
tooltip: t('setting.confluenceIsCloudTip'),
},
{
label: 'Index Method',
name: 'config.index_mode',
type: FormFieldType.Text,
required: false,
horizontal: true,
labelClassName: 'self-start pt-4',
render: (fieldProps: any) => (
<ConfluenceIndexingModeField {...fieldProps} />
),
},
{
label: 'Space Key',
name: 'config.space',
type: FormFieldType.Text,
required: false,
hidden: true,
},
{
label: 'Page ID',
name: 'config.page_id',
type: FormFieldType.Text,
required: false,
hidden: true,
},
{
label: 'Index Recursively',
name: 'config.index_recursively',
type: FormFieldType.Checkbox,
required: false,
hidden: true,
},
],
[DataSourceKey.CONFLUENCE]: confluenceConstant(t),
[DataSourceKey.GOOGLE_DRIVE]: [
{
label: 'Primary Admin Email',
@ -822,6 +776,37 @@ export const DataSourceFormFields = {
required: false,
},
],
[DataSourceKey.BITBUCKET]: bitbucketConstant(t),
[DataSourceKey.ZENDESK]: [
{
label: 'Zendesk Domain',
name: 'config.credentials.zendesk_subdomain',
type: FormFieldType.Text,
required: true,
},
{
label: 'Zendesk Email',
name: 'config.credentials.zendesk_email',
type: FormFieldType.Text,
required: true,
},
{
label: 'Zendesk Token',
name: 'config.credentials.zendesk_token',
type: FormFieldType.Password,
required: true,
},
{
label: 'Content',
name: 'config.zendesk_content_type',
type: FormFieldType.Segmented,
required: true,
options: [
{ label: 'Articles', value: 'articles' },
{ label: 'Tickets', value: 'tickets' },
],
},
],
};
export const DataSourceFormDefaultValues = {
@ -883,6 +868,7 @@ export const DataSourceFormDefaultValues = {
wiki_base: '',
is_cloud: true,
space: '',
page_id: '',
credentials: {
confluence_username: '',
confluence_access_token: '',
@ -1076,4 +1062,30 @@ export const DataSourceFormDefaultValues = {
},
},
},
[DataSourceKey.BITBUCKET]: {
name: '',
source: DataSourceKey.BITBUCKET,
config: {
workspace: '',
index_mode: 'workspace',
repository_slugs: '',
projects: '',
credentials: {
bitbucket_account_email: '',
bitbucket_api_token: '',
},
},
},
[DataSourceKey.ZENDESK]: {
name: '',
source: DataSourceKey.ZENDESK,
config: {
name: '',
zendesk_content_type: 'articles',
credentials: {
zendesk_subdomain: '',
zendesk_email: '',
zendesk_token: '',
},
},
},
};
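Each connector is registered three times under the same `DataSourceKey`: display info, form field declarations, and form defaults. A sketch of resolving one connector's definition from those maps; the names used are this file's own exports:

~~~ ts
// Resolve everything the settings form needs for one connector key.
// DataSourceFormFields and DataSourceFormDefaultValues are the maps
// defined in this file; generateDataSourceInfo supplies labels/icons.
function resolveConnector(key: DataSourceKey) {
  return {
    fields: (DataSourceFormFields as any)[key] ?? [],
    defaults: (DataSourceFormDefaultValues as any)[key] ?? {},
  };
}

// e.g. resolveConnector(DataSourceKey.BITBUCKET) returns the Bitbucket
// field list from bitbucketConstant(t) plus its default config.
~~~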

View File

@ -5,7 +5,7 @@ import styles from './index.less';
const ApiPage = () => {
return (
<div className={styles.apiWrapper}>
<ApiContent idKey="dialogId" hideChatPreviewCard></ApiContent>
<ApiContent idKey="dialogId"></ApiContent>
</div>
);
};

View File

@ -45,11 +45,7 @@ export function LangfuseCard() {
<Eye /> {t('setting.view')}
</Button>
)}
<Button
size={'sm'}
onClick={showSaveLangfuseConfigurationModal}
className="bg-blue-500 hover:bg-blue-400"
>
<Button size={'sm'} onClick={showSaveLangfuseConfigurationModal}>
<Settings2 />
{t('setting.configuration')}
</Button>