Mirror of https://github.com/infiniflow/ragflow.git (synced 2026-01-04 03:25:30 +08:00)

Compare commits: 6a664fea3b ... ca3bd2cf9f (6 commits)

| SHA1 |
|---|
| ca3bd2cf9f |
| eb661c028d |
| 10c28c5ecd |
| 96810b7d97 |
| 365f9b01ae |
| 7d4d687dde |
@@ -19,17 +19,17 @@ RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/huggingface.co
 # This is the only way to run python-tika without internet access. Without this set, the default is to check the tika version and pull latest every time from Apache.
 RUN --mount=type=bind,from=infiniflow/ragflow_deps:latest,source=/,target=/deps \
     cp -r /deps/nltk_data /root/ && \
-    cp /deps/tika-server-standard-3.0.0.jar /deps/tika-server-standard-3.0.0.jar.md5 /ragflow/ && \
+    cp /deps/tika-server-standard-3.2.3.jar /deps/tika-server-standard-3.2.3.jar.md5 /ragflow/ && \
     cp /deps/cl100k_base.tiktoken /ragflow/9b5ad71b2ce5302211f9c61530b329a4922fc6a4

-ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard-3.0.0.jar"
+ENV TIKA_SERVER_JAR="file:///ragflow/tika-server-standard-3.2.3.jar"
 ENV DEBIAN_FRONTEND=noninteractive

 # Setup apt
 # Python package and implicit dependencies:
 # opencv-python: libglib2.0-0 libglx-mesa0 libgl1
 # aspose-slides: pkg-config libicu-dev libgdiplus libssl1.1_1.1.1f-1ubuntu2_amd64.deb
-# python-pptx: default-jdk tika-server-standard-3.0.0.jar
+# python-pptx: default-jdk tika-server-standard-3.2.3.jar
 # selenium: libatk-bridge2.0-0 chrome-linux64-121-0-6167-85
 # Building C extensions: libpython3-dev libgtk-4-1 libnss3 xdg-utils libgbm-dev
 RUN --mount=type=cache,id=ragflow_apt,target=/var/cache/apt,sharing=locked \

@@ -3,7 +3,7 @@
 FROM scratch

 # Copy resources downloaded via download_deps.py
-COPY chromedriver-linux64-121-0-6167-85 chrome-linux64-121-0-6167-85 cl100k_base.tiktoken libssl1.1_1.1.1f-1ubuntu2_amd64.deb libssl1.1_1.1.1f-1ubuntu2_arm64.deb tika-server-standard-3.0.0.jar tika-server-standard-3.0.0.jar.md5 libssl*.deb uv-x86_64-unknown-linux-gnu.tar.gz /
+COPY chromedriver-linux64-121-0-6167-85 chrome-linux64-121-0-6167-85 cl100k_base.tiktoken libssl1.1_1.1.1f-1ubuntu2_amd64.deb libssl1.1_1.1.1f-1ubuntu2_arm64.deb tika-server-standard-3.2.3.jar tika-server-standard-3.2.3.jar.md5 libssl*.deb uv-x86_64-unknown-linux-gnu.tar.gz /

 COPY nltk_data /nltk_data
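The `TIKA_SERVER_JAR` setting above is what lets python-tika run with no internet access: when it points at a local jar, the client skips the version check and download from Apache. A minimal sketch of the consumer side (the jar path matches this image; the document path is a placeholder):

```python
import os

# Must be set before the first tika import/use; points the client at the
# pre-copied jar so no network fetch is attempted (path assumes this image).
os.environ["TIKA_SERVER_JAR"] = "file:///ragflow/tika-server-standard-3.2.3.jar"

from tika import parser  # noqa: E402  (imported after the env var on purpose)

# "sample.pdf" is illustrative; tika spawns the local server jar on first use.
parsed = parser.from_file("sample.pdf")
print(parsed["content"])
```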
@@ -72,7 +72,7 @@
 ## 💡 What is RAGFlow?

-[RAGFlow](https://ragflow.io/) is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs. It offers a streamlined RAG workflow adaptable to enterprises of any scale. Powered by a converged context engine and pre-built agent templates, RAGFlow enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.
+[RAGFlow](https://ragflow.io/) is a leading open-source Retrieval-Augmented Generation ([RAG](https://ragflow.io/basics/what-is-rag)) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs. It offers a streamlined RAG workflow adaptable to enterprises of any scale. Powered by a converged [context engine](https://ragflow.io/basics/what-is-agent-context-engine) and pre-built agent templates, RAGFlow enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.

 ## 🎮 Demo

@@ -192,11 +192,11 @@ releases! 🌟
 ```bash
 $ cd ragflow/docker

-# git checkout v0.23.1
+# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
 # This step ensures the **entrypoint.sh** file in the code matches the Docker image version.

 # Use CPU for DeepDoc tasks:
 $ docker compose -f docker-compose.yml up -d
@@ -72,7 +72,7 @@
 ## 💡 Apa Itu RAGFlow?

-[RAGFlow](https://ragflow.io/) adalah mesin RAG (Retrieval-Augmented Generation) open-source terkemuka yang mengintegrasikan teknologi RAG mutakhir dengan kemampuan Agent untuk menciptakan lapisan kontekstual superior bagi LLM. Menyediakan alur kerja RAG yang efisien dan dapat diadaptasi untuk perusahaan segala skala. Didukung oleh mesin konteks terkonvergensi dan template Agent yang telah dipra-bangun, RAGFlow memungkinkan pengembang mengubah data kompleks menjadi sistem AI kesetiaan-tinggi dan siap-produksi dengan efisiensi dan presisi yang luar biasa.
+[RAGFlow](https://ragflow.io/) adalah mesin [RAG](https://ragflow.io/basics/what-is-rag) (Retrieval-Augmented Generation) open-source terkemuka yang mengintegrasikan teknologi RAG mutakhir dengan kemampuan Agent untuk menciptakan lapisan kontekstual superior bagi LLM. Menyediakan alur kerja RAG yang efisien dan dapat diadaptasi untuk perusahaan segala skala. Didukung oleh mesin konteks terkonvergensi dan template Agent yang telah dipra-bangun, RAGFlow memungkinkan pengembang mengubah data kompleks menjadi sistem AI kesetiaan-tinggi dan siap-produksi dengan efisiensi dan presisi yang luar biasa.

 ## 🎮 Demo

@@ -192,7 +192,7 @@ Coba demo kami di [https://demo.ragflow.io](https://demo.ragflow.io).
 ```bash
 $ cd ragflow/docker

-# git checkout v0.23.1
+# Opsional: gunakan tag stabil (lihat releases: https://github.com/infiniflow/ragflow/releases)
 # This step ensures the **entrypoint.sh** file in the code matches the Docker image version.
@@ -53,7 +53,7 @@
 ## 💡 RAGFlow とは?

-[RAGFlow](https://ragflow.io/) は、先進的なRAG(Retrieval-Augmented Generation)技術と Agent 機能を融合し、大規模言語モデル(LLM)に優れたコンテキスト層を構築する最先端のオープンソース RAG エンジンです。あらゆる規模の企業に対応可能な合理化された RAG ワークフローを提供し、統合型コンテキストエンジンと事前構築されたAgentテンプレートにより、開発者が複雑なデータを驚異的な効率性と精度で高精細なプロダクションレディAIシステムへ変換することを可能にします。
+[RAGFlow](https://ragflow.io/) は、先進的な[RAG](https://ragflow.io/basics/what-is-rag)(Retrieval-Augmented Generation)技術と Agent 機能を融合し、大規模言語モデル(LLM)に優れたコンテキスト層を構築する最先端のオープンソース RAG エンジンです。あらゆる規模の企業に対応可能な合理化された RAG ワークフローを提供し、統合型[コンテキストエンジン](https://ragflow.io/basics/what-is-agent-context-engine)と事前構築されたAgentテンプレートにより、開発者が複雑なデータを驚異的な効率性と精度で高精細なプロダクションレディAIシステムへ変換することを可能にします。

 ## 🎮 Demo

@@ -194,8 +194,8 @@
 > `v0.22.0` 以降、当プロジェクトでは slim エディションのみを提供し、イメージタグに **-slim** サフィックスを付けなくなりました。

-1. サーバーを立ち上げた後、サーバーの状態を確認する:
+1. サーバーを立ち上げた後、サーバーの状態を確認する:

 ```bash
 $ docker logs -f docker-ragflow-cpu-1
 ```
@@ -54,7 +54,7 @@
 ## 💡 RAGFlow란?

-[RAGFlow](https://ragflow.io/) 는 최첨단 RAG(Retrieval-Augmented Generation)와 Agent 기능을 융합하여 대규모 언어 모델(LLM)을 위한 우수한 컨텍스트 계층을 생성하는 선도적인 오픈소스 RAG 엔진입니다. 모든 규모의 기업에 적용 가능한 효율적인 RAG 워크플로를 제공하며, 통합 컨텍스트 엔진과 사전 구축된 Agent 템플릿을 통해 개발자들이 복잡한 데이터를 예외적인 효율성과 정밀도로 고급 구현도의 프로덕션 준비 완료 AI 시스템으로 변환할 수 있도록 지원합니다.
+[RAGFlow](https://ragflow.io/) 는 최첨단 [RAG](https://ragflow.io/basics/what-is-rag)(Retrieval-Augmented Generation)와 Agent 기능을 융합하여 대규모 언어 모델(LLM)을 위한 우수한 컨텍스트 계층을 생성하는 선도적인 오픈소스 RAG 엔진입니다. 모든 규모의 기업에 적용 가능한 효율적인 RAG 워크플로를 제공하며, 통합 [컨텍스트 엔진](https://ragflow.io/basics/what-is-agent-context-engine)과 사전 구축된 Agent 템플릿을 통해 개발자들이 복잡한 데이터를 예외적인 효율성과 정밀도로 고급 구현도의 프로덕션 준비 완료 AI 시스템으로 변환할 수 있도록 지원합니다.

 ## 🎮 데모

@@ -174,7 +174,7 @@
 ```bash
 $ cd ragflow/docker

-# git checkout v0.23.1
+# Optional: use a stable tag (see releases: https://github.com/infiniflow/ragflow/releases)
 # 이 단계는 코드의 entrypoint.sh 파일이 Docker 이미지 버전과 일치하도록 보장합니다.
@@ -73,7 +73,7 @@
 ## 💡 O que é o RAGFlow?

-[RAGFlow](https://ragflow.io/) é um mecanismo de RAG (Retrieval-Augmented Generation) open-source líder que fusiona tecnologias RAG de ponta com funcionalidades Agent para criar uma camada contextual superior para LLMs. Oferece um fluxo de trabalho RAG otimizado adaptável a empresas de qualquer escala. Alimentado por um motor de contexto convergente e modelos Agent pré-construídos, o RAGFlow permite que desenvolvedores transformem dados complexos em sistemas de IA de alta fidelidade e pronto para produção com excepcional eficiência e precisão.
+[RAGFlow](https://ragflow.io/) é um mecanismo de [RAG](https://ragflow.io/basics/what-is-rag) (Retrieval-Augmented Generation) open-source líder que fusiona tecnologias RAG de ponta com funcionalidades Agent para criar uma camada contextual superior para LLMs. Oferece um fluxo de trabalho RAG otimizado adaptável a empresas de qualquer escala. Alimentado por [um motor de contexto](https://ragflow.io/basics/what-is-agent-context-engine) convergente e modelos Agent pré-construídos, o RAGFlow permite que desenvolvedores transformem dados complexos em sistemas de IA de alta fidelidade e pronto para produção com excepcional eficiência e precisão.

 ## 🎮 Demo

@@ -192,7 +192,7 @@ Experimente nossa demo em [https://demo.ragflow.io](https://demo.ragflow.io).
 ```bash
 $ cd ragflow/docker

-# git checkout v0.23.1
+# Opcional: use uma tag estável (veja releases: https://github.com/infiniflow/ragflow/releases)
 # Esta etapa garante que o arquivo entrypoint.sh no código corresponda à versão da imagem do Docker.
@@ -72,7 +72,7 @@
 ## 💡 RAGFlow 是什麼?

-[RAGFlow](https://ragflow.io/) 是一款領先的開源 RAG(Retrieval-Augmented Generation)引擎,通過融合前沿的 RAG 技術與 Agent 能力,為大型語言模型提供卓越的上下文層。它提供可適配任意規模企業的端到端 RAG 工作流,憑藉融合式上下文引擎與預置的 Agent 模板,助力開發者以極致效率與精度將複雜數據轉化為高可信、生產級的人工智能系統。
+[RAGFlow](https://ragflow.io/) 是一款領先的開源 [RAG](https://ragflow.io/basics/what-is-rag)(Retrieval-Augmented Generation)引擎,通過融合前沿的 RAG 技術與 Agent 能力,為大型語言模型提供卓越的上下文層。它提供可適配任意規模企業的端到端 RAG 工作流,憑藉融合式[上下文引擎](https://ragflow.io/basics/what-is-agent-context-engine)與預置的 Agent 模板,助力開發者以極致效率與精度將複雜數據轉化為高可信、生產級的人工智能系統。

 ## 🎮 Demo 試用

@@ -191,7 +191,7 @@
 ```bash
 $ cd ragflow/docker

-# git checkout v0.23.1
+# 可選:使用穩定版標籤(查看發佈:https://github.com/infiniflow/ragflow/releases)
 # 此步驟確保程式碼中的 entrypoint.sh 檔案與 Docker 映像版本一致。
@@ -72,7 +72,7 @@
 ## 💡 RAGFlow 是什么?

-[RAGFlow](https://ragflow.io/) 是一款领先的开源检索增强生成(RAG)引擎,通过融合前沿的 RAG 技术与 Agent 能力,为大型语言模型提供卓越的上下文层。它提供可适配任意规模企业的端到端 RAG 工作流,凭借融合式上下文引擎与预置的 Agent 模板,助力开发者以极致效率与精度将复杂数据转化为高可信、生产级的人工智能系统。
+[RAGFlow](https://ragflow.io/) 是一款领先的开源检索增强生成([RAG](https://ragflow.io/basics/what-is-rag))引擎,通过融合前沿的 RAG 技术与 Agent 能力,为大型语言模型提供卓越的上下文层。它提供可适配任意规模企业的端到端 RAG 工作流,凭借融合式[上下文引擎](https://ragflow.io/basics/what-is-agent-context-engine)与预置的 Agent 模板,助力开发者以极致效率与精度将复杂数据转化为高可信、生产级的人工智能系统。

 ## 🎮 Demo 试用

@@ -192,7 +192,7 @@
 ```bash
 $ cd ragflow/docker

-# git checkout v0.23.1
+# 可选:使用稳定版本标签(查看发布:https://github.com/infiniflow/ragflow/releases)
 # 这一步确保代码中的 entrypoint.sh 文件与 Docker 镜像的版本保持一致。

@@ -204,7 +204,7 @@
 # sed -i '1i DEVICE=gpu' .env
 # docker compose -f docker-compose.yml up -d
 ```

 > 注意:在 `v0.22.0` 之前的版本,我们会同时提供包含 embedding 模型的镜像和不含 embedding 模型的 slim 镜像。具体如下:

 | RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
@@ -133,6 +133,7 @@ class FileSource(StrEnum):
     GITHUB = "github"
     GITLAB = "gitlab"
     IMAP = "imap"
+    BITBUCKET = "bitbucket"
     ZENDESK = "zendesk"


 class PipelineTaskType(StrEnum):
@@ -34,7 +34,6 @@ from .google_drive.connector import GoogleDriveConnector
 from .jira.connector import JiraConnector
 from .sharepoint_connector import SharePointConnector
 from .teams_connector import TeamsConnector
-from .webdav_connector import WebDAVConnector
 from .moodle_connector import MoodleConnector
 from .airtable_connector import AirtableConnector
 from .asana_connector import AsanaConnector

@@ -62,7 +61,6 @@ __all__ = [
     "JiraConnector",
     "SharePointConnector",
     "TeamsConnector",
-    "WebDAVConnector",
     "MoodleConnector",
     "BlobType",
     "DocumentSource",
New file: common/data_source/bitbucket/__init__.py (0 lines)
New file: common/data_source/bitbucket/connector.py (388 lines)
@@ -0,0 +1,388 @@
from __future__ import annotations

import copy
from collections.abc import Callable
from collections.abc import Iterator
from datetime import datetime
from datetime import timezone
from typing import Any
from typing import TYPE_CHECKING

from typing_extensions import override

from common.data_source.config import INDEX_BATCH_SIZE
from common.data_source.config import DocumentSource
from common.data_source.config import REQUEST_TIMEOUT_SECONDS
from common.data_source.exceptions import (
    ConnectorMissingCredentialError,
    CredentialExpiredError,
    InsufficientPermissionsError,
    UnexpectedValidationError,
)
from common.data_source.interfaces import CheckpointedConnector
from common.data_source.interfaces import CheckpointOutput
from common.data_source.interfaces import IndexingHeartbeatInterface
from common.data_source.interfaces import SecondsSinceUnixEpoch
from common.data_source.interfaces import SlimConnectorWithPermSync
from common.data_source.models import ConnectorCheckpoint
from common.data_source.models import ConnectorFailure
from common.data_source.models import DocumentFailure
from common.data_source.models import SlimDocument
from common.data_source.bitbucket.utils import (
    build_auth_client,
    list_repositories,
    map_pr_to_document,
    paginate,
    PR_LIST_RESPONSE_FIELDS,
    SLIM_PR_LIST_RESPONSE_FIELDS,
)

if TYPE_CHECKING:
    import httpx


class BitbucketConnectorCheckpoint(ConnectorCheckpoint):
    """Checkpoint state for resumable Bitbucket PR indexing.

    Fields:
        repos_queue: Materialized list of repository slugs to process.
        current_repo_index: Index of the repository currently being processed.
        next_url: Bitbucket "next" URL for continuing pagination within the current repo.
    """

    repos_queue: list[str] = []
    current_repo_index: int = 0
    next_url: str | None = None


class BitbucketConnector(
    CheckpointedConnector[BitbucketConnectorCheckpoint],
    SlimConnectorWithPermSync,
):
    """Connector for indexing Bitbucket Cloud pull requests.

    Args:
        workspace: Bitbucket workspace ID.
        repositories: Comma-separated list of repository slugs to index.
        projects: Comma-separated list of project keys to index all repositories within.
        batch_size: Max number of documents to yield per batch.
    """

    def __init__(
        self,
        workspace: str,
        repositories: str | None = None,
        projects: str | None = None,
        batch_size: int = INDEX_BATCH_SIZE,
    ) -> None:
        self.workspace = workspace
        self._repositories = (
            [s.strip() for s in repositories.split(",") if s.strip()]
            if repositories
            else None
        )
        self._projects: list[str] | None = (
            [s.strip() for s in projects.split(",") if s.strip()] if projects else None
        )
        self.batch_size = batch_size
        self.email: str | None = None
        self.api_token: str | None = None

    def load_credentials(self, credentials: dict[str, Any]) -> dict[str, Any] | None:
        """Load API token-based credentials.

        Expects a dict with keys: `bitbucket_email`, `bitbucket_api_token`.
        """
        self.email = credentials.get("bitbucket_email")
        self.api_token = credentials.get("bitbucket_api_token")
        if not self.email or not self.api_token:
            raise ConnectorMissingCredentialError("Bitbucket")
        return None

    def _client(self) -> httpx.Client:
        """Build an authenticated HTTP client or raise if credentials missing."""
        if not self.email or not self.api_token:
            raise ConnectorMissingCredentialError("Bitbucket")
        return build_auth_client(self.email, self.api_token)

    def _iter_pull_requests_for_repo(
        self,
        client: httpx.Client,
        repo_slug: str,
        params: dict[str, Any] | None = None,
        start_url: str | None = None,
        on_page: Callable[[str | None], None] | None = None,
    ) -> Iterator[dict[str, Any]]:
        base = f"https://api.bitbucket.org/2.0/repositories/{self.workspace}/{repo_slug}/pullrequests"
        yield from paginate(
            client,
            base,
            params,
            start_url=start_url,
            on_page=on_page,
        )

    def _build_params(
        self,
        fields: str = PR_LIST_RESPONSE_FIELDS,
        start: SecondsSinceUnixEpoch | None = None,
        end: SecondsSinceUnixEpoch | None = None,
    ) -> dict[str, Any]:
        """Build Bitbucket fetch params.

        Always include OPEN, MERGED, and DECLINED PRs. If both ``start`` and
        ``end`` are provided, apply a single updated_on time window.
        """

        def _iso(ts: SecondsSinceUnixEpoch) -> str:
            return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()

        def _tc_epoch(
            lower_epoch: SecondsSinceUnixEpoch | None,
            upper_epoch: SecondsSinceUnixEpoch | None,
        ) -> str | None:
            if lower_epoch is not None and upper_epoch is not None:
                lower_iso = _iso(lower_epoch)
                upper_iso = _iso(upper_epoch)
                return f'(updated_on > "{lower_iso}" AND updated_on <= "{upper_iso}")'
            return None

        params: dict[str, Any] = {"fields": fields, "pagelen": 50}
        time_clause = _tc_epoch(start, end)
        q = '(state = "OPEN" OR state = "MERGED" OR state = "DECLINED")'
        if time_clause:
            q = f"{q} AND {time_clause}"
        params["q"] = q
        return params
    def _iter_target_repositories(self, client: httpx.Client) -> Iterator[str]:
        """Yield repository slugs based on configuration.

        Priority:
        - repositories list
        - projects list (list repos by project key)
        - workspace (all repos)
        """
        if self._repositories:
            for slug in self._repositories:
                yield slug
            return
        if self._projects:
            for project_key in self._projects:
                for repo in list_repositories(client, self.workspace, project_key):
                    slug_val = repo.get("slug")
                    if isinstance(slug_val, str) and slug_val:
                        yield slug_val
            return
        for repo in list_repositories(client, self.workspace, None):
            slug_val = repo.get("slug")
            if isinstance(slug_val, str) and slug_val:
                yield slug_val

    @override
    def load_from_checkpoint(
        self,
        start: SecondsSinceUnixEpoch,
        end: SecondsSinceUnixEpoch,
        checkpoint: BitbucketConnectorCheckpoint,
    ) -> CheckpointOutput[BitbucketConnectorCheckpoint]:
        """Resumable PR ingestion across repos and pages within a time window.

        Yields Documents (or ConnectorFailure for per-PR mapping failures) and returns
        an updated checkpoint that records repo position and next page URL.
        """
        new_checkpoint = copy.deepcopy(checkpoint)

        with self._client() as client:
            # Materialize target repositories once
            if not new_checkpoint.repos_queue:
                # Preserve explicit order; otherwise ensure deterministic ordering
                repos_list = list(self._iter_target_repositories(client))
                new_checkpoint.repos_queue = sorted(set(repos_list))
                new_checkpoint.current_repo_index = 0
                new_checkpoint.next_url = None

            repos = new_checkpoint.repos_queue
            if not repos or new_checkpoint.current_repo_index >= len(repos):
                new_checkpoint.has_more = False
                return new_checkpoint

            repo_slug = repos[new_checkpoint.current_repo_index]

            first_page_params = self._build_params(
                fields=PR_LIST_RESPONSE_FIELDS,
                start=start,
                end=end,
            )

            def _on_page(next_url: str | None) -> None:
                new_checkpoint.next_url = next_url

            for pr in self._iter_pull_requests_for_repo(
                client,
                repo_slug,
                params=first_page_params,
                start_url=new_checkpoint.next_url,
                on_page=_on_page,
            ):
                try:
                    document = map_pr_to_document(pr, self.workspace, repo_slug)
                    yield document
                except Exception as e:
                    pr_id = pr.get("id")
                    pr_link = (
                        f"https://bitbucket.org/{self.workspace}/{repo_slug}/pull-requests/{pr_id}"
                        if pr_id is not None
                        else None
                    )
                    yield ConnectorFailure(
                        failed_document=DocumentFailure(
                            document_id=(
                                f"{DocumentSource.BITBUCKET.value}:{self.workspace}:{repo_slug}:pr:{pr_id}"
                                if pr_id is not None
                                else f"{DocumentSource.BITBUCKET.value}:{self.workspace}:{repo_slug}:pr:unknown"
                            ),
                            document_link=pr_link,
                        ),
                        failure_message=f"Failed to process Bitbucket PR: {e}",
                        exception=e,
                    )

            # Advance to next repository (if any) and set has_more accordingly
            new_checkpoint.current_repo_index += 1
            new_checkpoint.next_url = None
            new_checkpoint.has_more = new_checkpoint.current_repo_index < len(repos)

        return new_checkpoint

    @override
    def build_dummy_checkpoint(self) -> BitbucketConnectorCheckpoint:
        """Create an initial checkpoint with work remaining."""
        return BitbucketConnectorCheckpoint(has_more=True)

    @override
    def validate_checkpoint_json(
        self, checkpoint_json: str
    ) -> BitbucketConnectorCheckpoint:
        """Validate and deserialize a checkpoint instance from JSON."""
        return BitbucketConnectorCheckpoint.model_validate_json(checkpoint_json)

    def retrieve_all_slim_docs_perm_sync(
        self,
        start: SecondsSinceUnixEpoch | None = None,
        end: SecondsSinceUnixEpoch | None = None,
        callback: IndexingHeartbeatInterface | None = None,
    ) -> Iterator[list[SlimDocument]]:
        """Return only document IDs for all existing pull requests."""
        batch: list[SlimDocument] = []
        params = self._build_params(
            fields=SLIM_PR_LIST_RESPONSE_FIELDS,
            start=start,
            end=end,
        )
        with self._client() as client:
            for slug in self._iter_target_repositories(client):
                for pr in self._iter_pull_requests_for_repo(
                    client, slug, params=params
                ):
                    pr_id = pr["id"]
                    doc_id = f"{DocumentSource.BITBUCKET.value}:{self.workspace}:{slug}:pr:{pr_id}"
                    batch.append(SlimDocument(id=doc_id))
                    if len(batch) >= self.batch_size:
                        yield batch
                        batch = []
                    if callback:
                        if callback.should_stop():
                            # Note: this is not actually used for permission sync yet, just pruning
                            raise RuntimeError(
                                "bitbucket_pr_sync: Stop signal detected"
                            )
                        callback.progress("bitbucket_pr_sync", len(batch))
        if batch:
            yield batch

    def validate_connector_settings(self) -> None:
        """Validate Bitbucket credentials and workspace access by probing a lightweight endpoint.

        Raises:
            CredentialExpiredError: on HTTP 401
            InsufficientPermissionsError: on HTTP 403
            UnexpectedValidationError: on any other failure
        """
        try:
            with self._client() as client:
                url = f"https://api.bitbucket.org/2.0/repositories/{self.workspace}"
                resp = client.get(
                    url,
                    params={"pagelen": 1, "fields": "pagelen"},
                    timeout=REQUEST_TIMEOUT_SECONDS,
                )
                if resp.status_code == 401:
                    raise CredentialExpiredError(
                        "Invalid or expired Bitbucket credentials (HTTP 401)."
                    )
                if resp.status_code == 403:
                    raise InsufficientPermissionsError(
                        "Insufficient permissions to access Bitbucket workspace (HTTP 403)."
                    )
                if resp.status_code < 200 or resp.status_code >= 300:
                    raise UnexpectedValidationError(
                        f"Unexpected Bitbucket error (status={resp.status_code})."
                    )
        except Exception as e:
            # Network or other unexpected errors
            if isinstance(
                e,
                (
                    CredentialExpiredError,
                    InsufficientPermissionsError,
                    UnexpectedValidationError,
                    ConnectorMissingCredentialError,
                ),
            ):
                raise
            raise UnexpectedValidationError(
                f"Unexpected error while validating Bitbucket settings: {e}"
            )


if __name__ == "__main__":
    bitbucket = BitbucketConnector(
        workspace="<YOUR_WORKSPACE>"
    )

    bitbucket.load_credentials({
        "bitbucket_email": "<YOUR_EMAIL>",
        "bitbucket_api_token": "<YOUR_API_TOKEN>",
    })

    bitbucket.validate_connector_settings()
    print("Credentials validated successfully.")

    start_time = datetime.fromtimestamp(0, tz=timezone.utc)
    end_time = datetime.now(timezone.utc)

    for doc_batch in bitbucket.retrieve_all_slim_docs_perm_sync(
        start=start_time.timestamp(),
        end=end_time.timestamp(),
    ):
        for doc in doc_batch:
            print(doc)

    bitbucket_checkpoint = bitbucket.build_dummy_checkpoint()

    while bitbucket_checkpoint.has_more:
        gen = bitbucket.load_from_checkpoint(
            start=start_time.timestamp(),
            end=end_time.timestamp(),
            checkpoint=bitbucket_checkpoint,
        )

        while True:
            try:
                doc = next(gen)
                print(doc)
            except StopIteration as e:
                bitbucket_checkpoint = e.value
                break
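The `q` parameter built by `_build_params` is a Bitbucket query-language filter. A rough illustration of what a one-day window produces (workspace name and timestamps are hypothetical; the real `q` is a single line, wrapped here for readability):

```python
from common.data_source.bitbucket.connector import BitbucketConnector

connector = BitbucketConnector(workspace="acme")  # placeholder workspace
params = connector._build_params(start=1700000000.0, end=1700086400.0)
print(params["q"])
# (state = "OPEN" OR state = "MERGED" OR state = "DECLINED")
#   AND (updated_on > "2023-11-14T22:13:20+00:00"
#        AND updated_on <= "2023-11-15T22:13:20+00:00")
```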
New file: common/data_source/bitbucket/utils.py (288 lines)
@@ -0,0 +1,288 @@
from __future__ import annotations

import time
from collections.abc import Callable
from collections.abc import Iterator
from datetime import datetime
from datetime import timezone
from typing import Any

import httpx

from common.data_source.config import REQUEST_TIMEOUT_SECONDS, DocumentSource
from common.data_source.cross_connector_utils.rate_limit_wrapper import (
    rate_limit_builder,
)
from common.data_source.utils import sanitize_filename
from common.data_source.models import BasicExpertInfo, Document
from common.data_source.cross_connector_utils.retry_wrapper import retry_builder

# Fields requested from Bitbucket PR list endpoint to ensure rich PR data
PR_LIST_RESPONSE_FIELDS: str = ",".join(
    [
        "next",
        "page",
        "pagelen",
        "values.author",
        "values.close_source_branch",
        "values.closed_by",
        "values.comment_count",
        "values.created_on",
        "values.description",
        "values.destination",
        "values.draft",
        "values.id",
        "values.links",
        "values.merge_commit",
        "values.participants",
        "values.reason",
        "values.rendered",
        "values.reviewers",
        "values.source",
        "values.state",
        "values.summary",
        "values.task_count",
        "values.title",
        "values.type",
        "values.updated_on",
    ]
)

# Minimal fields for slim retrieval (IDs only)
SLIM_PR_LIST_RESPONSE_FIELDS: str = ",".join(
    [
        "next",
        "page",
        "pagelen",
        "values.id",
    ]
)


# Minimal fields for repository list calls
REPO_LIST_RESPONSE_FIELDS: str = ",".join(
    [
        "next",
        "page",
        "pagelen",
        "values.slug",
        "values.full_name",
        "values.project.key",
    ]
)


class BitbucketRetriableError(Exception):
    """Raised for retriable Bitbucket conditions (429, 5xx)."""


class BitbucketNonRetriableError(Exception):
    """Raised for non-retriable Bitbucket client errors (4xx except 429)."""


@retry_builder(
    tries=6,
    delay=1,
    backoff=2,
    max_delay=30,
    exceptions=(BitbucketRetriableError, httpx.RequestError),
)
@rate_limit_builder(max_calls=60, period=60)
def bitbucket_get(
    client: httpx.Client, url: str, params: dict[str, Any] | None = None
) -> httpx.Response:
    """Perform a GET against Bitbucket with retry and rate limiting.

    Retries on 429 and 5xx responses, and on transport errors. Honors
    `Retry-After` header for 429 when present by sleeping before retrying.
    """
    try:
        response = client.get(url, params=params, timeout=REQUEST_TIMEOUT_SECONDS)
    except httpx.RequestError:
        # Allow retry_builder to handle retries of transport errors
        raise

    try:
        response.raise_for_status()
    except httpx.HTTPStatusError as e:
        status = e.response.status_code if e.response is not None else None
        if status == 429:
            retry_after = e.response.headers.get("Retry-After") if e.response else None
            if retry_after is not None:
                try:
                    time.sleep(int(retry_after))
                except (TypeError, ValueError):
                    pass
            raise BitbucketRetriableError("Bitbucket rate limit exceeded (429)") from e
        if status is not None and 500 <= status < 600:
            raise BitbucketRetriableError(f"Bitbucket server error: {status}") from e
        if status is not None and 400 <= status < 500:
            raise BitbucketNonRetriableError(f"Bitbucket client error: {status}") from e
        # Unknown status, propagate
        raise

    return response


def build_auth_client(email: str, api_token: str) -> httpx.Client:
    """Create an authenticated httpx client for Bitbucket Cloud API."""
    return httpx.Client(auth=(email, api_token), http2=True)


def paginate(
    client: httpx.Client,
    url: str,
    params: dict[str, Any] | None = None,
    start_url: str | None = None,
    on_page: Callable[[str | None], None] | None = None,
) -> Iterator[dict[str, Any]]:
    """Iterate over paginated Bitbucket API responses yielding individual values.

    Args:
        client: Authenticated HTTP client.
        url: Base collection URL (first page when start_url is None).
        params: Query params for the first page.
        start_url: If provided, start from this absolute URL (ignores params).
        on_page: Optional callback invoked after each page with the next page URL.
    """
    next_url = start_url or url
    # If resuming from a next URL, do not pass params again
    query = params.copy() if params else None
    query = None if start_url else query
    while next_url:
        resp = bitbucket_get(client, next_url, params=query)
        data = resp.json()
        values = data.get("values", [])
        for item in values:
            yield item
        next_url = data.get("next")
        if on_page is not None:
            on_page(next_url)
        # only include params on first call, next_url will contain all necessary params
        query = None
def list_repositories(
    client: httpx.Client, workspace: str, project_key: str | None = None
) -> Iterator[dict[str, Any]]:
    """List repositories in a workspace, optionally filtered by project key."""
    base_url = f"https://api.bitbucket.org/2.0/repositories/{workspace}"
    params: dict[str, Any] = {
        "fields": REPO_LIST_RESPONSE_FIELDS,
        "pagelen": 100,
        # Ensure deterministic ordering
        "sort": "full_name",
    }
    if project_key:
        params["q"] = f'project.key="{project_key}"'
    yield from paginate(client, base_url, params)


def map_pr_to_document(pr: dict[str, Any], workspace: str, repo_slug: str) -> Document:
    """Map a Bitbucket pull request JSON to Onyx Document."""
    pr_id = pr["id"]
    title = pr.get("title") or f"PR {pr_id}"
    description = pr.get("description") or ""
    state = pr.get("state")
    draft = pr.get("draft", False)
    author = pr.get("author", {})
    reviewers = pr.get("reviewers", [])
    participants = pr.get("participants", [])

    link = pr.get("links", {}).get("html", {}).get("href") or (
        f"https://bitbucket.org/{workspace}/{repo_slug}/pull-requests/{pr_id}"
    )

    created_on = pr.get("created_on")
    updated_on = pr.get("updated_on")
    updated_dt = (
        datetime.fromisoformat(updated_on.replace("Z", "+00:00")).astimezone(
            timezone.utc
        )
        if isinstance(updated_on, str)
        else None
    )

    source_branch = pr.get("source", {}).get("branch", {}).get("name", "")
    destination_branch = pr.get("destination", {}).get("branch", {}).get("name", "")

    approved_by = [
        _get_user_name(p.get("user", {})) for p in participants if p.get("approved")
    ]

    primary_owner = None
    if author:
        primary_owner = BasicExpertInfo(
            display_name=_get_user_name(author),
        )

    # secondary_owners = [
    #     BasicExpertInfo(display_name=_get_user_name(r)) for r in reviewers
    # ] or None

    reviewer_names = [_get_user_name(r) for r in reviewers]

    # Create a concise summary of key PR info
    created_date = created_on.split("T")[0] if created_on else "N/A"
    updated_date = updated_on.split("T")[0] if updated_on else "N/A"
    content_text = (
        "Pull Request Information:\n"
        f"- Pull Request ID: {pr_id}\n"
        f"- Title: {title}\n"
        f"- State: {state or 'N/A'} {'(Draft)' if draft else ''}\n"
    )
    if state == "DECLINED":
        content_text += f"- Reason: {pr.get('reason', 'N/A')}\n"
    content_text += (
        f"- Author: {_get_user_name(author) if author else 'N/A'}\n"
        f"- Reviewers: {', '.join(reviewer_names) if reviewer_names else 'N/A'}\n"
        f"- Branch: {source_branch} -> {destination_branch}\n"
        f"- Created: {created_date}\n"
        f"- Updated: {updated_date}"
    )
    if description:
        content_text += f"\n\nDescription:\n{description}"

    metadata: dict[str, str | list[str]] = {
        "object_type": "PullRequest",
        "workspace": workspace,
        "repository": repo_slug,
        "pr_key": f"{workspace}/{repo_slug}#{pr_id}",
        "id": str(pr_id),
        "title": title,
        "state": state or "",
        "draft": str(bool(draft)),
        "link": link,
        "author": _get_user_name(author) if author else "",
        "reviewers": reviewer_names,
        "approved_by": approved_by,
        "comment_count": str(pr.get("comment_count", "")),
        "task_count": str(pr.get("task_count", "")),
        "created_on": created_on or "",
        "updated_on": updated_on or "",
        "source_branch": source_branch,
        "destination_branch": destination_branch,
        "closed_by": (
            _get_user_name(pr.get("closed_by", {})) if pr.get("closed_by") else ""
        ),
        "close_source_branch": str(bool(pr.get("close_source_branch", False))),
    }

    name = sanitize_filename(title, "md")

    return Document(
        id=f"{DocumentSource.BITBUCKET.value}:{workspace}:{repo_slug}:pr:{pr_id}",
        blob=content_text.encode("utf-8"),
        source=DocumentSource.BITBUCKET,
        extension=".md",
        semantic_identifier=f"#{pr_id}: {name}",
        size_bytes=len(content_text.encode("utf-8")),
        doc_updated_at=updated_dt,
        primary_owners=[primary_owner] if primary_owner else None,
        # secondary_owners=secondary_owners,
        metadata=metadata,
    )


def _get_user_name(user: dict[str, Any]) -> str:
    return user.get("display_name") or user.get("nickname") or "unknown"
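To make the pagination contract concrete, here is a minimal sketch of driving `paginate` by hand (the workspace, repository, and credentials are placeholders):

```python
from common.data_source.bitbucket.utils import build_auth_client, paginate

# Placeholders; any Bitbucket Cloud workspace/repo with PRs would do.
with build_auth_client("user@example.com", "<API_TOKEN>") as client:
    url = "https://api.bitbucket.org/2.0/repositories/acme/widget/pullrequests"
    for pr in paginate(client, url, params={"pagelen": 50, "fields": "next,values.id"}):
        print(pr["id"])  # paginate keeps following data["next"] until it is absent
```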
@@ -13,6 +13,9 @@ def get_current_tz_offset() -> int:
     return round(time_diff.total_seconds() / 3600)


+# Default request timeout, mostly used by connectors
+REQUEST_TIMEOUT_SECONDS = int(os.environ.get("REQUEST_TIMEOUT_SECONDS") or 60)
+
 ONE_MINUTE = 60
 ONE_HOUR = 3600
 ONE_DAY = ONE_HOUR * 24

@@ -58,6 +61,7 @@ class DocumentSource(str, Enum):
     GITHUB = "github"
     GITLAB = "gitlab"
     IMAP = "imap"
+    BITBUCKET = "bitbucket"
     ZENDESK = "zendesk"
New file: common/data_source/cross_connector_utils/rate_limit_wrapper.py (126 lines)
@@ -0,0 +1,126 @@
import time
import logging
from collections.abc import Callable
from functools import wraps
from typing import Any
from typing import cast
from typing import TypeVar

import requests

F = TypeVar("F", bound=Callable[..., Any])


class RateLimitTriedTooManyTimesError(Exception):
    pass


class _RateLimitDecorator:
    """Builds a generic wrapper/decorator for calls to external APIs that
    prevents making more than `max_calls` requests per `period`

    Implementation inspired by the `ratelimit` library:
    https://github.com/tomasbasham/ratelimit.

    NOTE: is not thread safe.
    """

    def __init__(
        self,
        max_calls: int,
        period: float,  # in seconds
        sleep_time: float = 2,  # in seconds
        sleep_backoff: float = 2,  # applies exponential backoff
        max_num_sleep: int = 0,
    ):
        self.max_calls = max_calls
        self.period = period
        self.sleep_time = sleep_time
        self.sleep_backoff = sleep_backoff
        self.max_num_sleep = max_num_sleep

        self.call_history: list[float] = []
        self.curr_calls = 0

    def __call__(self, func: F) -> F:
        @wraps(func)
        def wrapped_func(*args: list, **kwargs: dict[str, Any]) -> Any:
            # cleanup calls which are no longer relevant
            self._cleanup()

            # check if we've exceeded the rate limit
            sleep_cnt = 0
            while len(self.call_history) == self.max_calls:
                sleep_time = self.sleep_time * (self.sleep_backoff**sleep_cnt)
                logging.warning(
                    f"Rate limit exceeded for function {func.__name__}. "
                    f"Waiting {sleep_time} seconds before retrying."
                )
                time.sleep(sleep_time)
                sleep_cnt += 1
                if self.max_num_sleep != 0 and sleep_cnt >= self.max_num_sleep:
                    raise RateLimitTriedTooManyTimesError(
                        f"Exceeded '{self.max_num_sleep}' retries for function '{func.__name__}'"
                    )

                self._cleanup()

            # add the current call to the call history
            self.call_history.append(time.monotonic())
            return func(*args, **kwargs)

        return cast(F, wrapped_func)

    def _cleanup(self) -> None:
        curr_time = time.monotonic()
        time_to_expire_before = curr_time - self.period
        self.call_history = [
            call_time
            for call_time in self.call_history
            if call_time > time_to_expire_before
        ]


rate_limit_builder = _RateLimitDecorator


"""If you want to allow the external service to tell you when you've hit the rate limit,
use the following instead"""

R = TypeVar("R", bound=Callable[..., requests.Response])


def wrap_request_to_handle_ratelimiting(
    request_fn: R, default_wait_time_sec: int = 30, max_waits: int = 30
) -> R:
    def wrapped_request(*args: list, **kwargs: dict[str, Any]) -> requests.Response:
        for _ in range(max_waits):
            response = request_fn(*args, **kwargs)
            if response.status_code == 429:
                try:
                    wait_time = int(
                        response.headers.get("Retry-After", default_wait_time_sec)
                    )
                except ValueError:
                    wait_time = default_wait_time_sec

                time.sleep(wait_time)
                continue

            return response

        raise RateLimitTriedTooManyTimesError(f"Exceeded '{max_waits}' retries")

    return cast(R, wrapped_request)


_rate_limited_get = wrap_request_to_handle_ratelimiting(requests.get)
_rate_limited_post = wrap_request_to_handle_ratelimiting(requests.post)


class _RateLimitedRequest:
    get = _rate_limited_get
    post = _rate_limited_post


rl_requests = _RateLimitedRequest
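Usage of the decorator above is straightforward; a small sketch (the limits and the decorated function are purely illustrative):

```python
from common.data_source.cross_connector_utils.rate_limit_wrapper import (
    rate_limit_builder,
)

@rate_limit_builder(max_calls=2, period=10)  # at most 2 calls per 10 s window
def ping(i: int) -> int:
    return i

# The third call blocks (sleeps with backoff) until the oldest call
# falls out of the 10 s window, then proceeds.
for i in range(3):
    print(ping(i))
```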
New file: common/data_source/cross_connector_utils/retry_wrapper.py (88 lines)
@@ -0,0 +1,88 @@
from collections.abc import Callable
import logging
from logging import Logger
from typing import Any
from typing import cast
from typing import TypeVar

import requests
from retry import retry

from common.data_source.config import REQUEST_TIMEOUT_SECONDS


F = TypeVar("F", bound=Callable[..., Any])
logger = logging.getLogger(__name__)


def retry_builder(
    tries: int = 20,
    delay: float = 0.1,
    max_delay: float | None = 60,
    backoff: float = 2,
    jitter: tuple[float, float] | float = 1,
    exceptions: type[Exception] | tuple[type[Exception], ...] = (Exception,),
) -> Callable[[F], F]:
    """Builds a generic wrapper/decorator for calls to external APIs that
    may fail due to rate limiting, flakes, or other reasons. Applies exponential
    backoff with jitter to retry the call."""

    def retry_with_default(func: F) -> F:
        @retry(
            tries=tries,
            delay=delay,
            max_delay=max_delay,
            backoff=backoff,
            jitter=jitter,
            logger=cast(Logger, logger),
            exceptions=exceptions,
        )
        def wrapped_func(*args: list, **kwargs: dict[str, Any]) -> Any:
            return func(*args, **kwargs)

        return cast(F, wrapped_func)

    return retry_with_default


def request_with_retries(
    method: str,
    url: str,
    *,
    data: dict[str, Any] | None = None,
    headers: dict[str, Any] | None = None,
    params: dict[str, Any] | None = None,
    timeout: int = REQUEST_TIMEOUT_SECONDS,
    stream: bool = False,
    tries: int = 8,
    delay: float = 1,
    backoff: float = 2,
) -> requests.Response:
    @retry(tries=tries, delay=delay, backoff=backoff, logger=cast(Logger, logger))
    def _make_request() -> requests.Response:
        response = requests.request(
            method=method,
            url=url,
            data=data,
            headers=headers,
            params=params,
            timeout=timeout,
            stream=stream,
        )
        try:
            response.raise_for_status()
        except requests.exceptions.HTTPError:
            logging.exception(
                "Request failed:\n%s",
                {
                    "method": method,
                    "url": url,
                    "data": data,
                    "headers": headers,
                    "params": params,
                    "timeout": timeout,
                    "stream": stream,
                },
            )
            raise
        return response

    return _make_request()
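A quick sketch of `retry_builder` on a flaky callable (the counts, delay, and exception type are arbitrary):

```python
from common.data_source.cross_connector_utils.retry_wrapper import retry_builder

attempts = 0

@retry_builder(tries=3, delay=0.01, exceptions=(RuntimeError,))
def flaky() -> str:
    global attempts
    attempts += 1
    if attempts < 3:
        raise RuntimeError("transient failure")  # retried with backoff + jitter
    return "ok"

print(flaky(), "after", attempts, "attempts")  # ok after 3 attempts
```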
@@ -19,7 +19,7 @@ from github.PaginatedList import PaginatedList
 from github.PullRequest import PullRequest
 from pydantic import BaseModel
 from typing_extensions import override
-from common.data_source.google_util.util import sanitize_filename
+from common.data_source.utils import sanitize_filename
 from common.data_source.config import DocumentSource, GITHUB_CONNECTOR_BASE_URL
 from common.data_source.exceptions import (
     ConnectorMissingCredentialError,
@@ -8,10 +8,10 @@ from common.data_source.config import INDEX_BATCH_SIZE, SLIM_BATCH_SIZE, Documen
 from common.data_source.google_util.auth import get_google_creds
 from common.data_source.google_util.constant import DB_CREDENTIALS_PRIMARY_ADMIN_KEY, MISSING_SCOPES_ERROR_STR, SCOPE_INSTRUCTIONS, USER_FIELDS
 from common.data_source.google_util.resource import get_admin_service, get_gmail_service
-from common.data_source.google_util.util import _execute_single_retrieval, execute_paginated_retrieval, sanitize_filename, clean_string
+from common.data_source.google_util.util import _execute_single_retrieval, execute_paginated_retrieval, clean_string
 from common.data_source.interfaces import LoadConnector, PollConnector, SecondsSinceUnixEpoch, SlimConnectorWithPermSync
 from common.data_source.models import BasicExpertInfo, Document, ExternalAccess, GenerateDocumentsOutput, GenerateSlimDocumentOutput, SlimDocument, TextSection
-from common.data_source.utils import build_time_range_query, clean_email_and_extract_name, get_message_body, is_mail_service_disabled_error, gmail_time_str_to_utc
+from common.data_source.utils import build_time_range_query, clean_email_and_extract_name, get_message_body, is_mail_service_disabled_error, gmail_time_str_to_utc, sanitize_filename

 # Constants for Gmail API fields
 THREAD_LIST_FIELDS = "nextPageToken, threads(id)"
@@ -191,42 +191,6 @@ def get_credentials_from_env(email: str, oauth: bool = False, source="drive") ->
         DB_CREDENTIALS_AUTHENTICATION_METHOD: "uploaded",
     }

-def sanitize_filename(name: str, extension: str = "txt") -> str:
-    """
-    Soft sanitize for MinIO/S3:
-    - Replace only prohibited characters with a space.
-    - Preserve readability (no ugly underscores).
-    - Collapse multiple spaces.
-    """
-    if name is None:
-        return f"file.{extension}"
-
-    name = str(name).strip()
-
-    # Characters that MUST NOT appear in S3/MinIO object keys
-    # Replace them with a space (not underscore)
-    forbidden = r'[\\\?\#\%\*\:\|\<\>"]'
-    name = re.sub(forbidden, " ", name)
-
-    # Replace slashes "/" (S3 interprets as folder) with space
-    name = name.replace("/", " ")
-
-    # Collapse multiple spaces into one
-    name = re.sub(r"\s+", " ", name)
-
-    # Trim both ends
-    name = name.strip()
-
-    # Enforce reasonable max length
-    if len(name) > 200:
-        base, ext = os.path.splitext(name)
-        name = base[:180].rstrip() + ext
-
-    if not os.path.splitext(name)[1]:
-        name += f".{extension}"
-
-    return name
-
-
 def clean_string(text: str | None) -> str | None:
     """
@@ -1150,6 +1150,42 @@ def parallel_yield(gens: list[Iterator[R]], max_workers: int = 10) -> Iterator[R
             next_ind += 1
             del future_to_index[future]


+def sanitize_filename(name: str, extension: str = "txt") -> str:
+    """
+    Soft sanitize for MinIO/S3:
+    - Replace only prohibited characters with a space.
+    - Preserve readability (no ugly underscores).
+    - Collapse multiple spaces.
+    """
+    if name is None:
+        return f"file.{extension}"
+
+    name = str(name).strip()
+
+    # Characters that MUST NOT appear in S3/MinIO object keys
+    # Replace them with a space (not underscore)
+    forbidden = r'[\\\?\#\%\*\:\|\<\>"]'
+    name = re.sub(forbidden, " ", name)
+
+    # Replace slashes "/" (S3 interprets as folder) with space
+    name = name.replace("/", " ")
+
+    # Collapse multiple spaces into one
+    name = re.sub(r"\s+", " ", name)
+
+    # Trim both ends
+    name = name.strip()
+
+    # Enforce reasonable max length
+    if len(name) > 200:
+        base, ext = os.path.splitext(name)
+        name = base[:180].rstrip() + ext
+
+    if not os.path.splitext(name)[1]:
+        name += f".{extension}"
+
+    return name
+
+
 F = TypeVar("F", bound=Callable[..., Any])

 class _RateLimitDecorator:
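For reference, the soft-sanitize behavior on a hostile PR title, traced from the rules above (the input string is made up):

```python
from common.data_source.utils import sanitize_filename

# Forbidden S3/MinIO key characters and "/" become spaces, runs of spaces
# collapse, and the extension is appended because none survives:
print(sanitize_filename('fix: crash in a/b #42 "quoted"', "md"))
# fix crash in a b 42 quoted.md
```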
@@ -1246,4 +1282,4 @@ def retry_builder(

         return cast(F, wrapped_func)

-    return retry_with_default
\ No newline at end of file
+    return retry_with_default
@@ -82,10 +82,6 @@ class WebDAVConnector(LoadConnector, PollConnector):
                 base_url=self.base_url,
                 auth=(username, password)
             )
-
-            # Test connection
-            self.client.exists(self.remote_path)
-
         except Exception as e:
             logging.error(f"Failed to connect to WebDAV server: {e}")
             raise ConnectorMissingCredentialError(
@@ -308,60 +304,79 @@
             yield batch

     def validate_connector_settings(self) -> None:
-        """Validate WebDAV connector settings
-
-        Raises:
-            ConnectorMissingCredentialError: If credentials are not loaded
-            ConnectorValidationError: If settings are invalid
+        """Validate WebDAV connector settings.
+
+        Validation should exercise the same code-paths used by the connector
+        (directory listing / PROPFIND), avoiding exists() which may probe with
+        methods that differ across servers.
         """
         if self.client is None:
-            raise ConnectorMissingCredentialError(
-                "WebDAV credentials not loaded."
-            )
+            raise ConnectorMissingCredentialError("WebDAV credentials not loaded.")

         if not self.base_url:
-            raise ConnectorValidationError(
-                "No base URL was provided in connector settings."
-            )
+            raise ConnectorValidationError("No base URL was provided in connector settings.")
+
+        # Normalize directory path: for collections, many servers behave better with trailing '/'
+        test_path = self.remote_path or "/"
+        if not test_path.startswith("/"):
+            test_path = f"/{test_path}"
+        if test_path != "/" and not test_path.endswith("/"):
+            test_path = f"{test_path}/"

         try:
-            if not self.client.exists(self.remote_path):
-                raise ConnectorValidationError(
-                    f"Remote path '{self.remote_path}' does not exist on WebDAV server."
-                )
+            # Use the same behavior as real sync: list directory with details (PROPFIND)
+            self.client.ls(test_path, detail=True)

         except Exception as e:
-            error_message = str(e)
-
-            if "401" in error_message or "unauthorized" in error_message.lower():
-                raise CredentialExpiredError(
-                    "WebDAV credentials appear invalid or expired."
-                )
-
-            if "403" in error_message or "forbidden" in error_message.lower():
+            # Prefer structured status codes if present on the exception/response
+            status = None
+            for attr in ("status_code", "code"):
+                v = getattr(e, attr, None)
+                if isinstance(v, int):
+                    status = v
+                    break
+            if status is None:
+                resp = getattr(e, "response", None)
+                v = getattr(resp, "status_code", None)
+                if isinstance(v, int):
+                    status = v
+
+            # If we can classify by status code, do it
+            if status == 401:
+                raise CredentialExpiredError("WebDAV credentials appear invalid or expired.")
+            if status == 403:
                 raise InsufficientPermissionsError(
                     f"Insufficient permissions to access path '{self.remote_path}' on WebDAV server."
                 )
-
-            if "404" in error_message or "not found" in error_message.lower():
+            if status == 404:
                 raise ConnectorValidationError(
                     f"Remote path '{self.remote_path}' does not exist on WebDAV server."
                 )

+            # Fallback: avoid brittle substring matching that caused false positives.
+            # Provide the original exception for diagnosis.
             raise ConnectorValidationError(
-                f"Unexpected WebDAV client error: {e}"
+                f"WebDAV validation failed for path '{test_path}': {repr(e)}"
             )


 if __name__ == "__main__":
     credentials_dict = {
-        "username": os.environ.get("WEBDAV_USERNAME"),
-        "password": os.environ.get("WEBDAV_PASSWORD"),
+        "username": "user",
+        "password": "pass",
     }

     connector = WebDAVConnector(
-        base_url=os.environ.get("WEBDAV_URL") or "https://webdav.example.com",
-        remote_path=os.environ.get("WEBDAV_PATH") or "/",
+        base_url="http://172.17.0.1:8080/",
+        remote_path="/",
     )

     try:
@@ -10542,6 +10542,5 @@
     "周五": ["礼拜五", "星期五"],
     "周六": ["礼拜六", "星期六"],
     "周日": ["礼拜日", "星期日", "星期天", "礼拜天"],
-    "上班": "办公",
-    "HELO":"agn"
+    "上班": "办公"
 }
@@ -46,7 +46,6 @@ from common.data_source import (
     MoodleConnector,
     JiraConnector,
     DropboxConnector,
-    WebDAVConnector,
     AirtableConnector,
     AsanaConnector,
     ImapConnector,
@@ -54,11 +53,14 @@ from common.data_source import (
 )
 from common.constants import FileSource, TaskStatus
 from common.data_source.config import INDEX_BATCH_SIZE
+from common.data_source.models import ConnectorFailure
+from common.data_source.webdav_connector import WebDAVConnector
 from common.data_source.confluence_connector import ConfluenceConnector
 from common.data_source.gmail_connector import GmailConnector
 from common.data_source.box_connector import BoxConnector
 from common.data_source.github.connector import GithubConnector
 from common.data_source.gitlab_connector import GitlabConnector
+from common.data_source.bitbucket.connector import BitbucketConnector
 from common.data_source.interfaces import CheckpointOutputWrapper
 from common.log_utils import init_root_logger
 from common.signal_utils import start_tracemalloc_and_snapshot, stop_tracemalloc
@@ -694,7 +696,12 @@ class WebDAV(SyncBase):
             self.conf.get("remote_path", "/"),
             begin_info
         ))
-        return document_batch_generator
+
+        async def async_wrapper():
+            for document_batch in document_batch_generator:
+                yield document_batch
+
+        return async_wrapper()


 class Moodle(SyncBase):
@@ -1107,6 +1114,67 @@ class Gitlab(SyncBase):
         logging.info("Connect to Gitlab: ({}) {}".format(self.conf["project_name"], begin_info))
         return document_generator


+class Bitbucket(SyncBase):
+    SOURCE_NAME: str = FileSource.BITBUCKET
+
+    async def _generate(self, task: dict):
+        self.connector = BitbucketConnector(
+            workspace=self.conf.get("workspace"),
+            repositories=self.conf.get("repository_slugs"),
+            projects=self.conf.get("projects"),
+        )
+
+        self.connector.load_credentials(
+            {
+                "bitbucket_email": self.conf["credentials"].get("bitbucket_account_email"),
+                "bitbucket_api_token": self.conf["credentials"].get("bitbucket_api_token"),
+            }
+        )
+
+        if task["reindex"] == "1" or not task["poll_range_start"]:
+            start_time = datetime.fromtimestamp(0, tz=timezone.utc)
+            begin_info = "totally"
+        else:
+            start_time = task.get("poll_range_start")
+            begin_info = f"from {start_time}"
+
+        end_time = datetime.now(timezone.utc)
+
+        def document_batches():
+            checkpoint = self.connector.build_dummy_checkpoint()
+
+            while checkpoint.has_more:
+                gen = self.connector.load_from_checkpoint(
+                    start=start_time.timestamp(),
+                    end=end_time.timestamp(),
+                    checkpoint=checkpoint)
+
+                while True:
+                    try:
+                        item = next(gen)
+                        if isinstance(item, ConnectorFailure):
+                            logging.exception(
+                                "Bitbucket connector failure: %s",
+                                item.failure_message)
+                            break
+                        yield [item]
+                    except StopIteration as e:
+                        checkpoint = e.value
+                        break
+
+        async def async_wrapper():
+            for batch in document_batches():
+                yield batch
+
+        logging.info(
+            "Connect to Bitbucket: workspace(%s), %s",
+            self.conf.get("workspace"),
+            begin_info,
+        )
+
+        return async_wrapper()
+
+
 func_factory = {
     FileSource.S3: S3,
     FileSource.R2: R2,
@ -1131,6 +1199,7 @@ func_factory = {
|
||||
FileSource.ZENDESK: Zendesk,
|
||||
FileSource.GITHUB: Github,
|
||||
FileSource.GITLAB: Gitlab,
|
||||
FileSource.BITBUCKET: Bitbucket,
|
||||
}
|
||||
|
||||
|
||||
|
||||
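The `document_batches` generator above drains `load_from_checkpoint` until it raises `StopIteration`, whose `value` carries the next checkpoint. A minimal TypeScript rendering of the same checkpoint-paging pattern, for readers less familiar with generator return values; the `Checkpoint` shape and `loadFromCheckpoint` stub are illustrative assumptions, not the connector's actual API:

```ts
// Sketch only: a generator can return a value alongside its yields, and that
// return value surfaces as `result.value` once `result.done` is true --
// the analogue of Python's `StopIteration.value` used above.
interface Checkpoint {
  hasMore: boolean;
  cursor?: string; // hypothetical pagination cursor
}

function* loadFromCheckpoint(cp: Checkpoint): Generator<string, Checkpoint> {
  // Yield one fake document per page, then hand back the advanced checkpoint.
  yield `doc-after-${cp.cursor ?? 'start'}`;
  return { hasMore: false, cursor: 'end' };
}

function* documentBatches(): Generator<string[]> {
  let checkpoint: Checkpoint = { hasMore: true };
  while (checkpoint.hasMore) {
    const gen = loadFromCheckpoint(checkpoint);
    while (true) {
      const result = gen.next();
      if (result.done) {
        checkpoint = result.value; // next checkpoint, as in `e.value` above
        break;
      }
      yield [result.value]; // one-document batch, mirroring `yield [item]`
    }
  }
}
```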
7
web/src/assets/svg/data-source/bitbucket.svg
Normal file
@ -0,0 +1,7 @@
<?xml version="1.0" encoding="utf-8"?><!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg xmlns="http://www.w3.org/2000/svg"
aria-label="Bitbucket" role="img"
viewBox="0 0 512 512"><rect
width="512" height="512"
rx="15%"
fill="#ffffff"/><path fill="#2684ff" d="M422 130a10 10 0 00-9.9-11.7H100.5a10 10 0 00-10 11.7L136 409a10 10 0 009.9 8.4h221c5 0 9.2-3.5 10 -8.4L422 130zM291 316.8h-69.3l-18.7-98h104.8z"/><path fill="url(#a)" d="M59.632 25.2H40.94l-3.1 18.3h-13v18.9H52c1 0 1.7-.7 1.8-1.6l5.8-35.6z" transform="translate(89.8 85) scale(5.3285)"/><linearGradient id="a" x2="1" gradientTransform="rotate(141 22.239 22.239) scale(31.4)" gradientUnits="userSpaceOnUse"><stop offset="0" stop-color="#0052cc"/><stop offset="1" stop-color="#2684ff"/></linearGradient></svg>
After Width: | Height: | Size: 803 B
@ -1,11 +1,23 @@
import CopyToClipboard from '@/components/copy-to-clipboard';
import { Button } from '@/components/ui/button';
import {
Dialog,
DialogContent,
DialogHeader,
DialogTitle,
} from '@/components/ui/dialog';
import {
Table,
TableBody,
TableCell,
TableHead,
TableHeader,
TableRow,
} from '@/components/ui/table';
import { useTranslate } from '@/hooks/common-hooks';
import { IModalProps } from '@/interfaces/common';
import { IToken } from '@/interfaces/database/chat';
import { formatDate } from '@/utils/date';
import { DeleteOutlined } from '@ant-design/icons';
import type { TableProps } from 'antd';
import { Button, Modal, Space, Table } from 'antd';
import { Trash2 } from 'lucide-react';
import { useOperateApiKey } from '../hooks';

const ChatApiKeyModal = ({
@ -17,57 +29,59 @@ const ChatApiKeyModal = ({
useOperateApiKey(idKey, dialogId);
const { t } = useTranslate('chat');

const columns: TableProps<IToken>['columns'] = [
{
title: 'Token',
dataIndex: 'token',
key: 'token',
render: (text) => <a>{text}</a>,
},
{
title: t('created'),
dataIndex: 'create_date',
key: 'create_date',
render: (text) => formatDate(text),
},
{
title: t('action'),
key: 'action',
render: (_, record) => (
<Space size="middle">
<CopyToClipboard text={record.token}></CopyToClipboard>
<DeleteOutlined onClick={() => removeToken(record.token)} />
</Space>
),
},
];

return (
<>
<Modal
title={t('apiKey')}
open
onCancel={hideModal}
cancelButtonProps={{ style: { display: 'none' } }}
style={{ top: 300 }}
onOk={hideModal}
width={'50vw'}
>
<Table
columns={columns}
dataSource={tokenList}
rowKey={'token'}
loading={listLoading}
pagination={false}
/>
<Button
onClick={createToken}
loading={creatingLoading}
disabled={tokenList?.length > 0}
>
{t('createNewKey')}
</Button>
</Modal>
<Dialog open onOpenChange={hideModal}>
<DialogContent className="max-w-[50vw]">
<DialogHeader>
<DialogTitle>{t('apiKey')}</DialogTitle>
</DialogHeader>
<div className="space-y-4">
{listLoading ? (
<div className="flex justify-center py-8">Loading...</div>
) : (
<Table>
<TableHeader>
<TableRow>
<TableHead>Token</TableHead>
<TableHead>{t('created')}</TableHead>
<TableHead>{t('action')}</TableHead>
</TableRow>
</TableHeader>
<TableBody>
{tokenList?.map((tokenItem) => (
<TableRow key={tokenItem.token}>
<TableCell className="font-medium break-all">
{tokenItem.token}
</TableCell>
<TableCell>{formatDate(tokenItem.create_date)}</TableCell>
<TableCell>
<div className="flex items-center gap-2">
<CopyToClipboard text={tokenItem.token} />
<Button
variant="ghost"
size="icon"
onClick={() => removeToken(tokenItem.token)}
>
<Trash2 className="h-4 w-4" />
</Button>
</div>
</TableCell>
</TableRow>
))}
</TableBody>
</Table>
)}
<Button
onClick={createToken}
loading={creatingLoading}
disabled={tokenList?.length > 0}
>
{t('createNewKey')}
</Button>
</div>
</DialogContent>
</Dialog>
</>
);
};
@ -0,0 +1,93 @@
import React, { useSyncExternalStore } from 'react';

export interface AnchorItem {
key: string;
href: string;
title: string;
children?: AnchorItem[];
}

interface SimpleAnchorProps {
items: AnchorItem[];
className?: string;
style?: React.CSSProperties;
}

// Subscribe to URL hash changes
const subscribeHash = (callback: () => void) => {
window.addEventListener('hashchange', callback);
return () => window.removeEventListener('hashchange', callback);
};

const getHash = () => window.location.hash;

const Anchor: React.FC<SimpleAnchorProps> = ({
items,
className = '',
style = {},
}) => {
// Sync with URL hash changes, to highlight the active item
const hash = useSyncExternalStore(subscribeHash, getHash);

// Handle menu item click
const handleClick = (
e: React.MouseEvent<HTMLAnchorElement>,
href: string,
) => {
e.preventDefault();
const targetId = href.replace('#', '');
const targetElement = document.getElementById(targetId);

if (targetElement) {
// Update URL hash (triggers hashchange event)
window.location.hash = href;
// Smooth scroll to target
targetElement.scrollIntoView({ behavior: 'smooth', block: 'start' });
}
};

if (items.length === 0) return null;

return (
<nav className={className} style={style}>
<ul className="list-none p-0 m-0">
{items.map((item) => (
<li key={item.key} className="mb-2">
<a
href={item.href}
onClick={(e) => handleClick(e, item.href)}
className={`block px-3 py-1.5 no-underline rounded cursor-pointer transition-all duration-300 hover:text-accent-primary/70 ${
hash === item.href
? 'text-accent-primary bg-accent-primary-5'
: 'text-text-secondary bg-transparent'
}`}
>
{item.title}
</a>
{item.children && item.children.length > 0 && (
<ul className="list-none p-0 ml-4 mt-1">
{item.children.map((child) => (
<li key={child.key} className="mb-1">
<a
href={child.href}
onClick={(e) => handleClick(e, child.href)}
className={`block px-3 py-1 text-sm no-underline rounded cursor-pointer transition-all duration-300 hover:text-accent-primary/70 ${
hash === child.href
? 'text-accent-primary bg-accent-primary-5'
: 'text-text-secondary bg-transparent'
}`}
>
{child.title}
</a>
</li>
))}
</ul>
)}
</li>
))}
</ul>
</nav>
);
};

export default Anchor;
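A usage sketch for the Anchor component added above; the entries mirror what MarkdownToc builds from rendered headings, but the ids and titles here are made up for illustration:

```tsx
import Anchor, { AnchorItem } from './anchor';

// Hypothetical TOC entries; in the app they are derived from the rendered
// markdown's h2/h3 elements.
const items: AnchorItem[] = [
  { key: 'datasets', href: '#datasets', title: 'Datasets' },
  {
    key: 'sessions',
    href: '#sessions',
    title: 'Sessions',
    children: [
      { key: 'create', href: '#create-session', title: 'Create session' },
    ],
  },
];

export const Toc = () => <Anchor items={items} className="w-52" />;
```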
@ -1,52 +1,26 @@
import { useIsDarkTheme } from '@/components/theme-provider';
import { useSetModalState, useTranslate } from '@/hooks/common-hooks';
import { useSetModalState } from '@/hooks/common-hooks';
import { LangfuseCard } from '@/pages/user-setting/setting-model/langfuse';
import apiDoc from '@parent/docs/references/http_api_reference.md';
import MarkdownPreview from '@uiw/react-markdown-preview';
import { Button, Card, Flex, Space } from 'antd';
import ChatApiKeyModal from '../chat-api-key-modal';
import { usePreviewChat } from '../hooks';
import BackendServiceApi from './backend-service-api';
import MarkdownToc from './markdown-toc';

const ApiContent = ({
id,
idKey,
hideChatPreviewCard = false,
}: {
id?: string;
idKey: string;
hideChatPreviewCard?: boolean;
}) => {
const { t } = useTranslate('chat');
const ApiContent = ({ id, idKey }: { id?: string; idKey: string }) => {
const {
visible: apiKeyVisible,
hideModal: hideApiKeyModal,
showModal: showApiKeyModal,
} = useSetModalState();
// const { embedVisible, hideEmbedModal, showEmbedModal, embedToken } =
// useShowEmbedModal(idKey);

const { handlePreview } = usePreviewChat(idKey);

const isDarkTheme = useIsDarkTheme();

return (
<div className="pb-2">
<Flex vertical gap={'middle'}>
<section className="flex flex-col gap-2 pb-5">
<BackendServiceApi show={showApiKeyModal}></BackendServiceApi>
{!hideChatPreviewCard && (
<Card title={`${name} Web App`}>
<Flex gap={8} vertical>
<Space size={'middle'}>
<Button onClick={handlePreview}>{t('preview')}</Button>
{/* <Button onClick={() => showEmbedModal(id)}>
{t('embedded')}
</Button> */}
</Space>
</Flex>
</Card>
)}

<div style={{ position: 'relative' }}>
<MarkdownToc content={apiDoc} />
</div>
@ -54,7 +28,8 @@ const ApiContent = ({
source={apiDoc}
wrapperElement={{ 'data-color-mode': isDarkTheme ? 'dark' : 'light' }}
></MarkdownPreview>
</Flex>
</section>
<LangfuseCard></LangfuseCard>
{apiKeyVisible && (
<ChatApiKeyModal
hideModal={hideApiKeyModal}
@ -62,14 +37,6 @@ const ApiContent = ({
idKey={idKey}
></ChatApiKeyModal>
)}
{/* {embedVisible && (
<EmbedModal
token={embedToken}
visible={embedVisible}
hideModal={hideEmbedModal}
></EmbedModal>
)} */}
<LangfuseCard></LangfuseCard>
</div>
);
};
@ -1,33 +1,28 @@
import { Button, Card, Flex, Space, Typography } from 'antd';
import { Button } from '@/components/ui/button';
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card';

import { CopyToClipboardWithText } from '@/components/copy-to-clipboard';
import { useTranslate } from '@/hooks/common-hooks';
import styles from './index.less';

const { Paragraph } = Typography;

const BackendServiceApi = ({ show }: { show(): void }) => {
const { t } = useTranslate('chat');

return (
<Card
title={
<Space size={'large'}>
<span>RAGFlow API</span>
<Button onClick={show} type="primary">
{t('apiKey')}
</Button>
</Space>
}
>
<Flex gap={8} align="center">
<b>{t('backendServiceApi')}</b>
<Paragraph
copyable={{ text: `${location.origin}` }}
className={styles.apiLinkText}
>
{location.origin}
</Paragraph>
</Flex>
<Card>
<CardHeader>
<div className="flex items-center gap-4">
<CardTitle>RAGFlow API</CardTitle>
<Button onClick={show}>{t('apiKey')}</Button>
</div>
</CardHeader>
<CardContent>
<div className="flex items-center gap-2">
<b className="font-semibold">{t('backendServiceApi')}</b>
<CopyToClipboardWithText
text={location.origin}
></CopyToClipboardWithText>
</div>
</CardContent>
</Card>
);
};
@ -1,31 +0,0 @@
import { useTranslate } from '@/hooks/common-hooks';
import { IModalProps } from '@/interfaces/common';
import { Modal } from 'antd';
import ApiContent from './api-content';

const ChatOverviewModal = ({
visible,
hideModal,
id,
idKey,
}: IModalProps<any> & { id: string; name?: string; idKey: string }) => {
const { t } = useTranslate('chat');

return (
<>
<Modal
title={t('overview')}
open={visible}
onCancel={hideModal}
cancelButtonProps={{ style: { display: 'none' } }}
onOk={hideModal}
width={'100vw'}
okText={t('close', { keyPrefix: 'common' })}
>
<ApiContent id={id} idKey={idKey}></ApiContent>
</Modal>
</>
);
};

export default ChatOverviewModal;
@ -1,21 +1,27 @@
import { Anchor } from 'antd';
import type { AnchorLinkItemProps } from 'antd/es/anchor/Anchor';
import React, { useEffect, useState } from 'react';
import Anchor, { AnchorItem } from './anchor';

interface MarkdownTocProps {
content: string;
}

const MarkdownToc: React.FC<MarkdownTocProps> = ({ content }) => {
const [items, setItems] = useState<AnchorLinkItemProps[]>([]);
const [items, setItems] = useState<AnchorItem[]>([]);

useEffect(() => {
const generateTocItems = () => {
const headings = document.querySelectorAll(
'.wmde-markdown h2, .wmde-markdown h3',
);
const tocItems: AnchorLinkItemProps[] = [];
let currentH2Item: AnchorLinkItemProps | null = null;

// If headings haven't rendered yet, wait for next frame
if (headings.length === 0) {
requestAnimationFrame(generateTocItems);
return;
}

const tocItems: AnchorItem[] = [];
let currentH2Item: AnchorItem | null = null;

headings.forEach((heading) => {
const title = heading.textContent || '';
@ -23,7 +29,7 @@ const MarkdownToc: React.FC<MarkdownTocProps> = ({ content }) => {
const isH2 = heading.tagName.toLowerCase() === 'h2';

if (id && title) {
const item: AnchorLinkItemProps = {
const item: AnchorItem = {
key: id,
href: `#${id}`,
title,
@ -48,7 +54,10 @@ const MarkdownToc: React.FC<MarkdownTocProps> = ({ content }) => {
setItems(tocItems.slice(1));
};

setTimeout(generateTocItems, 100);
// Use requestAnimationFrame to ensure execution after DOM rendering
requestAnimationFrame(() => {
requestAnimationFrame(generateTocItems);
});
}, [content]);

return (
@ -56,7 +65,7 @@ const MarkdownToc: React.FC<MarkdownTocProps> = ({ content }) => {
className="markdown-toc bg-bg-base text-text-primary shadow shadow-text-secondary"
style={{
position: 'fixed',
right: 20,
right: 30,
top: 100,
bottom: 150,
width: 200,
@ -66,7 +75,7 @@ const MarkdownToc: React.FC<MarkdownTocProps> = ({ content }) => {
zIndex: 1000,
}}
>
<Anchor items={items} affix={false} />
<Anchor items={items} />
</div>
);
};
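The hunk above swaps the fixed `setTimeout(generateTocItems, 100)` for two nested `requestAnimationFrame` calls, plus a retry when no headings are found yet. A small sketch of that idiom; the helper name is illustrative:

```ts
// Deferring work until after the next paint: the outer callback runs just
// before the upcoming frame renders, so scheduling the task in an inner rAF
// places it after the DOM for that frame has been committed.
function afterNextPaint(task: () => void) {
  requestAnimationFrame(() => {
    requestAnimationFrame(task);
  });
}

afterNextPaint(() => {
  const headings = document.querySelectorAll('.wmde-markdown h2');
  console.log(`TOC can now see ${headings.length} headings`);
});
```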
@ -1,21 +0,0 @@
.codeCard {
.clearCardBody();
}

.codeText {
padding: 10px;
background-color: #ffffff09;
}

.id {
.linkText();
}

.darkBg {
background-color: rgb(69, 68, 68);
}

.darkId {
color: white;
.darkBg();
}
@ -1,170 +0,0 @@
import CopyToClipboard from '@/components/copy-to-clipboard';
import HighLightMarkdown from '@/components/highlight-markdown';
import { SharedFrom } from '@/constants/chat';
import { useTranslate } from '@/hooks/common-hooks';
import { IModalProps } from '@/interfaces/common';
import {
Card,
Checkbox,
Form,
Modal,
Select,
Tabs,
TabsProps,
Typography,
} from 'antd';
import { useMemo, useState } from 'react';

import { useIsDarkTheme } from '@/components/theme-provider';
import {
LanguageAbbreviation,
LanguageAbbreviationMap,
} from '@/constants/common';
import { cn } from '@/lib/utils';
import styles from './index.less';

const { Paragraph, Link } = Typography;

const EmbedModal = ({
visible,
hideModal,
token = '',
form,
beta = '',
isAgent,
}: IModalProps<any> & {
token: string;
form: SharedFrom;
beta: string;
isAgent: boolean;
}) => {
const { t } = useTranslate('chat');
const isDarkTheme = useIsDarkTheme();

const [visibleAvatar, setVisibleAvatar] = useState(false);
const [locale, setLocale] = useState('');

const languageOptions = useMemo(() => {
return Object.values(LanguageAbbreviation).map((x) => ({
label: LanguageAbbreviationMap[x],
value: x,
}));
}, []);

const generateIframeSrc = () => {
let src = `${location.origin}/chat/share?shared_id=${token}&from=${form}&auth=${beta}`;
if (visibleAvatar) {
src += '&visible_avatar=1';
}
if (locale) {
src += `&locale=${locale}`;
}
return src;
};

const iframeSrc = generateIframeSrc();

const text = `
~~~ html
<iframe
src="${iframeSrc}"
style="width: 100%; height: 100%; min-height: 600px"
frameborder="0"
>
</iframe>
~~~
`;

const items: TabsProps['items'] = [
{
key: '1',
label: t('fullScreenTitle'),
children: (
<Card
title={t('fullScreenDescription')}
extra={<CopyToClipboard text={text}></CopyToClipboard>}
className={styles.codeCard}
>
<div className="p-2">
<h2 className="mb-3">Option:</h2>

<Form.Item
label={t('avatarHidden')}
labelCol={{ span: 6 }}
wrapperCol={{ span: 18 }}
>
<Checkbox
checked={visibleAvatar}
onChange={(e) => setVisibleAvatar(e.target.checked)}
></Checkbox>
</Form.Item>
<Form.Item
label={t('locale')}
labelCol={{ span: 6 }}
wrapperCol={{ span: 18 }}
>
<Select
placeholder="Select a locale"
onChange={(value) => setLocale(value)}
options={languageOptions}
style={{ width: '100%' }}
/>
</Form.Item>
</div>
<HighLightMarkdown>{text}</HighLightMarkdown>
</Card>
),
},
{
key: '2',
label: t('partialTitle'),
children: t('comingSoon'),
},
{
key: '3',
label: t('extensionTitle'),
children: t('comingSoon'),
},
];

const onChange = (key: string) => {
console.log(key);
};

return (
<Modal
title={t('embedIntoSite', { keyPrefix: 'common' })}
open={visible}
style={{ top: 300 }}
width={'50vw'}
onOk={hideModal}
onCancel={hideModal}
>
<Tabs defaultActiveKey="1" items={items} onChange={onChange} />
<div className="text-base font-medium mt-4 mb-1">
{t(isAgent ? 'flow' : 'chat', { keyPrefix: 'header' })}
<span className="ml-1 inline-block">ID</span>
</div>
<Paragraph
copyable={{ text: token }}
className={cn(styles.id, {
[styles.darkId]: isDarkTheme,
})}
>
{token}
</Paragraph>
<Link
href={
isAgent
? 'https://ragflow.io/docs/dev/http_api_reference#create-session-with-agent'
: 'https://ragflow.io/docs/dev/http_api_reference#create-session-with-chat-assistant'
}
target="_blank"
>
{t('howUseId', { keyPrefix: isAgent ? 'flow' : 'chat' })}
</Link>
</Modal>
);
};

export default EmbedModal;
@ -1,4 +1,3 @@
import { SharedFrom } from '@/constants/chat';
import {
useSetModalState,
useShowDeleteConfirm,
@ -80,11 +79,6 @@ export const useShowBetaEmptyError = () => {
return { showBetaEmptyError };
};

const getUrlWithToken = (token: string, from: string = 'chat') => {
const { protocol, host } = window.location;
return `${protocol}//${host}/chat/share?shared_id=${token}&from=${from}`;
};

const useFetchTokenListBeforeOtherStep = () => {
const { showTokenEmptyError } = useShowTokenEmptyError();
const { showBetaEmptyError } = useShowBetaEmptyError();
@ -149,31 +143,3 @@ export const useShowEmbedModal = () => {
beta,
};
};

export const usePreviewChat = (idKey: string) => {
const { handleOperate } = useFetchTokenListBeforeOtherStep();

const open = useCallback(
(t: string) => {
window.open(
getUrlWithToken(
t,
idKey === 'canvasId' ? SharedFrom.Agent : SharedFrom.Chat,
),
'_blank',
);
},
[idKey],
);

const handlePreview = useCallback(async () => {
const token = await handleOperate();
if (token) {
open(token);
}
}, [handleOperate, open]);

return {
handlePreview,
};
};
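For reference, the `getUrlWithToken` helper removed above built the share URL that `usePreviewChat` opened in a new tab; a worked example under an assumed origin and a hypothetical token:

```ts
// Assuming window.location is https://ragflow.example.com, this produces:
//   https://ragflow.example.com/chat/share?shared_id=tok_abc123&from=agent
const preview = ((token: string, from = 'chat') => {
  const { protocol, host } = window.location;
  return `${protocol}//${host}/chat/share?shared_id=${token}&from=${from}`;
})('tok_abc123', 'agent');
```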
@ -947,6 +947,19 @@ Beispiel: Virtual Hosted Style`,
'Laden Sie das OAuth-JSON hoch, das von der Google Console generiert wurde. Wenn es nur Client-Anmeldeinformationen enthält, führen Sie die browserbasierte Überprüfung einmal durch, um langlebige Refresh-Token zu erstellen.',
dropboxDescription:
'Verbinden Sie Ihre Dropbox, um Dateien und Ordner von einem ausgewählten Konto zu synchronisieren.',
bitbucketDescription:
'Bitbucket verbinden, um PR-Inhalte zu synchronisieren.',
zendeskDescription:
'Verbinden Sie Ihr Zendesk, um Tickets, Artikel und andere Inhalte zu synchronisieren.',
bitbucketTopWorkspaceTip:
'Der zu indizierende Bitbucket-Workspace (z. B. "atlassian" aus https://bitbucket.org/atlassian/workspace )',
bitbucketWorkspaceTip:
'Dieser Connector indiziert alle Repositories im Workspace.',
bitbucketProjectsTip: 'Kommagetrennte Projekt-Keys, z. B.: PROJ1,PROJ2',
bitbucketRepositorySlugsTip:
'Kommagetrennte Repository-Slugs, z. B.: repo-one,repo-two',
connectorNameTip:
'Geben Sie einen aussagekräftigen Namen für den Connector an',
boxDescription:
'Verbinden Sie Ihr Box-Laufwerk, um Dateien und Ordner zu synchronisieren.',
githubDescription:
@ -879,6 +879,7 @@ This auto-tagging feature enhances retrieval by adding another layer of domain-s
cropImage: 'Crop image',
selectModelPlaceholder: 'Select model',
configureModelTitle: 'Configure model',
connectorNameTip: 'A descriptive name for the connector',
confluenceIsCloudTip:
'Check if this is a Confluence Cloud instance, uncheck for Confluence Server/Data Center',
confluenceWikiBaseUrlTip:
@ -923,7 +924,9 @@ Example: Virtual Hosted Style`,
google_driveTokenTip:
'Upload the OAuth token JSON generated from the OAuth helper or Google Cloud Console. You may also upload a client_secret JSON from an "installed" or "web" application. If this is your first sync, a browser window will open to complete the OAuth consent. If the JSON already contains a refresh token, it will be reused automatically.',
google_drivePrimaryAdminTip:
'Email address that has access to the Drive content being synced.',
'Email address that has access to the Drive content being synced',
zendeskDescription:
'Connect your Zendesk to sync tickets, articles, and other content.',
google_driveMyDriveEmailsTip:
'Comma-separated emails whose "My Drive" contents should be indexed (include the primary admin).',
google_driveSharedFoldersTip:
@ -934,7 +937,16 @@ Example: Virtual Hosted Style`,
'Upload the OAuth JSON generated from Google Console. If it only contains client credentials, run the browser-based verification once to mint long-lived refresh tokens.',
dropboxDescription:
'Connect your Dropbox to sync files and folders from a chosen account.',
bitbucketDescription: 'Connect Bitbucket to sync PR content.',
bitbucketTopWorkspaceTip:
'The Bitbucket workspace to index (e.g., "atlassian" from https://bitbucket.org/atlassian/workspace ).',
bitbucketRepositorySlugsTip:
'Comma separated repository slugs. E.g., repo-one,repo-two',
bitbucketProjectsTip: 'Comma separated project keys. E.g., PROJ1,PROJ2',
bitbucketWorkspaceTip:
'This connector will index all repositories in the workspace.',
boxDescription: 'Connect your Box drive to sync files and folders.',

githubDescription:
'Connect GitHub to sync pull requests and issues for retrieval.',
airtableDescription:
@ -731,6 +731,7 @@ export default {
newDocs: 'Новые документы',
timeStarted: 'Время начала',
log: 'Лог',
connectorNameTip: 'Укажите понятное имя для коннектора',
confluenceDescription:
'Интегрируйте ваше рабочее пространство Confluence для поиска документации.',
s3Description:
@ -747,6 +748,18 @@ export default {
'Синхронизируйте страницы и базы данных из Notion для извлечения знаний.',
boxDescription:
'Подключите ваш диск Box для синхронизации файлов и папок.',
bitbucketDescription:
'Подключите Bitbucket для синхронизации содержимого PR.',
zendeskDescription:
'Подключите Zendesk для синхронизации тикетов, статей и другого контента.',
bitbucketTopWorkspaceTip:
'Рабочее пространство Bitbucket для индексации (например, "atlassian" из https://bitbucket.org/atlassian/workspace )',
bitbucketWorkspaceTip:
'Этот коннектор проиндексирует все репозитории в рабочем пространстве.',
bitbucketProjectsTip:
'Ключи проектов через запятую, например: PROJ1,PROJ2',
bitbucketRepositorySlugsTip:
'Слоги репозиториев через запятую, например: repo-one,repo-two',
githubDescription:
'Подключите GitHub для синхронизации содержимого Pull Request и Issue для поиска.',
airtableDescription:
@ -726,6 +726,16 @@ export default {
view: '查看',
modelsToBeAddedTooltip:
'若您的模型供應商未列於此處,但宣稱與 OpenAI 相容,可透過選擇「OpenAI-API-compatible」卡片來設定相關模型。',
dropboxDescription: '連接 Dropbox,同步指定帳號下的文件與文件夾。',
bitbucketDescription: '連接 Bitbucket,同步 PR 內容。',
zendeskDescription: '連接 Zendesk,同步工單、文章及其他內容。',
bitbucketTopWorkspaceTip:
'要索引的 Bitbucket 工作區(例如:https://bitbucket.org/atlassian/workspace 中的 "atlassian")',
bitbucketWorkspaceTip: '此連接器將索引工作區下的所有倉庫。',
bitbucketRepositorySlugsTip:
'以英文逗號分隔的倉庫 slug,例如:repo-one,repo-two',
bitbucketProjectsTip: '以英文逗號分隔的項目鍵,例如:PROJ1,PROJ2',
connectorNameTip: '為連接器填寫一個有意義的名稱',
},
message: {
registered: '註冊成功',
@ -53,6 +53,7 @@ export default {
noData: '暂无数据',
bedrockCredentialsHint:
'提示:Access Key / Secret Key 可留空,以启用 AWS IAM 自动验证。',
zendeskDescription: '连接 Zendesk,同步工单、文章及其他内容。',
promptPlaceholder: '请输入或使用 / 快速插入变量。',
selected: '已选择',
},
@ -864,6 +865,14 @@ General:实体和关系提取提示来自 GitHub - microsoft/graphrag:基于
'请上传由 Google Console 生成的 OAuth JSON。如果仅包含 client credentials,请通过浏览器授权一次以获取长期有效的刷新 Token。',
dropboxDescription: '连接 Dropbox,同步指定账号下的文件与文件夹。',
boxDescription: '连接你的 Box 云盘以同步文件和文件夹。',
bitbucketDescription: '连接 Bitbucket,同步 PR 内容。',
bitbucketTopWorkspaceTip:
'要索引的 Bitbucket 工作区(例如:https://bitbucket.org/atlassian/workspace 中的 "atlassian")',
bitbucketWorkspaceTip: '该连接器将索引工作区下的所有仓库。',
bitbucketProjectsTip: '用英文逗号分隔的项目 key,例如:PROJ1,PROJ2',
bitbucketRepositorySlugsTip:
'用英文逗号分隔的仓库 slug,例如:repo-one,repo-two',
connectorNameTip: '为连接器命名',
githubDescription:
'连接 GitHub,可同步 Pull Request 与 Issue 内容用于检索。',
airtableDescription: '连接 Airtable,同步指定工作区下指定表格中的文件。',
@ -25,13 +25,14 @@ import { useComposeLlmOptionsByModelTypes } from '@/hooks/use-llm-request';
import { cn } from '@/lib/utils';
import { t } from 'i18next';
import { Settings } from 'lucide-react';
import { useCallback, useEffect, useMemo, useState } from 'react';
import { useCallback, useContext, useEffect, useMemo, useState } from 'react';
import {
ControllerRenderProps,
FieldValues,
useFormContext,
} from 'react-hook-form';
import { useLocation } from 'umi';
import { history, useLocation } from 'umi';
import { DataSetContext } from '..';
import {
MetadataType,
useManageMetadata,
@ -371,6 +372,7 @@ export function AutoMetadata({
// get metadata field
const location = useLocation();
const form = useFormContext();
const datasetContext = useContext(DataSetContext);
const {
manageMetadataVisible,
showManageMetadataModal,
@ -394,13 +396,14 @@ export function AutoMetadata({
const locationState = location.state as
| { openMetadata?: boolean }
| undefined;
if (locationState?.openMetadata) {
if (locationState?.openMetadata && !datasetContext?.loading) {
setTimeout(() => {
handleClickOpenMetadata();
}, 100);
}, 0);
locationState.openMetadata = false;
history.replace({ ...location }, locationState);
}
}, [location, handleClickOpenMetadata]);
}, [location, handleClickOpenMetadata, datasetContext]);

const autoMetadataField: FormFieldConfig = {
name: 'parser_config.enable_metadata',
@ -37,7 +37,8 @@ export function useHasParsedDocument(isEdit?: boolean) {
export const useFetchKnowledgeConfigurationOnMount = (
form: UseFormReturn<z.infer<typeof formSchema>, any, undefined>,
) => {
const { data: knowledgeDetails } = useFetchKnowledgeBaseConfiguration();
const { data: knowledgeDetails, loading } =
useFetchKnowledgeBaseConfiguration();

useEffect(() => {
const parser_config = {
@ -71,7 +72,7 @@ export const useFetchKnowledgeConfigurationOnMount = (
form.reset(formValues);
}, [form, knowledgeDetails]);

return knowledgeDetails;
return { knowledgeDetails, loading };
};

export const useSelectKnowledgeDetailsLoading = () =>
@ -7,11 +7,11 @@ import { Form } from '@/components/ui/form';
import { FormLayout } from '@/constants/form';
import { DocumentParserType } from '@/constants/knowledge';
import { PermissionRole } from '@/constants/permission';
import { IConnector } from '@/interfaces/database/knowledge';
import { IConnector, IKnowledge } from '@/interfaces/database/knowledge';
import { useDataSourceInfo } from '@/pages/user-setting/data-source/constant';
import { IDataSourceBase } from '@/pages/user-setting/data-source/interface';
import { zodResolver } from '@hookform/resolvers/zod';
import { useEffect, useState } from 'react';
import { createContext, useEffect, useState } from 'react';
import { useForm, useWatch } from 'react-hook-form';
import { useTranslation } from 'react-i18next';
import { z } from 'zod';
@ -35,6 +35,10 @@ const enum DocumentType {
DeepDOC = 'DeepDOC',
PlainText = 'Plain Text',
}
export const DataSetContext = createContext<{
loading: boolean;
knowledgeDetails: IKnowledge;
}>({ loading: false, knowledgeDetails: {} as IKnowledge });

const initialEntityTypes = [
'organization',
@ -102,7 +106,8 @@ export default function DatasetSettings() {
},
});
const { dataSourceInfo } = useDataSourceInfo();
const knowledgeDetails = useFetchKnowledgeConfigurationOnMount(form);
const { knowledgeDetails, loading: datasetSettingLoading } =
useFetchKnowledgeConfigurationOnMount(form);
// const [pipelineData, setPipelineData] = useState<IDataPipelineNodeProps>();
const [sourceData, setSourceData] = useState<IDataSourceNodeProps[]>();
const [graphRagGenerateData, setGraphRagGenerateData] =
@ -254,81 +259,90 @@ export default function DatasetSettings() {
description={t('knowledgeConfiguration.titleDescription')}
></TopTitle>
<div className="flex gap-14 flex-1 min-h-0">
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)} className="space-y-6 ">
<div className="w-[768px] h-[calc(100vh-240px)] pr-1 overflow-y-auto scrollbar-auto">
<MainContainer className="text-text-secondary">
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.baseInfo')}
</div>
<GeneralForm></GeneralForm>
<DataSetContext.Provider
value={{
loading: datasetSettingLoading,
knowledgeDetails: knowledgeDetails,
}}
>
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)} className="space-y-6 ">
<div className="w-[768px] h-[calc(100vh-240px)] pr-1 overflow-y-auto scrollbar-auto">
<MainContainer className="text-text-secondary">
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.baseInfo')}
</div>
<GeneralForm></GeneralForm>

<Divider />
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.dataPipeline')}
</div>
<ParseTypeItem line={1} />
{parseType === 1 && (
<ChunkMethodItem line={1}></ChunkMethodItem>
)}
{parseType === 2 && (
<DataFlowSelect
isMult={false}
showToDataPipeline={true}
formFieldName="pipeline_id"
layout={FormLayout.Horizontal}
/>
)}
<Divider />
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.dataPipeline')}
</div>
<ParseTypeItem line={1} />
{parseType === 1 && (
<ChunkMethodItem line={1}></ChunkMethodItem>
)}
{parseType === 2 && (
<DataFlowSelect
isMult={false}
showToDataPipeline={true}
formFieldName="pipeline_id"
layout={FormLayout.Horizontal}
/>
)}

{/* <Divider /> */}
{parseType === 1 && <ChunkMethodForm />}
{/* <Divider /> */}
{parseType === 1 && <ChunkMethodForm />}

{/* <LinkDataPipeline
{/* <LinkDataPipeline
data={pipelineData}
handleLinkOrEditSubmit={handleLinkOrEditSubmit}
/> */}
<Divider />
<LinkDataSource
data={sourceData}
handleLinkOrEditSubmit={handleLinkOrEditSubmit}
unbindFunc={unbindFunc}
handleAutoParse={handleAutoParse}
/>
<Divider />
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.globalIndex')}
</div>
<GraphRagItems
className="border-none p-0"
data={graphRagGenerateData as IGenerateLogButtonProps}
onDelete={() =>
handleDeletePipelineTask(GenerateType.KnowledgeGraph)
}
></GraphRagItems>
<Divider />
<RaptorFormFields
data={raptorGenerateData as IGenerateLogButtonProps}
onDelete={() => handleDeletePipelineTask(GenerateType.Raptor)}
></RaptorFormFields>
</MainContainer>
</div>
<div className="text-right items-center flex justify-end gap-3 w-[768px]">
<Button
type="reset"
className="bg-transparent text-color-white hover:bg-transparent border-gray-500 border-[1px]"
onClick={() => {
form.reset();
}}
>
{t('knowledgeConfiguration.cancel')}
</Button>
<SavingButton></SavingButton>
</div>
</form>
</Form>
<div className="flex-1">
{parseType === 1 && <ChunkMethodLearnMore parserId={selectedTag} />}
</div>
<Divider />
<LinkDataSource
data={sourceData}
handleLinkOrEditSubmit={handleLinkOrEditSubmit}
unbindFunc={unbindFunc}
handleAutoParse={handleAutoParse}
/>
<Divider />
<div className="text-base font-medium text-text-primary">
{t('knowledgeConfiguration.globalIndex')}
</div>
<GraphRagItems
className="border-none p-0"
data={graphRagGenerateData as IGenerateLogButtonProps}
onDelete={() =>
handleDeletePipelineTask(GenerateType.KnowledgeGraph)
}
></GraphRagItems>
<Divider />
<RaptorFormFields
data={raptorGenerateData as IGenerateLogButtonProps}
onDelete={() =>
handleDeletePipelineTask(GenerateType.Raptor)
}
></RaptorFormFields>
</MainContainer>
</div>
<div className="text-right items-center flex justify-end gap-3 w-[768px]">
<Button
type="reset"
className="bg-transparent text-color-white hover:bg-transparent border-gray-500 border-[1px]"
onClick={() => {
form.reset();
}}
>
{t('knowledgeConfiguration.cancel')}
</Button>
<SavingButton></SavingButton>
</div>
</form>
</Form>
<div className="flex-1">
{parseType === 1 && <ChunkMethodLearnMore parserId={selectedTag} />}
</div>
</DataSetContext.Provider>
</div>
</section>
);
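The refactor above wraps the settings form in a `DataSetContext.Provider` so descendants such as `AutoMetadata` can gate on the fetch state. A minimal consumer sketch; the component name is hypothetical, and reading `knowledgeDetails.name` assumes `IKnowledge` exposes a `name` field:

```tsx
import { useContext } from 'react';
import { DataSetContext } from '..';

// Reads the loading flag published by DatasetSettings; AutoMetadata uses the
// same context to delay opening the metadata modal until the form has data.
export function DatasetLoadingHint() {
  const { loading, knowledgeDetails } = useContext(DataSetContext);
  if (loading) return <span>Loading dataset settings…</span>;
  return <span>{knowledgeDetails.name}</span>;
}
```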
@ -10,18 +10,21 @@ import { useNavigate } from 'umi';
import { Agents } from './agent-list';
import { SeeAllAppCard } from './application-card';
import { ChatList } from './chat-list';
import { MemoryList } from './memory-list';
import { SearchList } from './search-list';

const IconMap = {
[Routes.Chats]: 'chats',
[Routes.Searches]: 'searches',
[Routes.Agents]: 'agents',
[Routes.Memories]: 'memory',
};

const EmptyTypeMap = {
[Routes.Chats]: EmptyCardType.Chat,
[Routes.Searches]: EmptyCardType.Search,
[Routes.Agents]: EmptyCardType.Agent,
[Routes.Memories]: EmptyCardType.Memory,
};

export function Applications() {
@ -47,6 +50,7 @@ export function Applications() {
{ value: Routes.Chats, label: t('chat.chatApps') },
{ value: Routes.Searches, label: t('search.searchApps') },
{ value: Routes.Agents, label: t('header.flow') },
{ value: Routes.Memories, label: t('memories.memory') },
],
[t],
);
@ -96,6 +100,12 @@ export function Applications() {
setLoading={(loading: boolean) => setLoading(loading)}
></SearchList>
)}
{val === Routes.Memories && (
<MemoryList
setListLength={(length: number) => setListLength(length)}
setLoading={(loading: boolean) => setLoading(loading)}
></MemoryList>
)}
{listLength > 0 && (
<SeeAllAppCard
click={() => handleNavigate({ isCreate: false })}
79
web/src/pages/home/memory-list.tsx
Normal file
@ -0,0 +1,79 @@
import { HomeCard } from '@/components/home-card';
import { MoreButton } from '@/components/more-button';
import { useNavigatePage } from '@/hooks/logic-hooks/navigate-hooks';
import { useEffect } from 'react';
import { AddOrEditModal } from '../memories/add-or-edit-modal';
import { useFetchMemoryList, useRenameMemory } from '../memories/hooks';
import { ICreateMemoryProps } from '../memories/interface';
import { MemoryDropdown } from '../memories/memory-dropdown';

export function MemoryList({
setListLength,
setLoading,
}: {
setListLength: (length: number) => void;
setLoading?: (loading: boolean) => void;
}) {
const { data, refetch: refetchList, isLoading } = useFetchMemoryList();
const { navigateToMemory } = useNavigatePage();
// const {
// openCreateModal,
// showSearchRenameModal,
// hideSearchRenameModal,
// searchRenameLoading,
// onSearchRenameOk,
// initialSearchName,
// } = useRenameSearch();
const {
openCreateModal,
showMemoryRenameModal,
hideMemoryModal,
searchRenameLoading,
onMemoryRenameOk,
initialMemory,
} = useRenameMemory();
const onMemoryConfirm = (data: ICreateMemoryProps) => {
onMemoryRenameOk(data, () => {
refetchList();
});
};

useEffect(() => {
setListLength(data?.data?.memory_list?.length || 0);
setLoading?.(isLoading || false);
}, [data, setListLength, isLoading, setLoading]);
return (
<>
{data?.data.memory_list.slice(0, 10).map((x) => (
<HomeCard
key={x.id}
data={{
name: x?.name,
avatar: x?.avatar,
description: x?.description,
update_time: x?.create_time,
}}
onClick={navigateToMemory(x.id)}
moreDropdown={
<MemoryDropdown
memory={x}
showMemoryRenameModal={showMemoryRenameModal}
>
<MoreButton></MoreButton>
</MemoryDropdown>
}
></HomeCard>
))}
{openCreateModal && (
<AddOrEditModal
initialMemory={initialMemory}
isCreate={false}
open={openCreateModal}
loading={searchRenameLoading}
onClose={hideMemoryModal}
onSubmit={onMemoryConfirm}
/>
)}
</>
);
}
@ -1,247 +0,0 @@
import { useEffect, useMemo, useState } from 'react';
import { useFormContext } from 'react-hook-form';

import { SelectWithSearch } from '@/components/originui/select-with-search';
import { RAGFlowFormItem } from '@/components/ragflow-form';
import { Input } from '@/components/ui/input';
import { Segmented } from '@/components/ui/segmented';
import { t } from 'i18next';

// UI-only auth modes for S3
// access_key: Access Key ID + Secret
// iam_role: only Role ARN
// assume_role: no input fields (uses environment role)
type AuthMode = 'access_key' | 'iam_role' | 'assume_role';
type BlobMode = 's3' | 's3_compatible';

const modeOptions = [
{ label: 'S3', value: 's3' },
{ label: 'S3 Compatible', value: 's3_compatible' },
];

const authOptions = [
{ label: 'Access Key', value: 'access_key' },
{ label: 'IAM Role', value: 'iam_role' },
{ label: 'Assume Role', value: 'assume_role' },
];

const addressingOptions = [
{ label: 'Virtual Hosted Style', value: 'virtual' },
{ label: 'Path Style', value: 'path' },
];

const deriveInitialAuthMode = (credentials: any): AuthMode => {
const authMethod = credentials?.authentication_method;
if (authMethod === 'iam_role') return 'iam_role';
if (authMethod === 'assume_role') return 'assume_role';
if (credentials?.aws_role_arn) return 'iam_role';
if (credentials?.aws_access_key_id || credentials?.aws_secret_access_key)
return 'access_key';
return 'access_key';
};

const deriveInitialMode = (bucketType?: string): BlobMode =>
bucketType === 's3_compatible' ? 's3_compatible' : 's3';

const BlobTokenField = () => {
const form = useFormContext();
const credentials = form.watch('config.credentials');
const watchedBucketType = form.watch('config.bucket_type');

const [mode, setMode] = useState<BlobMode>(
deriveInitialMode(watchedBucketType),
);
const [authMode, setAuthMode] = useState<AuthMode>(() =>
deriveInitialAuthMode(credentials),
);

// Keep bucket_type in sync with UI mode
useEffect(() => {
const nextMode = deriveInitialMode(watchedBucketType);
setMode((prev) => (prev === nextMode ? prev : nextMode));
}, [watchedBucketType]);

useEffect(() => {
form.setValue('config.bucket_type', mode, { shouldDirty: true });
// Default addressing style for compatible mode
if (
mode === 's3_compatible' &&
!form.getValues('config.credentials.addressing_style')
) {
form.setValue('config.credentials.addressing_style', 'virtual', {
shouldDirty: false,
});
}
if (mode === 's3_compatible' && authMode !== 'access_key') {
setAuthMode('access_key');
}
// Persist authentication_method for backend
const nextAuthMethod: AuthMode =
mode === 's3_compatible' ? 'access_key' : authMode;
form.setValue('config.credentials.authentication_method', nextAuthMethod, {
shouldDirty: true,
});
// Clear errors for fields that are not relevant in the current mode/auth selection
const inactiveFields: string[] = [];
if (mode === 's3_compatible') {
inactiveFields.push('config.credentials.aws_role_arn');
} else {
if (authMode === 'iam_role') {
inactiveFields.push('config.credentials.aws_access_key_id');
inactiveFields.push('config.credentials.aws_secret_access_key');
}
if (authMode === 'assume_role') {
inactiveFields.push('config.credentials.aws_access_key_id');
inactiveFields.push('config.credentials.aws_secret_access_key');
inactiveFields.push('config.credentials.aws_role_arn');
}
}
if (inactiveFields.length) {
form.clearErrors(inactiveFields as any);
}
}, [form, mode, authMode]);

const isS3 = mode === 's3';
const requiresAccessKey =
authMode === 'access_key' || mode === 's3_compatible';
const requiresRoleArn = isS3 && authMode === 'iam_role';

// Help text for assume role (no inputs)
const assumeRoleNote = useMemo(
() => t('No credentials required. Uses the default environment role.'),
[t],
);

return (
<div className="flex flex-col gap-4">
<div className="flex flex-col gap-2">
<div className="text-sm text-text-secondary">Mode</div>
<Segmented
options={modeOptions}
value={mode}
onChange={(val) => setMode(val as BlobMode)}
className="w-full"
itemClassName="flex-1 justify-center"
/>
</div>

{isS3 && (
<div className="flex flex-col gap-2">
<div className="text-sm text-text-secondary">Authentication</div>
<Segmented
options={authOptions}
value={authMode}
onChange={(val) => setAuthMode(val as AuthMode)}
className="w-full"
itemClassName="flex-1 justify-center"
/>
</div>
)}

{requiresAccessKey && (
<RAGFlowFormItem
name="config.credentials.aws_access_key_id"
label="AWS Access Key ID"
required={requiresAccessKey}
rules={{
validate: (val) =>
requiresAccessKey
? Boolean(val) || 'Access Key ID is required'
: true,
}}
>
{(field) => (
<Input {...field} placeholder="AKIA..." autoComplete="off" />
)}
</RAGFlowFormItem>
)}

{requiresAccessKey && (
<RAGFlowFormItem
name="config.credentials.aws_secret_access_key"
label="AWS Secret Access Key"
required={requiresAccessKey}
rules={{
validate: (val) =>
requiresAccessKey
? Boolean(val) || 'Secret Access Key is required'
: true,
}}
>
{(field) => (
<Input
{...field}
type="password"
placeholder="****************"
autoComplete="new-password"
/>
)}
</RAGFlowFormItem>
)}

{requiresRoleArn && (
<RAGFlowFormItem
name="config.credentials.aws_role_arn"
label="Role ARN"
required={requiresRoleArn}
tooltip="The role will be assumed by the runtime environment."
rules={{
validate: (val) =>
requiresRoleArn ? Boolean(val) || 'Role ARN is required' : true,
}}
>
{(field) => (
<Input
{...field}
placeholder="arn:aws:iam::123456789012:role/YourRole"
autoComplete="off"
/>
)}
</RAGFlowFormItem>
)}

{isS3 && authMode === 'assume_role' && (
<div className="text-sm text-text-secondary bg-bg-card border border-border-button rounded-md px-3 py-2">
{assumeRoleNote}
</div>
)}

{mode === 's3_compatible' && (
<div className="flex flex-col gap-4">
<RAGFlowFormItem
name="config.credentials.addressing_style"
label="Addressing Style"
tooltip={t('setting.S3CompatibleAddressingStyleTip')}
required={false}
>
{(field) => (
<SelectWithSearch
triggerClassName="!shrink"
options={addressingOptions}
value={field.value || 'virtual'}
onChange={(val) => field.onChange(val)}
/>
)}
</RAGFlowFormItem>

<RAGFlowFormItem
name="config.credentials.endpoint_url"
label="Endpoint URL"
required={false}
tooltip={t('setting.S3CompatibleEndpointUrlTip')}
>
{(field) => (
<Input
{...field}
placeholder="https://fsn1.your-objectstorage.com"
autoComplete="off"
/>
)}
</RAGFlowFormItem>
</div>
)}
</div>
);
};

export default BlobTokenField;
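The `deriveInitialAuthMode` helper in the deleted file above resolves the auth mode with a clear precedence: an explicit `authentication_method` wins, then a stored role ARN implies `iam_role`, and any access-key material falls back to `access_key`. A self-contained restatement with worked examples (the inputs are hypothetical):

```ts
type AuthMode = 'access_key' | 'iam_role' | 'assume_role';

// Re-stated from the deleted helper above, so the example runs on its own.
const deriveInitialAuthMode = (credentials: any): AuthMode => {
  const authMethod = credentials?.authentication_method;
  if (authMethod === 'iam_role') return 'iam_role';
  if (authMethod === 'assume_role') return 'assume_role';
  if (credentials?.aws_role_arn) return 'iam_role';
  if (credentials?.aws_access_key_id || credentials?.aws_secret_access_key)
    return 'access_key';
  return 'access_key';
};

console.log(deriveInitialAuthMode({ authentication_method: 'assume_role' })); // 'assume_role'
console.log(deriveInitialAuthMode({ aws_role_arn: 'arn:aws:iam::123456789012:role/Sync' })); // 'iam_role'
console.log(deriveInitialAuthMode({ aws_access_key_id: 'AKIA...' })); // 'access_key'
console.log(deriveInitialAuthMode({})); // falls back to 'access_key'
```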
@ -131,7 +131,6 @@ const BoxTokenField = ({ value, onChange }: BoxTokenFieldProps) => {

const finalValue: Record<string, any> = {
...rest,
// Make sure the client config fields are populated (prefer values returned by the backend, then the current input)
client_id: rest.client_id ?? clientId.trim(),
client_secret: rest.client_secret ?? clientSecret.trim(),
};
@ -146,8 +145,6 @@ const BoxTokenField = ({ value, onChange }: BoxTokenFieldProps) => {
finalValue.authorization_code = code;
}

// access_token / refresh_token are returned by the backend and already carried in ...rest, so no extra state is needed

onChange(JSON.stringify(finalValue));
message.success('Box authorization completed.');
clearWebState();
@ -1,200 +0,0 @@
|
||||
import { useCallback, useEffect, useMemo, useState } from 'react';
|
||||
import { ControllerRenderProps, useFormContext } from 'react-hook-form';
|
||||
|
||||
import { Checkbox } from '@/components/ui/checkbox';
|
||||
import { Input } from '@/components/ui/input';
|
||||
import { cn } from '@/lib/utils';
|
||||
import { debounce } from 'lodash';
|
||||
|
||||
/* ---------------- Token Field ---------------- */
|
||||
|
||||
export type ConfluenceTokenFieldProps = ControllerRenderProps & {
|
||||
fieldType: 'username' | 'token';
|
||||
placeholder?: string;
|
||||
disabled?: boolean;
|
||||
};
|
||||
|
||||
const ConfluenceTokenField = ({
|
||||
fieldType,
|
||||
value,
|
||||
onChange,
|
||||
placeholder,
|
||||
disabled,
|
||||
...rest
|
||||
}: ConfluenceTokenFieldProps) => {
|
||||
return (
|
||||
<div className="flex w-full flex-col gap-2">
|
||||
<Input
|
||||
className="w-full"
|
||||
type={fieldType === 'token' ? 'password' : 'text'}
|
||||
value={value ?? ''}
|
||||
onChange={(e) => onChange(e.target.value)}
|
||||
placeholder={
|
||||
placeholder ||
|
||||
(fieldType === 'token'
|
||||
? 'Enter your Confluence access token'
|
||||
: 'Confluence username or email')
|
||||
}
|
||||
disabled={disabled}
|
||||
{...rest}
|
||||
/>
|
||||
</div>
|
||||
);
|
||||
};
|
||||
|
||||
/* ---------------- Indexing Mode Field ---------------- */
|
||||
|
||||
type ConfluenceIndexingMode = 'everything' | 'space' | 'page';
|
||||
|
||||
export type ConfluenceIndexingModeFieldProps = ControllerRenderProps;
|
||||
|
||||
export const ConfluenceIndexingModeField = (
|
||||
fieldProps: ControllerRenderProps,
|
||||
) => {
|
||||
const { value, onChange, disabled } = fieldProps;
|
||||
const [mode, setMode] = useState<ConfluenceIndexingMode>(
|
||||
value || 'everything',
|
||||
);
|
||||
const { watch, setValue } = useFormContext();
|
||||
|
||||
useEffect(() => setMode(value), [value]);
|
||||
|
||||
const spaceValue = watch('config.space');
|
||||
const pageIdValue = watch('config.page_id');
|
||||
const indexRecursively = watch('config.index_recursively');
|
||||
|
||||
useEffect(() => {
|
||||
if (!value) onChange('everything');
|
||||
}, [value, onChange]);
|
||||
|
||||
const handleModeChange = useCallback(
|
||||
(nextMode?: string) => {
|
||||
let normalized: ConfluenceIndexingMode = 'everything';
|
||||
if (nextMode) {
|
||||
normalized = nextMode as ConfluenceIndexingMode;
|
||||
setMode(normalized);
|
||||
onChange(normalized);
|
||||
} else {
|
||||
setMode(mode);
|
||||
normalized = mode;
|
||||
onChange(mode);
|
||||
// onChange(mode);
|
||||
}
|
||||
if (normalized === 'everything') {
|
||||
setValue('config.space', '');
|
||||
setValue('config.page_id', '');
|
||||
setValue('config.index_recursively', false);
|
||||
} else if (normalized === 'space') {
|
||||
setValue('config.page_id', '');
|
||||
setValue('config.index_recursively', false);
|
||||
} else if (normalized === 'page') {
|
||||
setValue('config.space', '');
|
||||
}
|
||||
},
|
||||
[mode, onChange, setValue],
|
||||
);
|
||||
|
||||
const debouncedHandleChange = useMemo(
|
||||
() =>
|
||||
debounce(() => {
|
||||
handleModeChange();
|
||||
}, 300),
|
||||
[handleModeChange],
|
||||
);
|
||||
|
||||
  return (
    <div className="w-full rounded-lg border border-border-button bg-bg-card p-4 space-y-4">
      <div className="flex items-center gap-2 text-sm font-medium text-text-secondary">
        {INDEX_MODE_OPTIONS.map((option) => {
          const isActive = option.value === mode;
          return (
            <button
              key={option.value}
              type="button"
              disabled={disabled}
              onClick={() => handleModeChange(option.value)}
              className={cn(
                'flex-1 rounded-lg border px-3 py-2 transition-all',
                'border-transparent bg-transparent text-text-secondary hover:border-border-button hover:bg-bg-card-secondary',
                isActive &&
                  'border-border-button bg-background text-primary shadow-sm',
              )}
            >
              {option.label}
            </button>
          );
        })}
      </div>

      {mode === 'everything' && (
        <p className="text-sm text-text-secondary">
          This connector will index all pages the provided credentials have
          access to.
        </p>
      )}

      {mode === 'space' && (
        <div className="space-y-2">
          <div className="text-sm font-semibold text-text-primary">
            Space Key
          </div>
          <Input
            className="w-full"
            value={spaceValue ?? ''}
            onChange={(e) => {
              const value = e.target.value;
              setValue('config.space', value);
              debouncedHandleChange();
            }}
            placeholder="e.g. KB"
            disabled={disabled}
          />
          <p className="text-xs text-text-secondary">
            The Confluence space key to index.
          </p>
        </div>
      )}

      {mode === 'page' && (
        <div className="space-y-2">
          <div className="text-sm font-semibold text-text-primary">Page ID</div>
          <Input
            className="w-full"
            value={pageIdValue ?? ''}
            onChange={(e) => {
              setValue('config.page_id', e.target.value);
              debouncedHandleChange();
            }}
            placeholder="e.g. 123456"
            disabled={disabled}
          />
          <p className="text-xs text-text-secondary">
            The Confluence page ID to index.
          </p>

          <div className="flex items-center gap-2 pt-2">
            <Checkbox
              checked={Boolean(indexRecursively)}
              onCheckedChange={(checked) => {
                setValue('config.index_recursively', Boolean(checked));
                debouncedHandleChange();
              }}
              disabled={disabled}
            />
            <span className="text-sm text-text-secondary">
              Index child pages recursively
            </span>
          </div>
        </div>
      )}
    </div>
  );
};

const INDEX_MODE_OPTIONS = [
  { label: 'Everything', value: 'everything' },
  { label: 'Space', value: 'space' },
  { label: 'Page', value: 'page' },
];

export default ConfluenceTokenField;
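For context, a minimal usage sketch (assumed, not part of this changeset): ConfluenceIndexingModeField expects react-hook-form's ControllerRenderProps and reads sibling values through useFormContext, so it must sit under both a FormProvider and a Controller. The component name and import path are real; the form wrapper, field names, and defaults below are assumptions.

// Hypothetical wiring sketch; the actual dynamic-form integration may differ.
import { Controller, FormProvider, useForm } from 'react-hook-form';
import { ConfluenceIndexingModeField } from './confluence-token-field';

export function ConfluenceConfigFormSketch() {
  const methods = useForm({
    defaultValues: {
      config: {
        index_mode: 'everything',
        space: '',
        page_id: '',
        index_recursively: false,
      },
    },
  });

  return (
    <FormProvider {...methods}>
      {/* Controller supplies the ControllerRenderProps the field expects. */}
      <Controller
        name="config.index_mode"
        control={methods.control}
        render={({ field }) => <ConfluenceIndexingModeField {...field} />}
      />
    </FormProvider>
  );
}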

@ -0,0 +1,83 @@
import { FilterFormField, FormFieldType } from '@/components/dynamic-form';
import { TFunction } from 'i18next';

export const bitbucketConstant = (t: TFunction) => [
  {
    label: 'Bitbucket Account Email',
    name: 'config.credentials.bitbucket_account_email',
    type: FormFieldType.Email,
    required: true,
  },
  {
    label: 'Bitbucket API Token',
    name: 'config.credentials.bitbucket_api_token',
    type: FormFieldType.Password,
    required: true,
  },
  {
    label: 'Workspace',
    name: 'config.workspace',
    type: FormFieldType.Text,
    required: true,
    tooltip: t('setting.bitbucketTopWorkspaceTip'),
  },
  {
    label: 'Index Mode',
    name: 'config.index_mode',
    type: FormFieldType.Segmented,
    options: [
      { label: 'Repositories', value: 'repositories' },
      { label: 'Project(s)', value: 'projects' },
      { label: 'Workspace', value: 'workspace' },
    ],
  },
  {
    label: 'Repository Slugs',
    name: 'config.repository_slugs',
    type: FormFieldType.Text,
    customValidate: (val: string, formValues: any) => {
      const index_mode = formValues?.config?.index_mode;
      if (!val && index_mode === 'repositories') {
        return 'Repository Slugs is required';
      }
      return true;
    },
    shouldRender: (formValues: any) => {
      const index_mode = formValues?.config?.index_mode;
      return index_mode === 'repositories';
    },
    tooltip: t('setting.bitbucketRepositorySlugsTip'),
  },
  {
    label: 'Projects',
    name: 'config.projects',
    type: FormFieldType.Text,
    customValidate: (val: string, formValues: any) => {
      const index_mode = formValues?.config?.index_mode;
      if (!val && index_mode === 'projects') {
        return 'Projects is required';
      }
      return true;
    },
    shouldRender: (formValues: any) => {
      const index_mode = formValues?.config?.index_mode;
      return index_mode === 'projects';
    },
    tooltip: t('setting.bitbucketProjectsTip'),
  },
  {
    name: FilterFormField + '.tip',
    label: ' ',
    type: FormFieldType.Custom,
    shouldRender: (formValues: any) => {
      const index_mode = formValues?.config?.index_mode;
      return index_mode === 'workspace';
    },
    render: () => (
      <div className="text-sm text-text-secondary bg-bg-card border border-border-button rounded-md px-3 py-2">
        {t('setting.bitbucketWorkspaceTip')}
      </div>
    ),
  },
];
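The shouldRender/customValidate pairs above encode "shown and required only in the matching index mode". A hedged sketch of how a dynamic-form renderer could consume such descriptors — the real @/components/dynamic-form contract may differ, and the helper names here are invented for illustration:

// Assumed descriptor shape; the actual dynamic-form types may be richer.
type FieldDescriptor = {
  name: string;
  shouldRender?: (formValues: any) => boolean;
  customValidate?: (val: string, formValues: any) => true | string;
};

// Show only fields whose predicate passes; fields without one always render.
function visibleFields(fields: FieldDescriptor[], formValues: any) {
  return fields.filter((f) => f.shouldRender?.(formValues) ?? true);
}

// customValidate returns true on success or an error message string.
function validateField(f: FieldDescriptor, val: string, formValues: any) {
  const result = f.customValidate?.(val, formValues) ?? true;
  return result === true ? undefined : result;
}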

@ -0,0 +1,121 @@
import { FilterFormField, FormFieldType } from '@/components/dynamic-form';
import { TFunction } from 'i18next';

export const confluenceConstant = (t: TFunction) => [
  {
    label: 'Confluence Username',
    name: 'config.credentials.confluence_username',
    type: FormFieldType.Text,
    required: true,
    tooltip: t('setting.connectorNameTip'),
  },
  {
    label: 'Confluence Access Token',
    name: 'config.credentials.confluence_access_token',
    type: FormFieldType.Password,
    required: true,
  },
  {
    label: 'Wiki Base URL',
    name: 'config.wiki_base',
    type: FormFieldType.Text,
    required: false,
    tooltip: t('setting.confluenceWikiBaseUrlTip'),
  },
  {
    label: 'Is Cloud',
    name: 'config.is_cloud',
    type: FormFieldType.Checkbox,
    required: false,
    tooltip: t('setting.confluenceIsCloudTip'),
  },
  {
    label: 'Index Mode',
    name: 'config.index_mode',
    type: FormFieldType.Segmented,
    options: [
      { label: 'Everything', value: 'everything' },
      { label: 'Space', value: 'space' },
      { label: 'Page', value: 'page' },
    ],
  },
  {
    name: 'config.page_id',
    label: 'Page ID',
    type: FormFieldType.Text,
    customValidate: (val: string, formValues: any) => {
      const index_mode = formValues?.config?.index_mode;
      if (!val && index_mode === 'page') {
        return 'Page ID is required';
      }
      return true;
    },
    shouldRender: (formValues: any) => {
      const index_mode = formValues?.config?.index_mode;
      return index_mode === 'page';
    },
  },
  {
    name: 'config.space',
    label: 'Space Key',
    type: FormFieldType.Text,
    customValidate: (val: string, formValues: any) => {
      const index_mode = formValues?.config?.index_mode;
      if (!val && index_mode === 'space') {
        return 'Space Key is required';
      }
      return true;
    },
    shouldRender: (formValues: any) => {
      const index_mode = formValues?.config?.index_mode;
      return index_mode === 'space';
    },
  },
  {
    name: 'config.index_recursively',
    label: 'Index Recursively',
    type: FormFieldType.Checkbox,
    shouldRender: (formValues: any) => {
      const index_mode = formValues?.config?.index_mode;
      return index_mode === 'page';
    },
  },
  {
    name: FilterFormField + '.tip',
    label: ' ',
    type: FormFieldType.Custom,
    shouldRender: (formValues: any) => {
      const index_mode = formValues?.config?.index_mode;
      return index_mode === 'everything';
    },
    render: () => (
      <div className="text-sm text-text-secondary bg-bg-card border border-border-button rounded-md px-3 py-2">
        {
          'This choice will index all pages the provided credentials have access to.'
        }
      </div>
    ),
  },
  {
    label: 'Space Key',
    name: 'config.space',
    type: FormFieldType.Text,
    required: false,
    hidden: true,
  },
  {
    label: 'Page ID',
    name: 'config.page_id',
    type: FormFieldType.Text,
    required: false,
    hidden: true,
  },
  {
    label: 'Index Recursively',
    name: 'config.index_recursively',
    type: FormFieldType.Checkbox,
    required: false,
    hidden: true,
  },
];
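Taken together, these descriptors mean the Space Key field is both shown and required only when index_mode === 'space' (and likewise Page ID for 'page'). An illustrative check — not a test from this changeset — reusing the import style this file already uses; the assertions reflect the validators as written above:

// Illustrative only; assumes a configured i18next instance.
import { t } from 'i18next';
import { confluenceConstant } from './confluence-constant';

const fields = confluenceConstant(t);
const values = { config: { index_mode: 'space', space: '' } };

// The visible (non-hidden) Space Key descriptor renders in 'space' mode...
const spaceField: any = fields.find(
  (f: any) => f.name === 'config.space' && !f.hidden,
);
console.assert(spaceField.shouldRender(values) === true);

// ...and rejects an empty value with an error message.
console.assert(spaceField.customValidate('', values) === 'Space Key is required');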

@ -4,11 +4,13 @@ import { t, TFunction } from 'i18next';
import { useEffect, useState } from 'react';
import { useTranslation } from 'react-i18next';
import BoxTokenField from '../component/box-token-field';
import { ConfluenceIndexingModeField } from '../component/confluence-token-field';
import GmailTokenField from '../component/gmail-token-field';
import GoogleDriveTokenField from '../component/google-drive-token-field';
import { IDataSourceInfoMap } from '../interface';
import { bitbucketConstant } from './bitbucket-constant';
import { confluenceConstant } from './confluence-constant';
import { S3Constant } from './s3-constant';

export enum DataSourceKey {
  CONFLUENCE = 'confluence',
  S3 = 's3',
@ -29,6 +31,7 @@ export enum DataSourceKey {
  ASANA = 'asana',
  IMAP = 'imap',
  GITHUB = 'github',
  BITBUCKET = 'bitbucket',
  ZENDESK = 'zendesk',
  // SHAREPOINT = 'sharepoint',
  // SLACK = 'slack',
@ -134,6 +137,11 @@ export const generateDataSourceInfo = (t: TFunction) => {
      description: t(`setting.${DataSourceKey.IMAP}Description`),
      icon: <SvgIcon name={'data-source/imap'} width={38} />,
    },
    [DataSourceKey.BITBUCKET]: {
      name: 'Bitbucket',
      description: t(`setting.${DataSourceKey.BITBUCKET}Description`),
      icon: <SvgIcon name={'data-source/bitbucket'} width={38} />,
    },
    [DataSourceKey.ZENDESK]: {
      name: 'Zendesk',
      description: t(`setting.${DataSourceKey.ZENDESK}Description`),
@ -294,67 +302,7 @@ export const DataSourceFormFields = {
    },
  ],

  [DataSourceKey.CONFLUENCE]: [
    {
      label: 'Confluence Username',
      name: 'config.credentials.confluence_username',
      type: FormFieldType.Text,
      required: true,
      tooltip: 'A descriptive name for the connector.',
    },
    {
      label: 'Confluence Access Token',
      name: 'config.credentials.confluence_access_token',
      type: FormFieldType.Password,
      required: true,
    },
    {
      label: 'Wiki Base URL',
      name: 'config.wiki_base',
      type: FormFieldType.Text,
      required: false,
      tooltip: t('setting.confluenceWikiBaseUrlTip'),
    },
    {
      label: 'Is Cloud',
      name: 'config.is_cloud',
      type: FormFieldType.Checkbox,
      required: false,
      tooltip: t('setting.confluenceIsCloudTip'),
    },
    {
      label: 'Index Method',
      name: 'config.index_mode',
      type: FormFieldType.Text,
      required: false,
      horizontal: true,
      labelClassName: 'self-start pt-4',
      render: (fieldProps: any) => (
        <ConfluenceIndexingModeField {...fieldProps} />
      ),
    },
    {
      label: 'Space Key',
      name: 'config.space',
      type: FormFieldType.Text,
      required: false,
      hidden: true,
    },
    {
      label: 'Page ID',
      name: 'config.page_id',
      type: FormFieldType.Text,
      required: false,
      hidden: true,
    },
    {
      label: 'Index Recursively',
      name: 'config.index_recursively',
      type: FormFieldType.Checkbox,
      required: false,
      hidden: true,
    },
  ],
  [DataSourceKey.CONFLUENCE]: confluenceConstant(t),
  [DataSourceKey.GOOGLE_DRIVE]: [
    {
      label: 'Primary Admin Email',
@ -828,6 +776,7 @@ export const DataSourceFormFields = {
      required: false,
    },
  ],
  [DataSourceKey.BITBUCKET]: bitbucketConstant(t),
  [DataSourceKey.ZENDESK]: [
    {
      label: 'Zendesk Domain',
@ -919,6 +868,7 @@ export const DataSourceFormDefaultValues = {
    wiki_base: '',
    is_cloud: true,
    space: '',
    page_id: '',
    credentials: {
      confluence_username: '',
      confluence_access_token: '',
@ -1112,6 +1062,19 @@ export const DataSourceFormDefaultValues = {
      },
    },
  },
  [DataSourceKey.BITBUCKET]: {
    name: '',
    source: DataSourceKey.BITBUCKET,
    config: {
      workspace: '',
      index_mode: 'workspace',
      repository_slugs: '',
      projects: '',
    },
    credentials: {
      bitbucket_api_token: '',
    },
  },
  [DataSourceKey.ZENDESK]: {
    name: '',
    source: DataSourceKey.ZENDESK,
|
||||
const ApiPage = () => {
|
||||
return (
|
||||
<div className={styles.apiWrapper}>
|
||||
<ApiContent idKey="dialogId" hideChatPreviewCard></ApiContent>
|
||||
<ApiContent idKey="dialogId"></ApiContent>
|
||||
</div>
|
||||
);
|
||||
};
|
||||
|
||||
@ -45,11 +45,7 @@ export function LangfuseCard() {
|
||||
<Eye /> {t('setting.view')}
|
||||
</Button>
|
||||
)}
|
||||
<Button
|
||||
size={'sm'}
|
||||
onClick={showSaveLangfuseConfigurationModal}
|
||||
className="bg-blue-500 hover:bg-blue-400"
|
||||
>
|
||||
<Button size={'sm'} onClick={showSaveLangfuseConfigurationModal}>
|
||||
<Settings2 />
|
||||
{t('setting.configuration')}
|
||||
</Button>
|