Compare commits

...

11 Commits

Author SHA1 Message Date
09570c7eef Feat: expand the capabilities of the MCP Server (#8707)
### What problem does this PR solve?

Expand the capabilities of the MCP Server. #8644.

Special thanks to @Drasek: this change is largely based on his original
implementation, which is neat and well-structured. I essentially just
integrated his code into the codebase with minimal modifications.

My main contribution is a proper cache layer for dataset and document
metadata, using an LRU eviction strategy with a 300 s TTL jittered by a
random ±30 s. The original code did not actually perform any caching.
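
The caching idea can be sketched roughly as follows (a minimal, self-contained illustration; the `TTLLRUCache` name is hypothetical, and the real logic lives in `RAGFlowConnector` in the diff below):

```python
import random
import time
from collections import OrderedDict

class TTLLRUCache:
    """LRU cache whose entries expire after ttl +/- jitter seconds."""

    def __init__(self, max_size=32, ttl=300, jitter=30):
        self._store: OrderedDict[str, tuple[object, float]] = OrderedDict()
        self.max_size, self.ttl, self.jitter = max_size, ttl, jitter

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expiry_ts = entry
        if time.time() >= expiry_ts:       # expired: treat as a miss
            del self._store[key]
            return None
        self._store.move_to_end(key)       # mark as most recently used
        return value

    def set(self, key, value):
        # Jittering the TTL spreads out expirations so entries populated
        # together do not all refresh at the same moment.
        expiry_ts = time.time() + self.ttl + random.randint(-self.jitter, self.jitter)
        self._store[key] = (value, expiry_ts)
        self._store.move_to_end(key)
        if len(self._store) > self.max_size:
            self._store.popitem(last=False)  # evict least recently used
```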

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
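
For context, the expanded `ragflow_retrieval` tool now accepts paging, ranking, and cache controls (see the `inputSchema` in the server diff below). A hypothetical call might pass arguments such as:

```python
# Hypothetical arguments for the expanded ragflow_retrieval MCP tool;
# parameter names and defaults follow the inputSchema in the diff below.
arguments = {
    "question": "What is RAPTOR?",   # the only required field
    "dataset_ids": [],               # empty -> search all accessible datasets
    "page": 1,
    "page_size": 10,                 # keep small to avoid token limits
    "similarity_threshold": 0.2,
    "vector_similarity_weight": 0.3,
    "keyword": False,
    "top_k": 1024,
    "force_refresh": False,          # False -> reuse cached metadata
}
```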

---------

Co-authored-by: Caspar Armster <caspar@armster.de>
2025-08-20 19:30:25 +08:00
312f1a0477 Fix: enlarge raptor timeout limits. (#9600)
### What problem does this PR solve?


### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
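
For context: the change triples the `@timeout` budget on RAPTOR's LLM calls from 60 s to 180 s (see the raptor diff below). A minimal sketch of what such a decorator typically does for async code, under assumed semantics only (RAGFlow's actual `timeout` helper may be implemented differently):

```python
import asyncio
import functools

def timeout(seconds: float):
    """Cancel the wrapped coroutine if it runs longer than `seconds`."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            return await asyncio.wait_for(func(*args, **kwargs), timeout=seconds)
        return wrapper
    return decorator

@timeout(60 * 3)  # 180 s instead of the previous 60 s
async def summarize_batch():
    await asyncio.sleep(0.01)  # stand-in for a slow LLM call
    return "summary"
```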
2025-08-20 17:29:15 +08:00
1ca226e43b Feat: Updated some colors according to the design draft #3221 (#9599)
### What problem does this PR solve?

Feat: Updated some colors according to the design draft #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-20 16:32:29 +08:00
830cda6a3a Fix (web): Optimize text display effect #3221 (#9594)
### What problem does this PR solve?

Fix (web): Optimize text display effect
- Add text-ellipsis and overflow-hidden classes to the HomeCard component so overflowing text is hidden and truncated with an ellipsis
- Add text-ellipsis and overflow-hidden classes to the DatasetSidebar component to improve the display of long dataset names

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-20 15:42:21 +08:00
c66dbbe433 Fix: Fixed the issue where the save button at the bottom of the chat page could not be displayed on small screens #3221 (#9596)
### What problem does this PR solve?

Fixed an issue where the save button at the bottom of the chat page was
not visible on small screens. #3221

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-20 15:42:09 +08:00
3b218b2dc0 fix: passing an empty dataset array when updating an assistant (#9570)
### What problem does this PR solve?

Passing an empty `dataset_ids` array `[]` in the **update assistant**
request triggers the misleading message "Dataset use different embedding
models", while omitting the field does not.
To fix this, we (see the sketch after the file list below):
- Provide a default empty list: `ids = req.get("dataset_ids", [])`.
- Replace the `is not None` check with a truthy check: `if ids:`.

**Files changed**
`api/apps/sdk/chat.py`
- L153: `ids = req.get("dataset_ids")` → `ids = req.get("dataset_ids", [])`
- L156: `if ids is not None:` → `if ids:`
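
A minimal sketch of the behavioral difference (the helper below is hypothetical, for illustration only):

```python
def will_validate_datasets(req: dict) -> bool:
    """Mirrors the new check: [] now behaves like an omitted field."""
    ids = req.get("dataset_ids", [])
    return bool(ids)  # `if ids:` treats both None and [] as falsy

assert will_validate_datasets({"dataset_ids": ["kb_1"]}) is True
assert will_validate_datasets({"dataset_ids": []}) is False  # old `is not None` check entered validation here
assert will_validate_datasets({}) is False
```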

### Type of change
- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-20 13:40:05 +08:00
d58ef6127f Fix: KeyError: 'globals' (#9571)
### What problem does this PR solve?

Fixes https://github.com/infiniflow/ragflow/issues/9545 by adding
backward-compatible logic.
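
The fix (shown in the `Canvas` diff below) falls back to a default `globals` dict when an older DSL document predates the key. A runnable sketch of the same idea, using a hypothetical `load_globals` helper:

```python
DEFAULT_GLOBALS = {
    "sys.query": "",
    "sys.user_id": "",
    "sys.conversation_turns": 0,
    "sys.files": [],
}

def load_globals(dsl: dict) -> dict:
    # Older DSL documents predate the "globals" key; default it instead of
    # raising KeyError on dsl["globals"].
    return dsl.get("globals", DEFAULT_GLOBALS)

assert load_globals({})["sys.conversation_turns"] == 0  # legacy DSL
assert load_globals({"globals": {"sys.query": "hi"}})["sys.query"] == "hi"
```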

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-20 13:39:38 +08:00
55173c7201 Fix (web): Update the style of segmented controls and add metallic texture gradients (#9591)
### What problem does this PR solve?

Fix (web): Update the style of segmented controls and add metallic
texture gradients #3221
- Modified the selected-state style of the Segmented component, adding a metallic texture gradient and a lower border
- Added a metallic gradient background image in tailwind.config.js
- Added the `--metallic` variable in tailwind.css to define the metallic texture colors

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-20 13:39:23 +08:00
f860bdf0ad Revert "Feat: reference should also be list after 0.20.x" (#9592)
Reverts infiniflow/ragflow#9582
2025-08-20 13:38:57 +08:00
997627861a Feat: reference should also be list after 0.20.x (#9582)
### What problem does this PR solve?

In 0.19.0, `reference` is a list, and it should remain a list; otherwise
the last conversation's reference will be lost.
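
A minimal, hypothetical illustration of the stated concern (the real conversation schema may differ, and note that this commit was reverted by the commit above):

```python
# If each assistant turn appends its retrieval references to a list,
# replacing the list with a single object would drop earlier turns.
conversation = {"reference": []}
conversation["reference"].append({"chunks": ["doc-1"]})  # turn 1
conversation["reference"].append({"chunks": ["doc-2"]})  # turn 2
assert len(conversation["reference"]) == 2  # both turns' references retained
```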

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2025-08-20 13:38:14 +08:00
9f9d32d2cd Feat: Make the old page accessible via URL #3221 (#9589)
### What problem does this PR solve?

Feat: Make the old page accessible via URL #3221

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2025-08-20 13:37:06 +08:00
13 changed files with 376 additions and 135 deletions

View File

@@ -131,7 +131,16 @@ class Canvas:
         self.path = self.dsl["path"]
         self.history = self.dsl["history"]
-        self.globals = self.dsl["globals"]
+        if "globals" in self.dsl:
+            self.globals = self.dsl["globals"]
+        else:
+            self.globals = {
+                "sys.query": "",
+                "sys.user_id": "",
+                "sys.conversation_turns": 0,
+                "sys.files": []
+            }
         self.retrieval = self.dsl["retrieval"]
         self.memory = self.dsl.get("memory", [])

View File

@@ -150,10 +150,10 @@ def update(tenant_id, chat_id):
     if not DialogService.query(tenant_id=tenant_id, id=chat_id, status=StatusEnum.VALID.value):
         return get_error_data_result(message="You do not own the chat")
     req = request.json
-    ids = req.get("dataset_ids")
+    ids = req.get("dataset_ids", [])
     if "show_quotation" in req:
         req["do_refer"] = req.pop("show_quotation")
-    if ids is not None:
+    if ids:
         for kb_id in ids:
             kbs = KnowledgebaseService.accessible(kb_id=kb_id, user_id=tenant_id)
             if not kbs:

View File

@@ -16,6 +16,9 @@
 import json
 import logging
+import random
+import time
+from collections import OrderedDict
 from collections.abc import AsyncIterator
 from contextlib import asynccontextmanager
 from functools import wraps
@@ -53,6 +56,13 @@ JSON_RESPONSE = True

 class RAGFlowConnector:
+    _MAX_DATASET_CACHE = 32
+    _MAX_DOCUMENT_CACHE = 128
+    _CACHE_TTL = 300
+
+    _dataset_metadata_cache: OrderedDict[str, tuple[dict, float | int]] = OrderedDict()  # "dataset_id" -> (metadata, expiry_ts)
+    _document_metadata_cache: OrderedDict[str, tuple[list[tuple[str, dict]], float | int]] = OrderedDict()  # "dataset_id" -> ([(document_id, doc_metadata)], expiry_ts)
+
     def __init__(self, base_url: str, version="v1"):
         self.base_url = base_url
         self.version = version
@@ -72,6 +82,43 @@ class RAGFlowConnector:
         res = requests.get(url=self.api_url + path, params=params, headers=self.authorization_header, json=json)
         return res

+    def _is_cache_valid(self, ts):
+        return time.time() < ts
+
+    def _get_expiry_timestamp(self):
+        offset = random.randint(-30, 30)
+        return time.time() + self._CACHE_TTL + offset
+
+    def _get_cached_dataset_metadata(self, dataset_id):
+        entry = self._dataset_metadata_cache.get(dataset_id)
+        if entry:
+            data, ts = entry
+            if self._is_cache_valid(ts):
+                self._dataset_metadata_cache.move_to_end(dataset_id)
+                return data
+        return None
+
+    def _set_cached_dataset_metadata(self, dataset_id, metadata):
+        self._dataset_metadata_cache[dataset_id] = (metadata, self._get_expiry_timestamp())
+        self._dataset_metadata_cache.move_to_end(dataset_id)
+        if len(self._dataset_metadata_cache) > self._MAX_DATASET_CACHE:
+            self._dataset_metadata_cache.popitem(last=False)
+
+    def _get_cached_document_metadata_by_dataset(self, dataset_id):
+        entry = self._document_metadata_cache.get(dataset_id)
+        if entry:
+            data_list, ts = entry
+            if self._is_cache_valid(ts):
+                self._document_metadata_cache.move_to_end(dataset_id)
+                return {doc_id: doc_meta for doc_id, doc_meta in data_list}
+        return None
+
+    def _set_cached_document_metadata_by_dataset(self, dataset_id, doc_id_meta_list):
+        self._document_metadata_cache[dataset_id] = (doc_id_meta_list, self._get_expiry_timestamp())
+        self._document_metadata_cache.move_to_end(dataset_id)
+        if len(self._document_metadata_cache) > self._MAX_DOCUMENT_CACHE:
+            self._document_metadata_cache.popitem(last=False)
+
     def list_datasets(self, page: int = 1, page_size: int = 1000, orderby: str = "create_time", desc: bool = True, id: str | None = None, name: str | None = None):
         res = self._get("/datasets", {"page": page, "page_size": page_size, "orderby": orderby, "desc": desc, "id": id, "name": name})
         if not res:
@@ -87,10 +134,38 @@ class RAGFlowConnector:
         return ""

     def retrieval(
-        self, dataset_ids, document_ids=None, question="", page=1, page_size=30, similarity_threshold=0.2, vector_similarity_weight=0.3, top_k=1024, rerank_id: str | None = None, keyword: bool = False
+        self,
+        dataset_ids,
+        document_ids=None,
+        question="",
+        page=1,
+        page_size=30,
+        similarity_threshold=0.2,
+        vector_similarity_weight=0.3,
+        top_k=1024,
+        rerank_id: str | None = None,
+        keyword: bool = False,
+        force_refresh: bool = False,
     ):
         if document_ids is None:
             document_ids = []
+
+        # If no dataset_ids provided or empty list, get all available dataset IDs
+        if not dataset_ids:
+            dataset_list_str = self.list_datasets()
+            dataset_ids = []
+            # Parse the dataset list to extract IDs
+            if dataset_list_str:
+                for line in dataset_list_str.strip().split('\n'):
+                    if line.strip():
+                        try:
+                            dataset_info = json.loads(line.strip())
+                            dataset_ids.append(dataset_info["id"])
+                        except (json.JSONDecodeError, KeyError):
+                            # Skip malformed lines
+                            continue
+
         data_json = {
             "page": page,
             "page_size": page_size,
@@ -110,12 +185,127 @@ class RAGFlowConnector:
         res = res.json()
         if res.get("code") == 0:
+            data = res["data"]
+
             chunks = []
-            for chunk_data in res["data"].get("chunks"):
-                chunks.append(json.dumps(chunk_data, ensure_ascii=False))
-            return [types.TextContent(type="text", text="\n".join(chunks))]
+
+            # Cache document metadata and dataset information
+            document_cache, dataset_cache = self._get_document_metadata_cache(dataset_ids, force_refresh=force_refresh)
+
+            # Process chunks with enhanced field mapping including per-chunk metadata
+            for chunk_data in data.get("chunks", []):
+                enhanced_chunk = self._map_chunk_fields(chunk_data, dataset_cache, document_cache)
+                chunks.append(enhanced_chunk)
+
+            # Build structured response (no longer need response-level document_metadata)
+            response = {
+                "chunks": chunks,
+                "pagination": {
+                    "page": data.get("page", page),
+                    "page_size": data.get("page_size", page_size),
+                    "total_chunks": data.get("total", len(chunks)),
+                    "total_pages": (data.get("total", len(chunks)) + page_size - 1) // page_size,
+                },
+                "query_info": {
+                    "question": question,
+                    "similarity_threshold": similarity_threshold,
+                    "vector_weight": vector_similarity_weight,
+                    "keyword_search": keyword,
+                    "dataset_count": len(dataset_ids),
+                },
+            }
+            return [types.TextContent(type="text", text=json.dumps(response, ensure_ascii=False))]
         raise Exception([types.TextContent(type="text", text=res.get("message"))])

+    def _get_document_metadata_cache(self, dataset_ids, force_refresh=False):
+        """Cache document metadata for all documents in the specified datasets"""
+        document_cache = {}
+        dataset_cache = {}
+        try:
+            for dataset_id in dataset_ids:
+                dataset_meta = None if force_refresh else self._get_cached_dataset_metadata(dataset_id)
+                if not dataset_meta:
+                    # First get dataset info for name
+                    dataset_res = self._get("/datasets", {"id": dataset_id, "page_size": 1})
+                    if dataset_res and dataset_res.status_code == 200:
+                        dataset_data = dataset_res.json()
+                        if dataset_data.get("code") == 0 and dataset_data.get("data"):
+                            dataset_info = dataset_data["data"][0]
+                            dataset_meta = {"name": dataset_info.get("name", "Unknown"), "description": dataset_info.get("description", "")}
+                            self._set_cached_dataset_metadata(dataset_id, dataset_meta)
+                if dataset_meta:
+                    dataset_cache[dataset_id] = dataset_meta
+
+                docs = None if force_refresh else self._get_cached_document_metadata_by_dataset(dataset_id)
+                if docs is None:
+                    docs_res = self._get(f"/datasets/{dataset_id}/documents")
+                    docs_data = docs_res.json()
+                    if docs_data.get("code") == 0 and docs_data.get("data", {}).get("docs"):
+                        doc_id_meta_list = []
+                        docs = {}
+                        for doc in docs_data["data"]["docs"]:
+                            doc_id = doc.get("id")
+                            if not doc_id:
+                                continue
+                            doc_meta = {
+                                "document_id": doc_id,
+                                "name": doc.get("name", ""),
+                                "location": doc.get("location", ""),
+                                "type": doc.get("type", ""),
+                                "size": doc.get("size"),
+                                "chunk_count": doc.get("chunk_count"),
+                                # "chunk_method": doc.get("chunk_method", ""),
+                                "create_date": doc.get("create_date", ""),
+                                "update_date": doc.get("update_date", ""),
+                                # "process_begin_at": doc.get("process_begin_at", ""),
+                                # "process_duration": doc.get("process_duration"),
+                                # "progress": doc.get("progress"),
+                                # "progress_msg": doc.get("progress_msg", ""),
+                                # "status": doc.get("status", ""),
+                                # "run": doc.get("run", ""),
+                                "token_count": doc.get("token_count"),
+                                # "source_type": doc.get("source_type", ""),
+                                "thumbnail": doc.get("thumbnail", ""),
+                                "dataset_id": doc.get("dataset_id", dataset_id),
+                                "meta_fields": doc.get("meta_fields", {}),
+                                # "parser_config": doc.get("parser_config", {})
+                            }
+                            doc_id_meta_list.append((doc_id, doc_meta))
+                            docs[doc_id] = doc_meta
+                        self._set_cached_document_metadata_by_dataset(dataset_id, doc_id_meta_list)
+                if docs:
+                    document_cache.update(docs)
+        except Exception:
+            # Gracefully handle metadata cache failures
+            pass
+
+        return document_cache, dataset_cache
+
+    def _map_chunk_fields(self, chunk_data, dataset_cache, document_cache):
+        """Preserve all original API fields and add per-chunk document metadata"""
+        # Start with ALL raw data from API (preserve everything like original version)
+        mapped = dict(chunk_data)
+
+        # Add dataset name enhancement
+        dataset_id = chunk_data.get("dataset_id") or chunk_data.get("kb_id")
+        if dataset_id and dataset_id in dataset_cache:
+            mapped["dataset_name"] = dataset_cache[dataset_id]["name"]
+        else:
+            mapped["dataset_name"] = "Unknown"
+
+        # Add document name convenience field
+        mapped["document_name"] = chunk_data.get("document_keyword", "")
+
+        # Add per-chunk document metadata
+        document_id = chunk_data.get("document_id")
+        if document_id and document_id in document_cache:
+            mapped["document_metadata"] = document_cache[document_id]
+
+        return mapped

 class RAGFlowCtx:
     def __init__(self, connector: RAGFlowConnector):
@@ -195,7 +385,58 @@ async def list_tools(*, connector) -> list[types.Tool]:
                 "items": {"type": "string"},
                 "description": "Optional array of document IDs to search within."
             },
-            "question": {"type": "string", "description": "The question or query to search for."},
+            "question": {
+                "type": "string",
+                "description": "The question or query to search for."
+            },
+            "page": {
+                "type": "integer",
+                "description": "Page number for pagination",
+                "default": 1,
+                "minimum": 1,
+            },
+            "page_size": {
+                "type": "integer",
+                "description": "Number of results to return per page (default: 10, max recommended: 50 to avoid token limits)",
+                "default": 10,
+                "minimum": 1,
+                "maximum": 100,
+            },
+            "similarity_threshold": {
+                "type": "number",
+                "description": "Minimum similarity threshold for results",
+                "default": 0.2,
+                "minimum": 0.0,
+                "maximum": 1.0,
+            },
+            "vector_similarity_weight": {
+                "type": "number",
+                "description": "Weight for vector similarity vs term similarity",
+                "default": 0.3,
+                "minimum": 0.0,
+                "maximum": 1.0,
+            },
+            "keyword": {
+                "type": "boolean",
+                "description": "Enable keyword-based search",
+                "default": False,
+            },
+            "top_k": {
+                "type": "integer",
+                "description": "Maximum results to consider before ranking",
+                "default": 1024,
+                "minimum": 1,
+                "maximum": 1024,
+            },
+            "rerank_id": {
+                "type": "string",
+                "description": "Optional reranking model identifier",
+            },
+            "force_refresh": {
+                "type": "boolean",
+                "description": "Set to true only if fresh dataset and document metadata is explicitly required. Otherwise, cached metadata is used (default: false).",
+                "default": False,
+            },
         },
         "required": ["question"],
     },
@@ -209,6 +450,16 @@ async def call_tool(name: str, arguments: dict, *, connector) -> list[types.TextContent]:
     if name == "ragflow_retrieval":
         document_ids = arguments.get("document_ids", [])
         dataset_ids = arguments.get("dataset_ids", [])
+        question = arguments.get("question", "")
+        page = arguments.get("page", 1)
+        page_size = arguments.get("page_size", 10)
+        similarity_threshold = arguments.get("similarity_threshold", 0.2)
+        vector_similarity_weight = arguments.get("vector_similarity_weight", 0.3)
+        keyword = arguments.get("keyword", False)
+        top_k = arguments.get("top_k", 1024)
+        rerank_id = arguments.get("rerank_id")
+        force_refresh = arguments.get("force_refresh", False)

         # If no dataset_ids provided or empty list, get all available dataset IDs
         if not dataset_ids:
@@ -229,7 +480,15 @@ async def call_tool(name: str, arguments: dict, *, connector) -> list[types.TextContent]:
         return connector.retrieval(
             dataset_ids=dataset_ids,
             document_ids=document_ids,
-            question=arguments["question"],
+            question=question,
+            page=page,
+            page_size=page_size,
+            similarity_threshold=similarity_threshold,
+            vector_similarity_weight=vector_similarity_weight,
+            keyword=keyword,
+            top_k=top_k,
+            rerank_id=rerank_id,
+            force_refresh=force_refresh,
         )
     raise ValueError(f"Tool not found: {name}")

View File

@@ -289,7 +289,7 @@ class Pdf(PdfParser):
             return [(b["text"], self._line_tag(b, zoomin)) for b in self.boxes], tbls, figures
         else:
             tbls = self._extract_table_figure(True, zoomin, True, True)
-            # self._naive_vertical_merge()
+            self._naive_vertical_merge()
             self._concat_downward()
             # self._filter_forpages()
             logging.info("layouts cost: {}s".format(timer() - first_start))

View File

@@ -42,7 +42,7 @@ class RecursiveAbstractiveProcessing4TreeOrganizedRetrieval:
         self._prompt = prompt
         self._max_token = max_token

-    @timeout(60)
+    @timeout(60*3)
     async def _chat(self, system, history, gen_conf):
         response = get_llm_cache(self._llm_model.llm_name, system, history, gen_conf)
         if response:
@@ -86,7 +86,7 @@ class RecursiveAbstractiveProcessing4TreeOrganizedRetrieval:
         layers = [(0, len(chunks))]
         start, end = 0, len(chunks)

-        @timeout(60)
+        @timeout(60*3)
         async def summarize(ck_idx: list[int]):
             nonlocal chunks
             texts = [chunks[i][0] for i in ck_idx]

View File

@@ -14,7 +14,7 @@ module.exports = {
       'error',
       {
         '**/*.{jsx,tsx}': 'KEBAB_CASE',
-        '**/*.{js,ts}': 'KEBAB_CASE',
+        '**/*.{js,ts}': '[a-z0-9.-]*',
       },
     ],
     'check-file/folder-naming-convention': [

View File

@@ -31,7 +31,7 @@ export function HomeCard({ data, onClick, moreDropdown }: IProps) {
       </div>
       <div className="flex flex-col justify-between gap-1 flex-1 h-full w-[calc(100%-50px)]">
         <section className="flex justify-between">
-          <div className="text-[20px] font-bold w-80% leading-5">
+          <div className="text-[20px] font-bold w-80% leading-5 text-ellipsis overflow-hidden">
             {data.name}
           </div>
           {moreDropdown}

View File

@@ -57,8 +57,8 @@ export function Segmented({
           className={cn(
             'inline-flex items-center px-6 py-2 text-base font-normal rounded-3xl cursor-pointer',
             {
-              'bg-text-primary': selectedValue === actualValue,
-              'text-bg-base': selectedValue === actualValue,
+              'text-bg-base bg-metallic-gradient border-b-[#00BEB4] border-b-2':
+                selectedValue === actualValue,
             },
           )}
           onClick={() => handleOnChange(actualValue)}

View File

@@ -62,8 +62,8 @@ export function SideBar({ refreshCount }: PropType) {
             name={data.name}
             className="size-16"
           ></RAGFlowAvatar>
-          <div className=" text-text-secondary text-xs space-y-1">
-            <h3 className="text-lg font-semibold line-clamp-1 text-text-primary">
+          <div className=" text-text-secondary text-xs space-y-1 overflow-hidden">
+            <h3 className="text-lg font-semibold line-clamp-1 text-text-primary text-ellipsis overflow-hidden">
              {data.name}
            </h3>
            <div className="flex justify-between">

View File

@@ -89,25 +89,28 @@ export function ChatSettings({ switchSettingVisible }: ChatSettingsProps) {
   }, [data, form]);

   return (
-    <section className="p-5 w-[440px] border-l">
+    <section className="p-5 w-[440px] border-l flex flex-col">
       <div className="flex justify-between items-center text-base pb-2">
         {t('chat.chatSetting')}
         <X className="size-4 cursor-pointer" onClick={switchSettingVisible} />
       </div>
       <Form {...form}>
-        <form onSubmit={form.handleSubmit(onSubmit, onInvalid)}>
-          <section className="space-y-6 overflow-auto max-h-[82vh] pr-4">
+        <form
+          onSubmit={form.handleSubmit(onSubmit, onInvalid)}
+          className="flex-1 flex flex-col min-h-0"
+        >
+          <section className="space-y-6 overflow-auto flex-1 pr-4 min-h-0">
             <ChatBasicSetting></ChatBasicSetting>
             <Separator />
             <ChatPromptEngine></ChatPromptEngine>
             <Separator />
             <ChatModelSettings></ChatModelSettings>
           </section>
-          <div className="space-x-5 text-right">
+          <div className="space-x-5 text-right pt-4">
             <Button variant={'outline'} onClick={switchSettingVisible}>
               {t('chat.cancel')}
             </Button>
-            <ButtonLoading className=" my-4" type="submit" loading={loading}>
+            <ButtonLoading type="submit" loading={loading}>
               {t('common.save')}
             </ButtonLoading>
           </div>

View File

@@ -70,116 +70,73 @@ const routes = [
component: `@/pages${Routes.AgentShare}`,
layout: false,
},
// {
// path: '/',
// component: '@/layouts',
// layout: false,
// wrappers: ['@/wrappers/auth'],
// routes: [
// { path: '/', redirect: '/knowledge' },
// {
// path: '/knowledge',
// component: '@/pages/knowledge',
// // component: '@/pages/knowledge/datasets',
// },
// {
// path: '/knowledge',
// component: '@/pages/add-knowledge',
// routes: [
// {
// path: '/knowledge/dataset',
// component: '@/pages/add-knowledge/components/knowledge-dataset',
// routes: [
// {
// path: '/knowledge/dataset',
// component: '@/pages/add-knowledge/components/knowledge-file',
// },
// {
// path: '/knowledge/dataset/chunk',
// component: '@/pages/add-knowledge/components/knowledge-chunk',
// },
// ],
// },
// {
// path: '/knowledge/configuration',
// component: '@/pages/add-knowledge/components/knowledge-setting',
// },
// {
// path: '/knowledge/testing',
// component: '@/pages/add-knowledge/components/knowledge-testing',
// },
// {
// path: '/knowledge/knowledgeGraph',
// component: '@/pages/add-knowledge/components/knowledge-graph',
// },
// ],
// },
// {
// path: '/chat',
// component: '@/pages/chat',
// },
// {
// path: '/user-setting',
// component: '@/pages/user-setting',
// routes: [
// { path: '/user-setting', redirect: '/user-setting/profile' },
// {
// path: '/user-setting/profile',
// // component: '@/pages/user-setting/setting-profile',
// component: '@/pages/user-setting/setting-profile',
// },
// {
// path: '/user-setting/locale',
// component: '@/pages/user-setting/setting-locale',
// },
// {
// path: '/user-setting/password',
// component: '@/pages/user-setting/setting-password',
// },
// {
// path: '/user-setting/model',
// component: '@/pages/user-setting/setting-model',
// },
// {
// path: '/user-setting/team',
// component: '@/pages/user-setting/setting-team',
// },
// {
// path: '/user-setting/system',
// component: '@/pages/user-setting/setting-system',
// },
// {
// path: '/user-setting/api',
// component: '@/pages/user-setting/setting-api',
// },
// {
// path: `/user-setting${Routes.Mcp}`,
// component: `@/pages${Routes.ProfileMcp}`,
// },
// ],
// },
// {
// path: '/file',
// component: '@/pages/file-manager',
// },
// {
// path: '/flow',
// component: '@/pages/flow/list',
// },
// {
// path: Routes.AgentList,
// component: `@/pages/${Routes.Agents}`,
// },
// {
// path: '/flow/:id',
// component: '@/pages/flow',
// },
// {
// path: '/search',
// component: '@/pages/search',
// },
// ],
// },
{
path: Routes.Home,
component: '@/layouts',
layout: false,
redirect: '/knowledge',
},
{
path: '/knowledge',
component: '@/pages/knowledge',
},
{
path: '/knowledge',
component: '@/pages/add-knowledge',
routes: [
{
path: 'dataset',
component: '@/pages/add-knowledge/components/knowledge-dataset',
routes: [
{
path: '',
component: '@/pages/add-knowledge/components/knowledge-file',
},
{
path: 'chunk',
component: '@/pages/add-knowledge/components/knowledge-chunk',
},
],
},
{
path: 'configuration',
component: '@/pages/add-knowledge/components/knowledge-setting',
},
{
path: 'testing',
component: '@/pages/add-knowledge/components/knowledge-testing',
},
{
path: 'knowledgeGraph',
component: '@/pages/add-knowledge/components/knowledge-graph',
},
],
},
{
path: '/chat',
component: '@/pages/chat',
},
{
path: '/file',
component: '@/pages/file-manager',
},
{
path: '/flow',
component: '@/pages/flow/list',
},
{
path: Routes.AgentList,
component: `@/pages/${Routes.Agents}`,
},
{
path: '/flow/:id',
component: '@/pages/flow',
},
{
path: '/search',
component: '@/pages/search',
},
{
path: '/document/:id',
component: '@/pages/document-viewer',

View File

@@ -58,6 +58,8 @@ module.exports = {
         'bg-base': 'var(--bg-base)',
         'bg-card': 'var(--bg-card)',
         'bg-component': 'var(--bg-component)',
+        'bg-input': 'var(--bg-input)',
         'text-primary': 'var(--text-primary)',
         'text-secondary': 'var(--text-secondary)',
         'text-disabled': 'var(--text-disabled)',
@@ -206,6 +208,10 @@ module.exports = {
         ring: 'hsl(var(--sidebar-ring))',
       },
     },
+    backgroundImage: {
+      'metallic-gradient':
+        'linear-gradient(104deg, var(--text-primary) 30%, var(--metallic) 50%, var(--text-primary) 70%)',
+    },
     borderRadius: {
       lg: `var(--radius)`,
       md: `calc(var(--radius) - 2px)`,

View File

@@ -90,11 +90,15 @@
   --input-border: rgba(22, 22, 24, 0.2);
   --metallic: #46464a;
   /* design colors */
-  --bg-base: #f6f6f7;
+  --bg-base: #ffffff;
   /* card color , dividing line */
   --bg-card: rgba(0, 0, 0, 0.05);
   --bg-component: #ffffff;
+  --bg-input: rgba(255, 255, 255, 0);
+  --bg-accent: rgba(76, 164, 231, 0.05);
   /* Button ,Body text, Input completed text */
   --text-primary: #161618;
   --text-secondary: #75787a;
@@ -107,7 +111,7 @@
   --border-accent: #000000;
   --border-button: rgba(0, 0, 0, 0.1);
   /* Regulators, parsing, switches, variables */
-  --accent-primary: #4ca4e7;
+  --accent-primary: #00beb4;
   /* Output Variables Box */
   --bg-accent: rgba(76, 164, 231, 0.05);
@@ -230,10 +234,13 @@
   --input-border: rgba(255, 255, 255, 0.2);
   --metallic: #fafafa;
   /* design colors */
   --bg-base: #161618;
   --bg-card: rgba(255, 255, 255, 0.05);
   --bg-component: #202025;
+  --bg-input: rgba(255, 255, 255, 0.05);
   --text-primary: #f6f6f7;
   --text-secondary: #b2b5b7;
   --text-disabled: #75787a;