---
sidebar_position: 1
slug: /supported_models
---

# Supported models

import APITable from '@site/src/components/APITable';

A complete list of models supported by RAGFlow, which will continue to expand.

```mdx-code-block
<APITable>
```

| Provider | Chat | Embedding | Rerank | Img2txt | Speech2txt | TTS |
| --------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| Anthropic | :heavy_check_mark: | | | | | |
| Azure-OpenAI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | |
| BAAI | | :heavy_check_mark: | :heavy_check_mark: | | | |
| BaiChuan | :heavy_check_mark: | :heavy_check_mark: | | | | |
| BaiduYiyan | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| Bedrock | :heavy_check_mark: | :heavy_check_mark: | | | | |
| Cohere | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| DeepSeek | :heavy_check_mark: | | | | | |
| FastEmbed | | :heavy_check_mark: | | | | |
| Fish Audio | | | | | | :heavy_check_mark: |
| Gemini | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| Google Cloud | :heavy_check_mark: | | | | | |
| GPUStack | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: |
| Groq | :heavy_check_mark: | | | | | |
| HuggingFace | :heavy_check_mark: | :heavy_check_mark: | | | | |
| Jina | | :heavy_check_mark: | :heavy_check_mark: | | | |
| LeptonAI | :heavy_check_mark: | | | | | |
| LocalAI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| LM-Studio | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| MiniMax | :heavy_check_mark: | | | | | |
| Mistral | :heavy_check_mark: | :heavy_check_mark: | | | | |
| ModelScope | :heavy_check_mark: | | | | | |
| Moonshot | :heavy_check_mark: | | | :heavy_check_mark: | | |
| Novita AI | :heavy_check_mark: | :heavy_check_mark: | | | | |
| NVIDIA | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| Ollama | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| OpenAI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| OpenAI-API-Compatible | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| OpenRouter | :heavy_check_mark: | | | :heavy_check_mark: | | |
| PerfXCloud | :heavy_check_mark: | :heavy_check_mark: | | | | |
| Replicate | :heavy_check_mark: | :heavy_check_mark: | | | | |
| PPIO | :heavy_check_mark: | | | | | |
| SILICONFLOW | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| StepFun | :heavy_check_mark: | | | | | |
| Tencent Hunyuan | :heavy_check_mark: | | | | | |
| Tencent Cloud | | | | | :heavy_check_mark: | |
| TogetherAI | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| Tongyi-Qianwen | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Upstage | :heavy_check_mark: | :heavy_check_mark: | | | | |
| VLLM | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| VolcEngine | :heavy_check_mark: | | | | | |
| Voyage AI | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| Xinference | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| XunFei Spark | :heavy_check_mark: | | | | | :heavy_check_mark: |
| xAI | :heavy_check_mark: | | | :heavy_check_mark: | | |
| Youdao | | :heavy_check_mark: | :heavy_check_mark: | | | |
| ZHIPU-AI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | |
| 01.AI | :heavy_check_mark: | | | | | |
| DeepInfra | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: |
| 302.AI | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| CometAPI | :heavy_check_mark: | :heavy_check_mark: | | | | |
| DeerAPI | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | :heavy_check_mark: |
| Jiekou.AI | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | |

```mdx-code-block
</APITable>
```

:::danger IMPORTANT
If your model is not listed here but has APIs compatible with those of OpenAI, click **OpenAI-API-Compatible** on the **Model providers** page to configure your model.
:::
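
Before configuring such a model, it can be worth confirming that the endpoint actually speaks the OpenAI API. The sketch below uses the `openai` Python client; the base URL, API key, and model IDs are placeholders for your own provider's values, not settings defined by RAGFlow.

```python
# Sketch: check that an endpoint is OpenAI-compatible before adding it to RAGFlow.
# "https://your-provider.example.com/v1" and "YOUR_API_KEY" are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-provider.example.com/v1",  # hypothetical base URL
    api_key="YOUR_API_KEY",
)

# A compatible endpoint serves /v1/models; these IDs are what you enter in RAGFlow.
for model in client.models.list():
    print(model.id)
```

If the call succeeds, use the same base URL, API key, and one of the listed model IDs when adding the model under **OpenAI-API-Compatible**.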

## Example: AI Badgr (OpenAI-compatible)

You can use **AI Badgr** with RAGFlow via the existing OpenAI-API-Compatible provider.

To configure AI Badgr:

- **Provider**: `OpenAI-API-Compatible`
- **Base URL**: `https://aibadgr.com/api/v1`
- **API Key**: your AI Badgr API key (from the AI Badgr dashboard)
- **Model**: any AI Badgr chat or embedding model ID, as exposed by AI Badgr's OpenAI-compatible APIs

AI Badgr implements OpenAI-compatible endpoints for `/v1/chat/completions`, `/v1/embeddings`, and `/v1/models`, so no additional code changes in RAGFlow are required.
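
As a quick sanity check outside RAGFlow, you can exercise the same values with any OpenAI-compatible client. The sketch below uses the `openai` Python package and reads the key from an `AIBADGR_API_KEY` environment variable; the model IDs are placeholders, so substitute the chat and embedding models that `/v1/models` reports for your account.

```python
# Sketch: calling AI Badgr's OpenAI-compatible API directly.
# The model IDs below are placeholders; list the real ones with client.models.list().
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://aibadgr.com/api/v1",
    api_key=os.environ["AIBADGR_API_KEY"],  # your AI Badgr API key
)

# Chat completion (served by /v1/chat/completions).
chat = client.chat.completions.create(
    model="your-aibadgr-chat-model",  # placeholder model ID
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(chat.choices[0].message.content)

# Embedding (served by /v1/embeddings).
emb = client.embeddings.create(
    model="your-aibadgr-embedding-model",  # placeholder model ID
    input="RAGFlow supported models",
)
print(len(emb.data[0].embedding))
```

If both calls work, the same base URL and API key can be entered for AI Badgr models on RAGFlow's **Model providers** page.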

:::note
The list of supported models is extracted from [this source](https://github.com/infiniflow/ragflow/blob/main/rag/llm/__init__.py) and may not be the most current. For the latest supported model list, please refer to the Python file.
:::