Editorial updates to Docker README (#3223)

### What problem does this PR solve?



### Type of change


- [x] Documentation Update
This commit is contained in:
writinwaters
2024-11-06 09:43:54 +08:00
committed by GitHub
parent a418a343d1
commit af74bf01c0
6 changed files with 51 additions and 73 deletions


@@ -223,24 +223,7 @@ With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (*
 ## Configure LLMs
-RAGFlow is a RAG engine, and it needs to work with an LLM to offer grounded, hallucination-free question-answering capabilities. For now, RAGFlow supports the following LLMs, and the list is expanding:
-- [OpenAI](https://platform.openai.com/login?launch)
-- [Azure-OpenAI](https://ai.azure.com/)
-- [Gemini](https://aistudio.google.com/)
-- [Groq](https://console.groq.com/)
-- [Mistral](https://mistral.ai/)
-- [Bedrock](https://aws.amazon.com/cn/bedrock/)
-- [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model)
-- [ZHIPU-AI](https://open.bigmodel.cn/)
-- [MiniMax](https://platform.minimaxi.com/)
-- [Moonshot](https://platform.moonshot.cn/docs)
-- [DeepSeek](https://platform.deepseek.com/api-docs/)
-- [Baichuan](https://www.baichuan-ai.com/home)
-- [VolcEngine](https://www.volcengine.com/docs/82379)
-- [Jina](https://jina.ai/reader/)
-- [OpenRouter](https://openrouter.ai/)
-- [StepFun](https://platform.stepfun.com/)
+RAGFlow is a RAG engine and needs to work with an LLM to offer grounded, hallucination-free question-answering capabilities. RAGFlow supports most mainstream LLMs. For a complete list of supported models, please refer to [Supported Models](./references/supported_models.mdx).
 :::note
 RAGFlow also supports deploying LLMs locally using Ollama, Xinference, or LocalAI, but this part is not covered in this quick start guide.
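The note in the diff above mentions local LLM deployment via Ollama. As a minimal illustration of what that entails (deliberately outside the scope of this quick start, and assuming Ollama is already installed and the `llama3` model name is available), the local server is started like this:

```shell
# Assumes Ollama is installed (https://ollama.com); model name is illustrative.
ollama pull llama3                    # download the model weights locally
ollama serve                          # start the local server (default port 11434)
curl http://localhost:11434/api/tags  # list models the local server can serve
```

Once the server is running, a locally deployed model is added in RAGFlow through the same model-provider settings page used for the hosted LLMs listed above, pointed at the local endpoint.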