Commit Graph

16 Commits

Author SHA1 Message Date
b4a281eca1 add support for NVIDIA LLM (#1645)
### What problem does this PR solve?

add support for NVIDIA LLM
### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-07-23 10:43:09 +08:00
347cb61f26 add support for StepFun (#1611)
### What problem does this PR solve?

#1561 

### Type of change
- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-07-19 16:26:12 +08:00
fc8a752cd5 fix: MiniMax API error #1353 (#1585)
### What problem does this PR solve?

fix: MiniMax API error #1353

### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-07-18 12:09:25 +08:00
13389be3f4 feat: replace open-router.svg #1467 (#1538)
### What problem does this PR solve?

feat: replace open-router.svg #1467

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-07-16 17:11:37 +08:00
75086f41a9 Load LLM information from a JSON file and add support for OpenRouter (#1533)
### What problem does this PR solve?

#1467 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
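
As a rough illustration of the idea in the title, a hypothetical sketch of loading per-provider LLM metadata from a JSON file; the file name and fields below are assumptions, not the actual schema introduced by this PR:

```python
# Hypothetical sketch only: the file name and schema are assumptions,
# not the JSON format actually added by this PR.
import json

# Example llm_providers.json:
# {
#   "OpenRouter": {
#     "base_url": "https://openrouter.ai/api/v1",
#     "models": [{"name": "openai/gpt-4o", "max_tokens": 128000}]
#   }
# }
with open("llm_providers.json", encoding="utf-8") as f:
    providers = json.load(f)

# New providers such as OpenRouter can then be added by editing the JSON
# file instead of changing code.
for provider, info in providers.items():
    print(provider, [m["name"] for m in info["models"]])
```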

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-07-16 15:19:43 +08:00
ddeac9ab3d Added SVG for Groq model providers (#1470)
#1432  #1447 
This PR adds support for the Groq LLM (Large Language Model) provider.

Groq is an AI solutions company delivering ultra-low-latency inference
with the first-ever LPU™ Inference Engine. The Groq API enables
developers to integrate state-of-the-art LLMs, such as Llama-2 and
llama3-70b-8192, into low-latency applications with the request limits
specified below. Learn more at [groq.com](https://groq.com/).
Supported Models


| ID | Requests per Minute | Requests per Day | Tokens per Minute |
|----------------------|---------------------|------------------|-------------------|
| gemma-7b-it | 30 | 14,400 | 15,000 |
| gemma2-9b-it | 30 | 14,400 | 15,000 |
| llama3-70b-8192 | 30 | 14,400 | 6,000 |
| llama3-8b-8192 | 30 | 14,400 | 30,000 |
| mixtral-8x7b-32768 | 30 | 14,400 | 5,000 |
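
For context, a minimal sketch of calling one of the models listed above through Groq's OpenAI-compatible endpoint; this follows Groq's public API conventions and is not code from this PR:

```python
# Minimal sketch (not from this PR): query a Groq-hosted model via the
# OpenAI-compatible endpoint using the `openai` client.
from openai import OpenAI

client = OpenAI(
    api_key="gsk_...",  # placeholder Groq API key
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible base URL
)

resp = client.chat.completions.create(
    model="llama3-70b-8192",  # one of the supported models above
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```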

---------

Co-authored-by: paresh0628 <paresh.tuvoc@gmail.com>
Co-authored-by: Kevin Hu <kevinhu.sh@gmail.com>
2024-07-12 09:25:44 +08:00
3e9f444e6b add support for Gemini (#1465)
### What problem does this PR solve?

#1036

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-07-11 15:41:00 +08:00
f8aa31b159 feat: add Bedrock icon (#1430)
### What problem does this PR solve?

feat: add Bedrock icon #918

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-07-08 19:14:25 +08:00
98295caffe update MiniMax and Azure-OpenAI icons on the settings page (#1420)
### What problem does this PR solve?

update the MiniMax and Azure-OpenAI icons on the settings page
#1156 #308 #433

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

---------

Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
2024-07-08 17:55:04 +08:00
3ccb62910b fix: add icon to MiniMax and Mistral #1353 (#1367)
### What problem does this PR solve?

fix: add icon to MiniMax and Mistral #1353
### Type of change

- [x] Bug Fix (non-breaking change which fixes an issue)
2024-07-04 10:24:43 +08:00
e1f0644deb feat: add Jina (#967)
### What problem does this PR solve?
feat: add Jina #650

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-05-29 16:48:52 +08:00
9ffd7ae321 Added support for Baichuan LLM (#934)
### What problem does this PR solve?

- Added support for Baichuan LLM

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

Co-authored-by: 海贼宅 <stu_xyx@163.com>
2024-05-28 09:09:37 +08:00
eb51ad73d6 Add support for VolcEngine - the current version supports SDK2 (#885)
- The main idea is to assemble **ak**, **sk**, and **ep_id** into a
dictionary and store it in the database **api_key** field.
- I am not very familiar with the front-end, so I modeled it on the Ollama
integration, which may contain some redundancy.

### Configuration method

- Model name
  - Format requirement: {"VolcEngine model name": "endpoint_id"}
  - For example: {"Skylark-pro-32K": "ep-xxxxxxxxx"}
- Volcano ACCESS_KEY
  - Format requirement: the VOLC_ACCESSKEY of the Volcano Engine
corresponding to the model
- Volcano SECRET_KEY
  - Format requirement: the VOLC_SECRETKEY of the Volcano Engine
corresponding to the model
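
A minimal sketch of assembling **ak**, **sk**, and **ep_id** into the dictionary stored in the **api_key** field, as described above; the helper and key names are hypothetical, not code from this PR:

```python
# Hypothetical sketch: pack the VolcEngine credentials and endpoint id into
# one JSON string so it fits in the existing api_key database field.
import json

def build_volc_api_key(ak: str, sk: str, ep_id: str) -> str:
    """Serialize ak/sk/ep_id into a single JSON string for the api_key field."""
    return json.dumps({"volc_ak": ak, "volc_sk": sk, "ep_id": ep_id})

# Example with the model mapping {"Skylark-pro-32K": "ep-xxxxxxxxx"}:
stored = build_volc_api_key("VOLC_ACCESSKEY...", "VOLC_SECRETKEY...", "ep-xxxxxxxxx")
print(json.loads(stored)["ep_id"])  # -> ep-xxxxxxxxx
```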
    
### What problem does this PR solve?

Add support for VolcEngine as a model provider (the current version supports SDK2).

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-05-23 11:15:29 +08:00
a553dc8dbd feat: support DeepSeek (#667)
### What problem does this PR solve?

#666 
feat: support DeepSeek
feat: preview Word and Excel files

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-05-08 10:30:18 +08:00
cb2cbf500c feat: support Xinference (#319)
### What problem does this PR solve?

support Xinference (Xorbits Inference) as a model provider

Issue link: #299

### Type of change


- [x] New Feature (non-breaking change which adds functionality)
2024-04-11 18:17:45 +08:00
265a7a283a feat: add support for Ollama #221 (#260)
### What problem does this PR solve?

add support for Ollama

Issue link: #221

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
2024-04-08 19:13:45 +08:00