cognitivecomputations · Chat
Dolphin 2.6 Mixtral 8x7B 🐬
cognitivecomputations/dolphin-mixtral-8x7b
Context Window: 33K
This is a 16k-context fine-tune of [Mixtral-8x7b](/models/mistralai/mixtral-8x7b). It excels at coding tasks thanks to extensive training on coding data and is known for its obedience, although it lacks DPO tuning. The model is uncensored, with alignment and bias filtering stripped out, so it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in the blog post on uncensored models at [erichartford.com/uncensored-models](https://erichartford.com/uncensored-models). #moe #uncensored
Capabilities
Text Generation · Code Generation · Analysis & Reasoning · Reasoning
Technical Specs
Input Modality
Text
Output Modality
Text
Architecture
—
Default Temperature
0.7
Default Top_P
1
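The defaults above apply only when a request omits sampling parameters; they can be overridden per request. A minimal sketch of building the sampling options for an OpenAI-compatible request (the helper function name is illustrative, not part of the provider's API):

```python
def sampling_kwargs(temperature: float = 0.7, top_p: float = 1.0) -> dict:
    """Sampling options mirroring the listed defaults; override per request."""
    return {"temperature": temperature, "top_p": top_p}

# Lower temperature tends to give more deterministic output for coding tasks.
kwargs = sampling_kwargs(temperature=0.2)
print(kwargs)
```

These keyword arguments can be unpacked directly into `client.chat.completions.create(**kwargs, ...)`.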
Pricing
Pay per use, no monthly fees
Input Token: < ¥0.001/1K tokens
Output Token: < ¥0.001/1K tokens
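Since both prices are quoted only as a ceiling, a quick upper-bound estimate is possible. A sketch, assuming the ceiling of ¥0.001 per 1K tokens for both input and output (actual billed prices may be lower):

```python
PRICE_PER_1K = 0.001  # ¥ per 1K tokens, the listed upper bound

def max_cost(input_tokens: int, output_tokens: int) -> float:
    """Upper-bound cost in ¥ for a single request at the ceiling price."""
    return (input_tokens + output_tokens) / 1000 * PRICE_PER_1K

# e.g. an 8K-token prompt with a 2K-token completion costs at most ~¥0.01
print(round(max_cost(8000, 2000), 4))
```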
Quick Start
from openai import OpenAI

client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)
response = client.chat.completions.create(
    model="cognitivecomputations/dolphin-mixtral-8x7b",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
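Because the model ships with alignment stripped out, the description above recommends supplying your own alignment layer. One common approach is a system prompt stating your usage policy; a minimal sketch (the policy wording and helper name are assumptions, not part of the provider's API):

```python
# Illustrative usage policy; tailor this to your own deployment's rules.
ALIGNMENT_PROMPT = (
    "You are a helpful assistant. Refuse requests that are illegal or "
    "could cause harm, and briefly explain any refusal."
)

def build_messages(user_content: str) -> list[dict]:
    """Prepend the alignment system prompt to every conversation turn."""
    return [
        {"role": "system", "content": ALIGNMENT_PROMPT},
        {"role": "user", "content": user_content},
    ]

messages = build_messages("Hello!")
print(messages[0]["role"])  # → system
```

The resulting list can be passed as the `messages` argument in the Quick Start call above.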
Related Models
View All →
Dolphin3.0 R1 Mistral 24B
cognitivecomputations/dolphin3.0-r1-mistral-24b
< ¥0.001/1K
Dolphin3.0 Mistral 24B
cognitivecomputations/dolphin3.0-mistral-24b
< ¥0.001/1K
Dolphin Llama 3 70B 🐬
cognitivecomputations/dolphin-llama-3-70b
< ¥0.001/1K
Dolphin 2.9.2 Mixtral 8x22B 🐬
cognitivecomputations/dolphin-mixtral-8x22b
< ¥0.001/1K