
Mistral: Mistral Nemo

mistralai/mistral-nemo
Context Window: 131K tokens
Max Output: 16K tokens
Supported Parameters: max_tokens, temperature, top_p, stop, frequency_penalty, presence_penalty, repetition_penalty, top_k, seed, min_p, response_format
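
The supported parameters above map directly onto keyword arguments of the OpenAI-compatible chat completions call. A minimal sketch, with illustrative values (the specific numbers are not recommendations from this page, though 0.3 and 1.0 match the listed defaults):

```python
# Sampling options drawn from the supported-parameter list above.
# All values here are illustrative examples, not tuned settings.
SAMPLING_KWARGS = {
    "max_tokens": 512,
    "temperature": 0.3,        # model's listed default
    "top_p": 1.0,              # model's listed default
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "seed": 42,                # for reproducible sampling
    "stop": ["\n\n"],          # stop at the first blank line
}

# Pass alongside model and messages, e.g.:
# client.chat.completions.create(model="mistralai/mistral-nemo",
#                                messages=[...], **SAMPLING_KWARGS)
```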

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It supports function calling and is released under the Apache 2.0 license.

Capabilities

Text Generation, Code Generation, Analysis & Reasoning

Technical Specs

Input Modality: Text
Output Modality: Text
Default Temperature: 0.3
Default Top_P: 1

Pricing

Pay per use, no monthly fees
Text Input: $0.0200 per 1M tokens
Text Output: $0.0300 per 1M tokens
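
The per-token prices above make per-request cost a one-line calculation. A small sketch using the listed rates ($0.02 per million input tokens, $0.03 per million output tokens):

```python
# Per-million-token prices from the pricing table above.
INPUT_PRICE_PER_M = 0.02    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.03   # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 10,000-token prompt with a 1,000-token completion:
# (10_000 * 0.02 + 1_000 * 0.03) / 1e6 = $0.00023
```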

Quick Start

from openai import OpenAI

# Point the OpenAI SDK at the OpenAI-compatible endpoint.
client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

# Send a minimal chat request to Mistral Nemo.
response = client.chat.completions.create(
    model="mistralai/mistral-nemo",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
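
The model also supports function calling, which the OpenAI-compatible API exposes through the tools parameter. A sketch under stated assumptions: the get_weather tool, its schema, and the UNIONTOKEN_API_KEY environment variable are illustrative, not part of this page.

```python
import json
import os

# Hypothetical tool schema in the OpenAI function-calling format.
# The tool name and parameters are illustrative examples.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

def run_tool_call():
    """Send a tool-enabled request; requires UNIONTOKEN_API_KEY to be set."""
    from openai import OpenAI  # imported lazily so the schema is usable without the SDK

    client = OpenAI(
        base_url="https://api.uniontoken.ai/v1",
        api_key=os.environ["UNIONTOKEN_API_KEY"],
    )
    response = client.chat.completions.create(
        model="mistralai/mistral-nemo",
        messages=[{"role": "user", "content": "What is the weather in Paris?"}],
        tools=TOOLS,
    )
    # tool_calls may be None if the model answers directly instead.
    call = response.choices[0].message.tool_calls[0]
    return call.function.name, json.loads(call.function.arguments)

if __name__ == "__main__" and os.environ.get("UNIONTOKEN_API_KEY"):
    print(run_tool_call())
```

Your application would execute the returned tool with the parsed arguments, then send the result back in a follow-up message with role "tool".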


Ready to get started?

Get 1M free tokens on registration, no monthly fees or minimum spend

Register Now →