
Qwen: Qwen-Max

qwen/qwen-max
33K Context Window
8K Max Output
Supported parameters: max_tokens, temperature, top_p, seed, presence_penalty, response_format, tools, tool_choice
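Every supported parameter maps onto a standard OpenAI-style Chat Completions request. The sketch below assembles such a request; the prompt and parameter values are illustrative choices, not recommendations:

```python
# Request sketch using only parameters from the supported list above.
# Values are illustrative; the model defaults are temperature=0.7, top_p=1.
request = dict(
    model="qwen/qwen-max",
    messages=[{"role": "user", "content": "Name three prime numbers."}],
    max_tokens=256,        # completion cap (model max output: 8K)
    temperature=0.7,
    top_p=1,
    seed=42,               # best-effort reproducibility
    presence_penalty=0.5,  # discourage topic repetition
)
# With a configured client (see Quick Start below):
# response = client.chat.completions.create(**request)
```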

Qwen-Max, based on Qwen2.5, provides the best inference performance among [Qwen models](/qwen), especially for complex multi-step tasks. It's a large-scale MoE model that has been pretrained on over 20 trillion tokens and further post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) methodologies. The parameter count has not been publicly disclosed.

Capabilities

🔧 Function Calling · Text Generation · Code Generation · Analysis & Reasoning
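The function-calling capability uses the standard OpenAI `tools` schema, which this model accepts via the `tools` and `tool_choice` parameters. The sketch below defines a hypothetical `get_weather` tool plus a local dispatcher for the tool calls a model would return; the tool name and its stub implementation are assumptions for illustration:

```python
import json

# Hypothetical tool definition in the standard OpenAI `tools` schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    # Stub implementation for illustration only.
    return f"Sunny in {city}"

def dispatch(tool_call):
    # Route a model-emitted tool call to the matching local function.
    args = json.loads(tool_call["function"]["arguments"])
    return {"get_weather": get_weather}[tool_call["function"]["name"]](**args)

# Example tool call in the shape the API returns (arguments arrive as a JSON string):
call = {"function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}}
print(dispatch(call))  # → Sunny in Paris
```

Passing `tools=tools` in the `create` call advertises the function to the model; the dispatcher runs whatever call comes back and its result is sent back in a `role: "tool"` message.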

Technical Specs

Input Modality
Text
Output Modality
Text
Architecture
MoE
Default Temperature
0.7
Default Top_P
1

Pricing

Pay per use, no monthly fees
Billing Type    Unit Price
Text Input      $1.0400 / M tokens
Text Output     $4.1600 / M tokens
Cache Read      $0.2080 / M tokens
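With pay-per-use billing, request cost follows directly from token counts at the per-million-token rates in the table. A minimal estimator:

```python
# Per-million-token rates from the pricing table above (USD).
RATES = {"input": 1.04, "output": 4.16, "cache_read": 0.208}

def cost_usd(input_tokens=0, output_tokens=0, cache_read_tokens=0):
    """Estimate the pay-per-use cost of a single request in USD."""
    return (input_tokens * RATES["input"]
            + output_tokens * RATES["output"]
            + cache_read_tokens * RATES["cache_read"]) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token completion:
print(cost_usd(input_tokens=10_000, output_tokens=2_000))  # → 0.01872
```

Actual billed token counts are reported in the API response's `usage` field.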

Quick Start

from openai import OpenAI

client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",  # OpenAI-compatible endpoint
    api_key="YOUR_UNIONTOKEN_API_KEY",        # your UnionToken API key
)

response = client.chat.completions.create(
    model="qwen/qwen-max",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
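Since `response_format` is among the supported parameters, you can also request a JSON object and parse the reply directly. The request sketch below extends the Quick Start example; the sample reply string stands in for an actual model response:

```python
import json

# Request sketch: `response_format` asks the model to emit a JSON object.
req = dict(
    model="qwen/qwen-max",
    messages=[
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user", "content": 'Give the capital of France as {"capital": ...}.'},
    ],
    response_format={"type": "json_object"},
)
# reply = client.chat.completions.create(**req).choices[0].message.content

# A reply in the requested shape parses straight into a dict:
reply = '{"capital": "Paris"}'  # illustrative model output
data = json.loads(reply)
print(data["capital"])  # → Paris
```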


Ready to get started?

Get 1M free tokens on registration, no monthly fees or minimum spend
