inclusionAI: Ling-2.6-flash
inclusionai/ling-2.6-flash
Context Window: 262K
Max Output: 4K
Ling-2.6-flash is an instruct model from inclusionAI with 104B total parameters and 7.4B active parameters, designed for real-world agents that require fast responses, strong execution, and high token efficiency. It delivers performance comparable to state-of-the-art models at a similar scale while significantly reducing token usage across coding, document processing, and lightweight agent workflows.
Capabilities
- Text Generation
- Code Generation
- Analysis & Reasoning
Technical Specs
- Input Modality: Text
- Output Modality: Text
- Arch: —
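Given the 262K context window and 4K max output listed above, the usable input budget per request is roughly the context window minus the tokens reserved for the response. A minimal sketch, assuming 262K means 262,144 tokens and 4K means 4,096 (the exact figures are an assumption, not confirmed by this page):

```python
# Assumption: "262K" = 262,144 tokens and "4K" = 4,096 tokens.
CONTEXT_WINDOW = 262_144
MAX_OUTPUT = 4_096

def input_budget(reserved_output: int = MAX_OUTPUT) -> int:
    """Return roughly how many input tokens fit after reserving
    room for the model's response within the context window."""
    return CONTEXT_WINDOW - reserved_output

print(input_budget())  # tokens left for prompt + history
```

Trimming conversation history to stay under this budget avoids truncation errors on long agent runs.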
Pricing
Pay per use, no monthly fees.

| Billing Type | Price |
|---|---|
| Text Input | $0.0800 / M tokens |
| Text Output | $0.2400 / M tokens |
| Cache Read | $0.0160 / M tokens |
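The per-million-token prices in the table can be turned into a per-request cost estimate. A minimal sketch, assuming cached prompt tokens are billed at the cache-read rate instead of the input rate (how the provider splits cached vs. fresh input is an assumption, not stated on this page):

```python
# Prices from the table above, in USD per million tokens.
PRICE_PER_M = {"input": 0.08, "output": 0.24, "cache_read": 0.016}

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_tokens: int = 0) -> float:
    """Estimate USD cost of one request.

    Assumption: cached prompt tokens are billed at the cache-read
    rate and the remaining input tokens at the full input rate.
    """
    fresh_input = input_tokens - cached_tokens
    return (fresh_input * PRICE_PER_M["input"]
            + cached_tokens * PRICE_PER_M["cache_read"]
            + output_tokens * PRICE_PER_M["output"]) / 1_000_000

print(estimate_cost(10_000, 2_000))  # small request, fractions of a cent
```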
Quick Start

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

response = client.chat.completions.create(
    model="inclusionai/ling-2.6-flash",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
```
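The quick start sends a single user turn; multi-turn conversations reuse the same `messages` format, appending each assistant reply before the next user message. A minimal data-only sketch (no API call; the `add_turn` helper is illustrative, not part of any SDK):

```python
# The OpenAI-style chat format: a list of {"role", "content"} dicts.
messages = [{"role": "user", "content": "Hello!"}]

def add_turn(history: list, assistant_reply: str, next_user_msg: str) -> list:
    """Hypothetical helper: extend the history in the order the API
    expects, so the model sees the full conversation on each call."""
    return history + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": next_user_msg},
    ]

conversation = add_turn(messages, "Hi! How can I help?", "Summarize this doc.")
print([m["role"] for m in conversation])  # ['user', 'assistant', 'user']
```

On the next `client.chat.completions.create` call, pass `conversation` as `messages` to continue the dialogue.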