Meta: Llama 3.3 70B Instruct
meta-llama/llama-3.3-70b-instruct
Context Window: 131K
Max Output: 16K
Supported Parameters: max_tokens, temperature, top_p, stop, frequency_penalty, presence_penalty, repetition_penalty, top_k, seed, min_p, response_format, tools, tool_choice
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model with 70B parameters (text in/text out). The Llama 3.3 instruction-tuned, text-only model is optimized for multilingual dialogue use cases and outperforms many available open-source and closed chat models on common industry benchmarks. Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. [Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md)
Capabilities
🔧 Function Calling · Text Generation · Code Generation · Analysis & Reasoning
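Function calling follows the OpenAI-style `tools` convention: you describe a function with a JSON schema, the model may respond with a tool call, and your code executes it. A minimal local sketch of that dispatch step (the `get_weather` tool name, schema, and implementation below are illustrative assumptions, not part of this model page):

```python
import json

# Hypothetical tool implementation, for illustration only.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# OpenAI-style tool schema, as accepted by the `tools` parameter.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# When the model returns a tool call, its arguments arrive as a JSON
# string; decode them and route to the matching local function.
def dispatch(name: str, arguments_json: str) -> str:
    args = json.loads(arguments_json)
    if name == "get_weather":
        return get_weather(**args)
    raise ValueError(f"unknown tool: {name}")

print(dispatch("get_weather", '{"city": "Paris"}'))  # Sunny in Paris
```

In a real request you would pass `tools=tools` to `client.chat.completions.create(...)` and read the tool calls from `response.choices[0].message.tool_calls` before dispatching.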
Technical Specs
Input Modality: Text
Output Modality: Text
Architecture: —
Default Temperature: 0.7
Default Top_P: 1
Pricing
Pay per use, no monthly fees.

| Billing Type | Unit | Price |
|---|---|---|
| Text Input | 1M tokens | $0.1000 |
| Text Output | 1M tokens | $0.3200 |
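Since billing is per token, the cost of a request is a simple weighted sum of its input and output token counts. A quick estimator using the rates above (the example token counts are illustrative):

```python
# Per-million-token rates from the pricing table above.
INPUT_PER_M = 0.10   # USD per 1M input tokens
OUTPUT_PER_M = 0.32  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# e.g. a request with 2,000 input tokens and 500 output tokens:
print(f"${estimate_cost(2_000, 500):.6f}")  # $0.000360
```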
Quick Start
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b-instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
```
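Beyond the minimal call above, the endpoint accepts the sampling controls listed under Supported Parameters. A sketch of a fuller request payload (the specific values are illustrative, not tuned recommendations; it would be sent as `client.chat.completions.create(**request_params)`):

```python
# Request options drawn from the supported-parameter list above.
request_params = {
    "model": "meta-llama/llama-3.3-70b-instruct",
    "messages": [
        {"role": "user", "content": "Summarize Llama 3.3 in one sentence."}
    ],
    "max_tokens": 256,       # cap the completion length
    "temperature": 0.7,      # page default
    "top_p": 1,              # page default
    "seed": 42,              # reproducible sampling where supported
    "stop": ["\n\n"],        # stop at the first blank line
}

print(sorted(request_params))
```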