
Meta: Llama 3.2 1B Instruct

meta-llama/llama-3.2-1b-instruct
Context Window: 60K
Supported Parameters: max_tokens, temperature, top_p, top_k, seed, repetition_penalty, frequency_penalty, presence_penalty

Llama 3.2 1B is a 1-billion-parameter language model focused on efficiently performing natural language tasks such as summarization, dialogue, and multilingual text analysis. Its small size allows it to run in low-resource environments while maintaining strong task performance. Supporting eight core languages and fine-tunable for more, Llama 3.2 1B is well suited to businesses or developers seeking a lightweight yet capable model that can operate in diverse multilingual settings without the computational demands of larger models. See the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD.md). Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).

Capabilities

Text Generation, Code Generation, Analysis & Reasoning

Technical Specs

Input Modality: Text
Output Modality: Text
Architecture:
Default Temperature: 0.7
Default Top_P: 1

Pricing

Pay per use, no monthly fees
| Billing Type | Unit | Price |
|---|---|---|
| Text Input | 1M tokens | $0.0270 |
| Text Output | 1M tokens | $0.2000 |
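As a sanity check on the rates above, a small sketch of estimating a single request's cost from its token counts (the token counts in the example are hypothetical, not from the source):

```python
# Per-million-token rates from the pricing table above (USD)
INPUT_RATE = 0.0270   # per 1M input tokens
OUTPUT_RATE = 0.2000  # per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    return (input_tokens / 1_000_000) * INPUT_RATE \
         + (output_tokens / 1_000_000) * OUTPUT_RATE

# Hypothetical example: 10,000 input tokens and 2,000 output tokens
print(f"${estimate_cost(10_000, 2_000):.5f}")  # → $0.00067
```

Note the asymmetry in the table: output tokens cost roughly 7x more than input tokens, so long completions dominate the bill.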

Quick Start

```python
from openai import OpenAI

# Point the OpenAI SDK at the UnionToken endpoint
client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.2-1b-instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
```
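The supported parameters listed above can be attached to the same request. A minimal sketch, with illustrative values (not recommendations); note that the OpenAI SDK's typed `create` method only accepts standard OpenAI parameters directly, so provider-specific ones such as `top_k` and `repetition_penalty` would typically go through `extra_body`:

```python
# Illustrative sampling settings; every key below appears in the
# Supported Parameters list for this model
sampling_params = {
    "max_tokens": 256,
    "temperature": 0.7,        # matches the model's default
    "top_p": 1.0,              # matches the model's default
    "top_k": 40,
    "seed": 42,                # for reproducible sampling
    "repetition_penalty": 1.1,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
}

# Passed to the client from the Quick Start example, e.g.:
# response = client.chat.completions.create(
#     model="meta-llama/llama-3.2-1b-instruct",
#     messages=[{"role": "user", "content": "Hello!"}],
#     max_tokens=sampling_params["max_tokens"],
#     temperature=sampling_params["temperature"],
#     seed=sampling_params["seed"],
#     extra_body={  # non-OpenAI parameters go in the raw request body
#         "top_k": sampling_params["top_k"],
#         "repetition_penalty": sampling_params["repetition_penalty"],
#     },
# )
```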


Ready to get started?

Get 1M free tokens on registration, no monthly fees or minimum spend

Register Now →