
Llama 4 Maverick

Llama 4 Community

Meta's flagship mixture-of-experts (MoE) model: roughly 400B total parameters with 17B active per token. Strong multimodal and coding capabilities.

meta/llama-4-maverick
Context Window: 1M tokens
Max Output: 33K tokens
Providers: 1
Released: 2025-04

Capabilities

chat · code · vision · tools · streaming

Pricing by Provider

| Provider | Input $/1M | Output $/1M | Latency (p50) | Latency (p95) | Status |
|----------|------------|-------------|---------------|---------------|--------|
| together | $0.27      | $0.85       | 500ms         | 1400ms        |        |
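Prices in the table are per million tokens, so the cost of a single request is `input_tokens / 1M × input price + output_tokens / 1M × output price`. A minimal sketch of that arithmetic, with the together prices hard-coded from the table and illustrative token counts:

```python
# together pricing for Llama 4 Maverick, from the table above (USD per 1M tokens).
INPUT_PER_M = 0.27
OUTPUT_PER_M = 0.85

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the rates above."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + (output_tokens / 1_000_000) * OUTPUT_PER_M

# e.g. a 200K-token prompt with a 4K-token completion:
print(round(request_cost(200_000, 4_000), 4))  # 0.0574
```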

Quick Start

Python

```python
import magicrouter

mr = magicrouter.Client(
    provider_keys={"together": "your-api-key"}
)

response = mr.chat(
    "meta/llama-4-maverick",
    "Your prompt here"
)
print(response.choices[0].message.content)
```
TypeScript

```typescript
import { MagicRouter } from "magicrouter";

const mr = new MagicRouter({
  providerKeys: { together: "your-api-key" }
});

const response = await mr.chat({
  model: "meta/llama-4-maverick",
  messages: [{ role: "user", content: "Your prompt here" }]
});
console.log(response.choices[0].message.content);
```
cURL (calling the together provider directly)

```shell
curl https://api.together.xyz/v1/chat/completions \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    "messages": [{"role": "user", "content": "Your prompt here"}]
  }'
```
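If you prefer not to shell out to cURL, the same request can be assembled with the Python standard library alone. A sketch under the same assumptions as the cURL example (endpoint, model string, and placeholder key are copied from it; the request is built but not sent):

```python
import json
import urllib.request

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build the chat-completions POST shown in the cURL example."""
    payload = {
        "model": "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.together.xyz/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("your-api-key", "Your prompt here")
# Send it with: urllib.request.urlopen(req)
```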

Use this model

Sign up for free and test Llama 4 Maverick in the playground.

Get Started