
o4-mini

proprietary

Cost-efficient reasoning model. Good for structured outputs, code generation with verification, and multi-step logic.

openai/o4-mini
Context Window
200K
Max Output
100K
Providers
1
Released
2025-04
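
The 200K context window and 100K max output interact: a completion is capped both by the 100K output limit and by whatever room the prompt leaves in the context window. A minimal sketch of that arithmetic (the constants come from the figures above; the token counts passed in are illustrative):

```python
# Limits from the model card above: 200K context window, 100K max output.
CONTEXT_WINDOW = 200_000
MAX_OUTPUT = 100_000

def output_budget(prompt_tokens: int) -> int:
    """Largest completion o4-mini can return for a given prompt size."""
    if prompt_tokens >= CONTEXT_WINDOW:
        raise ValueError("prompt exceeds the 200K context window")
    # The completion is capped by whichever is smaller: the remaining
    # context window, or the model's 100K output limit.
    return min(CONTEXT_WINDOW - prompt_tokens, MAX_OUTPUT)

print(output_budget(150_000))  # 50000 -- window remainder is the binding limit
print(output_budget(50_000))   # 100000 -- the 100K output cap binds instead
```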

Capabilities

chat, code, reasoning, thinking, tools, streaming

Pricing by Provider

Provider   Input $/1M   Output $/1M   Latency p50   Latency p95   Status
openai     $1.10        $4.40         500ms         2000ms
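
From these rates, the cost of a request is straightforward arithmetic. A minimal sketch (the rates are from the table above; the token counts are illustrative):

```python
# Rates from the pricing table above (openai provider).
INPUT_PER_M = 1.10    # USD per 1M input tokens
OUTPUT_PER_M = 4.40   # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request to o4-mini."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token completion:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # $0.0198
```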

Quick Start

Python
import magicrouter

mr = magicrouter.Client(
    provider_keys={"openai": "your-api-key"}
)

response = mr.chat(
    "openai/o4-mini",
    "Your prompt here"
)
print(response.choices[0].message.content)
TypeScript
import { MagicRouter } from "magicrouter";

const mr = new MagicRouter({
  providerKeys: { openai: "your-api-key" }
});

const response = await mr.chat({
  model: "openai/o4-mini",
  messages: [{ role: "user", content: "Your prompt here" }]
});
console.log(response.choices[0].message.content);
cURL (calling the OpenAI API directly)
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "o4-mini",
    "messages": [{"role": "user", "content": "Your prompt here"}]
  }'
