Qwen2.5 72B
Qwen · Tier 1 · Apache-2.0
Legacy stable 72B dense model. Well-tested in production pipelines with 128K context.
Model ID: qwen/qwen2.5-72b
Context Window: 131K
Max Output: 16K
Providers: 2
Released: 2024-09
Capabilities: chat, code, tools, function_calling, streaming, json_mode
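The context window covers prompt plus completion: with a 131K (131,072-token) window and a 16K (16,384-token) output cap, the largest prompt that still leaves full output headroom is 131,072 − 16,384 = 114,688 tokens. A minimal budget-check sketch (the helper names and token counts are illustrative, not part of any SDK):

```python
# Budget check: the context window must hold prompt + completion.
CONTEXT_WINDOW = 131_072   # 131K tokens
MAX_OUTPUT = 16_384        # 16K tokens

def max_prompt_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Largest prompt that still reserves `reserved_output` tokens for the reply."""
    return CONTEXT_WINDOW - reserved_output

def fits(prompt_tokens: int, output_tokens: int = MAX_OUTPUT) -> bool:
    """True if prompt + completion fit inside the context window."""
    return prompt_tokens + output_tokens <= CONTEXT_WINDOW

print(max_prompt_tokens())    # 114688
print(fits(120_000))          # False: 120K prompt + 16K output overflows
```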
Pricing by Provider
| Provider | Input $/1M | Output $/1M | Latency p50 | Latency p95 | Status |
|---|---|---|---|---|---|
| alibaba | $0.34 | $1.36 | 450ms | 1200ms | |
| together | $0.36 | $1.44 | 500ms | 1300ms | |
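The per-million-token rates above translate directly into per-request cost estimates. A minimal sketch (the helper name and workload sizes are illustrative, not part of the MagicRouter API):

```python
# Per-million-token rates from the pricing table above (USD).
RATES = {
    "alibaba": {"input": 0.34, "output": 1.36},
    "together": {"input": 0.36, "output": 1.44},
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the table rates."""
    r = RATES[provider]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# Example: a 10K-token prompt with a 1K-token completion.
print(round(estimate_cost("alibaba", 10_000, 1_000), 6))   # 0.00476
print(round(estimate_cost("together", 10_000, 1_000), 6))  # 0.00504
```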
Quick Start
Python

```python
import magicrouter

mr = magicrouter.Client(
    provider_keys={"alibaba": "your-api-key"}
)

response = mr.chat(
    "qwen/qwen2.5-72b",
    "Your prompt here"
)

print(response.choices[0].message.content)
```

TypeScript
```typescript
import { MagicRouter } from "magicrouter";

const mr = new MagicRouter({
  providerKeys: { alibaba: "your-api-key" }
});

const response = await mr.chat({
  model: "qwen/qwen2.5-72b",
  messages: [{ role: "user", content: "Your prompt here" }]
});

console.log(response.choices[0].message.content);
```

cURL
```shell
curl https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-72b-instruct",
    "messages": [{"role": "user", "content": "Your prompt here"}]
  }'
```