Mistral: Mistral Small 3
mistralai/mistral-small-24b-instruct-2501
Released Jan 30, 2025 · 32.8K context · 16.4K max output · $0.05/M in · $0.08/M out
Description
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment.
The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models such as Llama 3.3 70B and Qwen 32B, while running at roughly three times their speed on equivalent hardware.
Specifications
Provider
mistralai
Context Length
32.8K
Max Output
16.4K
Modality
Input: text
Output: text
Pricing
| Type | Price / 1M tokens |
|---|---|
| Input | $0.05 |
| Output | $0.08 |
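The per-million-token prices above translate directly into a per-request cost estimate. A minimal sketch (the token counts in the example are hypothetical):

```python
# Prices from the table above, converted to dollars per single token.
INPUT_PRICE = 0.05 / 1_000_000   # $0.05 per 1M input tokens
OUTPUT_PRICE = 0.08 / 1_000_000  # $0.08 per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. a 2,000-token prompt with a 500-token reply costs $0.00014:
print(f"${estimate_cost(2000, 500):.6f}")
```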
Quick Start

```shell
curl https://api.ominigate.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-omg-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistralai/mistral-small-24b-instruct-2501",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
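For Python callers, the same request can be built with the standard library alone. This is a sketch assuming the endpoint and placeholder API key from the Quick Start, and an OpenAI-compatible response shape:

```python
import json
import urllib.request

API_URL = "https://api.ominigate.ai/v1/chat/completions"
API_KEY = "sk-omg-your-api-key"  # placeholder key from the Quick Start

def build_request(prompt: str) -> urllib.request.Request:
    """Build the POST request; the body mirrors the curl -d payload."""
    body = json.dumps({
        "model": "mistralai/mistral-small-24b-instruct-2501",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

# To send: call urllib.request.urlopen(build_request("Hello!")) and parse
# the JSON body; in an OpenAI-compatible response, the reply text lives
# at choices[0].message.content.
```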