
Meta: Llama 4 Scout

meta-llama/llama-4-scout
Released Apr 5, 2025 · 327.7K context · 16.4K max output · $0.08/M input · $0.30/M output

Description

Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion of its 109 billion total parameters per token. It supports native multimodal input (text and image) and multilingual output (text and code) across 12 supported languages. Designed for assistant-style interaction and visual reasoning, Scout routes each token through a pool of 16 experts and features a native context length of 10 million tokens (served here with a 327.7K window), with a training corpus of roughly 40 trillion tokens.

Built for high efficiency and local or commercial deployment, Llama 4 Scout incorporates early fusion for seamless modality integration. It is instruction-tuned for multilingual chat, captioning, and image-understanding tasks. Released under the Llama 4 Community License, it has a training-data cutoff of August 2024 and launched publicly on April 5, 2025.

Specifications

Provider
meta-llama
Context Length
327.7K
Max Output
16.4K
Modality
Input: text, image
Output: text

Pricing

Type      Price / 1M tokens
Input     $0.08
Output    $0.30
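At these rates, the cost of a request is a linear function of its token counts. A minimal sketch (the per-million rates are copied from the table above; the token counts in the example are arbitrary):

```python
# Published rates in dollars per 1M tokens, from the pricing table above.
INPUT_PER_M = 0.08
OUTPUT_PER_M = 0.30

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one request at Llama 4 Scout's rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

# Example: a 10K-token prompt with a 1K-token reply costs $0.0011.
cost = request_cost(10_000, 1_000)
```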

Quick Start

curl https://api.ominigate.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-omg-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/llama-4-scout",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
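Since the model accepts image input, a multimodal request can be built by sending a list of content parts instead of a plain string. The sketch below constructs such a request body in Python; the endpoint and model ID are copied from the curl example above, while the `image_url` content-part shape follows the widely used OpenAI-compatible Chat Completions schema, which this gateway is assumed (not verified here) to implement. The image URL is a placeholder.

```python
import json

# Endpoint and model ID as in the curl Quick Start above.
API_URL = "https://api.ominigate.ai/v1/chat/completions"

# A text part plus an image part in one user message; this content-part
# layout assumes the common OpenAI-compatible multimodal schema.
payload = {
    "model": "meta-llama/llama-4-scout",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
}

# Serialize to the JSON string you would pass as the -d body in curl.
body = json.dumps(payload)
```

Sending it is then the same POST as in the Quick Start, with the `Authorization: Bearer` header and this body.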