
DeepSeek: DeepSeek V4 Flash

deepseek/deepseek-v4-flash
Apr 24, 2026 · 1.0M context · 384K max output · $0.14/M in · $0.28/M out · Reasoning

Description

DeepSeek V4 Flash is an efficiency-optimized Mixture-of-Experts model from DeepSeek with 284B total parameters and 13B activated parameters, supporting a 1M-token context window. It is designed for fast inference and high-throughput workloads, while maintaining strong reasoning and coding performance.

The model includes hybrid attention for efficient long-context processing and supports configurable reasoning modes. It is well suited for applications such as coding assistants, chat systems, and agent workflows where responsiveness and cost efficiency are important.

Specifications

Provider
deepseek
Context Length
1.0M
Max Output
384K
Modality
Input: text
Output: text

Pricing

Type | Price / 1M tokens
Input | $0.14
Output | $0.28
Cache Read | $0.03
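
The per-million-token rates above can be combined into a simple cost estimate. The sketch below is illustrative only (the helper name and example token counts are assumptions, not part of the API); cache-read pricing applies to the cached portion of the input.

```python
# Rough per-request cost estimate at the listed rates:
# input $0.14/M, output $0.28/M, cache read $0.03/M.
def estimate_cost_usd(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate one request's cost in USD. `cached_tokens` is the part
    of the input served from the prompt cache at the cheaper rate."""
    uncached = input_tokens - cached_tokens
    return (uncached * 0.14 + cached_tokens * 0.03 + output_tokens * 0.28) / 1_000_000

# Example: 100K-token prompt with 40K cached, 2K-token completion.
cost = estimate_cost_usd(100_000, 2_000, cached_tokens=40_000)
print(f"${cost:.5f}")  # → $0.01016
```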

Quick Start

curl https://api.ominigate.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-omg-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-v4-flash",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
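
The same request can be built from Python. This is a minimal sketch assuming an OpenAI-compatible chat completions endpoint as in the curl example; the `build_chat_request` helper is illustrative, and the exact parameter for toggling reasoning mode (shown here as a hypothetical `reasoning` field) should be checked against the gateway docs.

```python
import json

API_URL = "https://api.ominigate.ai/v1/chat/completions"

def build_chat_request(user_message: str, reasoning: bool = False) -> str:
    """Return the JSON body for a deepseek/deepseek-v4-flash chat request.

    `reasoning` toggles the model's configurable reasoning mode via a
    hypothetical `reasoning` field (assumed, not confirmed by the docs).
    """
    body = {
        "model": "deepseek/deepseek-v4-flash",
        "messages": [{"role": "user", "content": user_message}],
    }
    if reasoning:
        body["reasoning"] = {"enabled": True}
    return json.dumps(body)

payload = build_chat_request("Hello!")
```

POST `payload` to `API_URL` with an `Authorization: Bearer sk-omg-your-api-key` header and `Content-Type: application/json`, exactly as in the curl command above.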