Errors

Every OminiGate API error returns a consistent JSON envelope. Use this reference to look up what each code means and how to handle it.


Response format

All errors — gateway, authentication, and admin — share the same envelope. Inspect the top-level error object on any non-2xx response to get the error family, a stable code, and a human-readable message.

{
  "error": {
    "type": "authentication_error",
    "code": "invalid_api_key",
    "message": "Invalid API key provided."
  }
}

Envelope fields

type (string)

High-level error family. Groups related codes so you can branch on broad handling (e.g. retry upstream errors, surface billing errors to the user).

code (string)

Stable, lowercase snake_case identifier. Safe to switch on in code — codes do not change across releases.

message (string)

English, developer-facing description. Suitable for logs; localize for end-users yourself.

param (string, optional)

Only present when the error is caused by a specific request field (e.g. model, email). Use it to highlight the offending input.

authentication_error

HTTP 401

The request could not be authenticated. The credentials you sent were missing, malformed, expired, or no longer recognized.

invalid_api_key

HTTP 401

Invalid API key provided.

Example response

{
  "error": {
    "type": "authentication_error",
    "code": "invalid_api_key",
    "message": "Invalid API key provided."
  }
}

Common causes

  • The Authorization header is missing, or its value is not in the form Bearer sk-omg-xxx.
  • The API key was deleted from the Dashboard, or flagged as leaked and auto-revoked.
  • A Dashboard login token (JWT) was sent instead of an API key — API keys start with sk-omg-.
  • The key belongs to a different environment or has been regenerated since this client was deployed.

How to handle

  1. Confirm the header: Authorization: Bearer sk-omg-<your-key>. Note the space and the lowercase Bearer.
  2. Open Dashboard → API Keys and verify the key exists and is enabled.
  3. If the key may have leaked, delete it immediately and issue a new one; rotate it everywhere it's used.
  4. Use curl -v or a proxy like mitmproxy to confirm the header is being sent as you expect — CDNs or SDK wrappers occasionally strip it.
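Step 1 can be automated before a request ever leaves your client. A hedged sketch of a local header check; the sk-omg- prefix comes from this reference, the rest is generic validation:

```python
def check_auth_header(headers: dict) -> list[str]:
    """Return a list of problems with the Authorization header; empty if it looks right."""
    value = headers.get("Authorization")
    if value is None:
        return ["Authorization header is missing"]
    problems = []
    if not value.startswith("Bearer "):
        problems.append('value must start with lowercase "Bearer " followed by a space')
    token = value.removeprefix("Bearer ")
    if not token.startswith("sk-omg-"):
        problems.append("token does not start with sk-omg- (a Dashboard JWT is not an API key)")
    if token != token.strip():
        problems.append("token has leading or trailing whitespace")
    return problems
```

Run it in a startup health check so a misconfigured deployment fails fast instead of emitting 401s in production.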

expired_api_key

HTTP 401

API key has expired.

Example response

{
  "error": {
    "type": "authentication_error",
    "code": "expired_api_key",
    "message": "API key has expired."
  }
}

Common causes

  • The key had an expiration date set at creation time, and that date has now passed.
  • The key was rotated but one of your deployments is still holding the old value.
  • The key was created under a retention policy that has since tightened its maximum lifetime.

How to handle

  1. Open Dashboard → API Keys to see each key's expires_at and status.
  2. Generate a new key and deploy it to every environment (production, staging, CI, local .env files).
  3. If you don't want expiration, leave the expires_at field blank when you create the next key.
  4. Wire up a reminder in your secrets manager (AWS Secrets Manager, Vault, Doppler) so rotations don't sneak up on you.

invalid_request_error

HTTP 400

Request validation failed. A required parameter is missing, malformed, or outside the accepted range.

invalid_request

HTTP 400

Invalid request body.

Example response

{
  "error": {
    "type": "invalid_request_error",
    "code": "invalid_request",
    "message": "Invalid request body."
  }
}

Common causes

  • The request body is not valid JSON — a trailing comma, unescaped quote, or missing Content-Type: application/json header.
  • The payload parses as JSON but violates the endpoint schema: wrong field type (e.g. messages as string instead of array), unknown fields in strict mode, or value out of range (e.g. temperature > 2).
  • You sent a non-streaming body to a streaming endpoint or vice versa — e.g. stream: true without reading an SSE response, or Anthropic-format messages sent to /v1/chat/completions.
  • A multipart/form-data upload (for /v1/images edits or video references) is missing a required part or uses the wrong field name.

How to handle

  1. Validate the JSON body against the OpenAI chat completions or Anthropic messages schema; run it through jq . first to catch syntax errors.
  2. Set Content-Type: application/json explicitly. Some SDKs drop the header when the body is small or empty.
  3. Re-read the endpoint reference under docs/ → API Reference and confirm every required field is present with the right type.
  4. If streaming, consume the response as Server-Sent Events (text/event-stream); do not buffer the whole body.
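Steps 1 and 3 can be combined into a local pre-flight check. A minimal sketch for a chat-completions body; the field rules follow the common OpenAI-style schema and the 0-2 temperature range quoted above, not the gateway's full validator:

```python
import json

def preflight_chat_body(raw: str) -> list[str]:
    """Catch the most common invalid_request causes before sending."""
    try:
        body = json.loads(raw)
    except json.JSONDecodeError as exc:
        # Trailing commas, unescaped quotes, etc. land here.
        return [f"not valid JSON: {exc.msg} at position {exc.pos}"]
    problems = []
    if not isinstance(body.get("messages"), list):
        problems.append("messages must be an array")
    temp = body.get("temperature")
    if temp is not None and not (isinstance(temp, (int, float)) and 0 <= temp <= 2):
        problems.append("temperature must be a number in [0, 2]")
    return problems
```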

invalid_model

HTTP 400 · param: model

The model does not exist.

Example response

{
  "error": {
    "type": "invalid_request_error",
    "code": "invalid_model",
    "message": "The model does not exist.",
    "param": "model"
  }
}

Common causes

  • The model string is malformed — missing the provider prefix (e.g. gpt-4o instead of openai/gpt-4o), wrong slash direction, or containing trailing whitespace.
  • You passed a display name or marketing name (e.g. "GPT-4 Omni") instead of the canonical slug.
  • The value is empty, null, or a placeholder like "default" / "auto" that the gateway does not accept.
  • An SDK automatically rewrote the model name (e.g. stripped the provider prefix) before sending the request.

How to handle

  1. Use the canonical provider/model slug: openai/gpt-4o, anthropic/claude-sonnet-4-20250514, google/veo-2 — always lowercase, always slash-separated.
  2. Fetch GET /api/v1/models and copy the slug field from the response; do not hand-type model names.
  3. Trim whitespace and verify the exact string you are sending with curl -v or your SDK's debug logging.
  4. If using an SDK's baseURL override, disable any model-remapping middleware or set the model explicitly on each request.
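Steps 1 and 3 reduce to a slug sanity-check on the string you are about to send. A sketch, assuming the lowercase provider/model shape described above; the exact character set is an assumption:

```python
import re

# Canonical slugs are lowercase, slash-separated, with no surrounding whitespace.
SLUG_RE = re.compile(r"^[a-z0-9][a-z0-9._-]*/[a-z0-9][a-z0-9._-]*$")

def normalize_model_slug(raw: str) -> str:
    """Trim whitespace and verify the provider/model shape; raise if malformed."""
    slug = raw.strip()
    if not SLUG_RE.match(slug):
        raise ValueError(f"not a canonical provider/model slug: {raw!r}")
    return slug
```

Pair this with a slug copied from `GET /api/v1/models` rather than a hand-typed name.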

missing_parameter

HTTP 400

A required parameter is missing.

Example response

{
  "error": {
    "type": "invalid_request_error",
    "code": "missing_parameter",
    "message": "A required parameter is missing."
  }
}

Common causes

  • A required top-level field is absent — e.g. messages for /v1/chat/completions, model, or prompt for /v1/images and /v1/videos/*.
  • A nested required field is missing, such as role or content inside each messages[] entry, or type inside a content block.
  • The video endpoint was called without the resource the action needs — e.g. POST /v1/videos/{provider}/create without prompt, or /status without the task id.
  • The field exists but is an empty string / empty array, which the gateway treats as missing.

How to handle

  1. Re-read the endpoint reference and list every required field; the error's param key (when present) points to the missing one.
  2. For /v1/chat/completions: ensure model and a non-empty messages array; each message must have role and content.
  3. For /v1/videos/{provider}/{action}: include model and prompt on create; include the task id returned by create when calling status or result.
  4. Log the final JSON your SDK sends (not the pre-serialization object) to catch fields dropped by null-stripping or schema validators.
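The rules in step 2, including the empty-string/empty-array behaviour noted above, can be checked locally before sending. A hedged sketch for the chat endpoint:

```python
def find_missing(body: dict) -> list[str]:
    """List missing required fields for a chat request; empty values count as missing."""
    missing = []
    for field in ("model", "messages"):
        if not body.get(field):  # absent, empty string, or empty array
            missing.append(field)
    for i, msg in enumerate(body.get("messages") or []):
        for sub in ("role", "content"):
            if not msg.get(sub):
                missing.append(f"messages[{i}].{sub}")
    return missing
```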

video_tier_not_found

HTTP 400 · param: tier

Requested pricing tier not found for this model.

Example response

{
  "error": {
    "type": "invalid_request_error",
    "code": "video_tier_not_found",
    "message": "Requested pricing tier not found for this model.",
    "param": "tier"
  }
}

Common causes

  • The tier string does not exist for this model — e.g. asking for "4k" on google/veo-2 when only 720p and 1080p are offered.
  • The tier name is misspelled or uses a different casing than the catalog (e.g. "HD" vs "hd", "1080P" vs "1080p").
  • The model supports multiple tiers but you omitted the tier field on a model where it is required.
  • You copied a tier identifier from another provider's model (e.g. runway's tier names on a veo request).

How to handle

  1. Call GET /api/v1/models/{slug} and read the pricing.video.tiers array to see exactly which tier identifiers are valid.
  2. Use the tier value verbatim from the catalog; tier identifiers are case-sensitive and must match the exact string stored in the pricing JSON.
  3. If you do not care about tier, omit the tier field to fall back to the model's default tier (when the model supports a default).
  4. After switching tiers, re-check the per-second price — tiers differ in cost and may fail your usage budget.

not_found_error

HTTP 404

The target resource does not exist or has been deleted.

not_found

HTTP 404

Resource not found.

Example response

{
  "error": {
    "type": "not_found_error",
    "code": "not_found",
    "message": "Resource not found."
  }
}

Common causes

  • The URL path is wrong — missing the /v1 prefix, using the Anthropic-compat path on an OpenAI route, or a typo like /v1/chat/completion (no s).
  • You are calling a Dashboard endpoint at the wrong base URL, or an API endpoint with the /api/v1 prefix (API keys use /v1/*, not /api/v1/*).
  • The HTTP method is wrong — OminiGate only uses GET for reads and POST for writes; PUT/DELETE/PATCH always return 404.
  • The resource id in the path does not belong to your account (e.g. fetching another user's API key or video task).

How to handle

  1. Compare your URL against the reference: OpenAI-compat under /v1/chat/completions, Anthropic-compat under /v1/messages, images under /v1/images, videos under /v1/videos/{provider}/{action}.
  2. Confirm the base URL: https://api.ominigate.ai for API calls (sk-omg- auth), and the Dashboard host for /api/v1/* (JWT auth).
  3. Switch any PUT/DELETE/PATCH request to POST with an explicit action in the path.
  4. If the path looks correct, contact support@ominigate.ai with the X-Request-Id response header.

model_not_found

HTTP 404

Model not found.

Example response

{
  "error": {
    "type": "not_found_error",
    "code": "model_not_found",
    "message": "Model not found."
  }
}

Common causes

  • The slug format is valid but the model has never been listed on OminiGate — check the public catalog at /api/v1/models.
  • The model was delisted from the catalog and is no longer routable through the gateway.
  • Your API key has a model allowlist that excludes this model, and the gateway reports it as not found instead of forbidden.
  • The model is a preview/beta that requires an explicit opt-in on the account that has not been granted yet.

How to handle

  1. Call GET /api/v1/models and confirm the slug appears with status "active"; only active models are routable.
  2. If the model is listed but you still get 404, open Dashboard → API Keys and remove any model-allowlist restriction on this key.
  3. For delisted models, switch to the successor recommended in the model detail page (GET /api/v1/models/{slug}).
  4. To request beta access, email support@ominigate.ai with the model slug and your account id.

image_model_not_found

HTTP 404 · param: model

Image model not found.

Example response

{
  "error": {
    "type": "not_found_error",
    "code": "image_model_not_found",
    "message": "Image model not found.",
    "param": "model"
  }
}

Common causes

  • You called /v1/images with a chat or embedding model such as openai/gpt-4o or openai/text-embedding-3-large — these cannot generate images.
  • The slug is correct but refers to a video model (e.g. google/veo-2); /v1/images only accepts image-generation models.
  • The model is a legacy image model that has been removed from OminiGate's catalog.
  • You passed a provider slug (e.g. openai) without the specific model name.

How to handle

  1. List image-capable models with GET /api/v1/models?type=image and pick one — examples: openai/gpt-image-1, google/imagen-4.
  2. If you need to generate a video, switch to POST /v1/videos/{provider}/create; for chat, use /v1/chat/completions.
  3. Read the model detail page (GET /api/v1/models/{slug}) and confirm output_modalities contains "image".
  4. Always send the full provider/model slug — provider-only identifiers are never accepted.

video_model_not_found

HTTP 404 · param: model

Video model not found.

Example response

{
  "error": {
    "type": "not_found_error",
    "code": "video_model_not_found",
    "message": "Video model not found.",
    "param": "model"
  }
}

Common causes

  • You called /v1/videos/{provider}/{action} with a chat or image model such as openai/gpt-4o or openai/gpt-image-1.
  • The provider segment in the path does not match the model's provider — e.g. POST /v1/videos/openai/create with google/veo-2 as the model.
  • The video model was delisted or is temporarily disabled due to upstream outage.
  • The slug is valid but the action (create / status / result) is wrong for this model's workflow.

How to handle

  1. List video-capable models with GET /api/v1/models?type=video and copy the slug; examples: google/veo-2, runway/gen-4-turbo.
  2. Ensure the {provider} path segment matches the model prefix: use /v1/videos/google/create for google/veo-2, /v1/videos/runway/create for runway/*.
  3. After POST /v1/videos/{provider}/create, poll POST /v1/videos/{provider}/status with the returned task id until status is "completed", then call /result.
  4. If the model was working before and is now returning this error, check status.ominigate.ai or contact support@ominigate.ai with the X-Request-Id header.
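The create → status → result flow in steps 2-3 can be sketched as a polling loop. The transport is injected so nothing here implies a real client; the endpoint paths come from this reference, while the request-body key for the task id, the interval, and the attempt cap are assumptions:

```python
import time

def poll_video(post, provider: str, task_id: str,
               interval: float = 5.0, max_polls: int = 120) -> dict:
    """Poll /v1/videos/{provider}/status until completed, then fetch /result.

    `post(path, payload)` is any callable that performs the HTTP POST
    and returns the decoded JSON body.
    """
    for _ in range(max_polls):
        status = post(f"/v1/videos/{provider}/status", {"id": task_id})
        if status.get("status") == "completed":
            return post(f"/v1/videos/{provider}/result", {"id": task_id})
        if status.get("status") == "failed":
            raise RuntimeError(f"video task {task_id} failed: {status}")
        time.sleep(interval)
    raise TimeoutError(f"video task {task_id} still pending after {max_polls} polls")
```

Keep the interval at or above the cadence the API hints at; polling tighter than that is a common way to trip rpm_rate_limited.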

billing_error

HTTP 402

The request was rejected for billing reasons — typically insufficient balance. Top up and retry.

insufficient_balance

HTTP 402

Your balance is insufficient. Please top up your account.

Example response

{
  "error": {
    "type": "billing_error",
    "code": "insufficient_balance",
    "message": "Your balance is insufficient. Please top up your account."
  }
}

Common causes

  • Your account balance has dropped to zero or below after real-time billing for tokens, images, or video seconds.
  • A long-running video polling job continued to accrue charges until the balance was exhausted.
  • A recent top-up has not yet cleared, so the available balance is still at zero.
  • Multiple concurrent requests drained the balance between the time you last checked and the time this request was billed.

How to handle

  1. Open Dashboard → Billing (/dashboard/billing) and top up before retrying the request.
  2. After topping up, wait a few seconds for the balance to refresh, then retry; the gateway reads the latest balance on each call.
  3. Enable usage caps on individual API Keys to prevent one workload from draining the whole account again.
  4. If you have topped up but the balance still shows 0, contact support@ominigate.ai with the X-Request-Id response header.

usage_cap_exceeded

HTTP 402 / 429

A configured spending cap (per-key or per-account) has been hit. Requests are blocked until the next reset, or until you raise the limit.

key_daily_limit

HTTP 429

API Key has exceeded its daily spending limit.

Example response

{
  "error": {
    "type": "usage_cap_exceeded",
    "code": "key_daily_limit",
    "message": "API Key has exceeded its daily spending limit."
  }
}

Common causes

  • This API Key has hit the per-key daily spending limit you configured on its detail page in the Dashboard.
  • A burst of requests from this key (for example a batch job or a buggy client loop) consumed the daily budget earlier than expected.
  • The response body contains limit, used, and resets_at so you can see exactly which budget was hit and when it resets.
  • The daily window is UTC-based and resets at UTC 00:00, not your local midnight.

How to handle

  1. Raise or remove this key's daily limit at Dashboard → API Keys (/dashboard/keys) by clicking the usage icon next to the key.
  2. Switch the workload to a different API Key that still has daily budget remaining, or wait until resets_at before retrying.
  3. Before resuming, check Dashboard → Billing to confirm there is enough account balance, otherwise you will next hit insufficient_balance.
  4. Add client-side backoff so a runaway loop cannot exhaust a whole day's budget in minutes.

key_monthly_limit

HTTP 429

API Key has exceeded its monthly spending limit.

Example response

{
  "error": {
    "type": "usage_cap_exceeded",
    "code": "key_monthly_limit",
    "message": "API Key has exceeded its monthly spending limit."
  }
}

Common causes

  • This API Key has reached the monthly spending cap you set for it in the Dashboard.
  • Traffic ramped up faster than the monthly budget anticipated, for example because of a new production rollout.
  • The response body includes limit, used, and resets_at so you can confirm the monthly bucket and the reset timestamp.
  • The monthly window resets at UTC 00:00 on the first day of each month, not on the key's creation anniversary.

How to handle

  1. Increase the monthly limit for this key at Dashboard → API Keys (/dashboard/keys) if the higher spend is expected.
  2. If the spike is unexpected, audit recent usage under the key to find the caller and pause it before raising the cap.
  3. For critical workloads, issue a separate API Key with its own monthly budget so unrelated traffic cannot block production.
  4. Wait until resets_at to retry automatically, or route traffic to another key until then.

account_daily_limit

HTTP 402

Account has exceeded its daily spending limit.

Example response

{
  "error": {
    "type": "usage_cap_exceeded",
    "code": "account_daily_limit",
    "message": "Account has exceeded its daily spending limit."
  }
}

Common causes

  • The combined spend across all API Keys on this account has hit the account-wide daily limit configured at Dashboard → Billing.
  • One or more keys without per-key caps consumed the bulk of the shared daily budget.
  • The response body includes limit, used, and resets_at, scoped to the whole account.
  • The account daily window resets at UTC 00:00, independent of any per-key windows.

How to handle

  1. Raise or remove the account daily limit at Dashboard → Billing (/dashboard/billing) if the aggregate spend is legitimate.
  2. Set per-key daily limits at Dashboard → API Keys so a single key cannot monopolize the account budget again.
  3. Identify the top-spending key in the usage breakdown and throttle or disable it before lifting the account cap.
  4. Wait until resets_at before retrying, and re-evaluate the daily cap to match real demand.

request_cost_limit

HTTP 402

Single request cost exceeds the configured limit.

Example response

{
  "error": {
    "type": "usage_cap_exceeded",
    "code": "request_cost_limit",
    "message": "Single request cost exceeds the configured limit."
  }
}

Common causes

  • The estimated or actual cost of this single request exceeds the per-request cost cap configured for this API Key.
  • A very large prompt, long max_tokens, high-resolution image, or long video duration pushed one call over the per-request threshold.
  • The response body includes limit (the cap) and used (the request's cost) so you can see how far over you went.
  • The cap applies to the full request cost, including cache writes, reasoning tokens, and any tool calls counted by the model.

How to handle

  1. Lower the cost of the request: reduce prompt length, shrink max_tokens, use a smaller model, or request a shorter video or lower-resolution image.
  2. If legitimate large requests are expected, raise this key's per-request cost limit at Dashboard → API Keys (/dashboard/keys).
  3. Split a single heavy job into multiple smaller requests that each fit under the cap.
  4. For video endpoints, cancel the polling loop on this error — the job will not be billed further but the current call is already rejected.

Daily caps include extra metadata (limit / used / resets_at). See the Usage Caps guide for the full response format and configuration details.

rate_limit_error

HTTP 429

Rate limits protect the platform from bursts. Back off (honor Retry-After if present) and retry with exponential backoff.

rate_limited

HTTP 429

Rate limit exceeded.

Example response

{
  "error": {
    "type": "rate_limit_error",
    "code": "rate_limited",
    "message": "Rate limit exceeded."
  }
}

Common causes

  • You hit a platform-wide throttle that protects the gateway from aggregate overload.
  • A sudden traffic spike from multiple keys on the same account crossed the combined ceiling.
  • Upstream providers are temporarily degraded, so OminiGate has tightened limits to keep the service stable.
  • Automated retries without backoff are amplifying the original spike and keeping you in the limited state.

How to handle

  1. Back off using exponential delay (for example 1s, 2s, 4s, 8s with jitter) and respect any Retry-After response header.
  2. Spread bursty workloads over a longer window, or queue requests client-side instead of firing them in parallel.
  3. If the limit is triggered by a single key, check whether you also have rpm_rate_limited or tpm_rate_limited errors and raise those first.
  4. If the error persists at normal traffic levels, contact support@ominigate.ai with the X-Request-Id response header.

rpm_rate_limited

HTTP 429

Rate limit exceeded: requests per minute.

Example response

{
  "error": {
    "type": "rate_limit_error",
    "code": "rpm_rate_limited",
    "message": "Rate limit exceeded: requests per minute."
  }
}

Common causes

  • This API Key has exceeded its requests-per-minute (RPM) limit configured in the Dashboard.
  • Many small, low-token requests were fired in parallel, so RPM was hit even though TPM is still fine.
  • A polling loop (for example on the video endpoints) is polling too aggressively and consuming the RPM budget.
  • Client code is retrying 4xx errors immediately, turning one failure into dozens of requests per minute.

How to handle

  1. Increase the RPM for this key at Dashboard → API Keys (/dashboard/keys) if the traffic pattern is expected.
  2. Add client-side concurrency control: limit in-flight requests and queue the rest instead of firing them simultaneously.
  3. For video polling, follow the interval hinted by the API and apply exponential backoff; do not poll tighter than the recommended cadence.
  4. Retry with backoff (for example 1s, 2s, 4s with jitter) and honor any Retry-After header returned with the 429.

tpm_rate_limited

HTTP 429

Rate limit exceeded: tokens per minute.

Example response

{
  "error": {
    "type": "rate_limit_error",
    "code": "tpm_rate_limited",
    "message": "Rate limit exceeded: tokens per minute."
  }
}

Common causes

  • This API Key has exceeded its tokens-per-minute (TPM) limit configured in the Dashboard.
  • Long prompts or large max_tokens values pushed the per-minute token volume over the cap, even with few requests.
  • A streaming chat with a very large context was counted in full against TPM as tokens flowed back.
  • Batch jobs that feed large documents into the model are concentrating too much token volume into a 60-second window.

How to handle

  1. Raise the TPM for this key at Dashboard → API Keys (/dashboard/keys) if the higher token throughput is intentional.
  2. Reduce token usage per call: trim prompts, enable prompt caching where supported, or lower max_tokens.
  3. Spread large jobs out over time so token consumption stays under TPM within any rolling minute.
  4. Retry with exponential backoff and honor any Retry-After header; immediate retries will simply hit TPM again.

upstream_error

HTTP 502 / 504

The gateway reached the upstream provider, but the provider returned an error, timed out, or is unreachable. Usually transient — retry with backoff.

upstream_failed

HTTP 502

Upstream request failed.

Example response

{
  "error": {
    "type": "upstream_error",
    "code": "upstream_failed",
    "message": "Upstream request failed."
  }
}

Common causes

  • The gateway sent the request to the upstream provider, but the single call failed (connection reset, TLS handshake error, or a 5xx response from the provider).
  • A transient network hiccup between the gateway and the provider interrupted the request mid-flight.
  • The upstream provider briefly rate-limited or throttled the gateway's egress IP.
  • If you were using stream=true, the SSE connection dropped before the final chunk arrived.

How to handle

  1. Retry the request with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3-5 attempts). Most calls succeed on the second attempt.
  2. For streaming requests that dropped mid-response, record the last chunk you received, then resend the full original request rather than trying to resume — the gateway does not support mid-stream resume.
  3. Capture the X-Request-Id response header and include it when reporting the incident to support@ominigate.ai.
  4. If retries keep failing for the same model, switch to an equivalent model from a different provider as a fallback.
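Steps 1 and 4 combine into a retry-then-fallback pattern. A sketch with an injected `call` so no real client is implied; the fallback mapping and the RuntimeError convention are assumptions:

```python
def call_with_fallback(call, model: str, fallbacks: dict[str, str],
                       attempts: int = 3) -> dict:
    """Try `model` up to `attempts` times; on persistent upstream errors,
    switch once to the configured equivalent from another provider.

    `call(model)` performs the request and raises RuntimeError on a 5xx.
    """
    last_exc = None
    for candidate in (model, fallbacks.get(model)):
        if candidate is None:
            break
        for _ in range(attempts):
            try:
                return call(candidate)
            except RuntimeError as exc:  # upstream_error family
                last_exc = exc
                # (backoff between attempts omitted for brevity)
    raise last_exc
```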

upstream_unavailable

HTTP 502

Upstream API is unavailable.

Example response

{
  "error": {
    "type": "upstream_error",
    "code": "upstream_unavailable",
    "message": "Upstream API is unavailable."
  }
}

Common causes

  • The upstream provider is completely unreachable — their status page likely shows an ongoing incident, or their API domain is returning DNS/TLS/connection errors at the network level.
  • A regional outage is blocking traffic from the gateway's egress region to the provider.
  • The provider is performing scheduled maintenance that takes their API offline.
  • Certificate or CA-chain issues on the provider side are causing TLS negotiation to fail for every request.

How to handle

  1. Do not hammer retries — the provider is down as a whole. Wait at least 2-5 minutes before trying again.
  2. Switch to an equivalent model from a different provider (e.g., if OpenAI is down, route to Anthropic or Google) to unblock production traffic.
  3. Check the provider's official status page to estimate recovery time before scheduling retries.
  4. Send the X-Request-Id and the affected model slug to support@ominigate.ai so we can confirm the outage scope on our side.

image_generation_failed

HTTP 502

Image generation failed.

Example response

{
  "error": {
    "type": "upstream_error",
    "code": "image_generation_failed",
    "message": "Image generation failed."
  }
}

Common causes

  • The upstream image-generation endpoint returned an error (invalid prompt per their policy, model-side rendering failure, or a 5xx from the provider).
  • The prompt or reference image was rejected by the provider's content safety filter.
  • The requested image size, count (n), or format is not supported by the chosen model.
  • The provider experienced a transient failure on this specific generation request.

How to handle

  1. Retry with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3 attempts) — most single-shot failures recover on retry.
  2. Some providers charge on the first attempt even when generation fails. If you retry and succeed, check your usage log to reconcile any double-charge, then email support@ominigate.ai with the X-Request-Id for adjustment.
  3. If the failure reproduces, simplify the prompt, reduce the image size, or lower the count (n) and try again.
  4. If only one specific model fails repeatedly, try a different image model listed in /v1/models.

image_generation_timeout

HTTP 504

Image generation timed out.

Example response

{
  "error": {
    "type": "upstream_error",
    "code": "image_generation_timeout",
    "message": "Image generation timed out."
  }
}

Common causes

  • The upstream image-generation job exceeded the gateway's timeout threshold, but the job itself may still be running on the provider side.
  • The provider is under heavy load and rendering queues are backed up.
  • The chosen model or requested resolution is computationally expensive and legitimately takes longer than the timeout window.
  • A slow network path between the gateway and the provider caused the response to arrive after the deadline.

How to handle

  1. Do NOT retry immediately — the upstream task may still be running and a blind retry can result in duplicate billing. Treat this as 'unknown outcome', not 'failed'.
  2. Email support@ominigate.ai with the X-Request-Id from the response header; we will look up the upstream task status and confirm whether it completed.
  3. Once support confirms the task did not finish, retry with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3 attempts).
  4. To reduce timeout risk, lower the image resolution or count (n), or pick a faster model from /v1/models.

video_generation_failed

HTTP 502

Video generation failed.

Example response

{
  "error": {
    "type": "upstream_error",
    "code": "video_generation_failed",
    "message": "Video generation failed."
  }
}

Common causes

  • The upstream video-generation provider rejected the submit request (invalid prompt, unsupported parameter combination, or a 5xx from their API).
  • The provider's safety filter rejected the prompt, reference image, or seed video.
  • The requested duration, resolution, or aspect ratio is not supported by the selected model.
  • A transient network or upstream failure occurred during task submission.

How to handle

  1. Retry the submit call with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3 attempts) — the task was never created upstream, so retrying will not double-charge.
  2. If the failure reproduces, simplify the prompt, shorten the duration, or lower the resolution to stay within the model's supported range.
  3. Capture the X-Request-Id and send it to support@ominigate.ai if retries keep failing with the same payload.
  4. Consider switching to a different video model from /v1/models if one provider is consistently unreliable.

video_task_id_missing

HTTP 502

Upstream returned success but task ID is missing.

Example response

{
  "error": {
    "type": "upstream_error",
    "code": "video_task_id_missing",
    "message": "Upstream returned success but task ID is missing."
  }
}

Common causes

  • The upstream video provider returned HTTP 200 for the submit call but omitted the task_id field in the response body.
  • The provider changed their response schema without notice and the gateway has not yet adapted.
  • A transient upstream bug silently dropped the task_id during serialization.
  • The provider accepted the submission but the task was created in an unexpected internal state with no addressable ID.

How to handle

  1. Resubmit the request — without a task_id there is no way to poll or cancel, so you need a fresh submission to get a usable handle.
  2. Retry with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3 attempts); include a short delay between submissions to avoid creating orphan tasks upstream.
  3. Send the X-Request-Id and the affected provider/model to support@ominigate.ai so we can file an incident with the upstream provider.
  4. If the same provider produces this error repeatedly within a short window, switch to a different video model until the upstream issue is resolved.

internal_error

HTTP 500

Something went wrong on the OminiGate side. These should be rare; please report them with the X-Request-Id response header.

internal_error

HTTP 500

Internal server error.

Example response

{
  "error": {
    "type": "internal_error",
    "code": "internal_error",
    "message": "Internal server error."
  }
}

Common causes

  • The gateway itself encountered an unexpected bug or unhandled exception (not a business-logic rejection — this is an internal defect).
  • A dependency inside the gateway (database, cache, internal RPC) briefly failed during request handling.
  • A recent deployment introduced a regression that affects the specific code path your request exercised.
  • Resource pressure (CPU, memory, connection pool) on the gateway caused the request to fail mid-flight.

How to handle

  1. Retry with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3-5 attempts) — the majority of internal errors are transient and succeed on the next attempt.
  2. Always capture the X-Request-Id from the response header and send it to support@ominigate.ai; we use it to trace the exact stack and fix the root cause.
  3. If the error reproduces consistently for the same payload, include a minimal curl repro in your support email — this short-circuits our debugging.
  4. Monitor https://status.ominigate.ai (or your private status channel) for any active incidents that match the timing of your failures.

video_billing_error

HTTP 500

Failed to calculate video cost.

Example response

{
  "error": {
    "type": "internal_error",
    "code": "video_billing_error",
    "message": "Failed to calculate video cost."
  }
}

Common causes

  • The gateway's billing module failed to compute or persist the cost record for a completed video task (database timeout, decimal overflow, or missing pricing entry).
  • The video task finished successfully upstream, but the cost calculator encountered an unexpected model/parameter combination it could not price.
  • A transient database or cache failure prevented the billing transaction from being committed.
  • A pricing-config deployment race left the model without a valid price entry at the moment the task completed.

How to handle

  1. Do NOT resubmit the video task — it has likely completed successfully upstream and a resubmission will charge you again for the same output.
  2. Email support@ominigate.ai immediately with the X-Request-Id and the video task ID; we will reconcile the billing record manually and confirm the output is available.
  3. While waiting for reconciliation, you can usually still fetch the completed video via the standard task-polling endpoint using the task ID you already hold.
  4. For new video requests, proceed as normal — this error is specific to the billing post-processing step and does not block subsequent submissions.

Can't find your error code?

Email support@ominigate.ai with the full response body and the X-Request-Id response header — we'll get back within one business day.