Errors
Every OminiGate API error returns a consistent JSON envelope. Use this reference to look up what each code means and how to handle it.
Response format
All errors — gateway, authentication, and admin — share the same envelope. Inspect the top-level error object on any non-2xx response to get the error family, a stable code, and a human-readable message.
{
"error": {
"type": "authentication_error",
"code": "invalid_api_key",
"message": "Invalid API key provided."
}
}
Envelope fields
- type (string): High-level error family. Groups related codes so you can branch on broad handling (e.g. retry upstream errors, surface billing errors to the user).
- code (string): Stable, lowercase snake_case identifier. Safe to switch on in code — codes do not change across releases.
- message (string): English, developer-facing description. Suitable for logs; localize for end-users yourself.
- param (string, optional): Only present when the error is caused by a specific request field (e.g. model, email). Use it to highlight the offending input.
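The envelope is designed for programmatic branching. A minimal dispatch sketch in Python — the action names are illustrative, not part of any official OminiGate SDK:

```python
def classify_error(body: dict) -> str:
    """Map a parsed error envelope to a coarse client-side action."""
    err = body.get("error", {})
    etype, code = err.get("type"), err.get("code")
    if etype in ("upstream_error", "internal_error", "rate_limit_error"):
        return "retry"            # transient: retry with backoff
    if etype == "billing_error":
        return "top_up"           # surface to the account owner
    if code == "invalid_api_key":
        return "rotate_key"       # credentials problem, not a request problem
    return "fix_request"          # validation errors: fix the payload and resend
```

Branch on `code` only when you need the fine-grained case; branching on `type` keeps the handler stable as new codes are added within a family.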
authentication_error
HTTP 401
The request could not be authenticated. The credentials you sent were missing, malformed, expired, or no longer recognized.
invalid_api_key
Invalid API key provided.
Example response
{
"error": {
"type": "authentication_error",
"code": "invalid_api_key",
"message": "Invalid API key provided."
}
}
Common causes
- The Authorization header is missing, or its value is not in the form Bearer sk-omg-xxx.
- The API key was deleted from the Dashboard, or flagged as leaked and auto-revoked.
- A Dashboard login token (JWT) was sent instead of an API key — API keys start with sk-omg-.
- The key belongs to a different environment or has been regenerated since this client was deployed.
How to handle
- Confirm the header: Authorization: Bearer sk-omg-<your-key>. Note the space and the lowercase Bearer.
- Open Dashboard → API Keys and verify the key exists and is enabled.
- If the key may have leaked, delete it immediately and issue a new one; rotate it everywhere it's used.
- Use curl -v or a proxy like mitmproxy to confirm the header is being sent as you expect — CDNs or SDK wrappers occasionally strip it.
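A pre-flight check of the header format catches most of these causes before a request ever leaves your process. A sketch, assuming keys keep the `sk-omg-` prefix described above:

```python
import re

def check_auth_header(value: str) -> bool:
    """True if the header looks like `Bearer sk-omg-...`.

    Enforces the literal space and the capitalized `Bearer`, and rejects
    JWTs or other tokens that lack the sk-omg- prefix.
    """
    return re.fullmatch(r"Bearer sk-omg-\S+", value) is not None
```

Run it against the exact string your client puts in the `Authorization` header, after any wrapper or middleware has touched it.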
expired_api_key
API key has expired.
Example response
{
"error": {
"type": "authentication_error",
"code": "expired_api_key",
"message": "API key has expired."
}
}
Common causes
- The key had an expiration date set at creation time, and that date has now passed.
- The key was rotated but one of your deployments is still holding the old value.
- The key is a long-lived key created under a retention policy that has since tightened.
How to handle
- Open Dashboard → API Keys to see each key's expires_at and status.
- Generate a new key and deploy it to every environment (production, staging, CI, local .env files).
- If you don't want expiration, leave the expires_at field blank when you create the next key.
- Wire up a reminder in your secrets manager (AWS Secrets Manager, Vault, Doppler) so rotations don't sneak up on you.
invalid_request_error
HTTP 400
Request validation failed. A required parameter is missing, malformed, or outside the accepted range.
invalid_request
Invalid request body.
Example response
{
"error": {
"type": "invalid_request_error",
"code": "invalid_request",
"message": "Invalid request body."
}
}
Common causes
- The request body is not valid JSON — a trailing comma, unescaped quote, or missing Content-Type: application/json header.
- The payload parses as JSON but violates the endpoint schema: wrong field type (e.g. messages as string instead of array), unknown fields in strict mode, or value out of range (e.g. temperature > 2).
- You sent a non-streaming body to a streaming endpoint or vice versa — e.g. stream: true without reading an SSE response, or Anthropic-format messages sent to /v1/chat/completions.
- A multipart/form-data upload (for /v1/images edits or video references) is missing a required part or uses the wrong field name.
How to handle
- Validate the JSON body against the OpenAI chat completions or Anthropic messages schema; run it through jq . first to catch syntax errors.
- Set Content-Type: application/json explicitly. Some SDKs drop the header when the body is small or empty.
- Re-read the endpoint reference under docs/ → API Reference and confirm every required field is present with the right type.
- If streaming, consume the response as Server-Sent Events (text/event-stream); do not buffer the whole body.
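For the streaming case, a minimal SSE parser can be sketched as below. It assumes the OpenAI-style `data: ...` / `data: [DONE]` framing and accepts any iterator of decoded lines (for example `resp.iter_lines(decode_unicode=True)` from the requests library):

```python
import json

def iter_sse_json(lines):
    """Yield parsed JSON payloads from SSE `data:` lines, stopping at [DONE]."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue                      # skip blank keep-alives and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return                        # end-of-stream sentinel
        yield json.loads(payload)
```

Consuming the stream incrementally this way avoids the buffer-the-whole-body mistake described above.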
invalid_model
The model does not exist.
Example response
{
"error": {
"type": "invalid_request_error",
"code": "invalid_model",
"message": "The model does not exist.",
"param": "model"
}
}
Common causes
- The model string is malformed — missing the provider prefix (e.g. gpt-4o instead of openai/gpt-4o), wrong slash direction, or containing trailing whitespace.
- You passed a display name or marketing name (e.g. "GPT-4 Omni") instead of the canonical slug.
- The value is empty, null, or a placeholder like "default" / "auto" that the gateway does not accept.
- An SDK automatically rewrote the model name (e.g. stripped the provider prefix) before sending the request.
How to handle
- Use the canonical provider/model slug: openai/gpt-4o, anthropic/claude-sonnet-4-20250514, google/veo-2 — always lowercase, always slash-separated.
- Fetch GET /api/v1/models and copy the slug field from the response; do not hand-type model names.
- Trim whitespace and verify the exact string you are sending with curl -v or your SDK's debug logging.
- If using an SDK's baseURL override, disable any model-remapping middleware or set the model explicitly on each request.
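The slug rules above can be enforced client-side before sending. A sketch — the exact character set allowed in slugs is an assumption, so prefer validating against the catalog from GET /api/v1/models where possible:

```python
import re

SLUG_RE = re.compile(r"[a-z0-9.-]+/[a-z0-9.-]+")

def normalize_slug(model: str) -> str:
    """Trim whitespace, lowercase, and reject strings that are not provider/model slugs."""
    slug = model.strip().lower()
    if not SLUG_RE.fullmatch(slug):
        raise ValueError(f"not a provider/model slug: {model!r}")
    return slug
```

This catches the common failure modes on this page: missing provider prefix, display names with spaces, and trailing whitespace.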
missing_parameter
A required parameter is missing.
Example response
{
"error": {
"type": "invalid_request_error",
"code": "missing_parameter",
"message": "A required parameter is missing."
}
}
Common causes
- A required top-level field is absent — e.g. messages for /v1/chat/completions, model, or prompt for /v1/images and /v1/videos/*.
- A nested required field is missing, such as role or content inside each messages[] entry, or type inside a content block.
- The video endpoint was called without the resource the action needs — e.g. POST /v1/videos/{provider}/create without prompt, or /status without the task id.
- The field exists but is an empty string / empty array, which the gateway treats as missing.
How to handle
- Re-read the endpoint reference and list every required field; the error's param key (when present) points to the missing one.
- For /v1/chat/completions: ensure model and a non-empty messages array; each message must have role and content.
- For /v1/videos/{provider}/{action}: include model and prompt on create; include the task id returned by create when calling status or result.
- Log the final JSON your SDK sends (not the pre-serialization object) to catch fields dropped by null-stripping or schema validators.
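The required fields listed above can be checked before serialization. A minimal pre-flight validator for /v1/chat/completions payloads — field names are taken from this page, the helper itself is illustrative:

```python
def missing_params(payload: dict) -> list[str]:
    """Return dotted paths of required fields that are absent or empty."""
    missing = []
    if not payload.get("model"):
        missing.append("model")
    messages = payload.get("messages")
    if not messages:                       # absent, None, or empty array
        missing.append("messages")
    else:
        for i, msg in enumerate(messages):
            for field in ("role", "content"):
                if not msg.get(field):
                    missing.append(f"messages[{i}].{field}")
    return missing
```

Note the empty-string/empty-array cases are treated as missing, matching the gateway behavior described above.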
video_tier_not_found
Requested pricing tier not found for this model.
Example response
{
"error": {
"type": "invalid_request_error",
"code": "video_tier_not_found",
"message": "Requested pricing tier not found for this model.",
"param": "tier"
}
}
Common causes
- The tier string does not exist for this model — e.g. asking for "4k" on google/veo-2 when only 720p and 1080p are offered.
- The tier name is misspelled or uses a different casing than the catalog (e.g. "HD" vs "hd", "1080P" vs "1080p").
- The model supports multiple tiers but you omitted the tier field on a model where it is required.
- You copied a tier identifier from another provider's model (e.g. runway's tier names on a veo request).
How to handle
- Call GET /api/v1/models/{slug} and read the pricing.video.tiers array to see exactly which tier identifiers are valid.
- Use the tier value verbatim from the catalog — they are case-sensitive and match the exact string stored in the pricing JSON.
- If you do not care about tier, omit the tier field to fall back to the model's default tier (when the model supports a default).
- After switching tiers, re-check the per-second price — tiers differ in cost and may fail your usage budget.
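A small helper can pull the valid tier identifiers out of the model detail response before you build the request. This assumes pricing.video.tiers contains either plain strings or objects with an id field; confirm the exact shape against a live GET /api/v1/models/{slug} response:

```python
def valid_tiers(detail: dict) -> list[str]:
    """Extract tier identifiers from a model-detail response."""
    tiers = detail.get("pricing", {}).get("video", {}).get("tiers", [])
    # Tolerate both bare-string tiers and {"id": ...} objects.
    return [t if isinstance(t, str) else t.get("id") for t in tiers]
```

Check `requested_tier in valid_tiers(detail)` before submitting; the comparison is case-sensitive, matching the catalog.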
not_found_error
HTTP 404
The target resource does not exist or has been deleted.
not_found
Resource not found.
Example response
{
"error": {
"type": "not_found_error",
"code": "not_found",
"message": "Resource not found."
}
}
Common causes
- The URL path is wrong — missing the /v1 prefix, using the Anthropic-compat path on an OpenAI route, or a typo like /v1/chat/completion (no s).
- You are calling a Dashboard endpoint at the wrong base URL, or an API endpoint with the /api/v1 prefix (API keys use /v1/*, not /api/v1/*).
- The HTTP method is wrong — OminiGate only uses GET for reads and POST for writes; PUT/DELETE/PATCH always return 404.
- The resource id in the path does not belong to your account (e.g. fetching another user's API key or video task).
How to handle
- Compare your URL against the reference: OpenAI-compat under /v1/chat/completions, Anthropic-compat under /v1/messages, images under /v1/images, videos under /v1/videos/{provider}/{action}.
- Confirm the base URL: https://api.ominigate.ai for API calls (sk-omg- auth), and the Dashboard host for /api/v1/* (JWT auth).
- Switch any PUT/DELETE/PATCH request to POST with an explicit action in the path.
- If the path looks correct, contact support@ominigate.ai with the X-Request-Id response header.
model_not_found
Model not found.
Example response
{
"error": {
"type": "not_found_error",
"code": "model_not_found",
"message": "Model not found."
}
}
Common causes
- The slug format is valid but the model has never been listed on OminiGate — check the public catalog at /api/v1/models.
- The model was delisted from the catalog and is no longer routable through the gateway.
- Your API key has a model allowlist that excludes this model, and the gateway reports it as not found instead of forbidden.
- The model is a preview/beta that requires an explicit opt-in on the account that has not been granted yet.
How to handle
- Call GET /api/v1/models and confirm the slug appears with status "active"; only active models are routable.
- If the model is listed but you still get 404, open Dashboard → API Keys and remove any model-allowlist restriction on this key.
- For delisted models, switch to the successor recommended in the model detail page (GET /api/v1/models/{slug}).
- To request beta access, email support@ominigate.ai with the model slug and your account id.
image_model_not_found
Image model not found.
Example response
{
"error": {
"type": "not_found_error",
"code": "image_model_not_found",
"message": "Image model not found.",
"param": "model"
}
}
Common causes
- You called /v1/images with a chat or embedding model such as openai/gpt-4o or openai/text-embedding-3-large — these cannot generate images.
- The slug is correct but refers to a video model (e.g. google/veo-2); /v1/images only accepts image-generation models.
- The model is a legacy image model that has been removed from OminiGate's catalog.
- You passed a provider slug (e.g. openai) without the specific model name.
How to handle
- List image-capable models with GET /api/v1/models?type=image and pick one — examples: openai/gpt-image-1, google/imagen-4.
- If you need to generate a video, switch to POST /v1/videos/{provider}/create; for chat, use /v1/chat/completions.
- Read the model detail page (GET /api/v1/models/{slug}) and confirm output_modalities contains "image".
- Always send the full provider/model slug — provider-only identifiers are never accepted.
video_model_not_found
Video model not found.
Example response
{
"error": {
"type": "not_found_error",
"code": "video_model_not_found",
"message": "Video model not found.",
"param": "model"
}
}
Common causes
- You called /v1/videos/{provider}/{action} with a chat or image model such as openai/gpt-4o or openai/gpt-image-1.
- The provider segment in the path does not match the model's provider — e.g. POST /v1/videos/openai/create with google/veo-2 as the model.
- The video model was delisted or is temporarily disabled due to upstream outage.
- The slug is valid but the action (create / status / result) is wrong for this model's workflow.
How to handle
- List video-capable models with GET /api/v1/models?type=video and copy the slug; examples: google/veo-2, runway/gen-4-turbo.
- Ensure the {provider} path segment matches the model prefix: use /v1/videos/google/create for google/veo-2, /v1/videos/runway/create for runway/*.
- After POST /v1/videos/{provider}/create, poll POST /v1/videos/{provider}/status with the returned task id until status is "completed", then call /result.
- If the model was working before and is now returning this error, check status.ominigate.ai or contact support@ominigate.ai with the X-Request-Id header.
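The create → status → result workflow above can be sketched as a polling loop. The HTTP call is injected as `call(action, **body)` so the control flow stays visible; the task_id and status field names are assumptions based on this page:

```python
import time

def run_video_job(call, model: str, prompt: str,
                  interval: float = 5.0, timeout: float = 600.0):
    """Submit a video task, poll until completion, then fetch the result."""
    task = call("create", model=model, prompt=prompt)
    task_id = task["task_id"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call("status", task_id=task_id)
        if status["status"] == "completed":
            return call("result", task_id=task_id)
        if status["status"] == "failed":
            raise RuntimeError(f"video task {task_id} failed")
        time.sleep(interval)  # do not poll tighter than the recommended cadence
    raise TimeoutError(f"video task {task_id} still running after {timeout}s")
```

In a real client, `call` would POST to /v1/videos/{provider}/{action} with the provider segment matching the model prefix, as described above.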
billing_error
HTTP 402
The request was rejected for billing reasons — typically insufficient balance. Top up and retry.
insufficient_balance
Your balance is insufficient. Please top up your account.
Example response
{
"error": {
"type": "billing_error",
"code": "insufficient_balance",
"message": "Your balance is insufficient. Please top up your account."
}
}
Common causes
- Your account balance has dropped to zero or below after real-time billing for tokens, images, or video seconds.
- A long-running video polling job continued to accrue charges until the balance was exhausted.
- A recent top-up has not yet cleared, so the available balance is still at zero.
- Multiple concurrent requests drained the balance between the time you last checked and the time this request was billed.
How to handle
- Open Dashboard → Billing (/dashboard/billing) and top up before retrying the request.
- After topping up, wait a few seconds for the balance to refresh, then retry; the gateway reads the latest balance on each call.
- Enable usage caps on individual API Keys to prevent one workload from draining the whole account again.
- If you have topped up but the balance still shows 0, contact support@ominigate.ai with the X-Request-Id response header.
usage_cap_exceeded
HTTP 402 / 429
A configured spending cap (per-key or per-account) has been hit. Requests are blocked until the next reset, or until you raise the limit.
key_daily_limit
API Key has exceeded its daily spending limit.
Example response
{
"error": {
"type": "usage_cap_exceeded",
"code": "key_daily_limit",
"message": "API Key has exceeded its daily spending limit."
}
}
Common causes
- This API Key has hit the per-key daily spending limit you configured on its detail page in the Dashboard.
- A burst of requests from this key (for example a batch job or a buggy client loop) consumed the daily budget earlier than expected.
- The response body contains limit, used, and resets_at so you can see exactly which budget was hit and when it resets.
- The daily window is UTC-based and resets at UTC 00:00, not your local midnight.
How to handle
- Raise or remove this key's daily limit at Dashboard → API Keys (/dashboard/keys) by clicking the usage icon next to the key.
- Switch the workload to a different API Key that still has daily budget remaining, or wait until resets_at before retrying.
- Before resuming, check Dashboard → Billing to confirm there is enough account balance, otherwise you will next hit insufficient_balance.
- Add client-side backoff so a runaway loop cannot exhaust a whole day's budget in minutes.
key_monthly_limit
API Key has exceeded its monthly spending limit.
Example response
{
"error": {
"type": "usage_cap_exceeded",
"code": "key_monthly_limit",
"message": "API Key has exceeded its monthly spending limit."
}
}
Common causes
- This API Key has reached the monthly spending cap you set for it in the Dashboard.
- Traffic ramped up faster than the monthly budget anticipated, for example because of a new production rollout.
- The response body includes limit, used, and resets_at so you can confirm the monthly bucket and the reset timestamp.
- The monthly window resets at UTC 00:00 on the first day of each month, not on the key's creation anniversary.
How to handle
- Increase the monthly limit for this key at Dashboard → API Keys (/dashboard/keys) if the higher spend is expected.
- If the spike is unexpected, audit recent usage under the key to find the caller and pause it before raising the cap.
- For critical workloads, issue a separate API Key with its own monthly budget so unrelated traffic cannot block production.
- Wait until resets_at to retry automatically, or route traffic to another key until then.
account_daily_limit
Account has exceeded its daily spending limit.
Example response
{
"error": {
"type": "usage_cap_exceeded",
"code": "account_daily_limit",
"message": "Account has exceeded its daily spending limit."
}
}
Common causes
- The combined spend across all API Keys on this account has hit the account-wide daily limit configured at Dashboard → Billing.
- One or more keys without per-key caps consumed the bulk of the shared daily budget.
- The response body includes limit, used, and resets_at, scoped to the whole account.
- The account daily window resets at UTC 00:00, independent of any per-key windows.
How to handle
- Raise or remove the account daily limit at Dashboard → Billing (/dashboard/billing) if the aggregate spend is legitimate.
- Set per-key daily limits at Dashboard → API Keys so a single key cannot monopolize the account budget again.
- Identify the top-spending key in the usage breakdown and throttle or disable it before lifting the account cap.
- Wait until resets_at before retrying, and re-evaluate the daily cap to match real demand.
request_cost_limit
Single request cost exceeds the configured limit.
Example response
{
"error": {
"type": "usage_cap_exceeded",
"code": "request_cost_limit",
"message": "Single request cost exceeds the configured limit."
}
}
Common causes
- The estimated or actual cost of this single request exceeds the per-request cost cap configured for this API Key.
- A very large prompt, long max_tokens, high-resolution image, or long video duration pushed one call over the per-request threshold.
- The response body includes limit (the cap) and used (the request's cost) so you can see how far over you went.
- The cap applies to the full request cost, including cache writes, reasoning tokens, and any tool calls counted by the model.
How to handle
- Lower the cost of the request: reduce prompt length, shrink max_tokens, use a smaller model, or request a shorter video or lower-resolution image.
- If legitimate large requests are expected, raise this key's per-request cost limit at Dashboard → API Keys (/dashboard/keys).
- Split a single heavy job into multiple smaller requests that each fit under the cap.
- For video endpoints, cancel the polling loop on this error — the job will not be billed further but the current call is already rejected.
Daily caps include extra metadata (limit / used / resets_at). See the Usage Caps guide for the full response format and configuration details.
rate_limit_error
HTTP 429
Rate limits protect the platform from bursts. Back off (honor Retry-After if present) and retry with exponential backoff.
rate_limited
Rate limit exceeded.
Example response
{
"error": {
"type": "rate_limit_error",
"code": "rate_limited",
"message": "Rate limit exceeded."
}
}
Common causes
- You hit a platform-wide throttle that protects the gateway from aggregate overload.
- A sudden traffic spike from multiple keys on the same account crossed the combined ceiling.
- Upstream providers are temporarily degraded, so OminiGate has tightened limits to keep the service stable.
- Automated retries without backoff are amplifying the original spike and keeping you in the limited state.
How to handle
- Back off using exponential delay (for example 1s, 2s, 4s, 8s with jitter) and respect any Retry-After response header.
- Spread bursty workloads over a longer window, or queue requests client-side instead of firing them in parallel.
- If the limit is triggered by a single key, check whether you also have rpm_rate_limited or tpm_rate_limited errors and raise those first.
- If the error persists at normal traffic levels, contact support@ominigate.ai with the X-Request-Id response header.
rpm_rate_limited
Rate limit exceeded: requests per minute.
Example response
{
"error": {
"type": "rate_limit_error",
"code": "rpm_rate_limited",
"message": "Rate limit exceeded: requests per minute."
}
}
Common causes
- This API Key has exceeded its requests-per-minute (RPM) limit configured in the Dashboard.
- Many small, low-token requests were fired in parallel, so RPM was hit even though TPM is still fine.
- A polling loop (for example on the video endpoints) is polling too aggressively and consuming the RPM budget.
- Client code is retrying 4xx errors immediately, turning one failure into dozens of requests per minute.
How to handle
- Increase the RPM for this key at Dashboard → API Keys (/dashboard/keys) if the traffic pattern is expected.
- Add client-side concurrency control: limit in-flight requests and queue the rest instead of firing them simultaneously.
- For video polling, follow the interval hinted by the API and apply exponential backoff; do not poll tighter than the recommended cadence.
- Retry with backoff (for example 1s, 2s, 4s with jitter) and honor any Retry-After header returned with the 429.
tpm_rate_limited
Rate limit exceeded: tokens per minute.
Example response
{
"error": {
"type": "rate_limit_error",
"code": "tpm_rate_limited",
"message": "Rate limit exceeded: tokens per minute."
}
}
Common causes
- This API Key has exceeded its tokens-per-minute (TPM) limit configured in the Dashboard.
- Long prompts or large max_tokens values pushed the per-minute token volume over the cap, even with few requests.
- A streaming chat with a very large context was counted in full against TPM as tokens flowed back.
- Batch jobs that feed large documents into the model are concentrating too much token volume into a 60-second window.
How to handle
- Raise the TPM for this key at Dashboard → API Keys (/dashboard/keys) if the higher token throughput is intentional.
- Reduce token usage per call: trim prompts, enable prompt caching where supported, or lower max_tokens.
- Spread large jobs out over time so token consumption stays under TPM within any rolling minute.
- Retry with exponential backoff and honor any Retry-After header; immediate retries will simply hit TPM again.
upstream_error
HTTP 502 / 504
The gateway reached the upstream provider, but the provider returned an error, timed out, or is unreachable. Usually transient — retry with backoff.
upstream_failed
Upstream request failed.
Example response
{
"error": {
"type": "upstream_error",
"code": "upstream_failed",
"message": "Upstream request failed."
}
}
Common causes
- The gateway sent the request to the upstream provider, but the single call failed (connection reset, TLS handshake error, or a 5xx response from the provider).
- A transient network hiccup between the gateway and the provider interrupted the request mid-flight.
- The upstream provider briefly rate-limited or throttled the gateway's egress IP.
- If you were using stream=true, the SSE connection dropped before the final chunk arrived.
How to handle
- Retry the request with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3-5 attempts). Most calls succeed on the second attempt.
- For streaming requests that dropped mid-response, record the last chunk you received, then resend the full original request rather than trying to resume — the gateway does not support mid-stream resume.
- Capture the X-Request-Id response header and include it when reporting the incident to support@ominigate.ai.
- If retries keep failing for the same model, switch to an equivalent model from a different provider as a fallback.
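Retry-then-fallback can be sketched as below. The `send` callable stands in for your HTTP request, and the model ordering is an illustrative choice, not an official fallback mapping:

```python
import time

def with_fallback(send, models: list[str], attempts: int = 3, sleep=time.sleep):
    """Try each model up to `attempts` times with backoff, in order."""
    last_error = None
    for model in models:
        for attempt in range(attempts):
            try:
                return send(model)
            except Exception as exc:  # in practice, catch only upstream_error
                last_error = exc
                sleep(min(30.0, 2 ** attempt))
    raise last_error or RuntimeError("no models given")
```

Keeping the fallback list short (two or three equivalent models) bounds worst-case latency while still riding out a single-provider outage.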
image_generation_failed
Image generation failed.
Example response
{
"error": {
"type": "upstream_error",
"code": "image_generation_failed",
"message": "Image generation failed."
}
}
Common causes
- The upstream image-generation endpoint returned an error (invalid prompt per their policy, model-side rendering failure, or a 5xx from the provider).
- The prompt or reference image was rejected by the provider's content safety filter.
- The requested image size, count (n), or format is not supported by the chosen model.
- The provider experienced a transient failure on this specific generation request.
How to handle
- Retry with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3 attempts) — most single-shot failures recover on retry.
- Some providers charge on the first attempt even when generation fails. If you retry and succeed, check your usage log to reconcile any double-charge, then email support@ominigate.ai with the X-Request-Id for adjustment.
- If the failure reproduces, simplify the prompt, reduce the image size, or lower the count (n) and try again.
- If only one specific model fails repeatedly, try a different image model listed in /v1/models.
image_generation_timeout
Image generation timed out.
Example response
{
"error": {
"type": "upstream_error",
"code": "image_generation_timeout",
"message": "Image generation timed out."
}
}
Common causes
- The upstream image-generation job exceeded the gateway's timeout threshold, but the job itself may still be running on the provider side.
- The provider is under heavy load and rendering queues are backed up.
- The chosen model or requested resolution is computationally expensive and legitimately takes longer than the timeout window.
- A slow network path between the gateway and the provider caused the response to arrive after the deadline.
How to handle
- Do NOT retry immediately — the upstream task may still be running and a blind retry can result in duplicate billing. Treat this as 'unknown outcome', not 'failed'.
- Email support@ominigate.ai with the X-Request-Id from the response header; we will look up the upstream task status and confirm whether it completed.
- Once support confirms the task did not finish, retry with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3 attempts).
- To reduce timeout risk, lower the image resolution or count (n), or pick a faster model from /v1/models.
video_generation_failed
Video generation failed.
Example response
{
"error": {
"type": "upstream_error",
"code": "video_generation_failed",
"message": "Video generation failed."
}
}
Common causes
- The upstream video-generation provider rejected the submit request (invalid prompt, unsupported parameter combination, or a 5xx from their API).
- The provider's safety filter rejected the prompt, reference image, or seed video.
- The requested duration, resolution, or aspect ratio is not supported by the selected model.
- A transient network or upstream failure occurred during task submission.
How to handle
- Retry the submit call with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3 attempts) — the task was never created upstream, so retrying will not double-charge.
- If the failure reproduces, simplify the prompt, shorten the duration, or lower the resolution to stay within the model's supported range.
- Capture the X-Request-Id and send it to support@ominigate.ai if retries keep failing with the same payload.
- Consider switching to a different video model from /v1/models if one provider is consistently unreliable.
video_task_id_missing
Upstream returned success but task ID is missing.
Example response
{
"error": {
"type": "upstream_error",
"code": "video_task_id_missing",
"message": "Upstream returned success but task ID is missing."
}
}
Common causes
- The upstream video provider returned HTTP 200 for the submit call but omitted the task_id field in the response body.
- The provider changed their response schema without notice and the gateway has not yet adapted.
- A transient upstream bug silently dropped the task_id during serialization.
- The provider accepted the submission but the task was created in an unexpected internal state with no addressable ID.
How to handle
- Resubmit the request — without a task_id there is no way to poll or cancel, so you need a fresh submission to get a usable handle.
- Retry with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3 attempts); include a short delay between submissions to avoid creating orphan tasks upstream.
- Send the X-Request-Id and the affected provider/model to support@ominigate.ai so we can file an incident with the upstream provider.
- If the same provider produces this error repeatedly within a short window, switch to a different video model until the upstream issue is resolved.
internal_error
HTTP 500
Something went wrong on the OminiGate side. These should be rare; please report with the X-Request-Id response header.
internal_error
Internal server error.
Example response
{
"error": {
"type": "internal_error",
"code": "internal_error",
"message": "Internal server error."
}
}
Common causes
- The gateway itself encountered an unexpected bug or unhandled exception (not a business-logic rejection — this is an internal defect).
- A dependency inside the gateway (database, cache, internal RPC) briefly failed during request handling.
- A recent deployment introduced a regression that affects the specific code path your request exercised.
- Resource pressure (CPU, memory, connection pool) on the gateway caused the request to fail mid-flight.
How to handle
- Retry with exponential backoff (initial=1s, factor=2, jitter, max=30s, up to 3-5 attempts) — the majority of internal errors are transient and succeed on the next attempt.
- Always capture the X-Request-Id from the response header and send it to support@ominigate.ai; we use it to trace the exact stack and fix the root cause.
- If the error reproduces consistently for the same payload, include a minimal curl repro in your support email — this short-circuits our debugging.
- Monitor https://status.ominigate.ai (or your private status channel) for any active incidents that match the timing of your failures.
video_billing_error
Failed to calculate video cost.
Example response
{
"error": {
"type": "internal_error",
"code": "video_billing_error",
"message": "Failed to calculate video cost."
}
}
Common causes
- The gateway's billing module failed to compute or persist the cost record for a completed video task (database timeout, decimal overflow, or missing pricing entry).
- The video task finished successfully upstream, but the cost calculator encountered an unexpected model/parameter combination it could not price.
- A transient database or cache failure prevented the billing transaction from being committed.
- A pricing-config deployment race left the model without a valid price entry at the moment the task completed.
How to handle
- Do NOT resubmit the video task — it has likely completed successfully upstream and a resubmission will charge you again for the same output.
- Email support@ominigate.ai immediately with the X-Request-Id and the video task ID; we will reconcile the billing record manually and confirm the output is available.
- While waiting for reconciliation, you can usually still fetch the completed video via the standard task-polling endpoint using the task ID you already hold.
- For new video requests, proceed as normal — this error is specific to the billing post-processing step and does not block subsequent submissions.
Can't find your error code?
Email support@ominigate.ai with the full response body and the X-Request-Id response header — we'll get back within one business day.