Use OminiGate in OpenAI Codex
Add a custom model_provider in config.toml and Codex CLI will route every request through OminiGate — OpenAI, Anthropic, and Gemini models included.
Overview
OpenAI's Codex CLI reads provider definitions from ~/.codex/config.toml. Each provider has a name, a base_url, an auth env var, and a wire protocol. That means Codex can call any OpenAI-compatible gateway — including OminiGate — without a patch or fork.
For OminiGate, pick wire_api = "chat" (the OpenAI chat-completions format). Codex will send requests to /v1/chat/completions on your base URL and attach the key it reads from OMINIGATE_API_KEY as the bearer token for you.
- One config file, zero code changes.
- Switch between OminiGate and OpenAI with a single flag.
- Works with every model OminiGate exposes on the chat-completions endpoint.
Prerequisites
You'll need Codex CLI installed and an OminiGate API key.
1. Install Codex CLI
Codex ships as an npm package (@openai/codex). Node 18+ is required. macOS and Linux are supported natively; Windows runs through WSL2.
npm i -g @openai/codex

2. Get an OminiGate API key
Create a key in the dashboard. The key will be referenced via the OMINIGATE_API_KEY environment variable, not stored in config.toml.
Configure Codex CLI
Codex loads configuration at startup from ~/.codex/config.toml. We'll register OminiGate as a custom provider and wrap it in a profile — profiles let OminiGate coexist with Codex's built-in OpenAI usage instead of replacing it.
A lingering OPENAI_API_KEY or OPENAI_BASE_URL in your shell can clash with the Codex profile, especially when switching between OminiGate and vanilla OpenAI. Unset them before moving on.
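A quick bash check for leftovers first (a sketch; it only inspects the two variables named above):

```shell
# List any OpenAI-related variables still set in this shell (bash).
check_stale_vars() {
  local stale=""
  for var in OPENAI_API_KEY OPENAI_BASE_URL; do
    # ${!var} is bash indirect expansion: the value of the variable named $var
    if [ -n "${!var}" ]; then
      stale="$stale $var"
    fi
  done
  echo "stale:$stale"
}
check_stale_vars
```

If the output names any variable, unset it as shown below before continuing.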
unset OPENAI_API_KEY
unset OPENAI_BASE_URL

1. Register provider + profile
Append the block below to ~/.codex/config.toml (create the file if it doesn't exist). The [profiles.ominigate] section means your existing Codex workflow stays intact — activate this profile only when you want to hit OminiGate.
# ~/.codex/config.toml
[model_providers.ominigate]
name = "OminiGate"
base_url = "https://api.ominigate.ai/v1"
env_key = "OMINIGATE_API_KEY"
wire_api = "chat"
requires_openai_auth = false
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000
[profiles.ominigate]
model = "openai/gpt-5.4-pro"
model_provider = "ominigate"

Why each field matters: wire_api = "chat" tells Codex to use OpenAI chat-completions (not the Responses API). requires_openai_auth = false disables the default OpenAI OAuth flow, which doesn't apply to third-party providers. request_max_retries, stream_max_retries, and stream_idle_timeout_ms are Codex's production-grade retry and long-stream timeouts; copy these values unless you have a reason not to.
2. Export your API key
Codex reads the key from the env var you named in env_key. Add this to your shell profile so it survives new sessions.
export OMINIGATE_API_KEY="sk-omg-your-api-key"

Heads up: OpenAI has announced that wire_api = "chat" is deprecated and will be removed from Codex CLI in 2026. OminiGate currently exposes /v1/chat/completions only; we're evaluating a Responses API endpoint so Codex keeps working after the deprecation. Watch the changelog. For now, "chat" still works and you may see a deprecation warning on startup.
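Before starting a session, you can verify the key and routing with one direct request. A bash sketch (the model slug is illustrative, and running it spends a few tokens):

```shell
# One minimal chat-completions request through OminiGate (bash + curl).
smoke_test() {
  curl -sS "https://api.ominigate.ai/v1/chat/completions" \
    -H "Authorization: Bearer $OMINIGATE_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model":"openai/gpt-5.4-pro","messages":[{"role":"user","content":"ping"}]}'
}
# smoke_test   # uncomment once OMINIGATE_API_KEY is exported
```

A JSON response with a choices array means the key and base URL are good; an auth error here will also break Codex, so fix it before moving on.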
Run Codex
Activate the profile you just defined. Codex loads the OminiGate model_provider plus the default model slug, then starts the session:
# Use the profile you defined in config.toml
codex --profile ominigate "Refactor the main.rs to use async/await"

Override the default model for one invocation:
codex --profile ominigate --model anthropic/claude-opus-4.6 "Explain this repo"

Running plain codex (no --profile) still uses Codex's built-in OpenAI setup — the OminiGate profile is opt-in per command.
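If most of your sessions go through OminiGate, a small wrapper function saves retyping the flag. This is purely optional convenience (the name codexg is my own); add it to ~/.zshrc or ~/.bashrc:

```shell
# Forward all arguments to codex with the OminiGate profile preselected.
codexg() {
  codex --profile ominigate "$@"
}
```

After reloading your shell, codexg "Explain this repo" behaves exactly like the full --profile invocation above.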
Recommended models
Codex uses the OpenAI chat-completions format, so any model on OminiGate that supports that protocol works. These four are good starting points.
- openai/gpt-5.4-pro: Top-tier OpenAI reasoning model. Best for complex refactors and multi-file edits.
- openai/gpt-5.3-codex: OpenAI's Codex-tuned variant. Optimized for code generation, multi-file edits, and tool-use loops; the natural default when running Codex CLI.
- anthropic/claude-opus-4.6: Anthropic's highest tier, also exposed on the chat-completions endpoint. Strong at long-context code review.
- google/gemini-3.1-pro-preview: Google's flagship preview model with a large context window and multimodal inputs.
Browse the full catalog at /models.
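To see the catalog from the command line, the sketch below assumes OminiGate also exposes the OpenAI-compatible GET /v1/models endpoint (an assumption on my part; fall back to the /models page if it 404s):

```python
import json
import urllib.request

def fetch_model_ids(base_url: str, api_key: str) -> list[str]:
    """Fetch the model catalog from an OpenAI-compatible /models endpoint."""
    req = urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_model_ids(json.load(resp))

def extract_model_ids(body: dict) -> list[str]:
    # OpenAI-style responses wrap the catalog under "data"; each item has an "id".
    return sorted(item["id"] for item in body.get("data", []))

# Usage (requires a live key):
# print("\n".join(fetch_model_ids("https://api.ominigate.ai/v1", "sk-omg-...")))
```

Any id this returns can be dropped into the model field of the profile or passed via --model.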
Troubleshooting
Codex keeps calling api.openai.com instead of OminiGate.
Make sure you launched Codex with --profile ominigate, and that model_provider = "ominigate" sits inside the [profiles.ominigate] block (or at the top level of config.toml, if you want OminiGate to be the default for every run). Run codex --verbose [prompt] to see which provider Codex resolved.
I get 404 Not Found on every request.
The most common cause is wire_api = "responses". OminiGate exposes /v1/chat/completions, not the OpenAI Responses API. Switch to wire_api = "chat" and restart.
Codex complains that OMINIGATE_API_KEY is missing.
Codex reads env vars from the shell that launched it. If you set the var in a new terminal, relaunch Codex from that shell — or add the export line to your ~/.zshrc / ~/.bashrc.
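A quick way to confirm what the launching shell actually sees, sketched in bash:

```shell
# Report whether the key is exported in the current shell's environment.
check_key() {
  if printenv OMINIGATE_API_KEY >/dev/null; then
    echo "OMINIGATE_API_KEY is set"
  else
    echo "OMINIGATE_API_KEY is missing"
  fi
}
check_key
```

printenv only sees exported variables, so a plain OMINIGATE_API_KEY=... assignment (without export) will still report missing — which is exactly what Codex would see too.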
Next steps
Explore the full API reference or browse more models you can call from Codex.