Use OminiGate in Hermes Agent
Nous Research's self-improving agent accepts any OpenAI-compatible endpoint. Point it at OminiGate with two env vars and keep every skill, memory, and workflow intact.
Overview
Hermes Agent is an autonomous, tool-using agent from Nous Research with persistent memory and a self-improving skills system. It works with OpenRouter, OpenAI, or any endpoint that speaks OpenAI's chat-completions protocol.
OminiGate exposes exactly that protocol at /v1/chat/completions. Hermes routes traffic through OPENAI_BASE_URL plus OPENAI_API_KEY whenever provider = "custom" or a base_url is set — which is all you need to plug in OminiGate.
- Every Hermes skill, tool, and MCP integration works unchanged.
- Switch models per conversation with hermes chat --model [slug].
- Session memory, skills, and SQLite state live locally — OminiGate only serves the model calls.
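If you prefer setting the environment directly rather than running the wizard, the two variables named above are all Hermes reads when the provider is custom. A minimal sketch (the key value is a placeholder):

```shell
# Point Hermes (provider = "custom") at OminiGate via the two
# variables named above; the key here is a placeholder, not a real key.
export OPENAI_BASE_URL="https://api.ominigate.ai/v1"
export OPENAI_API_KEY="sk-omg-REPLACE_ME"
```

Note that the wizard below persists the same settings to ~/.hermes/config.yaml, so exported variables are only needed for ad-hoc overrides.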
Prerequisites
Install Hermes Agent and have an OminiGate API key ready.
1. Install Hermes
Hermes ships as a single install script — the only method the official README recommends. It handles platform-specific setup for Linux, macOS, WSL2, and Android (Termux). After it finishes, run source ~/.bashrc (or ~/.zshrc) to pick up the new PATH.
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
2. Get an OminiGate API key
Create a key in the dashboard. Hermes saves provider configuration (including the key) to ~/.hermes/config.yaml.
3. Configure Hermes
One command does it all. hermes model walks you through provider, base URL, key, and default model — and saves the result to ~/.hermes/config.yaml.
# One command sets the provider, base URL, key, and default model
hermes model
When hermes model asks you to choose a provider, pick "Custom endpoint (enter URL manually)". Hermes verifies the base URL, lists every model OminiGate serves, and saves a custom provider entry to ~/.hermes/config.yaml.
What the wizard shows
A real run of hermes model after picking "Custom endpoint" — Hermes verifies the URL, fetches /v1/models, and lets you pick a default.
~ hermes model
API base URL [e.g. https://api.example.com/v1]: https://api.ominigate.ai/v1
API key [sk-omg-8...]:
Verified endpoint via https://api.ominigate.ai/v1/models (295 model(s) visible)
Available models:
1. qwen/qwen3.6-27b
2. openai/gpt-5.5-pro
3. openai/gpt-5.5
4. deepseek/deepseek-v4-pro
5. deepseek/deepseek-v4-flash
6. xiaomi/mimo-v2.5-pro
7. xiaomi/mimo-v2.5
8. moonshotai/kimi-k2.6
9. anthropic/claude-opus-4.7
10. anthropic/claude-opus-4.6-fast
11. z-ai/glm-5.1
...
Select model [1-295] or type name: 5
Context length in tokens [leave blank for auto-detect]:
Display name [Api.ominigate.ai]: OminiGate
Default model set to: deepseek/deepseek-v4-flash (via https://api.ominigate.ai/v1)
💾 Saved to custom providers as "OminiGate" (edit in config.yaml)
Run Hermes
Start a chat session — Hermes will route every tool-use round trip through OminiGate:
hermes chat "Summarize the latest commit log and draft release notes"
Pick a different model per invocation:
hermes chat --model anthropic/claude-opus-4.6 "Refactor hermes/skills/web.py"
Recommended models
Hermes Agent uses the OpenAI chat-completions format. Any model OminiGate exposes on that endpoint is fair game. These are strong defaults.
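Under the hood, every Hermes turn is a plain chat-completions request. As an illustrative sketch, a minimal request body looks like this (the model slug and prompt are example values; the body is validated locally with python3 before anything is sent anywhere):

```shell
# A minimal chat-completions request body, in the shape Hermes POSTs to
# $OPENAI_BASE_URL/chat/completions with an Authorization: Bearer header.
BODY='{
  "model": "anthropic/claude-sonnet-4.6",
  "messages": [
    {"role": "user", "content": "Draft release notes from this commit log."}
  ]
}'
# Sanity-check the JSON locally before pointing real traffic at it
printf '%s' "$BODY" | python3 -m json.tool > /dev/null && echo "valid JSON"
```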
openai/gpt-5.4-pro
OpenAI's flagship — great baseline for multi-step agent plans.
anthropic/claude-opus-4.6
Anthropic's highest tier. Preferred when skills involve long-context reasoning.
anthropic/claude-sonnet-4.6
Anthropic Sonnet — balanced speed and quality for most agent workloads.
google/gemini-3.1-pro-preview
Gemini 3 Pro — strong at multimodal inputs and tool-use planning.
Browse the full catalog at /models.
Troubleshooting
I ran hermes model and it asked me to log into ChatGPT — what happened?
hermes model is the provider wizard. Selecting OpenAI Codex triggers ChatGPT OAuth, which sends traffic through your Codex account instead of OminiGate. Re-run hermes model and choose "Custom endpoint (enter URL manually)", then enter https://api.ominigate.ai/v1 and your sk-omg- key.
Hermes keeps calling another provider (e.g. OpenRouter) instead of OminiGate.
Another saved provider is still the active default. Re-run hermes model and select your OminiGate entry — Hermes keeps every custom provider as a named entry in ~/.hermes/config.yaml, switchable anytime.
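The exact schema of ~/.hermes/config.yaml is Hermes's own; as a rough, hypothetical sketch, a saved custom provider entry carries a name, base URL, key, and default model along these lines (field names here are illustrative, not verbatim — check the file the wizard actually wrote):

```yaml
# Hypothetical sketch of a custom provider entry in ~/.hermes/config.yaml.
# Field names are illustrative; defer to the file the wizard generated.
custom_providers:
  - name: OminiGate
    base_url: https://api.ominigate.ai/v1
    api_key: sk-omg-REPLACE_ME
    default_model: deepseek/deepseek-v4-flash
```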
I get invalid_api_key errors even though my key works in curl.
Each custom provider stores its key inside its entry in ~/.hermes/config.yaml. Re-run hermes model, pick your OminiGate entry (or "Custom endpoint" to start over), and re-enter the key when prompted.
Which models does the Hermes vision/extraction tool call?
Hermes auxiliary tools (vision, web extraction) reuse the active provider's base URL and key when no task-specific override is set. As long as your default is your OminiGate entry, they'll route through OminiGate — just pick a model with the right modality.
Next steps
Explore the full API reference or browse more models you can run through Hermes.