INTEGRATION · VERCEL AI SDK

Vercel AI SDK meets every model.

OminiGate exposes an OpenAI-compatible API at https://api.ominigate.ai/v1, so createOpenAI from @ai-sdk/openai becomes a single-line drop-in. streamText, useChat, generateObject and tool calling all light up against the full OminiGate catalogue.

Why teams ship Vercel AI SDK apps on OminiGate

Every AI SDK primitive you already use — backed by a single, predictable gateway.

Drop-in for createOpenAI

Pass baseURL: https://api.ominigate.ai/v1 to createOpenAI from @ai-sdk/openai and your streamText, generateText, and generateObject calls keep working unchanged — only the model slug shifts to the OminiGate convention.

Streaming and tool calling, end to end

streamText().toDataStreamResponse(), useChat, tool(...), and maxSteps multi-step agents all flow through OminiGate transparently. The OpenAI wire format is preserved across every supported model.

One key, one balance

A single sk-omg- key powers chat, embeddings, image, and video models from one balance. No more juggling per-provider keys across your AI SDK app.

Edge runtime compatible

OminiGate is a plain HTTPS API, so it works inside Edge Route Handlers and Edge Functions. Set export const runtime = "edge" and your streamText handler ships globally with no changes.

Real Vercel AI SDK examples

Copy these snippets, set OMINIGATE_API_KEY to your own sk-omg- key, and they run.

1. Server-side streamText in a Route Handler

The shortest path: configure createOpenAI with the OminiGate baseURL, then return result.toDataStreamResponse() from a Next.js Route Handler. Pairs directly with the useChat hook on the client.

// app/api/chat/route.ts
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";

const ominigate = createOpenAI({
  baseURL: "https://api.ominigate.ai/v1",
  apiKey: process.env.OMINIGATE_API_KEY,
});

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: ominigate("openai/gpt-5"),
    system: "You are a helpful assistant.",
    messages,
  });

  return result.toDataStreamResponse();
}

2. Client-side useChat in the App Router

Import useChat from @ai-sdk/react (the AI SDK 4.x location) and point it at your /api/chat route. Streaming, message state, and form handling are all built in.

// app/chat/page.tsx
"use client";

import { useChat } from "@ai-sdk/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/chat",
  });

  return (
    <div className="mx-auto flex max-w-md flex-col gap-4 py-12">
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role === "user" ? "You" : "AI"}: </strong>
          {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Say something..."
        />
      </form>
    </div>
  );
}

3. Structured output with generateObject + zod

Use a zod schema to extract typed objects from any chat model. OminiGate forwards JSON-mode requests transparently, so the same code works across providers.

import { createOpenAI } from "@ai-sdk/openai";
import { generateObject } from "ai";
import { z } from "zod";

const ominigate = createOpenAI({
  baseURL: "https://api.ominigate.ai/v1",
  apiKey: process.env.OMINIGATE_API_KEY,
});

const { object } = await generateObject({
  model: ominigate("openai/gpt-5"),
  schema: z.object({
    title: z.string(),
    summary: z.string(),
    tags: z.array(z.string()),
  }),
  prompt: "Summarise the OminiGate value proposition for a TypeScript developer.",
});

console.log(object);
// { title: "...", summary: "...", tags: ["..."] }

4. Multi-step agent with tool calling

Define tools with the tool() helper, set maxSteps for multi-turn execution, and let the model call your functions automatically. Works against any model that supports OpenAI-style tool calls.

import { createOpenAI } from "@ai-sdk/openai";
import { generateText, tool } from "ai";
import { z } from "zod";

const ominigate = createOpenAI({
  baseURL: "https://api.ominigate.ai/v1",
  apiKey: process.env.OMINIGATE_API_KEY,
});

const { text } = await generateText({
  model: ominigate("anthropic/claude-sonnet-4"),
  tools: {
    weather: tool({
      description: "Get the current weather for a city.",
      parameters: z.object({
        city: z.string().describe("The city to look up."),
      }),
      execute: async ({ city }) => {
        // Replace with a real weather call in production.
        return { city, tempF: 72, conditions: "sunny" };
      },
    }),
  },
  maxSteps: 5,
  prompt: "What is the weather in San Francisco today?",
});

console.log(text);

What teams build on this

Common Vercel AI SDK patterns that benefit from a unified gateway.

Streaming chat apps in Next.js

Pair streamText in a Route Handler with useChat on the client. Switch the underlying model — openai/gpt-5, anthropic/claude-sonnet-4, google/gemini-2.5-pro — by changing one slug, with no client-side code changes.
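One way to make that single slug switchable without a redeploy-per-model: drive it from an env var and validate the provider/model shape before any request goes out. A sketch under our own naming — the OMINIGATE_MODEL variable is an assumption, not an OminiGate convention.

```typescript
const DEFAULT_MODEL = "openai/gpt-5";

// Resolve the model slug from the environment, falling back to a default.
function resolveModel(env: Record<string, string | undefined>): string {
  const slug = env.OMINIGATE_MODEL ?? DEFAULT_MODEL;
  // Enforce the provider/model convention early, before any request is made.
  if (!/^[\w.-]+\/[\w.-]+$/.test(slug)) {
    throw new Error(`Model slug must look like provider/model, got "${slug}"`);
  }
  return slug;
}

console.log(resolveModel({ OMINIGATE_MODEL: "anthropic/claude-sonnet-4" }));
// anthropic/claude-sonnet-4
```

Pass the resolved slug to your ominigate(...) factory and the client code never changes.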

Edge-deployed AI endpoints

Run streamText handlers on Vercel Edge Functions with export const runtime = "edge". OminiGate is consumed over plain fetch with no Node-only dependencies, so cold starts stay under 100ms and global routing keeps latency low.

Structured data extraction

Use generateObject with zod schemas to turn unstructured input into typed objects — invoice parsing, classification, entity extraction. Switch models by capability without touching the schema.

Multi-step agents with tools

Build agents that browse, query APIs, or run business logic via tool(...) and maxSteps. One sk-omg- key authenticates the chat model, the embeddings model, and any image or video tool the agent invokes.

Migrate an existing AI SDK project

Swap the provider factory and your streamText, generateObject, and useChat code keeps running through OminiGate.

From OpenAI direct

Replace the openai import with a createOpenAI({ baseURL, apiKey }) factory, and prefix the model slug with the provider namespace OminiGate uses (e.g. openai/gpt-5).

diff
- import { openai } from "@ai-sdk/openai";
-
- const result = streamText({
-   model: openai("gpt-4o"),
-   prompt,
- });

+ import { createOpenAI } from "@ai-sdk/openai";
+
+ const ominigate = createOpenAI({
+   baseURL: "https://api.ominigate.ai/v1",
+   apiKey: process.env.OMINIGATE_API_KEY,
+ });
+
+ const result = streamText({
+   model: ominigate("openai/gpt-5"),
+   prompt,
+ });

From OpenRouter

Replace the baseURL and API key. Model slugs follow the same provider/model convention OpenRouter uses, so most apps keep working unchanged.

diff
- const openrouter = createOpenAI({
-   baseURL: "https://openrouter.ai/api/v1",
-   apiKey: process.env.OPENROUTER_API_KEY,
- });
-
- const result = streamText({
-   model: openrouter("openai/gpt-4o"),
-   prompt,
- });

+ const ominigate = createOpenAI({
+   baseURL: "https://api.ominigate.ai/v1",
+   apiKey: process.env.OMINIGATE_API_KEY,
+ });
+
+ const result = streamText({
+   model: ominigate("openai/gpt-5"),
+   prompt,
+ });

Frequently asked questions

How do I point Vercel AI SDK at OminiGate?

Import createOpenAI from @ai-sdk/openai, pass baseURL: "https://api.ominigate.ai/v1" and your sk-omg- API key, and use the returned factory wherever you currently call openai(...). Every existing streamText, generateText, and generateObject call keeps working.

Does streaming work with toDataStreamResponse?

Yes. streamText(...).toDataStreamResponse() works exactly as documented. OminiGate forwards the OpenAI streaming format transparently, so the response can be consumed by useChat on the client without any extra adapters.

Are tool calling and generateObject compatible?

Yes. The tool() helper, maxSteps multi-step execution, and generateObject with zod schemas all flow through unchanged for any OminiGate model that supports OpenAI-format tools or JSON mode. That includes the OpenAI, Anthropic, and Google chat families.

Is OminiGate compatible with Edge runtime?

Yes. OminiGate is a plain HTTPS API consumed via fetch, so it runs inside Vercel Edge Functions and Edge Route Handlers. Add export const runtime = "edge" to your handler and the rest of the code stays identical.

How should I store my API key?

Put your sk-omg- key in process.env.OMINIGATE_API_KEY and only read it inside Server Components, Server Actions, or Route Handlers — never in client components. Because the variable is not prefixed with NEXT_PUBLIC_, Next.js keeps it out of the client bundle.

How do I see token usage and cost?

Every request appears in the OminiGate dashboard under Usage with model slug, input/output tokens, and cost. You can also read result.usage from streamText / generateText return values to surface per-request token counts inside your own analytics.
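If you surface result.usage in your own analytics, a small formatter keeps the numbers readable. A sketch assuming the AI SDK 4.x usage field names (promptTokens, completionTokens, totalTokens); adjust if you are on a different major version.

```typescript
// Shape of the usage object returned by generateText / streamText in AI SDK 4.x.
interface Usage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

// Render one request's token counts as a single log-friendly line.
function formatUsage(model: string, usage: Usage): string {
  return `${model}: ${usage.promptTokens} in / ${usage.completionTokens} out ` +
    `(${usage.totalTokens} total)`;
}

console.log(
  formatUsage("openai/gpt-5", { promptTokens: 120, completionTokens: 48, totalTokens: 168 }),
);
// openai/gpt-5: 120 in / 48 out (168 total)
```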

How are errors and retries handled?

Errors follow the OpenAI error envelope, so AI SDK's built-in error handling and the maxRetries option on generateText and streamText work without modification. 401 means an invalid key, 429 means rate-limited — the standard backoff strategies apply.
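For retry layers outside the SDK, the 429 backoff described above can be sketched as a small helper. The statusCode/status check is an assumption about how your HTTP client surfaces errors, and nothing here is OminiGate-specific.

```typescript
// Exponential backoff with a cap: attempt 0 → 500ms, 1 → 1000ms, 2 → 2000ms …
function backoffMs(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Retry rate limits (429) with backoff; rethrow everything else (e.g. 401) at once.
async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 4): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      const status = err?.statusCode ?? err?.status;
      if (status !== 429 || attempt + 1 >= maxAttempts) throw err;
      await new Promise((r) => setTimeout(r, backoffMs(attempt)));
    }
  }
}

console.log([0, 1, 2, 3, 4, 5].map((a) => backoffMs(a)));
// [ 500, 1000, 2000, 4000, 8000, 8000 ]
```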

How do I switch between OminiGate models?

Model IDs follow the provider/model format — openai/gpt-5, anthropic/claude-sonnet-4, google/gemini-2.5-pro, and so on. Pass the slug as the only argument to your ominigate(...) factory and the rest of your AI SDK code stays unchanged.

Ship your next AI SDK app on OminiGate

Create a key in the dashboard, point createOpenAI at https://api.ominigate.ai/v1, and your streamText, useChat, and generateObject code lights up against every model in the catalogue.