# LLM Gateway Quickstart
Turn on governance for your LLM calls in 5 minutes. Works with OpenAI, Anthropic, Google Gemini, Groq, Mistral, Cohere, and 55 more providers out of the box.

## 1. Install the SDK
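The install command itself is not shown here; as a minimal sketch, assuming the SDK is published under the hypothetical package name `@igris/sdk`:

```shell
# Package name is a hypothetical placeholder for the Igris SDK.
npm install @igris/sdk
```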
## 2. Get an Igris API key
Sign up at app.igris.dev/signup, then create a new API key under Settings → API Keys. Your key starts with `igris_sk_`.
## 3. Create a virtual key for your LLM provider
Virtual keys are encrypted credential vaults. They bind your upstream provider API key to an Igris slug you can reference from code without ever exposing the real key.

- Go to Virtual Keys in the dashboard
- Click New Virtual Key
- Select type LLM
- Pick your provider — e.g. OpenAI
- Paste your provider API key
- Optionally restrict to specific models (e.g. `gpt-4o`, `gpt-4o-mini`)
- Save and note the slug (e.g. `vk_openai_prod`)
## 4. Make your first call
The `model` field uses `@slug/model` syntax. Igris resolves the slug, injects credentials, enforces policies, and logs everything before forwarding to the upstream provider.
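As a sketch of what that first request could look like, here is a chat payload using the `@slug/model` convention, together with the way such a reference splits into slug and upstream model. Only the `@slug/model` syntax and the `vk_openai_prod` slug come from this page; the OpenAI-style payload shape and the `parseModelRef` helper are illustrative assumptions:

```typescript
// Chat request body in the common OpenAI-style shape (an assumption here;
// only the @slug/model convention is documented on this page).
const payload = {
  model: "@vk_openai_prod/gpt-4o", // Igris virtual-key slug + upstream model
  messages: [{ role: "user", content: "Say hello." }],
  user: "alice@example.com", // identity that shows up in the audit trail
};

// Split an @slug/model reference into its two parts, the way the
// gateway would before injecting the real provider credentials.
function parseModelRef(ref: string): { slug: string; model: string } {
  const m = /^@([^/]+)\/(.+)$/.exec(ref);
  if (!m) throw new Error(`expected @slug/model, got: ${ref}`);
  return { slug: m[1], model: m[2] };
}

const { slug, model } = parseModelRef(payload.model);
console.log(slug, model); // vk_openai_prod gpt-4o
```

In a real call you would POST this payload to the gateway with your `igris_sk_` key in the auth header; the exact endpoint URL is not shown on this page.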
## 5. See it in the audit trail
Back in the dashboard, open LLM → Audit Trail. You’ll see your call with:

- Provider + model used
- Input and output token counts
- Cost in USD (calculated server-side)
- End-to-end latency
- User identity passed via the `user` field
- Policy action (allowed / denied / alerted)
## Streaming
Pass `stream: true` to get an `AsyncIterable` of SSE chunks. The gateway passes the stream directly to your client with no buffering and no added latency.
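Consuming that `AsyncIterable` looks like an ordinary `for await` loop. The chunk shape below (`delta.content`) mirrors the common OpenAI-style SSE format and is an assumption, not the documented Igris type; the mock generator stands in for a real gateway stream:

```typescript
// Assumed chunk shape; the real Igris stream type is not shown on this page.
interface StreamChunk {
  delta: { content: string };
}

// Consume chunks as they arrive, appending each token to the result.
async function collectStream(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.delta.content;
  }
  return text;
}

// Mock stream standing in for a real response made with stream: true.
async function* mockStream(): AsyncGenerator<StreamChunk> {
  for (const piece of ["Hel", "lo", "!"]) {
    yield { delta: { content: piece } };
  }
}

collectStream(mockStream()).then((text) => console.log(text)); // prints "Hello!"
```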
## Passing metadata for policy conditions
The `user` field and arbitrary metadata can be passed via extra request headers. The SDK’s `connectLlm()` method is the cleanest way to attach per-request context.
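As a hypothetical sketch of the header-based mechanism, the helper below builds one header for the user identity and one per metadata entry. The header names (`x-igris-user`, `x-igris-meta-*`) are illustrative assumptions, not the documented wire format:

```typescript
// Hypothetical sketch of the per-request context a connectLlm()-style
// helper might attach. Header names here are assumptions.
function metadataHeaders(
  user: string,
  meta: Record<string, string>,
): Record<string, string> {
  const headers: Record<string, string> = { "x-igris-user": user };
  for (const [key, value] of Object.entries(meta)) {
    headers[`x-igris-meta-${key}`] = value; // one header per metadata entry
  }
  return headers;
}

console.log(metadataHeaders("alice@example.com", { team: "research" }));
```

Policies can then condition on the user identity or any metadata key, since both travel with every request.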
## What’s next?
- **Concepts**: Virtual keys, providers, policies, audit trail, cost tracking, anomaly detection.
- **SDK Reference**: Full API surface, including chat, embeddings, `connectLlm`, subpath adapters, and typed errors.
- **Providers**: All 59 supported providers with slugs, base URLs, and auth styles.
- **Policies**: Model allowlists, rate limits, token guards, and content guards.