SDK Reference — LLM Gateway
Install the SDK from npm as `@igris-security/sdk`.

## new Igris(config)
Create an SDK client. All LLM resources hang off this instance.
An error is thrown if `apiKey` is empty.
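A minimal sketch of constructing the client. The `IgrisConfig` shape below is illustrative, not the published type, and the eager `apiKey` check is an assumption standing in for the real SDK's validation:

```typescript
// Hypothetical sketch of the constructor contract; not the real SDK source.
interface IgrisConfig {
  apiKey: string;
  baseUrl?: string; // optional override; see Environment Variables below
}

class Igris {
  constructor(readonly config: IgrisConfig) {
    // Assumption: the SDK rejects an empty apiKey at construction time.
    if (!config.apiKey) throw new Error("Igris: apiKey must not be empty");
  }
}

const igris = new Igris({ apiKey: "igris-key-example" });
```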
## igris.chat.completions.create(request)
Send a chat completion request through the gateway. The `model` field must use the `@slug/model` syntax.
### Non-streaming

Returns a `Promise<ChatCompletionResponse>`.
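As a sketch of the call shape, with an inline stub standing in for the real client (the `@openai/gpt-4o-mini` slug/model pair and the response fields are assumptions):

```typescript
// Illustrative non-streaming call; the stub echoes a canned response
// instead of reaching the gateway.
interface ChatMessage { role: string; content: string }
interface ChatCompletionResponse {
  id: string;
  choices: { message: ChatMessage }[];
}

const igris = {
  chat: {
    completions: {
      async create(_req: { model: string; messages: ChatMessage[] }): Promise<ChatCompletionResponse> {
        return { id: "cmpl_1", choices: [{ message: { role: "assistant", content: "pong" } }] };
      },
    },
  },
};

const reply = igris.chat.completions.create({
  model: "@openai/gpt-4o-mini", // must use @slug/model syntax; slug is hypothetical
  messages: [{ role: "user", content: "ping" }],
});
```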
### Streaming

Pass `stream: true` to receive an `AsyncIterable<ChatCompletionChunk>`:
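The consumption pattern can be sketched as follows; the async generator below stands in for the gateway's stream, and the chunk shape is an assumption:

```typescript
// Hypothetical chunk shape; the real type may differ.
interface ChatCompletionChunk { delta: { content: string } }

// Stub stream in place of the gateway's AsyncIterable.
async function* fakeStream() {
  for (const content of ["Hel", "lo"]) yield { delta: { content } } as ChatCompletionChunk;
}

// Typical consumption: accumulate delta.content across chunks.
async function collect(stream: AsyncIterable<ChatCompletionChunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) text += chunk.delta.content;
  return text;
}
```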
### Request type

### Response type

### Streaming chunk type
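The three types named above might look roughly like this, mirroring the OpenAI wire format; every field name here is an assumption rather than the published definition:

```typescript
// Hypothetical shapes for the chat completion types.
interface ChatCompletionRequest {
  model: string; // must use @slug/model syntax
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  stream?: boolean;
}

interface ChatCompletionResponse {
  id: string;
  model: string;
  choices: { index: number; message: { role: string; content: string } }[];
}

interface ChatCompletionChunk {
  id: string;
  choices: { index: number; delta: { content?: string } }[];
}

const exampleRequest: ChatCompletionRequest = {
  model: "@openai/gpt-4o-mini", // hypothetical slug/model pair
  messages: [{ role: "user", content: "hi" }],
};
```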
## igris.embeddings.create(request)
Create embeddings for one or more input strings.
### Request type

### Response type
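A sketch of the call shape; the inline stub replaces the real client, and the model name is a hypothetical slug/model pair:

```typescript
// Hypothetical response shape for embeddings.
interface EmbeddingsResponse {
  data: { index: number; embedding: number[] }[];
}

const embeddings = {
  // Stub: returns one zero-vector per input string instead of calling the gateway.
  async create(req: { model: string; input: string[] }): Promise<EmbeddingsResponse> {
    return { data: req.input.map((_text, index) => ({ index, embedding: [0, 0, 0] })) };
  },
};

const pending = embeddings.create({
  model: "@openai/text-embedding-3-small", // hypothetical
  input: ["first string", "second string"],
});
```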
## igris.llmProviders.list()
Fetch the list of all registered provider slugs, names, base URLs, auth styles, and supported
endpoints. Useful for populating a provider picker in your UI or validating slugs at startup.
### Return type
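A sketch of the per-provider entry and the startup slug-validation use case mentioned above. The field names are assumptions inferred from the description (slugs, names, base URLs, auth styles, supported endpoints):

```typescript
// Hypothetical shape of one entry returned by igris.llmProviders.list().
interface LlmProviderInfo {
  slug: string;
  name: string;
  baseUrl: string;
  authStyle: "bearer" | "header" | "query";
  endpoints: string[]; // supported endpoint paths
}

// Example use: validate a @slug/model string against the registry at startup.
function isKnownSlug(providers: LlmProviderInfo[], model: string): boolean {
  const slug = model.startsWith("@") ? model.slice(1).split("/")[0] : "";
  return providers.some((p) => p.slug === slug);
}

const providers: LlmProviderInfo[] = [
  { slug: "openai", name: "OpenAI", baseUrl: "https://api.openai.com/v1", authStyle: "bearer", endpoints: ["/chat/completions"] },
];
```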
## igris.connectLlm(slug, options?)
Build an `LlmConnection` object for use with any OpenAI-compatible SDK client. This is the escape
hatch for when you want to keep using the official OpenAI, Anthropic, or Mistral SDK while routing
through Igris.
### Options

### Return type
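A sketch of how the connection might plug into an OpenAI-compatible client's constructor options. The `LlmConnection` field names and the gateway subpath are assumptions:

```typescript
// Hypothetical LlmConnection shape.
interface LlmConnection {
  baseURL: string; // gateway subpath scoped to the provider slug
  apiKey: string;  // Igris gateway key, not the provider's own key
}

function toClientOptions(conn: LlmConnection): { baseURL: string; apiKey: string } {
  // e.g. new OpenAI(toClientOptions(conn)) with the official SDK
  return { baseURL: conn.baseURL, apiKey: conn.apiKey };
}

const conn: LlmConnection = {
  baseURL: "https://api.igrisecurity.com/llm/openai", // illustrative subpath
  apiKey: "igris-key",
};
```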
## Subpath Adapters
For zero-migration integration with existing SDK clients, Igris ships three adapters that mutate a provider SDK client in place to route through the gateway.

### OpenAI adapter
### Anthropic adapter
### Google adapter (stub)
Google's `@google/generative-ai` SDK does not expose a unified `baseURL` override, so the Google
adapter returns the `LlmConnection` config for manual use. The recommended path for Google is the
SDK-native style with the `@vk_google/gemini-2.0-flash` model prefix, or raw HTTP.
### withIgris signature (all adapters)
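A sketch of what the shared signature could look like, given that adapters mutate the client in place: take a client, repoint it at the gateway, and return it for chaining. The property names (`baseURL`, `apiKey`) are assumptions:

```typescript
// Hypothetical shared adapter signature; not the published implementation.
interface LlmConnection { baseURL: string; apiKey: string }

function withIgris<T extends { baseURL?: string; apiKey?: string }>(
  client: T,
  conn: LlmConnection,
): T {
  client.baseURL = conn.baseURL; // point the SDK at the gateway
  client.apiKey = conn.apiKey;   // swap the provider key for the gateway key
  return client;
}

const client = { baseURL: "https://api.openai.com/v1", apiKey: "sk-provider" };
withIgris(client, { baseURL: "https://api.igrisecurity.com/llm/openai", apiKey: "igris-key" });
```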
## Typed Errors
All LLM SDK methods throw typed errors on non-2xx responses. Pattern-match with `instanceof`:
### Error class hierarchy

All error classes are exported from `@igris-security/sdk`.
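An illustrative hierarchy and matching pattern. The concrete class names below are assumptions; the source guarantees only that errors are typed and matched with `instanceof`:

```typescript
// Hypothetical error classes; names are illustrative.
class IgrisError extends Error {}
class AuthenticationError extends IgrisError {}   // e.g. 401/403
class RateLimitError extends IgrisError {}        // e.g. 429
class ProviderError extends IgrisError {}         // upstream provider failure

// Matching from most specific to least specific, ending with a catch-all.
function handle(err: unknown): string {
  if (err instanceof RateLimitError) return "retry later";
  if (err instanceof AuthenticationError) return "check credentials";
  if (err instanceof IgrisError) return "gateway error";
  return "unknown";
}
```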
## Environment Variables
| Variable | Default | Description |
|---|---|---|
| `IGRIS_BASE_URL` | `https://api.igrisecurity.com` | Override gateway base URL (self-hosted) |
The `baseUrl` constructor option takes precedence over the environment variable.
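The resolution order stated above (constructor option, then `IGRIS_BASE_URL`, then the built-in default) can be sketched as:

```typescript
const DEFAULT_BASE_URL = "https://api.igrisecurity.com";

// option wins over the env var, which wins over the default.
function resolveBaseUrl(option?: string, envValue?: string): string {
  return option ?? envValue ?? DEFAULT_BASE_URL;
}
```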