
SDK Reference — LLM Gateway

Install the SDK:
bun add @igris-security/sdk

new Igris(config)

Create an SDK client. All LLM resources hang off this instance.
import { Igris } from "@igris-security/sdk";

const igris = new Igris({
	apiKey: "igris_sk_...",       // required — from Settings → API Keys
	baseUrl: "https://...",       // optional — defaults to https://api.igrisecurity.com
	                               //            or IGRIS_BASE_URL env var
});
Config shape:
interface IgrisConfig {
	apiKey: string;
	baseUrl?: string;
}
Throws synchronously if apiKey is empty.

igris.chat.completions.create(request)

Send a chat completion request through the gateway. The model field must use @slug/model syntax.

Non-streaming

const response = await igris.chat.completions.create({
	model: "@vk_openai_prod/gpt-4o",
	messages: [
		{ role: "system", content: "You are a helpful assistant." },
		{ role: "user", content: "What is the capital of France?" },
	],
	max_tokens: 200,
	temperature: 0.7,
});

console.log(response.choices[0].message.content);
console.log(response.usage.total_tokens);
Returns Promise<ChatCompletionResponse>.

Streaming

Pass stream: true to receive an AsyncIterable<ChatCompletionChunk>:
const stream = await igris.chat.completions.create({
	model: "@vk_openai_prod/gpt-4o",
	messages: [{ role: "user", content: "Write a short story" }],
	stream: true,
});

for await (const chunk of stream) {
	const delta = chunk.choices[0]?.delta?.content;
	if (delta) process.stdout.write(delta);
}
The gateway passes the SSE stream directly to your client. Token usage is extracted from the terminal chunk and recorded in the audit trail.
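If you want both the concatenated text and the usage from the terminal chunk, the loop above can be wrapped in a reusable helper. A minimal sketch, with local stand-in types matching the ChatCompletionChunk shape documented below (`collectStream` is an illustration, not part of the SDK):

```typescript
// Minimal local stand-ins for the documented chunk and usage shapes.
interface Usage {
	prompt_tokens: number;
	completion_tokens: number;
	total_tokens: number;
}
interface Chunk {
	choices: Array<{ delta?: { content?: string } }>;
	usage?: Usage; // present on the final chunk
}

// Drain a completion stream, concatenating text deltas and capturing
// the usage reported on the terminal chunk.
async function collectStream(
	stream: AsyncIterable<Chunk>,
): Promise<{ text: string; usage?: Usage }> {
	let text = "";
	let usage: Usage | undefined;
	for await (const chunk of stream) {
		const delta = chunk.choices[0]?.delta?.content;
		if (delta) text += delta;
		if (chunk.usage) usage = chunk.usage; // terminal chunk carries usage
	}
	return { text, usage };
}
```

Usage: `const { text, usage } = await collectStream(stream);` after creating the stream as shown above.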

Request type

interface ChatCompletionRequest {
	model: string;                    // "@slug/model" format
	messages: ChatMessage[];
	max_tokens?: number;
	temperature?: number;
	top_p?: number;
	n?: number;
	stream?: boolean;
	stop?: string | string[];
	presence_penalty?: number;
	frequency_penalty?: number;
	user?: string;                    // forwarded to X-Igris-User if not set elsewhere
	tools?: unknown[];
	tool_choice?: unknown;
	response_format?: unknown;
	[key: string]: unknown;           // any additional fields pass through to provider
}

interface ChatMessage {
	role: "system" | "user" | "assistant" | "tool";
	content: string | Array<{ type: string; text?: string; [key: string]: unknown }>;
	name?: string;
	tool_call_id?: string;
	tool_calls?: unknown[];
}

Response type

interface ChatCompletionResponse {
	id: string;
	object: "chat.completion";
	created: number;
	model: string;
	choices: ChatCompletionChoice[];
	usage: ChatCompletionUsage;
}

interface ChatCompletionChoice {
	index: number;
	message: ChatMessage;
	finish_reason: string;
}

interface ChatCompletionUsage {
	prompt_tokens: number;
	completion_tokens: number;
	total_tokens: number;
	prompt_tokens_details?: { cached_tokens?: number };
}

Streaming chunk type

interface ChatCompletionChunk {
	id: string;
	object: "chat.completion.chunk";
	created: number;
	model: string;
	choices: Array<{
		index: number;
		delta: Partial<ChatMessage>;
		finish_reason: string | null;
	}>;
	usage?: ChatCompletionUsage;      // present on the final chunk
}

igris.embeddings.create(request)

Create embeddings for one or more input strings.
const response = await igris.embeddings.create({
	model: "@vk_openai_prod/text-embedding-3-small",
	input: ["Hello world", "Goodbye world"],
});

console.log(response.data[0].embedding.length); // e.g. 1536

Request type

interface EmbeddingRequest {
	model: string;                    // "@slug/model" format
	input: string | string[];
	encoding_format?: "float" | "base64";
	dimensions?: number;
	user?: string;
}

Response type

interface EmbeddingResponse {
	object: "list";
	data: Array<{
		object: "embedding";
		index: number;
		embedding: number[];
	}>;
	model: string;
	usage: {
		prompt_tokens: number;
		total_tokens: number;
	};
}
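A common follow-up is comparing the returned vectors, e.g. for semantic search. A minimal sketch; `cosineSim` is a local helper, not part of the SDK:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSim(a: number[], b: number[]): number {
	let dot = 0;
	let normA = 0;
	let normB = 0;
	for (let i = 0; i < a.length; i++) {
		dot += a[i] * b[i];
		normA += a[i] * a[i];
		normB += b[i] * b[i];
	}
	return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

With the response from the example above: `cosineSim(response.data[0].embedding, response.data[1].embedding)`.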

igris.llmProviders.list()

Fetch the list of all registered provider slugs, names, base URLs, auth styles, and supported endpoints. Useful for populating a provider picker in your UI or validating slugs at startup.
const providers = await igris.llmProviders.list();
// providers: LlmProviderInfo[]

for (const p of providers) {
	console.log(`${p.slug}: ${p.name} (${p.authStyle})`);
}

Return type

interface LlmProviderInfo {
	slug: string;
	name: string;
	baseUrl: string | null;           // null = self-hosted, requires customBaseUrl on virtual key
	authStyle: "bearer" | "x-api-key" | "query-param" | "none";
	supportedEndpoints: string[];
}
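For the startup-validation use case mentioned above, a small pure helper over the documented shape can fail fast on unknown slugs. A sketch; `missingSlugs` and the slug names in the usage note are illustrative:

```typescript
// Only the field we need from LlmProviderInfo.
interface ProviderSlug {
	slug: string;
}

// Return the required slugs that are absent from the registry listing.
function missingSlugs(providers: ProviderSlug[], required: string[]): string[] {
	const known = new Set(providers.map((p) => p.slug));
	return required.filter((slug) => !known.has(slug));
}
```

Usage at startup might look like: `const missing = missingSlugs(await igris.llmProviders.list(), ["openai", "anthropic"]); if (missing.length > 0) throw new Error(...)`.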

igris.connectLlm(slug, options?)

Build an LlmConnection object for use with any OpenAI-compatible SDK client. This is the escape hatch for when you want to keep using the official OpenAI, Anthropic, or Mistral SDK while routing through Igris.
const conn = igris.connectLlm("vk_openai_prod", {
	user: "alice@corp.com",
	traceId: "trace-abc123",
	metadata: { role: "developer", team: "platform" },
});

console.log(conn.baseUrl);    // https://api.igrisecurity.com/llm/vk_openai_prod/v1
console.log(conn.apiKey);     // your Igris API key
console.log(conn.headers);    // X-Igris-User, X-Igris-Trace-Id, X-Igris-Metadata-* headers

Options

interface ConnectLlmOptions {
	user?: string;
	traceId?: string;
	metadata?: Record<string, string>;
}

Return type

interface LlmConnection {
	baseUrl: string;                  // gateway route for this virtual key
	apiKey: string;                   // Igris API key (Bearer auth)
	headers: Record<string, string>;  // populated X-Igris-* headers
}
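To feed an LlmConnection into an OpenAI-compatible SDK, the three fields map directly onto the OpenAI SDK's constructor options. A sketch, assuming the OpenAI Node SDK's `baseURL` / `apiKey` / `defaultHeaders` option names (verify against the SDK version you use):

```typescript
// Matches the LlmConnection shape documented above.
interface LlmConnection {
	baseUrl: string;
	apiKey: string;
	headers: Record<string, string>;
}

// Map an LlmConnection to OpenAI SDK constructor options.
function toOpenAIOptions(conn: LlmConnection) {
	return {
		baseURL: conn.baseUrl,
		apiKey: conn.apiKey,
		defaultHeaders: conn.headers,
	};
}

// e.g.: const openai = new OpenAI(toOpenAIOptions(igris.connectLlm("vk_openai_prod")));
```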

Subpath Adapters

For zero-migration integration with existing SDK clients, Igris ships three adapters. The OpenAI and Anthropic adapters mutate a provider SDK client in place to route through the gateway; the Google adapter is a stub that returns connection config for manual wiring.

OpenAI adapter

import OpenAI from "openai";
import { Igris } from "@igris-security/sdk";
import { withIgris } from "@igris-security/sdk/adapters/openai";

const igris = new Igris({ apiKey: "igris_sk_..." });

// Mutates the OpenAI client to route through Igris — same client, zero other changes
const openai = withIgris(
	new OpenAI({ apiKey: "placeholder" }),
	igris,
	"vk_openai_prod",
	{ user: "alice@corp.com" },
);

// Works exactly like before
await openai.chat.completions.create({ model: "gpt-4o", messages: [...] });

Anthropic adapter

import Anthropic from "@anthropic-ai/sdk";
import { Igris } from "@igris-security/sdk";
import { withIgris } from "@igris-security/sdk/adapters/anthropic";

const igris = new Igris({ apiKey: "igris_sk_..." });
const anthropic = withIgris(new Anthropic({ apiKey: "placeholder" }), igris, "vk_anthropic_prod");

await anthropic.messages.create({
	model: "claude-3-5-sonnet-20241022",
	max_tokens: 1024,
	messages: [{ role: "user", content: "Hello!" }],
});

Google adapter (stub)

Google’s @google/generative-ai SDK does not expose a unified baseURL override, so the Google adapter returns the LlmConnection config for manual use. The recommended path for Google is the SDK-native style with the @vk_google/gemini-2.0-flash model prefix, or raw HTTP.
import { withIgris } from "@igris-security/sdk/adapters/google";

const conn = withIgris(null, igris, "vk_google_prod");
// conn.baseUrl / conn.apiKey / conn.headers — apply manually
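One way to apply the connection manually is to merge Bearer auth with the X-Igris-* headers on a raw HTTP call. A sketch under that assumption; `buildHeaders` is a local helper and the request path is left elided:

```typescript
// Matches the LlmConnection shape documented above.
interface LlmConnection {
	baseUrl: string;
	apiKey: string;
	headers: Record<string, string>;
}

// Merge Bearer auth and the populated X-Igris-* headers for a raw request.
function buildHeaders(conn: LlmConnection): Record<string, string> {
	return {
		Authorization: `Bearer ${conn.apiKey}`,
		"Content-Type": "application/json",
		...conn.headers,
	};
}

// e.g.: await fetch(`${conn.baseUrl}/...`, { method: "POST", headers: buildHeaders(conn), body: ... });
```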

withIgris signature (all adapters)

function withIgris<T extends object>(
	client: T,
	igris: Igris,
	virtualKeySlug: string,
	options?: ConnectLlmOptions,
): T; // returns the same mutated client

Typed Errors

All LLM SDK methods throw typed errors on non-2xx responses. Pattern-match with instanceof:
import {
	IgrisError,
	IgrisAuthError,
	IgrisPolicyDeniedError,
	IgrisRateLimitError,
} from "@igris-security/sdk";

try {
	await igris.chat.completions.create({ model: "@vk_openai_prod/gpt-4o", messages: [...] });
} catch (err) {
	if (err instanceof IgrisPolicyDeniedError) {
		// HTTP 403 — a policy rule denied this request
		console.error("Blocked by policy:", err.message);
	} else if (err instanceof IgrisRateLimitError) {
		// HTTP 429 — rate limit exceeded (from a policy limit rule or provider)
		console.error("Rate limited:", err.message);
	} else if (err instanceof IgrisAuthError) {
		// HTTP 401 — invalid or expired Igris API key
		console.error("Auth error:", err.message);
	} else if (err instanceof IgrisError) {
		// Any other gateway or provider error
		console.error("Gateway error:", err.message, err.response);
	}
}

Error class hierarchy

class IgrisError extends Error {
	readonly response?: unknown;      // parsed response body if available
}

class IgrisAuthError extends IgrisError {}          // HTTP 401
class IgrisPolicyDeniedError extends IgrisError {}  // HTTP 403
class IgrisRateLimitError extends IgrisError {}     // HTTP 429
All error classes are exported from @igris-security/sdk.
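Since IgrisRateLimitError is a distinct class, a retry wrapper can target it specifically. A minimal backoff sketch; `withRetry` is a local helper, not part of the SDK, and takes a predicate so it stays decoupled from the error classes:

```typescript
// Retry an async call with exponential backoff while the predicate says
// the error is retryable (e.g. err instanceof IgrisRateLimitError).
async function withRetry<T>(
	fn: () => Promise<T>,
	shouldRetry: (err: unknown) => boolean,
	maxAttempts = 3,
	baseDelayMs = 500,
): Promise<T> {
	for (let attempt = 1; ; attempt++) {
		try {
			return await fn();
		} catch (err) {
			if (attempt >= maxAttempts || !shouldRetry(err)) throw err;
			// Exponential backoff: baseDelayMs, 2x, 4x, ...
			await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
		}
	}
}
```

Usage might look like: `await withRetry(() => igris.chat.completions.create(req), (e) => e instanceof IgrisRateLimitError)`.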

Environment Variables

Variable          Default                         Description
IGRIS_BASE_URL    https://api.igrisecurity.com    Override gateway base URL (self-hosted)
The baseUrl constructor option takes precedence over the environment variable.
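The precedence rule can be written out as a one-liner. A sketch of the resolution order described above; `resolveBaseUrl` is a local illustration, not the SDK's internal code:

```typescript
const DEFAULT_BASE_URL = "https://api.igrisecurity.com";

// Constructor option wins, then the environment variable, then the default.
function resolveBaseUrl(option?: string, env?: string): string {
	return option ?? env ?? DEFAULT_BASE_URL;
}

// e.g.: resolveBaseUrl(config.baseUrl, process.env.IGRIS_BASE_URL)
```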