Inference Gateway Python SDK

Connect to multiple LLM providers through a unified interface • Stream responses • Function calling • Vision support • MCP tools support • Pydantic validation

Installation • Quick Start • Examples • License
To install the SDK, use pip:
pip install inference-gateway

Requires Python 3.12+.
To create a client, instantiate InferenceGatewayClient:
from inference_gateway import InferenceGatewayClient, Message
client = InferenceGatewayClient("http://localhost:8080/v1")

The client also supports authentication, custom timeouts, and an optional httpx backend:
# With authentication
client = InferenceGatewayClient(
"http://localhost:8080/v1",
token="your-api-token",
timeout=60.0,
)
# Using httpx instead of the default requests backend
client = InferenceGatewayClient(
"http://localhost:8080/v1",
use_httpx=True,
)
# Use as a context manager to ensure the underlying HTTP client is closed
with InferenceGatewayClient("http://localhost:8080/v1") as client:
    models = client.list_models()

To list available models, use the list_models method:
# List all models from all providers
models = client.list_models()
print("All available models:", models)
# List models for a specific provider
openai_models = client.list_models(provider="openai")
print("OpenAI models:", openai_models)To list available MCP (Model Context Protocol) tools, use the list_tools method. This functionality is only available when MCP_ENABLE and MCP_EXPOSE are set on the Inference Gateway server:
tools = client.list_tools()
print(f"Found {len(tools.data)} MCP tools:")
for tool in tools.data:
print(f"- {tool.name}: {tool.description} (Server: {tool.server})")Note: The MCP tools endpoint requires authentication and is only accessible when the server has
MCP_EXPOSE=trueconfigured.
Server-Side Tool Management
The SDK currently supports listing available MCP tools, which is particularly useful for UI applications that need to display connected tools to users. The key advantage is that tools are managed server-side:
- Automatic Tool Injection: Tools are automatically inferred and injected into requests by the Inference Gateway server
- Simplified Client Code: No need to manually manage or configure tools in your client application
- Transparent Tool Calls: During streaming chat completions with configured MCP servers, tool calls appear in the response stream; no special handling is required beyond optionally displaying them to users (see the sketch after this list)
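As a minimal sketch of this transparency, the snippet below streams a completion against a gateway with MCP servers configured and simply displays any tool calls as they arrive. It assumes the streaming delta exposes OpenAI-style tool_calls entries (with a function.name); if the generated models differ, adjust the attribute access accordingly:

import json

from pydantic import ValidationError

from inference_gateway import InferenceGatewayClient, Message
from inference_gateway.models import CreateChatCompletionStreamResponse

client = InferenceGatewayClient("http://localhost:8080/v1")

for chunk in client.create_chat_completion_stream(
    model="openai/gpt-4o",
    messages=[Message(role="user", content="What files are in the workspace?")],
):
    if not chunk.data:
        continue
    try:
        stream_response = CreateChatCompletionStreamResponse.model_validate(json.loads(chunk.data))
    except (json.JSONDecodeError, ValidationError):
        continue
    for choice in stream_response.choices:
        # Tool calls injected server-side surface in the delta; displaying them is optional.
        for tool_call in getattr(choice.delta, "tool_calls", None) or []:
            if tool_call.function and tool_call.function.name:
                print(f"\n[tool call] {tool_call.function.name}")
        if choice.delta.content:
            print(choice.delta.content, end="", flush=True)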
To generate content using a model, use the create_chat_completion method:
Note: Some models support reasoning capabilities. You can use the reasoning_format parameter to control how reasoning is provided in the response. The model's reasoning will be available in the reasoning or reasoning_content fields of the response message.
from inference_gateway import InferenceGatewayClient, Message
client = InferenceGatewayClient("http://localhost:8080/v1")
response = client.create_chat_completion(
model="ollama/llama2",
messages=[
Message(role="system", content="You are a helpful assistant."),
Message(role="user", content="What is Python?"),
],
)
print(response.choices[0].message.content.root)
# If reasoning was requested and the model supports it
if response.choices[0].message.reasoning:
print("Reasoning:", response.choices[0].message.reasoning)The SDK supports multimodal messages with images for vision-capable models like GPT-4o. You can include images via URLs or base64-encoded data URLs.
from inference_gateway import InferenceGatewayClient, Message
client = InferenceGatewayClient("http://localhost:8080/v1")
response = client.create_chat_completion(
model="openai/gpt-4o",
messages=[Message(role="user", content="What is the Python programming language?")],
)

To include an image via URL, use structured content parts:

from inference_gateway import (
InferenceGatewayClient,
Message,
TextContentPart,
ImageContentPart,
ImageURL,
)
client = InferenceGatewayClient("http://localhost:8080/v1")
response = client.create_chat_completion(
model="openai/gpt-4o",
messages=[
Message(
role="user",
content=[
TextContentPart(type="text", text="What is in this image?"),
ImageContentPart(
type="image_url",
image_url=ImageURL(
url="https://example.com/image.jpg",
detail="auto",
),
),
],
)
],
)

Base64-encoded images are passed as data URLs:

from inference_gateway import ImageContentPart, ImageURL
ImageContentPart(
type="image_url",
image_url=ImageURL(
url="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEAYABgAAD...",
detail="high", # better quality, more expensive
),
)

Multiple images can be included in a single message:

Message(
role="user",
content=[
TextContentPart(type="text", text="Compare these images:"),
ImageContentPart(type="image_url", image_url=ImageURL(url="https://example.com/image1.jpg")),
ImageContentPart(type="image_url", image_url=ImageURL(url="https://example.com/image2.jpg")),
],
)

Image Detail Levels:

- "auto": Automatic detail level (default)
- "low": Lower resolution, faster and cheaper
- "high": Higher resolution, better quality but more expensive
For a complete example, see the chat example.
You can enable reasoning capabilities by setting the reasoning_format parameter in your request:
from inference_gateway import InferenceGatewayClient, Message
client = InferenceGatewayClient("http://localhost:8080/v1")
response = client.create_chat_completion(
model="anthropic/claude-3-opus-20240229",
messages=[
Message(role="system", content="You are a helpful assistant. Please include your reasoning for complex questions."),
Message(role="user", content="What is the square root of 144 and why?"),
],
reasoning_format="parsed", # "raw" or "parsed" β defaults to "parsed"
)
print("Content:", response.choices[0].message.content.root)
if response.choices[0].message.reasoning:
print("Reasoning:", response.choices[0].message.reasoning)To generate content using streaming mode, use the create_chat_completion_stream method. It yields SSEvent objects:
import json
from pydantic import ValidationError
from inference_gateway import InferenceGatewayClient, Message
from inference_gateway.models import CreateChatCompletionStreamResponse
client = InferenceGatewayClient("http://localhost:8080/v1")
for chunk in client.create_chat_completion_stream(
model="ollama/llama2",
messages=[
Message(role="system", content="You are a helpful assistant."),
Message(role="user", content="Tell me a story."),
],
):
if not chunk.data:
continue
try:
data = json.loads(chunk.data)
stream_response = CreateChatCompletionStreamResponse.model_validate(data)
except (json.JSONDecodeError, ValidationError):
continue
for choice in stream_response.choices:
# Reasoning content (both reasoning and reasoning_content fields)
if choice.delta.reasoning:
print(f"π Reasoning: {choice.delta.reasoning}")
if choice.delta.reasoning_content:
print(f"π Reasoning: {choice.delta.reasoning_content}")
if choice.delta.content:
            print(choice.delta.content, end="", flush=True)

To use tools with the SDK, define a tool with the type-safe Pydantic models and pass it to the request:
from inference_gateway import InferenceGatewayClient, Message
from inference_gateway.models import ChatCompletionTool, FunctionObject, FunctionParameters
client = InferenceGatewayClient("http://localhost:8080/v1")
tools = [
ChatCompletionTool(
type="function",
function=FunctionObject(
name="get_current_weather",
description="Get the current weather in a given location",
parameters=FunctionParameters(
type="object",
properties={
"location": {
"type": "string",
"enum": ["san francisco", "new york", "london", "tokyo", "sydney"],
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use",
},
},
required=["location"],
),
),
),
ChatCompletionTool(
type="function",
function=FunctionObject(
name="get_current_time",
description="Get the current time in a given location",
parameters=FunctionParameters(
type="object",
properties={
"location": {
"type": "string",
"enum": ["san francisco", "new york", "london", "tokyo", "sydney"],
"description": "The city and state, e.g. San Francisco, CA",
},
},
required=["location"],
),
),
),
]
response = client.create_chat_completion(
model="openai/gpt-4o",
messages=[
Message(role="system", content="You are a helpful assistant with access to weather and time information."),
Message(role="user", content="What is the weather like in New York?"),
],
tools=tools,
)
# Inspect any tool calls made by the model
if response.choices[0].message.tool_calls:
for tool_call in response.choices[0].message.tool_calls:
print(f"Tool called: {tool_call.function.name}")
print(f"Arguments: {tool_call.function.arguments}")Some providers attach opaque, per-call metadata that must be echoed back on follow-up requests. The most notable case is Google Gemini's reasoning models, which return a thought_signature on each tool call β the next request must round-trip it verbatim or the provider will reject it.
The SDK preserves this automatically as long as you append the assistant message back to the conversation as a model object (rather than reconstructing it from a dict):
response = client.create_chat_completion(
model="google/gemini-3-pro",
messages=messages,
tools=tools,
)
assistant_message = response.choices[0].message
messages.append(assistant_message) # preserves extra_content.google.thought_signature
# ... append your tool results, then send the follow-up request ...

If you need to construct one explicitly:
from inference_gateway import Google, ToolCallExtraContent
extra = ToolCallExtraContent(google=Google(thought_signature="..."))

The field is fully optional: providers that don't use it ignore it entirely, and model_dump(exclude_none=True) strips it from the wire when unset.
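Putting the tool-calling pieces together, the following sketch completes the round trip: it reuses the tools list from the tools example above, executes the requested function locally via a hypothetical get_current_weather helper, and sends the result back. It assumes Message accepts the OpenAI-style role="tool" and tool_call_id fields; adapt the message construction if the generated model differs:

import json

from inference_gateway import InferenceGatewayClient, Message

client = InferenceGatewayClient("http://localhost:8080/v1")
messages = [Message(role="user", content="What is the weather like in New York?")]

# First request: the model decides whether to call a tool.
response = client.create_chat_completion(model="openai/gpt-4o", messages=messages, tools=tools)
assistant_message = response.choices[0].message
messages.append(assistant_message)  # keep the model object so provider metadata survives

# Execute each requested tool and append the results as tool messages.
for tool_call in assistant_message.tool_calls or []:
    arguments = json.loads(tool_call.function.arguments)
    result = get_current_weather(**arguments)  # hypothetical local implementation
    messages.append(
        Message(role="tool", tool_call_id=tool_call.id, content=json.dumps(result))
    )

# Second request: the model answers using the tool results.
follow_up = client.create_chat_completion(model="openai/gpt-4o", messages=messages, tools=tools)
print(follow_up.choices[0].message.content.root)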
To proxy a raw request directly to a provider's API through the gateway, use proxy_request:
response = client.proxy_request(
provider="openai",
path="/v1/models",
method="GET",
)
print("OpenAI models:", response)To check if the API is healthy:
if client.health_check():
print("API is healthy")
else:
print("API is unavailable")The SDK provides several exception types:
from inference_gateway import (
InferenceGatewayError,
InferenceGatewayAPIError,
InferenceGatewayValidationError,
)
try:
response = client.create_chat_completion(...)
except InferenceGatewayAPIError as e:
print(f"API Error: {e} (Status: {e.status_code})")
print("Response:", e.response_data)
except InferenceGatewayValidationError as e:
print(f"Validation Error: {e}")
except InferenceGatewayError as e:
print(f"General Error: {e}")For more detailed examples and use cases, check out the examples directory. The examples include:
- List Example - How to list available models
- Chat Example - Basic and advanced chat completion examples
- Tools Example - Function calling and tool usage
- MCP Example - Model Context Protocol integration examples
Each example includes its own README with specific instructions and explanations.
The SDK supports the following LLM providers:
- Ollama ("ollama")
- Ollama Cloud ("ollama_cloud")
- Groq ("groq")
- OpenAI ("openai")
- DeepSeek ("deepseek")
- Cloudflare ("cloudflare")
- Cohere ("cohere")
- Anthropic ("anthropic")
- Google ("google")
- Mistral AI ("mistral")
- Moonshot ("moonshot")
This SDK is distributed under the MIT License; see LICENSE for more information.