§Inference Provider Configuration
This module defines the configuration types for AI inference providers used by AIMX. It supports multiple AI APIs with configurable models, capabilities, and performance characteristics.
§Overview
The provider system allows AIMX workflows to interact with different AI inference services through a unified interface. Providers can be configured for various use cases:
- Fast inference: Quick responses for simple tasks
- Standard inference: Balanced performance for general use
- Planning inference: Advanced reasoning for complex tasks
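The sketch below is illustrative only: it shows how a requested capability level might be mapped onto one of the three model name fields (fast, standard, planning) configured on a Provider, as documented under §Example Usage. The model_for helper is not part of the AIMX API, and the Capability::Fast and Capability::Planning variants are assumed to exist alongside the Capability::Standard shown in the example below.
use aimx::{Capability, Provider};
// Hypothetical helper (not part of AIMX): resolve the configured model name
// for a requested capability level.
fn model_for(provider: &Provider, capability: Capability) -> &str {
    match capability {
        Capability::Fast => &provider.fast,         // quick, low-latency model
        Capability::Planning => &provider.planning, // advanced reasoning model
        _ => &provider.standard,                    // default to the standard model
    }
}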
§Supported APIs
- Ollama: Local model inference (e.g., http://localhost:11434)
- OpenAI: Cloud-based inference (e.g., OpenAI API, OpenRouter); see the configuration sketch below
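For the OpenAI-compatible path, a provider can point at a hosted endpoint such as OpenRouter. The following sketch is an assumption-laden illustration, not a shipped default: the Api::OpenAi variant name and the model identifiers are placeholders that may not match the crate exactly, and the API key is read from an environment variable rather than hard-coded.
use aimx::{Api, Capability, Model, Provider};
// Illustrative cloud configuration (variant name and model ids are placeholders).
let provider = Provider {
    api: Api::OpenAi, // assumed variant name for the OpenAI-compatible API
    url: "https://openrouter.ai/api/v1".to_string(),
    key: std::env::var("OPENROUTER_API_KEY").unwrap_or_default(),
    model: Model::Standard,
    capability: Capability::Standard,
    fast: "openai/gpt-4o-mini".to_string(), // placeholder model ids
    standard: "openai/gpt-4o".to_string(),
    planning: "openai/o1".to_string(),
    temperature: 0.7,
    max_tokens: 2048,
    connection_timeout_ms: 30000,
    request_timeout_ms: 120000,
};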
§Example Usage
use aimx::{Provider, Api, Model, Capability};
// Create a provider for local Ollama
let provider = Provider {
    api: Api::Ollama,
    url: "http://localhost:11434".to_string(),
    key: "".to_string(), // No API key needed for local Ollama
    model: Model::Standard,
    capability: Capability::Standard,
    fast: "mistral:latest".to_string(),
    standard: "llama2:latest".to_string(),
    planning: "codellama:latest".to_string(),
    temperature: 0.7,
    max_tokens: 2048,
    connection_timeout_ms: 30000,
    request_timeout_ms: 120000,
};
Structs§
- Provider: AI inference provider configuration
Enums§
- Api: Supported AI inference APIs
- Capability: Model capability levels
- Model: Model performance types