# Simple AI Provider

A professional, type-safe TypeScript package that provides a unified interface for multiple AI providers. Currently supports Claude (Anthropic), OpenAI, Google Gemini, and OpenWebUI with a consistent API across all providers.
## ✨ Features
- 🔗 Unified Interface: Same API for Claude, OpenAI, Gemini, and OpenWebUI
- 🎯 Type Safety: Full TypeScript support with comprehensive type definitions
- 🚀 Streaming Support: Real-time response streaming for all providers
- 🛡️ Error Handling: Standardized error types with provider-specific details
- 🏭 Factory Pattern: Easy provider creation and management
- 🔧 Configurable: Extensive configuration options for each provider
- 📦 Lightweight: Minimal external dependencies
- 🌐 Local Support: OpenWebUI integration for local/private AI models
- 🎨 Structured Output: Define custom response types for type-safe AI outputs
- 🏗️ Provider Registry: Dynamic provider registration and creation system
- ✅ Comprehensive Testing: Full test coverage with Bun test framework
- 🔍 Advanced Validation: Input validation with detailed error messages
## 🏗️ Architecture
The library is built on solid design principles:
- Template Method Pattern: Base provider defines the workflow, subclasses implement specifics
- Factory Pattern: Clean provider creation and management
- Strategy Pattern: Unified interface across different AI providers
- Type Safety: Comprehensive TypeScript support throughout
- Error Normalization: Consistent error handling across all providers
- Validation First: Input validation before processing
- Extensibility: Easy to add new providers via registry system
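To illustrate how the Template Method and error-normalization principles fit together, here is a minimal, self-contained sketch. The names (`BaseProvider`, `doComplete`, `EchoProvider`) are illustrative, not the package's actual internals:

```typescript
// Hypothetical sketch of the Template Method pattern described above.
interface Message { role: 'system' | 'user' | 'assistant'; content: string }
interface Params { messages: Message[] }

abstract class BaseProvider {
  // Template method: the fixed workflow shared by every provider.
  async complete(params: Params): Promise<string> {
    this.validate(params);                    // validation first
    try {
      return await this.doComplete(params);   // provider-specific step
    } catch (err) {
      throw this.normalizeError(err);         // error normalization
    }
  }

  protected validate(params: Params): void {
    if (params.messages.length === 0) throw new Error('messages must not be empty');
  }

  protected normalizeError(err: unknown): Error {
    return err instanceof Error ? err : new Error(String(err));
  }

  // Subclasses implement only the provider-specific call.
  protected abstract doComplete(params: Params): Promise<string>;
}

class EchoProvider extends BaseProvider {
  protected async doComplete(params: Params): Promise<string> {
    return `echo: ${params.messages[params.messages.length - 1].content}`;
  }
}
```

A new provider only needs to implement `doComplete`; validation and error handling come for free from the base class.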
## 🚀 Quick Start

```bash
npm install simple-ai-provider
# or
bun add simple-ai-provider
```

### Basic Usage
```typescript
import { ClaudeProvider, OpenAIProvider, GeminiProvider, OpenWebUIProvider } from 'simple-ai-provider';

// Claude
const claude = new ClaudeProvider({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  defaultModel: 'claude-3-5-sonnet-20241022'
});

// OpenAI
const openai = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  defaultModel: 'gpt-4o'
});

// Google Gemini
const gemini = new GeminiProvider({
  apiKey: process.env.GOOGLE_AI_API_KEY!,
  defaultModel: 'gemini-1.5-flash'
});

// OpenWebUI (local)
const openwebui = new OpenWebUIProvider({
  apiKey: 'ollama', // Often not required
  baseUrl: 'http://localhost:3000',
  defaultModel: 'llama2'
});

// Initialize and use any provider
await claude.initialize();
const response = await claude.complete({
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain TypeScript in one sentence.' }
  ],
  maxTokens: 100,
  temperature: 0.7
});

console.log(response.content);
```
## 🏭 Factory Functions

Create providers using factory functions for cleaner code:
```typescript
import { createProvider, createClaudeProvider, createOpenAIProvider, createGeminiProvider, createOpenWebUIProvider } from 'simple-ai-provider';

// Method 1: Specific factory functions
const claude = createClaudeProvider('your-key', { defaultModel: 'claude-3-5-sonnet-20241022' });
const openai = createOpenAIProvider('your-key', { defaultModel: 'gpt-4o' });
const gemini = createGeminiProvider('your-key', { defaultModel: 'gemini-1.5-flash' });
const openwebui = createOpenWebUIProvider({ apiKey: 'your-key', baseUrl: 'http://localhost:3000' });

// Method 2: Generic factory
const provider = createProvider('claude', { apiKey: 'your-key' });
```
### Provider Registry

For dynamic provider creation and registration:
```typescript
import { ProviderRegistry, ClaudeProvider } from 'simple-ai-provider';

// Register a custom provider
ProviderRegistry.register('my-claude', ClaudeProvider);

// Create provider dynamically
const provider = ProviderRegistry.create('my-claude', { apiKey: 'your-key' });

// Check available providers
const availableProviders = ProviderRegistry.getRegisteredProviders();
console.log(availableProviders); // ['claude', 'openai', 'gemini', 'openwebui', 'my-claude']
```
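For readers curious how a registry like this can be built, here is a self-contained sketch of the idea. `Registry` and `DummyProvider` are illustrative stand-ins, not the package's actual implementation:

```typescript
// Illustrative sketch of a constructor-based provider registry.
type Ctor<T> = new (config: { apiKey: string }) => T;

class Registry {
  private static providers = new Map<string, Ctor<object>>();

  static register(name: string, ctor: Ctor<object>): void {
    this.providers.set(name.toLowerCase(), ctor);
  }

  static create(name: string, config: { apiKey: string }): object {
    const ctor = this.providers.get(name.toLowerCase());
    if (!ctor) throw new Error(`Unknown provider: ${name}`);
    return new ctor(config);
  }

  static getRegisteredProviders(): string[] {
    return [...this.providers.keys()];
  }
}

// A hypothetical provider class used for demonstration only.
class DummyProvider {
  constructor(public config: { apiKey: string }) {}
}

Registry.register('dummy', DummyProvider);
```

Storing constructors in a `Map` keyed by lowercase name keeps lookup case-insensitive and makes registration a one-liner for new providers.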
## 🎨 Structured Response Types
Define custom response types for type-safe, structured AI outputs. The library automatically parses the AI's response into your desired type.
```typescript
import { createResponseType, createClaudeProvider } from 'simple-ai-provider';

// 1. Define your response type
interface ProductAnalysis {
  productName: string;
  priceRange: 'budget' | 'mid-range' | 'premium';
  pros: string[];
  cons: string[];
  overallRating: number; // 1-10 scale
  recommendation: 'buy' | 'consider' | 'avoid';
}

// 2. Create a ResponseType object
const productAnalysisType = createResponseType<ProductAnalysis>(
  'A comprehensive product analysis with pros, cons, rating, and recommendation'
);

// 3. Use with any provider
const claude = createClaudeProvider('your-key');
await claude.initialize();

const response = await claude.complete<ProductAnalysis>({
  messages: [
    { role: 'user', content: 'Analyze the iPhone 15 Pro from a consumer perspective.' }
  ],
  responseType: productAnalysisType,
  maxTokens: 800
});

// 4. Get the fully typed and parsed response
const analysis = response.content;
console.log(`Product: ${analysis.productName}`);
console.log(`Recommendation: ${analysis.recommendation}`);
console.log(`Rating: ${analysis.overallRating}/10`);
```
### Key Benefits
- Automatic Parsing: The AI's JSON response is automatically parsed into your specified type.
- Type Safety: Get fully typed responses from AI providers with IntelliSense.
- Automatic Prompting: System prompts are automatically generated to guide the AI.
- Validation: Built-in response validation and parsing logic.
- Consistency: Ensures AI outputs match your expected format.
- Developer Experience: Catch errors at compile-time instead of runtime.
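The parsing step behind these benefits can be sketched in a few lines. This is a simplified stand-in for the library's parsing logic (the real `parseAndValidateResponseType` may differ); `parseStructured` and its key-presence check are assumptions for illustration:

```typescript
// Sketch: parse an AI's JSON reply into a typed object and check required fields.
function parseStructured<T extends object>(raw: string, requiredKeys: (keyof T)[]): T {
  // Models often wrap JSON in markdown fences; strip them before parsing.
  const cleaned = raw
    .replace(/^```(?:json)?\s*/m, '')
    .replace(/```\s*$/m, '')
    .trim();
  const parsed = JSON.parse(cleaned) as T;
  for (const key of requiredKeys) {
    if (!(key in parsed)) throw new Error(`Missing required field: ${String(key)}`);
  }
  return parsed;
}
```

Failing fast on a missing field is what turns "the model replied with something JSON-ish" into a response your typed code can trust.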
### Streaming with Response Types
You can also use response types with streaming. The raw stream provides real-time text, and you can parse the final string once the stream is complete.
```typescript
import { parseAndValidateResponseType } from 'simple-ai-provider';

const stream = claude.stream({
  messages: [{ role: 'user', content: 'Analyze the Tesla Model 3.' }],
  responseType: productAnalysisType,
  maxTokens: 600
});

let fullResponse = '';
for await (const chunk of stream) {
  if (!chunk.isComplete) {
    process.stdout.write(chunk.content);
    fullResponse += chunk.content;
  } else {
    console.log('\n\nStream complete!');
    // Validate the complete streamed response
    try {
      const analysis = parseAndValidateResponseType(fullResponse, productAnalysisType);
      console.log('Validation successful!');
      console.log(`Product: ${analysis.productName}`);
    } catch (e) {
      console.error('Validation failed:', (e as Error).message);
    }
  }
}
```
## 📝 Environment Variables
Set up your API keys:
```bash
# Required for respective providers
export ANTHROPIC_API_KEY="your-claude-api-key"
export OPENAI_API_KEY="your-openai-api-key"
export GOOGLE_AI_API_KEY="your-gemini-api-key"

# OpenWebUI Bearer Token (get from Settings > Account in OpenWebUI)
export OPENWEBUI_API_KEY="your-bearer-token"
```
## 🔧 Provider-Specific Configuration

### Claude Configuration
```typescript
const claude = new ClaudeProvider({
  apiKey: 'your-api-key',
  defaultModel: 'claude-3-5-sonnet-20241022',
  version: '2023-06-01',
  maxRetries: 3,
  timeout: 30000
});
```
### OpenAI Configuration
```typescript
const openai = new OpenAIProvider({
  apiKey: 'your-api-key',
  defaultModel: 'gpt-4o',
  organization: 'your-org-id',
  project: 'your-project-id',
  maxRetries: 3,
  timeout: 30000
});
```
### Gemini Configuration
```typescript
const gemini = new GeminiProvider({
  apiKey: 'your-api-key',
  defaultModel: 'gemini-1.5-flash',
  safetySettings: [
    {
      category: 'HARM_CATEGORY_HARASSMENT',
      threshold: 'BLOCK_MEDIUM_AND_ABOVE'
    }
  ],
  generationConfig: {
    temperature: 0.7,
    topP: 0.8,
    topK: 40,
    maxOutputTokens: 1000
  }
});
```
### OpenWebUI Configuration
```typescript
const openwebui = new OpenWebUIProvider({
  apiKey: 'your-bearer-token', // Get from OpenWebUI Settings > Account
  baseUrl: 'http://localhost:3000', // Your OpenWebUI instance
  defaultModel: 'llama3.1',
  useOllamaProxy: false, // Use OpenWebUI's chat API (recommended)
  // useOllamaProxy: true, // Use Ollama API proxy for direct model access
  dangerouslyAllowInsecureConnections: true, // For local HTTPS
  timeout: 60000, // Longer timeout for local inference
  maxRetries: 2
});
```
## 🌊 Streaming Support
All providers support real-time streaming:
```typescript
const stream = provider.stream({
  messages: [{ role: 'user', content: 'Count from 1 to 10' }],
  maxTokens: 100
});

for await (const chunk of stream) {
  if (!chunk.isComplete) {
    process.stdout.write(chunk.content);
  } else {
    console.log('\nDone! Usage:', chunk.usage);
  }
}
```
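If you want to experiment with this consumption pattern without hitting a real API, the stream can be simulated with an async generator. `mockStream` and `collect` are illustrative helpers, not part of the package:

```typescript
// Self-contained sketch of the streaming loop above, using a mock stream.
interface Chunk { content: string; isComplete: boolean }

// Yields one word at a time, then a final completion marker.
async function* mockStream(text: string): AsyncGenerator<Chunk> {
  for (const word of text.split(' ')) {
    yield { content: word + ' ', isComplete: false };
  }
  yield { content: '', isComplete: true };
}

// Accumulates chunk content until the stream signals completion.
async function collect(stream: AsyncGenerator<Chunk>): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    if (!chunk.isComplete) full += chunk.content;
  }
  return full.trim();
}
```

This is also a handy shape for unit-testing your own streaming consumers.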
## 🔀 Multi-Provider Usage
Use multiple providers seamlessly:
```typescript
const providers = {
  claude: new ClaudeProvider({ apiKey: process.env.ANTHROPIC_API_KEY! }),
  openai: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  gemini: new GeminiProvider({ apiKey: process.env.GOOGLE_AI_API_KEY! }),
  openwebui: new OpenWebUIProvider({
    apiKey: 'ollama',
    baseUrl: 'http://localhost:3000'
  })
};

// Initialize all providers
await Promise.all(Object.values(providers).map(p => p.initialize()));

// Use the same interface for all
const prompt = {
  messages: [{ role: 'user', content: 'Hello!' }],
  maxTokens: 50
};

for (const [name, provider] of Object.entries(providers)) {
  try {
    const response = await provider.complete(prompt);
    console.log(`${name}: ${response.content}`);
  } catch (error) {
    console.log(`${name} failed: ${(error as Error).message}`);
  }
}
```
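A common extension of this pattern is a fallback chain: try providers in priority order and return the first success. The following is a hedged sketch; `Completable` is a stand-in interface for whatever provider type you use:

```typescript
// Illustrative fallback helper built on a shared complete() interface.
interface Completable {
  complete(prompt: unknown): Promise<{ content: string }>;
}

async function completeWithFallback(
  providers: Completable[],
  prompt: unknown
): Promise<{ content: string }> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider.complete(prompt); // first success wins
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

Ordering the array by cost or latency gives you cheap-first or fast-first routing with graceful degradation.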
## 📊 Provider Comparison
| Provider | Context Length | Streaming | Vision | Function Calling | Local Execution | Best For |
|---|---|---|---|---|---|---|
| Claude | 200K tokens | ✅ | ✅ | ✅ | ❌ | Reasoning, Analysis, Code Review |
| OpenAI | 128K tokens | ✅ | ✅ | ✅ | ❌ | General Purpose, Function Calling |
| Gemini | 1M tokens | ✅ | ✅ | ✅ | ❌ | Large Documents, Multimodal |
| OpenWebUI | 32K tokens | ✅ | ❌ | ❌ | ✅ | Privacy, Custom Models, Local |
### Detailed Capabilities

Each provider offers unique capabilities:

#### Claude (Anthropic)
- Advanced reasoning and analysis
- Excellent code review capabilities
- Strong safety features
- System message support

#### OpenAI
- Broad model selection
- Function calling support
- JSON mode for structured outputs
- Vision capabilities

#### Gemini (Google)
- Largest context window (1M tokens)
- Multimodal capabilities
- Cost-effective pricing
- Strong multilingual support

#### OpenWebUI
- Complete privacy (local execution)
- Custom model support
- No API costs
- RAG (Retrieval Augmented Generation) support
## 🎯 Model Selection

### Getting Available Models

Instead of maintaining a static list, you can programmatically get available models:
```typescript
// Get provider information including available models
const info = provider.getInfo();
console.log('Available models:', info.models);

// Example output:
// Claude:    ['claude-3-5-sonnet-20241022', 'claude-3-5-haiku-20241022', ...]
// OpenAI:    ['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', ...]
// Gemini:    ['gemini-1.5-flash', 'gemini-1.5-pro', ...]
// OpenWebUI: ['llama3.1:latest', 'mistral:latest', ...]
```
### Model Selection Guidelines

**For Claude (Anthropic):**
- Check Anthropic's model documentation for the latest models

**For OpenAI:**
- Check OpenAI's model documentation for the latest models

**For Gemini (Google):**
- Check Google AI's model documentation for the latest models

**For OpenWebUI:**
- Models depend on your local installation
- Check your OpenWebUI instance for available models
## 🚨 Error Handling
The package provides comprehensive, standardized error handling with detailed error types:
```typescript
import { AIProviderError, AIErrorType } from 'simple-ai-provider';

try {
  const response = await provider.complete({
    messages: [{ role: 'user', content: 'Hello' }]
  });
} catch (error) {
  if (error instanceof AIProviderError) {
    switch (error.type) {
      case AIErrorType.AUTHENTICATION:
        console.log('Invalid API key or authentication failed');
        break;
      case AIErrorType.RATE_LIMIT:
        console.log('Rate limited, try again later');
        break;
      case AIErrorType.MODEL_NOT_FOUND:
        console.log('Model not available or not found');
        break;
      case AIErrorType.INVALID_REQUEST:
        console.log('Invalid request parameters');
        break;
      case AIErrorType.NETWORK:
        console.log('Network/connection issue');
        break;
      case AIErrorType.TIMEOUT:
        console.log('Request timed out');
        break;
      case AIErrorType.UNKNOWN:
        console.log('Unknown error:', error.message);
        break;
      default:
        console.log('Error:', error.message);
    }

    // Access additional error details
    console.log('Status Code:', error.statusCode);
    console.log('Original Error:', error.originalError);
  }
}
```
### Error Types
- AUTHENTICATION: Invalid API keys or authentication failures
- RATE_LIMIT: API rate limits exceeded
- INVALID_REQUEST: Malformed requests or invalid parameters
- MODEL_NOT_FOUND: Requested model is not available
- NETWORK: Connection issues or server errors
- TIMEOUT: Request timeout exceeded
- UNKNOWN: Unclassified errors
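As an illustration of how such categories typically map onto HTTP status codes, here is a hedged sketch. The `ErrorType` enum and `classifyStatus` function below are hypothetical; the package's internal mapping may differ:

```typescript
// Sketch: classify an HTTP status code into a normalized error category.
enum ErrorType {
  AUTHENTICATION = 'authentication',
  RATE_LIMIT = 'rate_limit',
  INVALID_REQUEST = 'invalid_request',
  MODEL_NOT_FOUND = 'model_not_found',
  NETWORK = 'network',
  UNKNOWN = 'unknown',
}

function classifyStatus(status: number): ErrorType {
  if (status === 401 || status === 403) return ErrorType.AUTHENTICATION;
  if (status === 404) return ErrorType.MODEL_NOT_FOUND;   // often a bad model name
  if (status === 429) return ErrorType.RATE_LIMIT;
  if (status >= 400 && status < 500) return ErrorType.INVALID_REQUEST;
  if (status >= 500) return ErrorType.NETWORK;            // server-side failure
  return ErrorType.UNKNOWN;
}
```

Centralizing this mapping is what lets one `switch` statement handle errors from four different vendor APIs.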
## 🔧 Advanced Usage

### Custom Base URLs
```typescript
// OpenAI-compatible endpoint
const customOpenAI = new OpenAIProvider({
  apiKey: 'your-key',
  baseUrl: 'https://api.custom-provider.com/v1'
});

// Custom OpenWebUI instance
const remoteOpenWebUI = new OpenWebUIProvider({
  apiKey: 'your-key',
  baseUrl: 'https://my-openwebui.example.com',
  apiPath: '/api/v1'
});
```
### Provider Information
```typescript
const info = provider.getInfo();

console.log(`Provider: ${info.name} v${info.version}`);
console.log(`Models: ${info.models.join(', ')}`);
console.log(`Max Context: ${info.maxContextLength} tokens`);
console.log(`Supports Streaming: ${info.supportsStreaming}`);
console.log('Capabilities:', info.capabilities);
```
### OpenWebUI-Specific Features
OpenWebUI offers unique advantages for local AI deployment:
```typescript
const openwebui = new OpenWebUIProvider({
  apiKey: 'your-bearer-token', // Get from OpenWebUI Settings > Account
  baseUrl: 'http://localhost:3000',
  defaultModel: 'llama3.1',
  useOllamaProxy: false, // Use chat completions API (recommended)
  // Longer timeout for local inference
  timeout: 120000,
  // Allow self-signed certificates for local development
  dangerouslyAllowInsecureConnections: true
});

// Test connection and list available models
try {
  await openwebui.initialize();
  console.log('Connected to local OpenWebUI instance');

  // Use either chat completions or Ollama proxy
  const response = await openwebui.complete({
    messages: [{ role: 'user', content: 'Hello!' }],
    maxTokens: 100
  });
} catch (error) {
  console.log('OpenWebUI not available:', (error as Error).message);
  // Gracefully fall back to cloud providers
}
```
**OpenWebUI API Modes:**
- Chat Completions (`useOllamaProxy: false`): OpenWebUI's native API with full features
- Ollama Proxy (`useOllamaProxy: true`): Direct access to the Ollama API for raw model interaction
## 📦 TypeScript Support
Full TypeScript support with comprehensive type definitions:
```typescript
import type {
  CompletionParams,
  CompletionResponse,
  CompletionChunk,
  ProviderInfo,
  ClaudeConfig,
  OpenAIConfig,
  GeminiConfig,
  OpenWebUIConfig,
  AIMessage,
  ResponseType,
  TokenUsage
} from 'simple-ai-provider';

// Type-safe configuration
const config: ClaudeConfig = {
  apiKey: 'your-key',
  defaultModel: 'claude-3-5-sonnet-20241022',
  // TypeScript will validate all options
};

// Type-safe responses
const response: CompletionResponse = await provider.complete(params);

// Type-safe messages with metadata
const messages: AIMessage[] = [
  {
    role: 'user',
    content: 'Hello',
    metadata: { timestamp: Date.now() }
  }
];

// Type-safe response types
interface UserProfile {
  name: string;
  age: number;
}

const responseType: ResponseType<UserProfile> = createResponseType(
  'A user profile with name and age',
  { name: 'John', age: 30 }
);
```
### Advanced Type Features
- Generic Response Types: Type-safe structured outputs
- Message Metadata: Support for custom message properties
- Provider-Specific Configs: Type-safe configuration for each provider
- Error Types: Comprehensive error type definitions
- Factory Functions: Type-safe provider creation
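To show how a generic response type can carry its payload type through inference, here is a minimal sketch. `MyResponseType` and `makeResponseType` are simplified stand-ins, not the package's exact definitions:

```typescript
// Sketch: a generic wrapper whose type parameter is inferred from an example value.
interface MyResponseType<T> {
  description: string;
  example?: T;
}

function makeResponseType<T>(description: string, example?: T): MyResponseType<T> {
  return { description, example };
}

interface UserProfile { name: string; age: number }

// T is inferred as { name: string; age: number } from the example argument,
// so downstream code gets full IntelliSense on parsed responses.
const userType = makeResponseType('A user profile', { name: 'Ada', age: 36 });
```

Because the compiler pins `T` at the call site, a typo like `userType.example?.nme` is caught at compile time rather than at runtime.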
## 🧪 Testing
The package includes comprehensive tests using Bun test framework:
```bash
# Run all tests
bun test

# Run tests for specific provider
bun test tests/claude.test.ts
bun test tests/openai.test.ts
bun test tests/gemini.test.ts
bun test tests/openwebui.test.ts

# Run tests with coverage
bun test --coverage
```
### Test Coverage
- ✅ Provider initialization and configuration
- ✅ Message validation and conversion
- ✅ Error handling and normalization
- ✅ Response formatting
- ✅ Streaming functionality
- ✅ Structured response types
- ✅ Factory functions
- ✅ Provider registry
## 🛠️ Development

### Prerequisites

- Node.js 18.0.0 or higher
- Bun (recommended) or npm/yarn
- TypeScript 5.0 or higher

### Setup
```bash
# Clone the repository
git clone https://gitea.jleibl.net/jleibl/simple-ai-provider.git
cd simple-ai-provider

# Install dependencies
bun install

# Build the project
bun run build

# Run tests
bun test

# Run examples
bun run examples/basic-usage.ts
bun run examples/structured-response-types.ts
bun run examples/multi-provider.ts
```
### Project Structure
```
src/
├── index.ts                 # Main entry point
├── types/
│   └── index.ts             # Type definitions and utilities
├── providers/
│   ├── base.ts              # Abstract base provider
│   ├── claude.ts            # Claude provider implementation
│   ├── openai.ts            # OpenAI provider implementation
│   ├── gemini.ts            # Gemini provider implementation
│   ├── openwebui.ts         # OpenWebUI provider implementation
│   └── index.ts             # Provider exports
└── utils/
    └── factory.ts           # Factory functions and registry

examples/
├── basic-usage.ts                 # Basic usage examples
├── structured-response-types.ts   # Structured output examples
└── multi-provider.ts              # Multi-provider examples

tests/
├── claude.test.ts           # Claude provider tests
├── openai.test.ts           # OpenAI provider tests
├── gemini.test.ts           # Gemini provider tests
└── openwebui.test.ts        # OpenWebUI provider tests
```
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
### Development Guidelines
- Code Style: Follow the existing TypeScript patterns
- Testing: Add tests for new features
- Documentation: Update README for new features
- Type Safety: Maintain comprehensive type definitions
- Error Handling: Use standardized error types
## 📄 License
MIT License - see the LICENSE file for details.
⭐ Star this repo if you find it helpful!