# Simple AI Provider

A professional, type-safe TypeScript package that provides a unified interface for multiple AI providers. Currently supports **Claude (Anthropic)**, **OpenAI**, **Google Gemini**, and **OpenWebUI** with a consistent API across all providers.

## ✨ Features

- 🔗 **Unified Interface**: Same API for Claude, OpenAI, Gemini, and OpenWebUI
- 🎯 **Type Safety**: Full TypeScript support with comprehensive type definitions
- 🚀 **Streaming Support**: Real-time response streaming for all providers
- 🛡️ **Error Handling**: Standardized error types with provider-specific details
- 🏭 **Factory Pattern**: Easy provider creation and management
- 🔧 **Configurable**: Extensive configuration options for each provider
- 📦 **Lightweight**: Minimal external dependencies
- 🌐 **Local Support**: OpenWebUI integration for local/private AI models
- 🎨 **Structured Output**: Define custom response types for type-safe AI outputs
- 🏗️ **Provider Registry**: Dynamic provider registration and creation system
- ✅ **Comprehensive Testing**: Full test coverage with the Bun test framework
- 🔍 **Advanced Validation**: Input validation with detailed error messages

## 🏗️ Architecture

The library is built on solid design principles:

- **Template Method Pattern**: The base provider defines the workflow; subclasses implement the specifics
- **Factory Pattern**: Clean provider creation and management
- **Strategy Pattern**: Unified interface across different AI providers
- **Type Safety**: Comprehensive TypeScript support throughout
- **Error Normalization**: Consistent error handling across all providers
- **Validation First**: Input validation before processing
- **Extensibility**: Easy to add new providers via the registry system

## 🚀 Quick Start

```bash
npm install simple-ai-provider
# or
bun add simple-ai-provider
```

### Basic Usage

```typescript
import {
  ClaudeProvider,
  OpenAIProvider,
  GeminiProvider,
  OpenWebUIProvider
} from
'simple-ai-provider';

// Claude
const claude = new ClaudeProvider({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  defaultModel: 'claude-3-5-sonnet-20241022'
});

// OpenAI
const openai = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  defaultModel: 'gpt-4o'
});

// Google Gemini
const gemini = new GeminiProvider({
  apiKey: process.env.GOOGLE_AI_API_KEY!,
  defaultModel: 'gemini-1.5-flash'
});

// OpenWebUI (local)
const openwebui = new OpenWebUIProvider({
  apiKey: 'ollama', // Often not required
  baseUrl: 'http://localhost:3000',
  defaultModel: 'llama2'
});

// Initialize and use any provider
await claude.initialize();

const response = await claude.complete({
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain TypeScript in one sentence.' }
  ],
  maxTokens: 100,
  temperature: 0.7
});

console.log(response.content);
```

## 🏭 Factory Functions

Create providers using factory functions for cleaner code:

```typescript
import {
  createProvider,
  createClaudeProvider,
  createOpenAIProvider,
  createGeminiProvider,
  createOpenWebUIProvider
} from 'simple-ai-provider';

// Method 1: Specific factory functions
const claude = createClaudeProvider('your-key', {
  defaultModel: 'claude-3-5-sonnet-20241022'
});

const openai = createOpenAIProvider('your-key', {
  defaultModel: 'gpt-4o'
});

const gemini = createGeminiProvider('your-key', {
  defaultModel: 'gemini-1.5-flash'
});

const openwebui = createOpenWebUIProvider({
  apiKey: 'your-key',
  baseUrl: 'http://localhost:3000'
});

// Method 2: Generic factory
const provider = createProvider('claude', { apiKey: 'your-key' });
```

### Provider Registry

For dynamic provider creation and registration:

```typescript
import { ProviderRegistry, ClaudeProvider } from 'simple-ai-provider';

// Register a custom provider
ProviderRegistry.register('my-claude', ClaudeProvider);

// Create a provider dynamically
const provider = ProviderRegistry.create('my-claude', { apiKey: 'your-key' });

// Check available providers
const availableProviders = ProviderRegistry.getRegisteredProviders();
console.log(availableProviders); // ['claude', 'openai', 'gemini', 'openwebui', 'my-claude']
```

## 🎨 Structured Response Types

Define custom response types for type-safe, structured AI outputs. The library automatically parses the AI's response into your desired type.

```typescript
import { createResponseType, createClaudeProvider } from 'simple-ai-provider';

// 1. Define your response type
interface ProductAnalysis {
  productName: string;
  priceRange: 'budget' | 'mid-range' | 'premium';
  pros: string[];
  cons: string[];
  overallRating: number; // 1-10 scale
  recommendation: 'buy' | 'consider' | 'avoid';
}

// 2. Create a ResponseType object
const productAnalysisType = createResponseType<ProductAnalysis>(
  'A comprehensive product analysis with pros, cons, rating, and recommendation'
);

// 3. Use with any provider
const claude = createClaudeProvider('your-key');
await claude.initialize();

const response = await claude.complete({
  messages: [
    { role: 'user', content: 'Analyze the iPhone 15 Pro from a consumer perspective.' }
  ],
  responseType: productAnalysisType,
  maxTokens: 800
});

// 4. Get the fully typed and parsed response
const analysis = response.content;
console.log(`Product: ${analysis.productName}`);
console.log(`Recommendation: ${analysis.recommendation}`);
console.log(`Rating: ${analysis.overallRating}/10`);
```

### Key Benefits

- **Automatic Parsing**: The AI's JSON response is automatically parsed into your specified type.
- **Type Safety**: Get fully typed responses from AI providers with IntelliSense.
- **Automatic Prompting**: System prompts are automatically generated to guide the AI.
- **Validation**: Built-in response validation and parsing logic.
- **Consistency**: Ensures AI outputs match your expected format.
- **Developer Experience**: Catch errors at compile time instead of runtime.
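The "Automatic Prompting" behavior can be pictured with a small, library-independent sketch. `buildJsonInstruction` is a hypothetical helper written here for illustration only; it is not part of this package's API, and the prompt text the library actually generates may differ:

```typescript
// Illustrative only: how a response-type description can be turned into a
// system instruction that asks the model for strict JSON output.
function buildJsonInstruction(description: string): string {
  return [
    'Respond ONLY with a single valid JSON object and no surrounding prose.',
    `The object must match this description: ${description}`
  ].join('\n');
}

const instruction = buildJsonInstruction(
  'A comprehensive product analysis with pros, cons, rating, and recommendation'
);
console.log(instruction);
```

The point is that the description you pass to `createResponseType` drives the generated system prompt, so you never write JSON-formatting instructions by hand.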
### Streaming with Response Types

You can also use response types with streaming. The raw stream provides real-time text, and you can parse the final string once the stream is complete.

```typescript
import { parseAndValidateResponseType } from 'simple-ai-provider';

const stream = claude.stream({
  messages: [{ role: 'user', content: 'Analyze the Tesla Model 3.' }],
  responseType: productAnalysisType,
  maxTokens: 600
});

let fullResponse = '';

for await (const chunk of stream) {
  if (!chunk.isComplete) {
    process.stdout.write(chunk.content);
    fullResponse += chunk.content;
  } else {
    console.log('\n\nStream complete!');

    // Validate the complete streamed response
    try {
      const analysis = parseAndValidateResponseType(fullResponse, productAnalysisType);
      console.log('Validation successful!');
      console.log(`Product: ${analysis.productName}`);
    } catch (e) {
      console.error('Validation failed:', (e as Error).message);
    }
  }
}
```

## 📝 Environment Variables

Set up your API keys:

```bash
# Required for the respective providers
export ANTHROPIC_API_KEY="your-claude-api-key"
export OPENAI_API_KEY="your-openai-api-key"
export GOOGLE_AI_API_KEY="your-gemini-api-key"

# OpenWebUI Bearer Token (get it from Settings > Account in OpenWebUI)
export OPENWEBUI_API_KEY="your-bearer-token"
```

## 🔧 Provider-Specific Configuration

### Claude Configuration

```typescript
const claude = new ClaudeProvider({
  apiKey: 'your-api-key',
  defaultModel: 'claude-3-5-sonnet-20241022',
  version: '2023-06-01',
  maxRetries: 3,
  timeout: 30000
});
```

### OpenAI Configuration

```typescript
const openai = new OpenAIProvider({
  apiKey: 'your-api-key',
  defaultModel: 'gpt-4o',
  organization: 'your-org-id',
  project: 'your-project-id',
  maxRetries: 3,
  timeout: 30000
});
```

### Gemini Configuration

```typescript
const gemini = new GeminiProvider({
  apiKey: 'your-api-key',
  defaultModel: 'gemini-1.5-flash',
  safetySettings: [
    {
      category: 'HARM_CATEGORY_HARASSMENT',
      threshold: 'BLOCK_MEDIUM_AND_ABOVE'
    }
  ],
  generationConfig: {
    temperature: 0.7,
    topP: 0.8,
    topK: 40,
    maxOutputTokens: 1000
  }
});
```

### OpenWebUI Configuration

```typescript
const openwebui = new OpenWebUIProvider({
  apiKey: 'your-bearer-token', // Get from OpenWebUI Settings > Account
  baseUrl: 'http://localhost:3000', // Your OpenWebUI instance
  defaultModel: 'llama3.1',
  useOllamaProxy: false, // Use OpenWebUI's chat API (recommended)
  // useOllamaProxy: true, // Use the Ollama API proxy for direct model access
  dangerouslyAllowInsecureConnections: true, // For local HTTPS
  timeout: 60000, // Longer timeout for local inference
  maxRetries: 2
});
```

## 🌊 Streaming Support

All providers support real-time streaming:

```typescript
const stream = provider.stream({
  messages: [{ role: 'user', content: 'Count from 1 to 10' }],
  maxTokens: 100
});

for await (const chunk of stream) {
  if (!chunk.isComplete) {
    process.stdout.write(chunk.content);
  } else {
    console.log('\nDone! Usage:', chunk.usage);
  }
}
```

## 🔀 Multi-Provider Usage

Use multiple providers seamlessly:

```typescript
const providers = {
  claude: new ClaudeProvider({ apiKey: process.env.ANTHROPIC_API_KEY! }),
  openai: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  gemini: new GeminiProvider({ apiKey: process.env.GOOGLE_AI_API_KEY! }),
  openwebui: new OpenWebUIProvider({ apiKey: 'ollama', baseUrl: 'http://localhost:3000' })
};

// Initialize all providers
await Promise.all(Object.values(providers).map((p) => p.initialize()));

// Use the same interface for all
const prompt = {
  messages: [{ role: 'user', content: 'Hello!'
  }],
  maxTokens: 50
};

for (const [name, provider] of Object.entries(providers)) {
  try {
    const response = await provider.complete(prompt);
    console.log(`${name}: ${response.content}`);
  } catch (error) {
    console.log(`${name} failed: ${(error as Error).message}`);
  }
}
```

## 📊 Provider Comparison

| Provider | Context Length | Streaming | Vision | Function Calling | Local Execution | Best For |
|----------|----------------|-----------|--------|------------------|-----------------|----------|
| **Claude** | 200K tokens | ✅ | ✅ | ✅ | ❌ | Reasoning, Analysis, Code Review |
| **OpenAI** | 128K tokens | ✅ | ✅ | ✅ | ❌ | General Purpose, Function Calling |
| **Gemini** | 1M tokens | ✅ | ✅ | ✅ | ❌ | Large Documents, Multimodal |
| **OpenWebUI** | 32K tokens | ✅ | ❌ | ❌ | ✅ | Privacy, Custom Models, Local |

### Detailed Capabilities

Each provider offers unique capabilities:

#### Claude (Anthropic)

- Advanced reasoning and analysis
- Excellent code review capabilities
- Strong safety features
- System message support

#### OpenAI

- Broad model selection
- Function calling support
- JSON mode for structured outputs
- Vision capabilities

#### Gemini (Google)

- Largest context window (1M tokens)
- Multimodal capabilities
- Cost-effective pricing
- Strong multilingual support

#### OpenWebUI

- Complete privacy (local execution)
- Custom model support
- No API costs
- RAG (Retrieval-Augmented Generation) support

## 🎯 Model Selection

### Getting Available Models

Instead of maintaining a static list, you can programmatically get the available models:

```typescript
// Get provider information, including available models
const info = provider.getInfo();
console.log('Available models:', info.models);

// Example output:
// Claude: ['claude-3-5-sonnet-20241022', 'claude-3-5-haiku-20241022', ...]
// OpenAI: ['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', ...]
// Gemini: ['gemini-1.5-flash', 'gemini-1.5-pro', ...]
// OpenWebUI: ['llama3.1:latest', 'mistral:latest', ...]
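// A minimal selection sketch over such a list. The values below are example
// placeholders; in practice you would search info.models from the call above.
const models = ['gemini-1.5-flash', 'gemini-1.5-pro'];
const preferred = models.find((m) => m.endsWith('flash')) ?? models[0];
console.log('Chosen model:', preferred); // Chosen model: gemini-1.5-flash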
```

### Model Selection Guidelines

**For Claude (Anthropic):**

- Check [Anthropic's model documentation](https://docs.anthropic.com/claude/docs/models-overview) for the latest models

**For OpenAI:**

- Check [OpenAI's model documentation](https://platform.openai.com/docs/models) for the latest models

**For Gemini (Google):**

- Check [Google AI's model documentation](https://ai.google.dev/docs/models) for the latest models

**For OpenWebUI:**

- Models depend on your local installation
- Check your OpenWebUI instance for available models

## 🚨 Error Handling

The package provides comprehensive, standardized error handling with detailed error types:

```typescript
import { AIProviderError, AIErrorType } from 'simple-ai-provider';

try {
  const response = await provider.complete({
    messages: [{ role: 'user', content: 'Hello' }]
  });
} catch (error) {
  if (error instanceof AIProviderError) {
    switch (error.type) {
      case AIErrorType.AUTHENTICATION:
        console.log('Invalid API key or authentication failed');
        break;
      case AIErrorType.RATE_LIMIT:
        console.log('Rate limited, try again later');
        break;
      case AIErrorType.MODEL_NOT_FOUND:
        console.log('Model not available or not found');
        break;
      case AIErrorType.INVALID_REQUEST:
        console.log('Invalid request parameters');
        break;
      case AIErrorType.NETWORK:
        console.log('Network/connection issue');
        break;
      case AIErrorType.TIMEOUT:
        console.log('Request timed out');
        break;
      case AIErrorType.UNKNOWN:
        console.log('Unknown error:', error.message);
        break;
      default:
        console.log('Error:', error.message);
    }

    // Access additional error details
    console.log('Status Code:', error.statusCode);
    console.log('Original Error:', error.originalError);
  }
}
```

### Error Types

- **AUTHENTICATION**: Invalid API keys or authentication failures
- **RATE_LIMIT**: API rate limits exceeded
- **INVALID_REQUEST**: Malformed requests or invalid parameters
- **MODEL_NOT_FOUND**: Requested model is not available
- **NETWORK**: Connection issues or server errors
- **TIMEOUT**: Request timeout exceeded
- **UNKNOWN**: Unclassified errors

## 🔧 Advanced Usage

### Custom Base URLs

```typescript
// OpenAI-compatible endpoint
const customOpenAI = new OpenAIProvider({
  apiKey: 'your-key',
  baseUrl: 'https://api.custom-provider.com/v1'
});

// Custom OpenWebUI instance
const remoteOpenWebUI = new OpenWebUIProvider({
  apiKey: 'your-key',
  baseUrl: 'https://my-openwebui.example.com',
  apiPath: '/api/v1'
});
```

### Provider Information

```typescript
const info = provider.getInfo();
console.log(`Provider: ${info.name} v${info.version}`);
console.log(`Models: ${info.models.join(', ')}`);
console.log(`Max Context: ${info.maxContextLength} tokens`);
console.log(`Supports Streaming: ${info.supportsStreaming}`);
console.log('Capabilities:', info.capabilities);
```

### OpenWebUI-Specific Features

OpenWebUI offers unique advantages for local AI deployment:

```typescript
const openwebui = new OpenWebUIProvider({
  apiKey: 'your-bearer-token', // Get from OpenWebUI Settings > Account
  baseUrl: 'http://localhost:3000',
  defaultModel: 'llama3.1',
  useOllamaProxy: false, // Use the chat completions API (recommended)
  // Longer timeout for local inference
  timeout: 120000,
  // Allow self-signed certificates for local development
  dangerouslyAllowInsecureConnections: true
});

// Test the connection and list available models
try {
  await openwebui.initialize();
  console.log('Connected to local OpenWebUI instance');

  // Use either chat completions or the Ollama proxy
  const response = await openwebui.complete({
    messages: [{ role: 'user', content: 'Hello!'
    }],
    maxTokens: 100
  });
} catch (error) {
  console.log('OpenWebUI not available:', (error as Error).message);
  // Gracefully fall back to cloud providers
}
```

**OpenWebUI API Modes:**

- **Chat Completions** (`useOllamaProxy: false`): OpenWebUI's native API with full features
- **Ollama Proxy** (`useOllamaProxy: true`): Direct access to the Ollama API for raw model interaction

## 📦 TypeScript Support

Full TypeScript support with comprehensive type definitions:

```typescript
import { createResponseType } from 'simple-ai-provider';
import type {
  CompletionParams,
  CompletionResponse,
  CompletionChunk,
  ProviderInfo,
  ClaudeConfig,
  OpenAIConfig,
  GeminiConfig,
  OpenWebUIConfig,
  AIMessage,
  ResponseType,
  TokenUsage
} from 'simple-ai-provider';

// Type-safe configuration
const config: ClaudeConfig = {
  apiKey: 'your-key',
  defaultModel: 'claude-3-5-sonnet-20241022'
  // TypeScript will validate all options
};

// Type-safe responses
const response: CompletionResponse = await provider.complete(params);

// Type-safe messages with metadata
const messages: AIMessage[] = [
  {
    role: 'user',
    content: 'Hello',
    metadata: { timestamp: Date.now() }
  }
];

// Type-safe response types
interface UserProfile {
  name: string;
  age: number;
}

const responseType: ResponseType<UserProfile> = createResponseType(
  'A user profile with name and age',
  { name: 'John', age: 30 }
);
```

### Advanced Type Features

- **Generic Response Types**: Type-safe structured outputs
- **Message Metadata**: Support for custom message properties
- **Provider-Specific Configs**: Type-safe configuration for each provider
- **Error Types**: Comprehensive error type definitions
- **Factory Functions**: Type-safe provider creation

## 🧪 Testing

The package includes comprehensive tests using the Bun test framework:

```bash
# Run all tests
bun test

# Run tests for a specific provider
bun test tests/claude.test.ts
bun test tests/openai.test.ts
bun test tests/gemini.test.ts
bun test tests/openwebui.test.ts

# Run tests with coverage
bun test --coverage
```

### Test Coverage

- ✅ Provider initialization and configuration
- ✅ Message validation and conversion
- ✅ Error handling and normalization
- ✅ Response formatting
- ✅ Streaming functionality
- ✅ Structured response types
- ✅ Factory functions
- ✅ Provider registry

## 🛠️ Development

### Prerequisites

- Node.js 18.0.0 or higher
- Bun (recommended) or npm/yarn
- TypeScript 5.0 or higher

### Setup

```bash
# Clone the repository
git clone https://gitea.jleibl.net/jleibl/simple-ai-provider.git
cd simple-ai-provider

# Install dependencies
bun install

# Build the project
bun run build

# Run tests
bun test

# Run examples
bun run examples/basic-usage.ts
bun run examples/structured-response-types.ts
bun run examples/multi-provider.ts
```

### Project Structure

```text
src/
├── index.ts              # Main entry point
├── types/
│   └── index.ts          # Type definitions and utilities
├── providers/
│   ├── base.ts           # Abstract base provider
│   ├── claude.ts         # Claude provider implementation
│   ├── openai.ts         # OpenAI provider implementation
│   ├── gemini.ts         # Gemini provider implementation
│   ├── openwebui.ts      # OpenWebUI provider implementation
│   └── index.ts          # Provider exports
└── utils/
    └── factory.ts        # Factory functions and registry

examples/
├── basic-usage.ts                 # Basic usage examples
├── structured-response-types.ts   # Structured output examples
└── multi-provider.ts              # Multi-provider examples

tests/
├── claude.test.ts        # Claude provider tests
├── openai.test.ts        # OpenAI provider tests
├── gemini.test.ts        # Gemini provider tests
└── openwebui.test.ts     # OpenWebUI provider tests
```

## 🤝 Contributing

Contributions are welcome! Please feel free to submit a pull request. For major changes, please open an issue first to discuss what you would like to change.

### Development Guidelines

1. **Code Style**: Follow the existing TypeScript patterns
2. **Testing**: Add tests for new features
3. **Documentation**: Update the README for new features
4. **Type Safety**: Maintain comprehensive type definitions
5. **Error Handling**: Use standardized error types

## 📄 License

MIT License - see the [LICENSE](LICENSE) file for details.

## 🔗 Links

- [Anthropic Claude API](https://docs.anthropic.com/claude/reference/)
- [OpenAI API](https://platform.openai.com/docs/)
- [Google Gemini API](https://ai.google.dev/)
- [OpenWebUI](https://openwebui.com/)
- [Gitea Repository](https://gitea.jleibl.net/jleibl/simple-ai-provider)

---

⭐ **Star this repo if you find it helpful!**