Simple AI Provider

A professional, type-safe TypeScript package that provides a unified interface for multiple AI providers. Currently supports Claude (Anthropic), OpenAI, Google Gemini, and OpenWebUI with a consistent API across all providers.

Features

  • 🔗 Unified Interface: Same API for Claude, OpenAI, Gemini, and OpenWebUI
  • 🎯 Type Safety: Full TypeScript support with comprehensive type definitions
  • 🚀 Streaming Support: Real-time response streaming for all providers
  • 🛡️ Error Handling: Standardized error types with provider-specific details
  • 🏭 Factory Pattern: Easy provider creation and management
  • 🔧 Configurable: Extensive configuration options for each provider
  • 📦 Lightweight: Minimal external dependencies
  • 🌐 Local Support: OpenWebUI integration for local/private AI models

🚀 Quick Start

npm install simple-ai-provider
# or
bun add simple-ai-provider

Basic Usage

import { ClaudeProvider, OpenAIProvider, GeminiProvider, OpenWebUIProvider } from 'simple-ai-provider';

// Claude
const claude = new ClaudeProvider({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  defaultModel: 'claude-3-5-sonnet-20241022'
});

// OpenAI  
const openai = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  defaultModel: 'gpt-4o'
});

// Google Gemini
const gemini = new GeminiProvider({
  apiKey: process.env.GOOGLE_AI_API_KEY!,
  defaultModel: 'gemini-1.5-flash'
});

// OpenWebUI (local)
const openwebui = new OpenWebUIProvider({
  apiKey: 'ollama', // Often not required
  baseUrl: 'http://localhost:3000',
  defaultModel: 'llama2'
});

// Initialize and use any provider
await claude.initialize();

const response = await claude.complete({
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain TypeScript in one sentence.' }
  ],
  maxTokens: 100,
  temperature: 0.7
});

console.log(response.content);

🏭 Factory Functions

Create providers using factory functions for cleaner code:

import { createProvider, createClaudeProvider, createOpenAIProvider, createGeminiProvider, createOpenWebUIProvider } from 'simple-ai-provider';

// Method 1: Specific factory functions
const claude = createClaudeProvider({ apiKey: 'your-key' });
const openai = createOpenAIProvider({ apiKey: 'your-key' });
const gemini = createGeminiProvider({ apiKey: 'your-key' });
const openwebui = createOpenWebUIProvider({ apiKey: 'your-key', baseUrl: 'http://localhost:3000' });

// Method 2: Generic factory
const provider = createProvider('claude', { apiKey: 'your-key' });

📝 Environment Variables

Set up your API keys:

# Required for respective providers
export ANTHROPIC_API_KEY="your-claude-api-key"
export OPENAI_API_KEY="your-openai-api-key"  
export GOOGLE_AI_API_KEY="your-gemini-api-key"

# OpenWebUI Bearer Token (get from Settings > Account in OpenWebUI)
export OPENWEBUI_API_KEY="your-bearer-token"
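
In Node.js or Bun these keys are then available on process.env. A minimal sketch for loading them safely (requireEnv is an illustrative helper, not part of this package):

import { ClaudeProvider } from 'simple-ai-provider';

// Illustrative helper: fail fast if a key is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing environment variable: ${name}`);
  return value;
}

const claude = new ClaudeProvider({
  apiKey: requireEnv('ANTHROPIC_API_KEY')
});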

🔧 Provider-Specific Configuration

Claude Configuration

const claude = new ClaudeProvider({
  apiKey: 'your-api-key',
  defaultModel: 'claude-3-5-sonnet-20241022',
  version: '2023-06-01',
  maxRetries: 3,
  timeout: 30000
});

OpenAI Configuration

const openai = new OpenAIProvider({
  apiKey: 'your-api-key',
  defaultModel: 'gpt-4o',
  organization: 'your-org-id',
  project: 'your-project-id',
  maxRetries: 3,
  timeout: 30000
});

Gemini Configuration

const gemini = new GeminiProvider({
  apiKey: 'your-api-key',
  defaultModel: 'gemini-1.5-flash',
  safetySettings: [
    {
      category: 'HARM_CATEGORY_HARASSMENT',
      threshold: 'BLOCK_MEDIUM_AND_ABOVE'
    }
  ],
  generationConfig: {
    temperature: 0.7,
    topP: 0.8,
    topK: 40,
    maxOutputTokens: 1000
  }
});

OpenWebUI Configuration

const openwebui = new OpenWebUIProvider({
  apiKey: 'your-bearer-token', // Get from OpenWebUI Settings > Account
  baseUrl: 'http://localhost:3000', // Your OpenWebUI instance
  defaultModel: 'llama3.1',
  useOllamaProxy: false, // Use OpenWebUI's chat API (recommended)
  // useOllamaProxy: true, // Use Ollama API proxy for direct model access
  dangerouslyAllowInsecureConnections: true, // For local HTTPS
  timeout: 60000, // Longer timeout for local inference
  maxRetries: 2
});

🌊 Streaming Support

All providers support real-time streaming:

const stream = provider.stream({
  messages: [{ role: 'user', content: 'Count from 1 to 10' }],
  maxTokens: 100
});

for await (const chunk of stream) {
  if (!chunk.isComplete) {
    process.stdout.write(chunk.content);
  } else {
    console.log('\nDone! Usage:', chunk.usage);
  }
}
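
If you also want the complete text once streaming finishes, accumulate the chunks as they arrive. This sketch reuses the chunk API shown above:

let fullText = '';

const stream = provider.stream({
  messages: [{ role: 'user', content: 'Write a haiku about TypeScript' }],
  maxTokens: 100
});

for await (const chunk of stream) {
  if (!chunk.isComplete) {
    fullText += chunk.content;            // accumulate the partial text
    process.stdout.write(chunk.content);  // while rendering it live
  }
}

console.log('\nFull response:', fullText);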

🔀 Multi-Provider Usage

Use multiple providers seamlessly:

const providers = {
  claude: new ClaudeProvider({ apiKey: process.env.ANTHROPIC_API_KEY! }),
  openai: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  gemini: new GeminiProvider({ apiKey: process.env.GOOGLE_AI_API_KEY! }),
  openwebui: new OpenWebUIProvider({ 
    apiKey: 'ollama', 
    baseUrl: 'http://localhost:3000'
  })
};

// Initialize all providers
await Promise.all(Object.values(providers).map(p => p.initialize()));

// Use the same interface for all
// Annotate the prompt so message roles stay narrowed to the expected union
const prompt: CompletionParams = {
  messages: [{ role: 'user', content: 'Hello!' }],
  maxTokens: 50
};

for (const [name, provider] of Object.entries(providers)) {
  try {
    const response = await provider.complete(prompt);
    console.log(`${name}: ${response.content}`);
  } catch (error) {
    console.log(`${name} failed:`, error instanceof Error ? error.message : error);
  }
}
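
A useful pattern on top of the shared interface is a fallback chain: try the preferred provider first and move on when it fails. A minimal sketch using only the complete() method (the Completer interface below is illustrative; the package may export a richer base type):

import type { CompletionParams, CompletionResponse } from 'simple-ai-provider';

// Illustrative structural type: anything with a complete() method.
interface Completer {
  complete(params: CompletionParams): Promise<CompletionResponse>;
}

async function completeWithFallback(
  ordered: Completer[],
  params: CompletionParams
): Promise<CompletionResponse> {
  let lastError: unknown;
  for (const provider of ordered) {
    try {
      return await provider.complete(params);
    } catch (error) {
      lastError = error; // remember the failure, try the next provider
    }
  }
  throw lastError;
}

// e.g. prefer the local instance, fall back to the cloud
const response = await completeWithFallback(
  [providers.openwebui, providers.claude, providers.openai],
  prompt
);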

📊 Provider Comparison

| Provider  | Context Length | Streaming | Vision | Function Calling | Local Execution | Best For                           |
|-----------|----------------|-----------|--------|------------------|-----------------|------------------------------------|
| Claude    | 200K tokens    | ✅        | ✅     | ✅               | ❌              | Reasoning, Analysis, Code Review   |
| OpenAI    | 128K tokens    | ✅        | ✅     | ✅               | ❌              | General Purpose, Function Calling  |
| Gemini    | 1M tokens      | ✅        | ✅     | ✅               | ❌              | Large Documents, Multimodal        |
| OpenWebUI | 8K-32K tokens  | ✅        | Varies | Limited          | ✅              | Privacy, Custom Models, Local      |

🎯 Available Models

Claude Models

  • claude-3-5-sonnet-20241022 (recommended)
  • claude-3-5-haiku-20241022
  • claude-3-opus-20240229
  • claude-3-sonnet-20240229
  • claude-3-haiku-20240307

OpenAI Models

  • gpt-4o (recommended)
  • gpt-4o-mini
  • gpt-4-turbo
  • gpt-4
  • gpt-3.5-turbo

Gemini Models

  • gemini-1.5-flash (recommended, fast)
  • gemini-1.5-flash-8b (fastest)
  • gemini-1.5-pro (most capable)
  • gemini-1.0-pro
  • gemini-1.0-pro-vision

OpenWebUI Models

Available models depend on your local installation; a sketch for checking what your instance actually reports follows this list:

  • llama3.1, llama3.1:8b, llama3.1:70b
  • llama3.2, llama3.2:1b, llama3.2:3b
  • codellama, codellama:7b, codellama:13b, codellama:34b
  • mistral, mistral:7b
  • mixtral, mixtral:8x7b
  • phi3, phi3:mini
  • gemma2, gemma2:2b, gemma2:9b
  • qwen2.5, granite3.1-dense:8b
  • Custom models as installed
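
Because installations differ, it can help to confirm what the provider reports before picking a model. A small sketch using getInfo() (described under Provider Information below), assuming getInfo().models reflects the models your instance exposes:

await openwebui.initialize();

const { models } = openwebui.getInfo();
const preferred = 'llama3.1';

// Fall back to the first reported model if the preferred one is missing.
const model = models.includes(preferred) ? preferred : models[0];
console.log('Using model:', model);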

🚨 Error Handling

The package provides standardized error handling:

import { AIProviderError, AIErrorType } from 'simple-ai-provider';

try {
  const response = await provider.complete({ 
    messages: [{ role: 'user', content: 'Hello' }] 
  });
} catch (error) {
  if (error instanceof AIProviderError) {
    switch (error.type) {
      case AIErrorType.AUTHENTICATION:
        console.log('Invalid API key');
        break;
      case AIErrorType.RATE_LIMIT:
        console.log('Rate limited, try again later');
        break;
      case AIErrorType.MODEL_NOT_FOUND:
        console.log('Model not available');
        break;
      case AIErrorType.NETWORK:
        console.log('Network/connection issue');
        break;
      default:
        console.log('Unknown error:', error.message);
    }
  }
}
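
Rate limits are usually transient, so a thin retry wrapper around complete() can absorb them. A sketch built on the exported error types (note that each provider also accepts a maxRetries option, so a wrapper like this is only needed for custom policies):

import { AIProviderError, AIErrorType } from 'simple-ai-provider';
import type { CompletionParams, CompletionResponse } from 'simple-ai-provider';

async function completeWithRetry(
  provider: { complete(p: CompletionParams): Promise<CompletionResponse> },
  params: CompletionParams,
  maxAttempts = 3
): Promise<CompletionResponse> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await provider.complete(params);
    } catch (error) {
      const retryable =
        error instanceof AIProviderError &&
        error.type === AIErrorType.RATE_LIMIT &&
        attempt < maxAttempts;
      if (!retryable) throw error;
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise(resolve => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
    }
  }
}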

🔧 Advanced Usage

Custom Base URLs

// OpenAI-compatible endpoint
const customOpenAI = new OpenAIProvider({
  apiKey: 'your-key',
  baseUrl: 'https://api.custom-provider.com/v1'
});

// Custom OpenWebUI instance
const remoteOpenWebUI = new OpenWebUIProvider({
  apiKey: 'your-key',
  baseUrl: 'https://my-openwebui.example.com',
  apiPath: '/api/v1'
});
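
Because the OpenAI provider only needs an OpenAI-compatible endpoint, it can also target local servers that speak the same protocol. For example, recent Ollama versions expose an OpenAI-compatible API at /v1; this depends on your Ollama version and setup, so treat it as a sketch:

// Assumes a local Ollama instance with its OpenAI-compatible API enabled
const localViaOpenAI = new OpenAIProvider({
  apiKey: 'ollama', // Ollama ignores the key, but the client requires one
  baseUrl: 'http://localhost:11434/v1',
  defaultModel: 'llama3.1'
});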

Provider Information

const info = provider.getInfo();
console.log(`Provider: ${info.name} v${info.version}`);
console.log(`Models: ${info.models.join(', ')}`);
console.log(`Max Context: ${info.maxContextLength} tokens`);
console.log(`Supports Streaming: ${info.supportsStreaming}`);
console.log('Capabilities:', info.capabilities);

OpenWebUI-Specific Features

OpenWebUI offers unique advantages for local AI deployment:

const openwebui = new OpenWebUIProvider({
  apiKey: 'your-bearer-token', // Get from OpenWebUI Settings > Account
  baseUrl: 'http://localhost:3000',
  defaultModel: 'llama3.1',
  useOllamaProxy: false, // Use chat completions API (recommended)
  // Longer timeout for local inference
  timeout: 120000,
  // Allow self-signed certificates for local development
  dangerouslyAllowInsecureConnections: true
});

// Test connection and list available models
try {
  await openwebui.initialize();
  console.log('Connected to local OpenWebUI instance');
  
  // Use either chat completions or Ollama proxy
  const response = await openwebui.complete({
    messages: [{ role: 'user', content: 'Hello!' }],
    maxTokens: 100
  });
} catch (error) {
  console.log('OpenWebUI not available:', error instanceof Error ? error.message : error);
  // Gracefully fall back to cloud providers
}

OpenWebUI API Modes:

  • Chat Completions (useOllamaProxy: false): OpenWebUI's native API with full features
  • Ollama Proxy (useOllamaProxy: true): Direct access to Ollama API for raw model interaction

📦 TypeScript Support

Full TypeScript support with comprehensive type definitions:

import type { 
  CompletionParams, 
  CompletionResponse, 
  CompletionChunk,
  ProviderInfo,
  ClaudeConfig,
  OpenAIConfig,
  GeminiConfig,
  OpenWebUIConfig
} from 'simple-ai-provider';

// Type-safe configuration
const config: ClaudeConfig = {
  apiKey: 'your-key',
  defaultModel: 'claude-3-5-sonnet-20241022',
  // TypeScript will validate all options
};

// Type-safe responses
const response: CompletionResponse = await provider.complete(params);

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

📄 License

MIT License - see the LICENSE file for details.


Star this repo if you find it helpful!