feat(docs): update README with OpenWebUI support details
# Simple AI Provider

A professional, type-safe TypeScript package that provides a unified interface for multiple AI providers. Currently supports **Claude (Anthropic)**, **OpenAI**, **Google Gemini**, and **OpenWebUI** with a consistent API across all providers.

## ✨ Features

- 🔗 **Unified Interface**: Same API for Claude, OpenAI, Gemini, and OpenWebUI
- 🎯 **Type Safety**: Full TypeScript support with comprehensive type definitions
- 🚀 **Streaming Support**: Real-time response streaming for all providers
- 🛡️ **Error Handling**: Standardized error types with provider-specific details
- 🏭 **Factory Pattern**: Easy provider creation and management
- 🔧 **Configurable**: Extensive configuration options for each provider
- 📦 **Lightweight**: Minimal external dependencies
- 🌐 **Local Support**: OpenWebUI integration for local/private AI models

## 🚀 Quick Start

### Installation

```bash
npm install simple-ai-provider
# or
yarn add simple-ai-provider
# or
bun add simple-ai-provider
```

### Basic Usage

```typescript
import { ClaudeProvider, OpenAIProvider, GeminiProvider, OpenWebUIProvider } from 'simple-ai-provider';

// Claude
const claude = new ClaudeProvider({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  defaultModel: 'claude-3-5-sonnet-20241022'
});

// OpenAI
const openai = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  defaultModel: 'gpt-4o'
});

// Google Gemini
const gemini = new GeminiProvider({
  apiKey: process.env.GOOGLE_AI_API_KEY!,
  defaultModel: 'gemini-1.5-flash'
});

// OpenWebUI (local)
const openwebui = new OpenWebUIProvider({
  apiKey: 'ollama', // often not required for local instances
  baseUrl: 'http://localhost:3000',
  defaultModel: 'llama2'
});

// Initialize and use any provider
await claude.initialize();

// Generate a completion
const response = await claude.complete({
  messages: [
    { role: 'user', content: 'Hello! How are you today?' }
  ],
  maxTokens: 100,
  temperature: 0.7
});

console.log(response.content);
```

## 🏭 Factory Functions

Create providers using factory functions for cleaner code:

```typescript
import { createProvider, createClaudeProvider, createOpenAIProvider, createGeminiProvider, createOpenWebUIProvider } from 'simple-ai-provider';

// Method 1: Specific factory functions
const claude = createClaudeProvider({ apiKey: 'your-key' });
const openai = createOpenAIProvider({ apiKey: 'your-key' });
const gemini = createGeminiProvider({ apiKey: 'your-key' });
const openwebui = createOpenWebUIProvider({ apiKey: 'your-key', baseUrl: 'http://localhost:3000' });

// Method 2: Generic factory
const provider = createProvider('claude', { apiKey: 'your-key' });
```

## 📝 Environment Variables

Set up your API keys:

```bash
# Required for the respective providers
export ANTHROPIC_API_KEY="your-claude-api-key"
export OPENAI_API_KEY="your-openai-api-key"
export GOOGLE_AI_API_KEY="your-gemini-api-key"

# OpenWebUI bearer token (get it from Settings > Account in OpenWebUI)
export OPENWEBUI_API_KEY="your-bearer-token"
```
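
The keys above can be validated at startup rather than failing later with an authentication error. A minimal sketch — the helper below is illustrative, not part of the package:

```typescript
// Fail fast when a required API key is missing from the environment.
// Takes the env record explicitly so it is easy to test.
function requireKey(env: Record<string, string | undefined>, name: string): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (in an app): requireKey(process.env, 'ANTHROPIC_API_KEY')
```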

## 🔧 Provider-Specific Configuration

### Claude Configuration

```typescript
const claude = new ClaudeProvider({
  apiKey: 'your-api-key',
  defaultModel: 'claude-3-5-sonnet-20241022',
  version: '2023-06-01',
  maxRetries: 3,
  timeout: 30000
});
```

### OpenAI Configuration

```typescript
const openai = new OpenAIProvider({
  apiKey: 'your-api-key',
  defaultModel: 'gpt-4o',
  organization: 'your-org-id',
  project: 'your-project-id',
  maxRetries: 3,
  timeout: 30000
});
```

### Gemini Configuration

```typescript
const gemini = new GeminiProvider({
  apiKey: 'your-api-key',
  defaultModel: 'gemini-1.5-flash',
  safetySettings: [], // configure content filtering
  generationConfig: {
    temperature: 0.8,
    topK: 40
  }
});
```

### OpenWebUI Configuration

```typescript
const openwebui = new OpenWebUIProvider({
  apiKey: 'your-bearer-token', // get it from OpenWebUI Settings > Account
  baseUrl: 'http://localhost:3000', // your OpenWebUI instance
  defaultModel: 'llama3.1',
  useOllamaProxy: false, // use OpenWebUI's chat API (recommended)
  // useOllamaProxy: true, // use the Ollama API proxy for direct model access
  dangerouslyAllowInsecureConnections: true, // for local HTTPS
  timeout: 60000, // longer timeout for local inference
  maxRetries: 2
});
```

## 🌊 Streaming Support

All providers support real-time streaming:

```typescript
const stream = provider.stream({
  messages: [{ role: 'user', content: 'Count from 1 to 10' }],
  maxTokens: 100
});

for await (const chunk of stream) {
  if (!chunk.isComplete) {
    process.stdout.write(chunk.content);
  } else {
    console.log('\nDone! Usage:', chunk.usage);
  }
}
```
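
A stream like the one above can also be collected into a full response generically. A minimal sketch with a mocked stream — the chunk shape mirrors the `content`/`isComplete`/`usage` fields used in this README, but is a stand-in, not the package's real `CompletionChunk` type:

```typescript
// Stand-in chunk shape (assumed fields for illustration).
interface Chunk {
  content: string;
  isComplete: boolean;
  usage?: { totalTokens: number };
}

// Mock stream standing in for provider.stream(...).
async function* mockStream(): AsyncIterable<Chunk> {
  yield { content: 'Hello, ', isComplete: false };
  yield { content: 'world!', isComplete: false };
  yield { content: '', isComplete: true, usage: { totalTokens: 4 } };
}

// Accumulate a streamed response into a single string.
async function collectStream(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    if (!chunk.isComplete) text += chunk.content;
  }
  return text;
}
```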

## 🔀 Multi-Provider Usage

Use multiple providers seamlessly:

```typescript
const providers = {
  claude: new ClaudeProvider({ apiKey: process.env.ANTHROPIC_API_KEY! }),
  openai: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  gemini: new GeminiProvider({ apiKey: process.env.GOOGLE_AI_API_KEY! }),
  openwebui: new OpenWebUIProvider({
    apiKey: 'ollama',
    baseUrl: 'http://localhost:3000'
  })
};

// Initialize all providers
await Promise.all(Object.values(providers).map(p => p.initialize()));

// Use the same interface for all
const prompt = {
  messages: [{ role: 'user', content: 'Hello!' }],
  maxTokens: 50
};

for (const [name, provider] of Object.entries(providers)) {
  try {
    const response = await provider.complete(prompt);
    console.log(`${name}: ${response.content}`);
  } catch (error) {
    console.log(`${name} failed: ${error.message}`);
  }
}
```
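
The per-provider try/catch above generalizes to a fallback chain: try providers in order and return the first success. A sketch against a minimal stand-in interface, not the package's actual provider class:

```typescript
// Minimal stand-in for "something with a complete() method".
type Completer = { name: string; complete: (prompt: string) => Promise<string> };

// Try each provider in order; return the first successful completion.
async function firstSuccessful(providers: Completer[], prompt: string): Promise<string> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return await p.complete(prompt);
    } catch (err) {
      errors.push(`${p.name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All providers failed: ${errors.join('; ')}`);
}
```

This is one way to prefer a local OpenWebUI instance and fall back to a cloud provider when it is offline.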

## 📊 Provider Comparison

| Provider | Context Length | Streaming | Vision | Function Calling | Local Execution | Best For |
|----------|----------------|-----------|--------|------------------|-----------------|----------|
| **Claude** | 200K tokens | ✅ | ✅ | ✅ | ❌ | Reasoning, analysis, code review |
| **OpenAI** | 128K tokens | ✅ | ✅ | ✅ | ❌ | General purpose, function calling |
| **Gemini** | 1M tokens | ✅ | ✅ | ✅ | ❌ | Large documents, multimodal |
| **OpenWebUI** | 8K-32K tokens | ✅ | Varies | Limited | ✅ | Privacy, custom models, local |

## 🎯 Available Models

### Claude Models
- `claude-3-5-sonnet-20241022` (recommended)
- `claude-3-5-haiku-20241022`
- `claude-3-opus-20240229`
- `claude-3-sonnet-20240229`
- `claude-3-haiku-20240307`

### OpenAI Models
- `gpt-4o` (recommended)
- `gpt-4o-mini`
- `gpt-4-turbo`
- `gpt-4`
- `gpt-3.5-turbo`

### Gemini Models
- `gemini-1.5-flash` (recommended, fast)
- `gemini-1.5-flash-8b` (fastest)
- `gemini-1.5-pro` (most capable)
- `gemini-1.0-pro`
- `gemini-1.0-pro-vision`

### OpenWebUI Models
*Available models depend on your local installation:*
- `llama3.1`, `llama3.1:8b`, `llama3.1:70b`
- `llama3.2`, `llama3.2:1b`, `llama3.2:3b`
- `codellama`, `codellama:7b`, `codellama:13b`, `codellama:34b`
- `mistral`, `mistral:7b`
- `mixtral`, `mixtral:8x7b`
- `phi3`, `phi3:mini`
- `gemma2`, `gemma2:2b`, `gemma2:9b`
- `qwen2.5`, `granite3.1-dense:8b`
- *Custom models as installed*

## 🚨 Error Handling

The package provides standardized error handling:

```typescript
import { AIProviderError, AIErrorType } from 'simple-ai-provider';

try {
  const response = await provider.complete({
    messages: [{ role: 'user', content: 'Hello' }]
  });
} catch (error) {
  if (error instanceof AIProviderError) {
    switch (error.type) {
      case AIErrorType.AUTHENTICATION:
        console.log('Invalid API key');
        break;
      case AIErrorType.RATE_LIMIT:
        console.log('Rate limited, try again later');
        break;
      case AIErrorType.MODEL_NOT_FOUND:
        console.log('Model not available');
        break;
      case AIErrorType.NETWORK:
        console.log('Network/connection issue');
        break;
      default:
        console.log('Unknown error:', error.message);
    }
  }
}
```
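
Transient failures such as rate limits are natural candidates for retries. A generic wrapper sketch — the retry predicate and error shape here are illustrative, not the package's API:

```typescript
// Retry fn() up to maxRetries extra attempts, but only for errors the
// caller marks as retryable (e.g. rate limits); rethrow everything else.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  isRetryable: (err: unknown) => boolean
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (!isRetryable(err)) throw err; // permanent failure: give up now
    }
  }
  throw lastErr; // exhausted retries
}
```

In an application, the predicate might check `error instanceof AIProviderError && error.type === AIErrorType.RATE_LIMIT`, ideally with a backoff delay between attempts.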

## 🔧 Advanced Usage

### Custom Base URLs

```typescript
// OpenAI-compatible endpoint
const customOpenAI = new OpenAIProvider({
  apiKey: 'your-key',
  baseUrl: 'https://api.custom-provider.com/v1'
});

// Custom OpenWebUI instance
const remoteOpenWebUI = new OpenWebUIProvider({
  apiKey: 'your-key',
  baseUrl: 'https://my-openwebui.example.com',
  apiPath: '/api/v1'
});
```

### Provider Information

```typescript
const info = provider.getInfo();
console.log(`Provider: ${info.name} v${info.version}`);
console.log(`Models: ${info.models.join(', ')}`);
console.log(`Max Context: ${info.maxContextLength} tokens`);
console.log(`Supports Streaming: ${info.supportsStreaming}`);
console.log('Capabilities:', info.capabilities);
```
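
Because every provider reports the same `getInfo()` shape, capability checks can be written once. A sketch using a stand-in for the info object (the fields below are assumed for illustration):

```typescript
// Stand-in for the ProviderInfo shape shown above (assumed fields).
interface Info {
  name: string;
  supportsStreaming: boolean;
  maxContextLength: number;
}

// Among providers that support streaming, pick the one with the
// largest context window.
function bestStreamingProvider(infos: Info[]): Info | undefined {
  return infos
    .filter(i => i.supportsStreaming)
    .sort((a, b) => b.maxContextLength - a.maxContextLength)[0];
}
```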

### OpenWebUI-Specific Features

OpenWebUI offers unique advantages for local AI deployment:

```typescript
const openwebui = new OpenWebUIProvider({
  apiKey: 'your-bearer-token', // get it from OpenWebUI Settings > Account
  baseUrl: 'http://localhost:3000',
  defaultModel: 'llama3.1',
  useOllamaProxy: false, // use the chat completions API (recommended)
  // Longer timeout for local inference
  timeout: 120000,
  // Allow self-signed certificates for local development
  dangerouslyAllowInsecureConnections: true
});

// Test the connection before relying on it
try {
  await openwebui.initialize();
  console.log('Connected to local OpenWebUI instance');

  // Use either chat completions or the Ollama proxy
  const response = await openwebui.complete({
    messages: [{ role: 'user', content: 'Hello!' }],
    maxTokens: 100
  });
} catch (error) {
  console.log('OpenWebUI not available:', error.message);
  // Gracefully fall back to cloud providers
}
```

**OpenWebUI API modes:**
- **Chat Completions** (`useOllamaProxy: false`): OpenWebUI's native API with full features
- **Ollama Proxy** (`useOllamaProxy: true`): direct access to the Ollama API for raw model interaction
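
The two modes presumably select different HTTP endpoints on the same instance. A sketch of that mapping — the exact paths are assumptions for illustration, not guaranteed by this package; check your OpenWebUI version's API documentation:

```typescript
// Map the useOllamaProxy flag to a request URL (illustrative paths only).
function endpointFor(baseUrl: string, useOllamaProxy: boolean): string {
  const root = baseUrl.replace(/\/+$/, ''); // strip trailing slashes
  return useOllamaProxy
    ? `${root}/ollama/api/chat`       // raw Ollama API via OpenWebUI's proxy
    : `${root}/api/chat/completions`; // OpenWebUI's OpenAI-compatible API
}
```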

## 📦 TypeScript Support

Full TypeScript support with comprehensive type definitions:

```typescript
import type {
  CompletionParams,
  CompletionResponse,
  CompletionChunk,
  ProviderInfo,
  ClaudeConfig,
  OpenAIConfig,
  GeminiConfig,
  OpenWebUIConfig
} from 'simple-ai-provider';

// Type-safe configuration
const config: ClaudeConfig = {
  apiKey: 'your-key',
  defaultModel: 'claude-3-5-sonnet-20241022'
  // TypeScript will validate all options
};

// Type-safe responses
const response: CompletionResponse = await provider.complete(params);
```

## 🤝 Contributing

Contributions are welcome! Please feel free to submit a pull request. For major changes, open an issue first to discuss what you would like to change.

## 📄 License

MIT License - see the [LICENSE](LICENSE) file for details.

## 🔗 Links

- [Anthropic Claude API](https://docs.anthropic.com/claude/reference/)
- [OpenAI API](https://platform.openai.com/docs/)
- [Google Gemini API](https://ai.google.dev/)
- [OpenWebUI](https://openwebui.com/)
- [GitHub Repository](https://github.com/your-username/simple-ai-provider)

---

⭐ **Star this repo if you find it helpful!**