docs: update README with new features and architecture details

commit 0b1a7d90da (parent 989261916c)
2025-10-01 14:32:50 +02:00
2 changed files with 254 additions and 40 deletions

README.md
@@ -13,6 +13,21 @@ A professional, type-safe TypeScript package that provides a unified interface f
- 📦 **Minimal Dependencies**: Lightweight, with few external dependencies
- 🌐 **Local Support**: OpenWebUI integration for local/private AI models
- 🎨 **Structured Output**: Define custom response types for type-safe AI outputs
- 🏗️ **Provider Registry**: Dynamic provider registration and creation system
- 🧪 **Comprehensive Testing**: Full test coverage with Bun test framework
- 🔍 **Advanced Validation**: Input validation with detailed error messages
## 🏗️ Architecture
The library is built on solid design principles:
- **Template Method Pattern**: Base provider defines the workflow, subclasses implement specifics
- **Factory Pattern**: Clean provider creation and management
- **Strategy Pattern**: Unified interface across different AI providers
- **Type Safety**: Comprehensive TypeScript support throughout
- **Error Normalization**: Consistent error handling across all providers
- **Validation First**: Input validation before processing
- **Extensibility**: Easy to add new providers via registry system
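The Template Method and error-normalization principles above can be sketched in miniature. The class and method names below are illustrative stand-ins, not the package's actual API:

```typescript
// Illustrative sketch of the Template Method pattern described above.
// BaseProvider and its method names are hypothetical, not the real exports.
abstract class BaseProvider {
  // Template method: fixed workflow of validate -> execute -> normalize.
  async complete(prompt: string): Promise<string> {
    if (prompt.trim().length === 0) {
      // Validation first: reject bad input before calling the provider.
      throw new Error("Validation failed: prompt must not be empty");
    }
    try {
      return await this.doComplete(prompt); // subclass-specific step
    } catch (err) {
      // Error normalization: wrap provider-specific failures consistently.
      throw new Error(`Provider error: ${(err as Error).message}`);
    }
  }

  protected abstract doComplete(prompt: string): Promise<string>;
}

// A trivial subclass: only the provider-specific step is implemented.
class EchoProvider extends BaseProvider {
  protected async doComplete(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}
```

New providers only implement `doComplete`; validation and error handling stay in the base class, which is what makes the registry-based extensibility cheap.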
## 🚀 Quick Start
@@ -75,15 +90,33 @@ Create providers using factory functions for cleaner code:
import { createProvider, createClaudeProvider, createOpenAIProvider, createGeminiProvider, createOpenWebUIProvider } from 'simple-ai-provider';
// Method 1: Specific factory functions
const claude = createClaudeProvider('your-key', { defaultModel: 'claude-3-5-sonnet-20241022' });
const openai = createOpenAIProvider('your-key', { defaultModel: 'gpt-4o' });
const gemini = createGeminiProvider('your-key', { defaultModel: 'gemini-1.5-flash' });
const openwebui = createOpenWebUIProvider({ apiKey: 'your-key', baseUrl: 'http://localhost:3000' });
// Method 2: Generic factory
const provider = createProvider('claude', { apiKey: 'your-key' });
```
### Provider Registry
For dynamic provider creation and registration:
```typescript
import { ProviderRegistry, ClaudeProvider } from 'simple-ai-provider';
// Register a custom provider
ProviderRegistry.register('my-claude', ClaudeProvider);
// Create provider dynamically
const provider = ProviderRegistry.create('my-claude', { apiKey: 'your-key' });
// Check available providers
const availableProviders = ProviderRegistry.getRegisteredProviders();
console.log(availableProviders); // ['claude', 'openai', 'gemini', 'openwebui', 'my-claude']
```
## 🎨 Structured Response Types
Define custom response types for type-safe, structured AI outputs. The library automatically parses the AI's response into your desired type.
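The parsing mechanism can be sketched generically. The `createResponseType` helper below is a local stand-in and may not match the package's exact signature:

```typescript
// Minimal stand-in for the structured-output mechanism described above.
// The real createResponseType may differ; this only illustrates the idea.
interface ResponseType<T> {
  description: string;
  example: T;
  parse(raw: string): T;
}

function createResponseType<T>(description: string, example: T): ResponseType<T> {
  return {
    description,
    example,
    // The library parses the model's JSON reply into the declared shape.
    parse: (raw: string) => JSON.parse(raw) as T,
  };
}

interface UserProfile {
  name: string;
  age: number;
}

const profileType = createResponseType<UserProfile>(
  "A user profile with name and age",
  { name: "John", age: 30 }
);

const parsed = profileType.parse('{"name":"Ada","age":36}');
```

The description and example are what get sent to the model to steer its output; the parse step is what makes the result type-safe on the way back.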
@@ -303,46 +336,76 @@ for (const [name, provider] of Object.entries(providers)) {
| **Claude** | 200K tokens | ✅ | ✅ | ✅ | ❌ | Reasoning, Analysis, Code Review |
| **OpenAI** | 128K tokens | ✅ | ✅ | ✅ | ❌ | General Purpose, Function Calling |
| **Gemini** | 1M tokens | ✅ | ✅ | ✅ | ❌ | Large Documents, Multimodal |
| **OpenWebUI** | 32K tokens | ✅ | ❌ | ❌ | ✅ | Privacy, Custom Models, Local |
### Detailed Capabilities
Each provider offers unique capabilities:
#### Claude (Anthropic)
- Advanced reasoning and analysis
- Excellent code review capabilities
- Strong safety features
- System message support
#### OpenAI
- Broad model selection
- Function calling support
- JSON mode for structured outputs
- Vision capabilities
#### Gemini (Google)
- Largest context window (1M tokens)
- Multimodal capabilities
- Cost-effective pricing
- Strong multilingual support
#### OpenWebUI
- Complete privacy (local execution)
- Custom model support
- No API costs
- RAG (Retrieval Augmented Generation) support
## 🎯 Model Selection
### Getting Available Models
Instead of maintaining a static list, you can programmatically get available models:
```typescript
// Get provider information including available models
const info = provider.getInfo();
console.log('Available models:', info.models);
// Example output:
// Claude: ['claude-3-5-sonnet-20241022', 'claude-3-5-haiku-20241022', ...]
// OpenAI: ['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', ...]
// Gemini: ['gemini-1.5-flash', 'gemini-1.5-pro', ...]
// OpenWebUI: ['llama3.1:latest', 'mistral:latest', ...]
```
### Model Selection Guidelines
**For Claude (Anthropic):**
- Check [Anthropic's model documentation](https://docs.anthropic.com/claude/docs/models-overview) for latest models
**For OpenAI:**
- Check [OpenAI's model documentation](https://platform.openai.com/docs/models) for latest models
**For Gemini (Google):**
- Check [Google AI's model documentation](https://ai.google.dev/docs/models) for latest models
**For OpenWebUI:**
- Models depend on your local installation
- Check your OpenWebUI instance for available models
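A simple fallback pattern built on `getInfo()` might look like this. The `info` object is stubbed here for illustration; in practice it would come from `provider.getInfo()`:

```typescript
// Pick the first preferred model a provider actually offers, with a fallback.
// ProviderInfo is a minimal stand-in for the shape returned by getInfo().
interface ProviderInfo {
  models: string[];
}

function pickModel(info: ProviderInfo, preferred: string[], fallback: string): string {
  for (const model of preferred) {
    if (info.models.includes(model)) return model;
  }
  return fallback;
}

// Stubbed provider info; a real call would be: const info = provider.getInfo();
const info: ProviderInfo = { models: ["gpt-4o-mini", "gpt-3.5-turbo"] };
const model = pickModel(info, ["gpt-4o", "gpt-4o-mini"], "gpt-3.5-turbo");
// model === "gpt-4o-mini" here, since "gpt-4o" is not in the stubbed list
```

This keeps model names out of your application logic and degrades gracefully when a preferred model is retired or unavailable on a given deployment.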
## 🚨 Error Handling
The package provides comprehensive, standardized error handling with detailed error types:
```typescript
import { AIProviderError, AIErrorType } from 'simple-ai-provider';
@@ -355,24 +418,47 @@ try {
if (error instanceof AIProviderError) {
switch (error.type) {
case AIErrorType.AUTHENTICATION:
console.log('Invalid API key or authentication failed');
break;
case AIErrorType.RATE_LIMIT:
console.log('Rate limited, try again later');
break;
case AIErrorType.MODEL_NOT_FOUND:
console.log('Model not available or not found');
break;
case AIErrorType.INVALID_REQUEST:
console.log('Invalid request parameters');
break;
case AIErrorType.NETWORK:
console.log('Network/connection issue');
break;
case AIErrorType.TIMEOUT:
console.log('Request timed out');
break;
case AIErrorType.UNKNOWN:
console.log('Unknown error:', error.message);
break;
default:
console.log('Error:', error.message);
}
// Access additional error details
console.log('Status Code:', error.statusCode);
console.log('Original Error:', error.originalError);
}
}
```
### Error Types
- **AUTHENTICATION**: Invalid API keys or authentication failures
- **RATE_LIMIT**: API rate limits exceeded
- **INVALID_REQUEST**: Malformed requests or invalid parameters
- **MODEL_NOT_FOUND**: Requested model is not available
- **NETWORK**: Connection issues or server errors
- **TIMEOUT**: Request timeout exceeded
- **UNKNOWN**: Unclassified errors
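Of these, `RATE_LIMIT` is the one usually worth retrying with backoff. The helper below is a generic sketch; `RateLimitError` is a local stand-in for an `AIProviderError` whose `type` is `AIErrorType.RATE_LIMIT`:

```typescript
// Retry helper that backs off only on rate-limit errors.
// RateLimitError stands in for AIProviderError with type AIErrorType.RATE_LIMIT.
class RateLimitError extends Error {}

async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Rethrow anything that isn't a rate limit, or when attempts run out.
      if (!(err instanceof RateLimitError) || attempt >= maxAttempts) throw err;
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}

// Usage: fails twice with a rate limit, then succeeds on the third attempt.
let calls = 0;
const result = await withRetry(async () => {
  calls++;
  if (calls < 3) throw new RateLimitError("429");
  return "ok";
});
```

Authentication and invalid-request errors are deliberately not retried: repeating them wastes quota and never succeeds.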
## 🔧 Advanced Usage
### Custom Base URLs
@@ -436,6 +522,7 @@ try {
```
**OpenWebUI API Modes:**
- **Chat Completions** (`useOllamaProxy: false`): OpenWebUI's native API with full features
- **Ollama Proxy** (`useOllamaProxy: true`): Direct access to Ollama API for raw model interaction
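The practical difference between the two modes is which endpoint requests are routed to. The paths below are illustrative assumptions about an OpenWebUI deployment, not verified routes:

```typescript
// Sketch of how a client might route requests for the two modes above.
// The endpoint paths are assumptions for illustration, not documented routes.
interface OpenWebUIConfig {
  baseUrl: string;
  useOllamaProxy?: boolean;
}

function endpointFor(config: OpenWebUIConfig): string {
  // Native chat-completions API vs. raw Ollama proxy access.
  const path = config.useOllamaProxy ? "/ollama/api/chat" : "/api/chat/completions";
  return `${config.baseUrl}${path}`;
}

const url = endpointFor({ baseUrl: "http://localhost:3000", useOllamaProxy: true });
```

Use the native mode when you want OpenWebUI features such as RAG; use the proxy mode when you need raw model behavior with nothing layered on top.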
@@ -452,7 +539,10 @@ import type {
ClaudeConfig,
OpenAIConfig,
GeminiConfig,
OpenWebUIConfig,
AIMessage,
ResponseType,
TokenUsage
} from 'simple-ai-provider';
// Type-safe configuration
@@ -464,12 +554,136 @@ const config: ClaudeConfig = {
// Type-safe responses
const response: CompletionResponse = await provider.complete(params);
// Type-safe messages with metadata
const messages: AIMessage[] = [
{
role: 'user',
content: 'Hello',
metadata: { timestamp: Date.now() }
}
];
// Type-safe response types
interface UserProfile {
name: string;
age: number;
}
const responseType: ResponseType<UserProfile> = createResponseType(
'A user profile with name and age',
{ name: 'John', age: 30 }
);
```
### Advanced Type Features
- **Generic Response Types**: Type-safe structured outputs
- **Message Metadata**: Support for custom message properties
- **Provider-Specific Configs**: Type-safe configuration for each provider
- **Error Types**: Comprehensive error type definitions
- **Factory Functions**: Type-safe provider creation
## 🧪 Testing
The package includes comprehensive tests using Bun test framework:
```bash
# Run all tests
bun test
# Run tests for specific provider
bun test tests/claude.test.ts
bun test tests/openai.test.ts
bun test tests/gemini.test.ts
bun test tests/openwebui.test.ts
# Run tests with coverage
bun test --coverage
```
### Test Coverage
- ✅ Provider initialization and configuration
- ✅ Message validation and conversion
- ✅ Error handling and normalization
- ✅ Response formatting
- ✅ Streaming functionality
- ✅ Structured response types
- ✅ Factory functions
- ✅ Provider registry
## 🛠️ Development
### Prerequisites
- Node.js 18.0.0 or higher
- Bun (recommended) or npm/yarn
- TypeScript 5.0 or higher
### Setup
```bash
# Clone the repository
git clone https://gitea.jleibl.net/jleibl/simple-ai-provider.git
cd simple-ai-provider
# Install dependencies
bun install
# Build the project
bun run build
# Run tests
bun test
# Run examples
bun run examples/basic-usage.ts
bun run examples/structured-response-types.ts
bun run examples/multi-provider.ts
```
### Project Structure
```text
src/
├── index.ts # Main entry point
├── types/
│ └── index.ts # Type definitions and utilities
├── providers/
│ ├── base.ts # Abstract base provider
│ ├── claude.ts # Claude provider implementation
│ ├── openai.ts # OpenAI provider implementation
│ ├── gemini.ts # Gemini provider implementation
│ ├── openwebui.ts # OpenWebUI provider implementation
│ └── index.ts # Provider exports
└── utils/
└── factory.ts # Factory functions and registry
examples/
├── basic-usage.ts # Basic usage examples
├── structured-response-types.ts # Structured output examples
└── multi-provider.ts # Multi-provider examples
tests/
├── claude.test.ts # Claude provider tests
├── openai.test.ts # OpenAI provider tests
├── gemini.test.ts # Gemini provider tests
└── openwebui.test.ts # OpenWebUI provider tests
```
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
### Development Guidelines
1. **Code Style**: Follow the existing TypeScript patterns
2. **Testing**: Add tests for new features
3. **Documentation**: Update README for new features
4. **Type Safety**: Maintain comprehensive type definitions
5. **Error Handling**: Use standardized error types
## 📄 License
MIT License - see the [LICENSE](LICENSE) file for details.


@@ -58,4 +58,4 @@ export const SUPPORTED_PROVIDERS = ['claude', 'openai', 'gemini', 'openwebui'] a
/**
* Package version
*/
export const VERSION = '1.3.1';