# Provider System

## Overview

PlanOpticon supports multiple AI providers through a unified abstraction layer. Default models favor cost-effective options (Haiku, GPT-4o-mini, Gemini Flash) for routine tasks, with more capable models available when needed.

## Supported providers

| Provider | Chat | Vision | Transcription | Environment variable(s) |
|----------|------|--------|---------------|--------------------------|
| OpenAI | GPT-4o-mini, GPT-4o | GPT-4o-mini, GPT-4o | Whisper-1 | `OPENAI_API_KEY` |
| Anthropic | Claude Haiku, Sonnet, Opus | Claude Haiku, Sonnet, Opus | — | `ANTHROPIC_API_KEY` |
| Google Gemini | Gemini Flash, Pro | Gemini Flash, Pro | Gemini Flash | `GEMINI_API_KEY` |
| Azure OpenAI | GPT-4o-mini, GPT-4o | GPT-4o-mini, GPT-4o | Whisper-1 | `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT` |
| Together AI | Llama, Mixtral, etc. | LLaVA | — | `TOGETHER_API_KEY` |
| Fireworks AI | Llama, Mixtral, etc. | LLaVA | — | `FIREWORKS_API_KEY` |
| Cerebras | Llama (fast inference) | — | — | `CEREBRAS_API_KEY` |
| xAI | Grok | Grok | — | `XAI_API_KEY` |
| Ollama (local) | Any installed model | llava, moondream, etc. | — (use local Whisper) | `OLLAMA_HOST` |

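A provider becomes available once its environment variables are set (see Auto-discovery below). For example, to enable the Anthropic and Google Gemini providers:

```bash
# Placeholder values; substitute your own keys
export ANTHROPIC_API_KEY=sk-ant-...
export GEMINI_API_KEY=...
```
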
## Default models

PlanOpticon defaults to cheap, fast models for cost efficiency:

| Task | Default model |
|------|---------------|
| Vision (diagrams) | Gemini Flash |
| Chat (analysis) | Claude Haiku |
| Transcription | Local Whisper (fallback: Whisper-1) |

Use `--vision-model` and `--chat-model` to override with more capable models when needed (e.g., `--chat-model claude-sonnet-4-20250514` for complex analysis).

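For example, to escalate the chat model for a single run:

```bash
planopticon analyze -i video.mp4 -o ./out --chat-model claude-sonnet-4-20250514
```
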
## Ollama (offline mode)

[Ollama](https://ollama.com) enables fully offline operation with no API keys required. PlanOpticon connects via Ollama's OpenAI-compatible API.

```bash
# Start the Ollama server (install it first from https://ollama.com)
ollama serve

# Pull a chat model
ollama pull llama3.2

# Pull a vision model (for diagram analysis)
ollama pull llava
```

PlanOpticon auto-detects Ollama when it's running. To force Ollama:

```bash
planopticon analyze -i video.mp4 -o ./out --provider ollama
```

Configure a non-default host via `OLLAMA_HOST`:

```bash
export OLLAMA_HOST=http://192.168.1.100:11434
```

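To confirm the server is reachable at that host, you can query Ollama's model-listing endpoint:

```bash
# Any JSON response means the server is up; the list shows pulled models
curl http://192.168.1.100:11434/api/tags
```
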
## Auto-discovery

On startup, `ProviderManager` checks which API keys are configured, queries each configured provider's API, and probes for a running Ollama server to discover the available models:

```python
from video_processor.providers.manager import ProviderManager

pm = ProviderManager()
# Automatically discovers models from all configured providers + Ollama
```

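In sketch form, the discovery flow looks roughly like this. This is illustrative only, not PlanOpticon's actual code; the provider and environment-variable names come from the table above:

```python
import os
import urllib.request

# Illustrative sketch of the discovery flow, not the real ProviderManager.
KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "together": "TOGETHER_API_KEY",
}

def discover_providers() -> list[str]:
    # Cloud providers count as available when their API key is set
    available = [name for name, var in KEY_VARS.items() if os.environ.get(var)]
    # Ollama needs no key; detect it by probing the server instead
    host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
    try:
        urllib.request.urlopen(f"{host}/api/tags", timeout=1)
        available.append("ollama")
    except OSError:
        pass
    return available
```
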
## Routing preferences

Each task type has a default preference order (cheapest first):

| Task | Preference |
|------|------------|
| Vision | Gemini Flash → GPT-4o-mini → Claude Haiku → Ollama |
| Chat | Claude Haiku → GPT-4o-mini → Gemini Flash → Ollama |
| Transcription | Local Whisper → Whisper-1 → Gemini Flash |

Ollama acts as the last-resort fallback: if no cloud API keys are set but Ollama is running, it is used automatically.

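A minimal sketch of this routing (illustrative, not the actual code; the model identifiers are taken from examples elsewhere on this page):

```python
# Pairs are (provider, model), ordered cheapest-first per the table above.
PREFERENCES = {
    "vision": [("gemini", "gemini-2.0-flash"), ("openai", "gpt-4o-mini"),
               ("anthropic", "claude-haiku-3-5-20241022"), ("ollama", "llava")],
    "chat": [("anthropic", "claude-haiku-3-5-20241022"), ("openai", "gpt-4o-mini"),
             ("gemini", "gemini-2.0-flash"), ("ollama", "llama3.2")],
}

def pick_model(task: str, configured: set[str]) -> str:
    # Walk the preference list and return the first model whose provider
    # is configured; Ollama sits last as the offline fallback.
    for provider, model in PREFERENCES[task]:
        if provider in configured:
            return model
    raise RuntimeError(f"No provider available for task: {task}")
```
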
## Manual override

```python
# Pin specific models per task
pm = ProviderManager(
    vision_model="gpt-4o",
    chat_model="claude-sonnet-4-20250514",
)

# Force every task onto a specific provider
pm = ProviderManager(provider="openai")

# Use cheap models for bulk processing
pm = ProviderManager(
    chat_model="claude-haiku-3-5-20241022",
    vision_model="gemini-2.0-flash",
)

# Or use Ollama for fully offline processing
pm = ProviderManager(provider="ollama")

# Use Azure OpenAI
pm = ProviderManager(provider="azure")

# Use Together AI for open-source models
pm = ProviderManager(provider="together", chat_model="meta-llama/Llama-3.3-70B-Instruct-Turbo")
```

## BaseProvider interface

All providers implement:

```python
class BaseProvider(ABC):
    def chat(self, messages, max_tokens, temperature) -> str: ...
    def analyze_image(self, image_path, prompt, max_tokens) -> str: ...
    def transcribe_audio(self, audio_path) -> dict: ...
    def list_models(self) -> List[ModelInfo]: ...
```
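
For reference, a hypothetical minimal chat-only provider against Ollama's OpenAI-compatible endpoint could look like the sketch below. The class name, defaults, and error handling are illustrative and not part of PlanOpticon:

```python
import json
import urllib.request

class MinimalOllamaProvider:
    """Hypothetical sketch of a chat-only provider using Ollama's
    OpenAI-compatible /v1/chat/completions endpoint."""

    def __init__(self, host: str = "http://localhost:11434", model: str = "llama3.2"):
        self.url = f"{host}/v1/chat/completions"
        self.model = model

    def chat(self, messages: list, max_tokens: int = 1024, temperature: float = 0.2) -> str:
        payload = {
            "model": self.model,
            "messages": messages,  # OpenAI-style [{"role": ..., "content": ...}]
            "max_tokens": max_tokens,
            "temperature": temperature,
        }
        req = urllib.request.Request(
            self.url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires a running Ollama server with llama3.2 pulled):
# print(MinimalOllamaProvider().chat([{"role": "user", "content": "Hello"}]))
```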