# FAQ & Troubleshooting

## Frequently Asked Questions

### Do I need an API key?

You need at least one of:

- **Cloud API key**: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GEMINI_API_KEY`
- **Local Ollama**: Install [Ollama](https://ollama.com), pull a model, and run `ollama serve`

Some features work without any AI provider:

- `planopticon query stats` — direct knowledge graph queries
- `planopticon query "entities --type person"` — structured entity lookups
- `planopticon export markdown` — document generation from existing KG (7 document types, no LLM)
- `planopticon kg inspect` — knowledge graph statistics
- `planopticon kg convert` — format conversion

### How much does it cost?

PlanOpticon defaults to cheap models to minimize costs:

| Task | Default model | Approximate cost |
|------|--------------|-----------------|
| Chat/analysis | Claude Haiku / GPT-4o-mini | ~$0.25-0.50 per 1M tokens |
| Vision (diagrams) | Gemini Flash / GPT-4o-mini | ~$0.10-0.50 per 1M tokens |
| Transcription | Local Whisper (free) / Whisper-1 | $0.006/minute |

A typical 1-hour meeting costs roughly $0.05-0.15 to process with default models. Use `--provider ollama` for zero cost.

### Can I run fully offline?

Yes. Install Ollama and local Whisper:

```bash
ollama pull llama3.2
ollama pull llava
pip install planopticon[gpu]
planopticon analyze -i video.mp4 -o ./output --provider ollama
```

No data leaves your machine.

### What video formats are supported?

Any format FFmpeg can decode:

- MP4, MKV, AVI, MOV, WebM, FLV, WMV, M4V
- Container formats with common codecs (H.264, H.265, VP8, VP9, AV1)

### What document formats can I ingest?

- **PDF** — text extraction via pymupdf or pdfplumber
- **Markdown** — parsed with heading-based chunking
- **Plain text** — paragraph-based chunking with overlap

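Paragraph-based chunking with overlap works roughly like the following sketch. The character budget and overlap count here are illustrative defaults, not PlanOpticon's actual values:

```python
def chunk_paragraphs(text, max_chars=1200, overlap=1):
    """Greedily pack paragraphs into chunks, repeating the last
    `overlap` paragraphs at the start of the next chunk so context
    carries across chunk boundaries."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current, size = [], [], 0
    for para in paragraphs:
        if current and size + len(para) > max_chars:
            chunks.append("\n\n".join(current))
            # carry trailing paragraphs forward, unless the chunk was
            # already a single paragraph (avoids duplicating whole chunks)
            current = current[-overlap:] if len(current) > overlap else []
            size = sum(len(p) for p in current)
        current.append(para)
        size += len(para)
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

The overlap means a sentence split across a chunk boundary still appears in full in at least one chunk, which improves downstream entity extraction.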
### How does the knowledge graph work?

PlanOpticon extracts entities (people, technologies, concepts, decisions) and relationships from your content. These are stored in a SQLite database (`knowledge_graph.db`) with zero external dependencies. Entities are automatically classified using a planning taxonomy (goals, requirements, risks, tasks, milestones).

When you process multiple sources, entities are merged using fuzzy name matching (0.85 threshold) with type conflict resolution and provenance tracking.

### Can I use PlanOpticon with my existing Obsidian vault?

Yes, in both directions:

```bash
# Ingest an Obsidian vault into PlanOpticon
planopticon ingest ~/Obsidian/MyVault --output ./kb --recursive

# Export PlanOpticon knowledge to an Obsidian vault
planopticon export obsidian --input ./kb --output ~/Obsidian/PlanOpticon
```

The Obsidian export produces proper YAML frontmatter, wiki-links (`[[Entity Name]]`), and tag pages.

### How do I add my own AI provider?

Create a provider module, extend `BaseProvider`, and register it:

```python
from video_processor.providers.base import BaseProvider, ProviderRegistry

class MyProvider(BaseProvider):
    provider_name = "myprovider"

    def chat(self, messages, max_tokens=4096, temperature=0.7, model=None):
        # Your implementation
        ...

ProviderRegistry.register(
    name="myprovider",
    provider_class=MyProvider,
    env_var="MY_PROVIDER_API_KEY",
    model_prefixes=["my-"],
    default_models={"chat": "my-model-v1", "vision": "", "audio": ""},
)
```

See the [Contributing guide](contributing.md) for details.

---

## Troubleshooting

### Authentication errors

#### "No auth method available for zoom"

You need to set credentials before authenticating:

```bash
export ZOOM_CLIENT_ID="your-client-id"
export ZOOM_CLIENT_SECRET="your-client-secret"
planopticon auth zoom
```

The error message tells you which environment variables to set. Each service requires different credentials — see the [Authentication guide](guide/authentication.md).

#### "Token expired" or "401 Unauthorized"

Your saved token has expired and auto-refresh failed. Re-authenticate:

```bash
planopticon auth google  # or whichever service you use
```

To clear a stale token:

```bash
planopticon auth google --logout
planopticon auth google
```

Tokens are stored in `~/.planopticon/{service}_token.json`.

#### OAuth redirect errors

If the browser-based OAuth flow fails, check:

1. Your client ID and secret are correct
2. The redirect URI in your OAuth app matches PlanOpticon's default (`urn:ietf:wg:oauth:2.0:oob`)
3. The OAuth app has the required scopes enabled

### Provider errors

#### "ANTHROPIC_API_KEY not set"

Set at least one provider's API key:

```bash
export OPENAI_API_KEY="sk-..."
# or
export ANTHROPIC_API_KEY="sk-ant-..."
# or
export GEMINI_API_KEY="AI..."
```

Or use a `.env` file in your project directory.

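A minimal `.env` holding the same variables might look like this (values are placeholders; one provider is enough):

```shell
# .env — keep this file out of version control
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```

Unlike `export` lines in your shell profile, a `.env` file scopes the keys to one project directory.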
#### "Unexpected role system" (Anthropic)

This was a bug in older versions where system messages were passed in the messages array instead of as a top-level parameter. Update to v0.4.0 or later.

#### "Model not found" or "Invalid model"

Check available models:

```bash
planopticon list-models
```

Common model name issues:

- Anthropic: use `claude-haiku-4-5-20251001`, not `claude-haiku`
- OpenAI: use `gpt-4o-mini`, not `gpt4o-mini`

#### Rate limiting / 429 errors

PlanOpticon doesn't currently implement automatic retry. If you hit rate limits:

1. Use a different provider: `--provider gemini`
2. Use cheaper/faster models: `--chat-model gpt-4o-mini`
3. Reduce processing depth: `--depth basic`
4. Use Ollama for zero rate limits: `--provider ollama`

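If you drive provider APIs from your own scripts, a generic exponential-backoff wrapper can absorb transient 429s until retry support lands. This is a sketch, not a PlanOpticon API; substitute your client's actual rate-limit exception for `RuntimeError`:

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the delay each
    attempt and adding jitter. Re-raises after max_attempts failures."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:  # stand-in for your client's 429 exception
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

The jitter matters: without it, parallel workers that hit the limit together retry together and hit it again.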
### Processing errors

#### "FFmpeg not found"

Install FFmpeg:

```bash
# macOS
brew install ffmpeg

# Ubuntu/Debian
sudo apt-get install ffmpeg libsndfile1

# Windows
# Download from https://ffmpeg.org/download.html and add to PATH
```

#### "Audio extraction failed: no audio track found"

The video file has no audio track. PlanOpticon will skip transcription and continue with frame analysis only.

#### "Frame extraction memory error"

For very long videos, frame extraction can use significant memory. Use the `--max-memory-mb` safety valve:

```bash
planopticon analyze -i long-video.mp4 -o ./output --max-memory-mb 2048
```

Or reduce the sampling rate:

```bash
planopticon analyze -i long-video.mp4 -o ./output --sampling-rate 0.25
```

#### Batch processing — one video fails

Individual video failures don't stop the batch. Failed videos are logged in the batch manifest with error details. Check `batch_manifest.json` for the specific error.

### Knowledge graph issues

#### "No knowledge graph loaded" in companion

The companion auto-discovers knowledge graphs by looking for `knowledge_graph.db` or `knowledge_graph.json` in the current directory and parent directories. Either:

1. `cd` to the directory containing your knowledge graph
2. Specify the path explicitly: `planopticon companion --kb ./path/to/kb`

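The upward search behaves roughly like this sketch (an illustration of the discovery rule above, not PlanOpticon's actual code):

```python
from pathlib import Path

def find_knowledge_graph(start="."):
    """Walk from `start` up to the filesystem root, returning the first
    knowledge_graph.db or knowledge_graph.json found, else None."""
    directory = Path(start).resolve()
    for candidate_dir in [directory, *directory.parents]:
        for name in ("knowledge_graph.db", "knowledge_graph.json"):
            path = candidate_dir / name
            if path.is_file():
                return path
    return None
```

So running the companion from any subdirectory of your project still finds the graph at the project root, the same way `git` finds `.git`.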
#### Empty or sparse knowledge graph

Common causes:

1. **Too few entities extracted**: Try `--depth comprehensive` for deeper analysis
2. **Short or low-quality transcript**: Check `transcript/transcript.txt` — poor audio produces poor transcription
3. **Wrong provider**: Some models extract entities better than others. Try `--provider openai --chat-model gpt-4o` for higher quality

#### Duplicate entities after merge

The fuzzy matching threshold is 0.85 (SequenceMatcher ratio). If you're seeing duplicates, the names are too dissimilar for automatic matching. You can inspect and merge them manually:

```bash
planopticon kg inspect ./knowledge_graph.db
planopticon query "entities --name python"
```

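You can reproduce the 0.85 threshold with Python's standard-library `difflib.SequenceMatcher` to see how close two entity names actually score. Whether PlanOpticon normalizes case or whitespace first isn't specified here, so this is a plain comparison:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """SequenceMatcher ratio: 1.0 for identical strings, 0.0 for
    disjoint ones. PlanOpticon merges entities when this is >= 0.85."""
    return SequenceMatcher(None, a, b).ratio()

# "PostgreSQL" vs "Postgres" shares only the 7-character prefix
# "Postgre" (case-sensitive), so the ratio is 2*7/18 ≈ 0.78,
# below the threshold: these stay separate entities.
```

Name pairs scoring just under 0.85 are the ones you'll need to merge by hand.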
### Companion / REPL issues

#### Chat gives generic advice instead of project-specific answers

The companion needs both a knowledge graph and an LLM provider. Check:

```
planopticon> /status
```

If it says "KG: not loaded" or "Provider: none", fix those first:

```
planopticon> /provider openai
planopticon> /model gpt-4o-mini
```

#### Companion is slow

The companion makes an LLM API call for every chat message. To speed things up:

1. Use a faster model: `/model gpt-4o-mini` or `/model claude-haiku-4-5-20251001`
2. Use direct queries instead of chat: `/entities`, `/search`, and `/neighbors` don't need an LLM
3. Run Ollama locally for lower latency: `/provider ollama`

### Export issues

#### Obsidian export has broken links

Make sure your Obsidian vault has wiki-links enabled (Settings > Files & Links > Use [[Wikilinks]]). PlanOpticon exports use wiki-link syntax by default.

#### PDF export fails

PDF export requires the `pdf` extra:

```bash
pip install planopticon[pdf]
```

This installs WeasyPrint, which has system dependencies. On macOS:

```bash
brew install pango
```

On Ubuntu:

```bash
sudo apt-get install libpango1.0-dev
```
