# FAQ & Troubleshooting

## Frequently Asked Questions

### Do I need an API key?

You need at least one of:

- **Cloud API key**: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GEMINI_API_KEY`
- **Local Ollama**: Install [Ollama](https://ollama.com), pull a model, and run `ollama serve`

Some features work without any AI provider:

- `planopticon query stats` — direct knowledge graph queries
- `planopticon query "entities --type person"` — structured entity lookups
- `planopticon export markdown` — document generation from existing KG (7 document types, no LLM)
- `planopticon kg inspect` — knowledge graph statistics
- `planopticon kg convert` — format conversion

### How much does it cost?

PlanOpticon defaults to cheap models to minimize costs:

| Task | Default model | Approximate cost |
|------|--------------|-----------------|
| Chat/analysis | Claude Haiku / GPT-4o-mini | ~$0.25-0.50 per 1M tokens |
| Vision (diagrams) | Gemini Flash / GPT-4o-mini | ~$0.10-0.50 per 1M tokens |
| Transcription | Local Whisper (free) / Whisper-1 | $0.006/minute |

A typical 1-hour meeting costs roughly $0.05-0.15 to process with default models. Use `--provider ollama` for zero cost.

### Can I run fully offline?

Yes. Install Ollama and local Whisper:

```bash
ollama pull llama3.2
ollama pull llava
pip install "planopticon[gpu]"
planopticon analyze -i video.mp4 -o ./output --provider ollama
```

No data leaves your machine.

### What video formats are supported?

Any format FFmpeg can decode:

- MP4, MKV, AVI, MOV, WebM, FLV, WMV, M4V
- Container formats with common codecs (H.264, H.265, VP8, VP9, AV1)

### What document formats can I ingest?

- **PDF** — text extraction via pymupdf or pdfplumber
- **Markdown** — parsed with heading-based chunking
- **Plain text** — paragraph-based chunking with overlap
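
For illustration, paragraph-based chunking with overlap can be sketched like this (a simplified stand-in, not PlanOpticon's actual implementation; the real chunk sizes and overlap are internal details):

```python
def chunk_text(text: str, max_chars: int = 1000, overlap: int = 1) -> list[str]:
    """Group paragraphs into chunks of up to max_chars characters,
    repeating the last `overlap` paragraphs of a chunk at the start
    of the next so context carries across chunk boundaries."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    for para in paragraphs:
        # Start a new chunk when adding this paragraph would overflow
        if current and len("\n\n".join(current + [para])) > max_chars:
            chunks.append("\n\n".join(current))
            current = current[-overlap:]  # carry the overlap forward
        current.append(para)
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

The overlap means a sentence split across a chunk boundary still appears with its surrounding context in the next chunk.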

### How does the knowledge graph work?

PlanOpticon extracts entities (people, technologies, concepts, decisions) and relationships from your content. These are stored in a SQLite database (`knowledge_graph.db`) with zero external dependencies. Entities are automatically classified using a planning taxonomy (goals, requirements, risks, tasks, milestones).
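
Because the store is plain SQLite, you can inspect it with the Python standard library alone. A minimal sketch (it assumes nothing about the schema, it only lists whatever tables exist):

```python
import sqlite3

def list_tables(db_path: str) -> list[str]:
    """Open a knowledge-graph database read-only and list its tables.
    Nothing about the schema is assumed; this only shows that the file
    is ordinary SQLite you can query directly."""
    con = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        )
        return [name for (name,) in rows]
    finally:
        con.close()
```

The `sqlite3` command-line shell (`.schema`, `.tables`) works on the file too.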

When you process multiple sources, entities are merged using fuzzy name matching (0.85 threshold) with type conflict resolution and provenance tracking.
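
The comparison is a `SequenceMatcher` ratio from Python's standard library. A minimal sketch of the matching idea (the lowercasing here is an illustrative assumption, not necessarily what PlanOpticon does):

```python
from difflib import SequenceMatcher

def is_same_entity(a: str, b: str, threshold: float = 0.85) -> bool:
    """Fuzzy name match: two entity names are merge candidates when
    their similarity ratio meets the threshold. Lowercasing first is
    an illustrative choice, not necessarily PlanOpticon's behavior."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
```

For example, `"PostgreSQL"` and `"Postgres"` score about 0.89 and would merge, while clearly different names fall well below the threshold.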

### Can I use PlanOpticon with my existing Obsidian vault?

Yes, in both directions:

```bash
# Ingest an Obsidian vault into PlanOpticon
planopticon ingest ~/Obsidian/MyVault --output ./kb --recursive

# Export PlanOpticon knowledge to an Obsidian vault
planopticon export obsidian --input ./kb --output ~/Obsidian/PlanOpticon
```

The Obsidian export produces proper YAML frontmatter, wiki-links (`[[Entity Name]]`), and tag pages.
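
An exported entity note might look roughly like this (illustrative only; the exact frontmatter fields, entity names, and link targets depend on your knowledge graph and version):

```markdown
---
type: technology
tags: [planopticon, entity]
---

# PostgreSQL

Mentioned in [[2024-03-sprint-planning]] alongside [[Redis]].
```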

### How do I add my own AI provider?

Create a provider module, extend `BaseProvider`, and register it:

```python
from video_processor.providers.base import BaseProvider, ProviderRegistry

class MyProvider(BaseProvider):
    provider_name = "myprovider"

    def chat(self, messages, max_tokens=4096, temperature=0.7, model=None):
        # Your implementation
        ...

ProviderRegistry.register(
    name="myprovider",
    provider_class=MyProvider,
    env_var="MY_PROVIDER_API_KEY",
    model_prefixes=["my-"],
    default_models={"chat": "my-model-v1", "vision": "", "audio": ""},
)
```

See the [Contributing guide](contributing.md) for details.

---

## Troubleshooting

### Authentication errors

#### "No auth method available for zoom"

You need to set credentials before authenticating:

```bash
export ZOOM_CLIENT_ID="your-client-id"
export ZOOM_CLIENT_SECRET="your-client-secret"
planopticon auth zoom
```

The error message tells you which environment variables to set. Each service requires different credentials — see the [Authentication guide](guide/authentication.md).

#### "Token expired" or "401 Unauthorized"

Your saved token has expired and auto-refresh failed. Re-authenticate:

```bash
planopticon auth google  # or whatever service
```

To clear a stale token:

```bash
planopticon auth google --logout
planopticon auth google
```

Tokens are stored in `~/.planopticon/{service}_token.json`.

#### OAuth redirect errors

If the browser-based OAuth flow fails, check:

1. Your client ID and secret are correct
2. The redirect URI in your OAuth app matches PlanOpticon's default (`urn:ietf:wg:oauth:2.0:oob`)
3. The OAuth app has the required scopes enabled

### Provider errors

#### "ANTHROPIC_API_KEY not set"

Set at least one provider's API key:

```bash
export OPENAI_API_KEY="sk-..."
# or
export ANTHROPIC_API_KEY="sk-ant-..."
# or
export GEMINI_API_KEY="AI..."
```

Or use a `.env` file in your project directory.
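
For example, a minimal `.env` (keep it out of version control):

```bash
# .env — loaded from the working directory
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```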

#### "Unexpected role system" (Anthropic)

This was a bug in older versions where system messages were passed in the messages array instead of as a top-level parameter. Update to v0.4.0 or later.

#### "Model not found" or "Invalid model"

Check available models:

```bash
planopticon list-models
```

Common model name issues:
- Anthropic: use `claude-haiku-4-5-20251001`, not `claude-haiku`
- OpenAI: use `gpt-4o-mini`, not `gpt4o-mini`

#### Rate limiting / 429 errors

PlanOpticon doesn't currently implement automatic retry. If you hit rate limits:

1. Use a different provider: `--provider gemini`
2. Use cheaper/faster models: `--chat-model gpt-4o-mini`
3. Reduce processing depth: `--depth basic`
4. Use Ollama for zero rate limits: `--provider ollama`

### Processing errors

#### "FFmpeg not found"

Install FFmpeg:

```bash
# macOS
brew install ffmpeg

# Ubuntu/Debian
sudo apt-get install ffmpeg libsndfile1

# Windows
# Download from https://ffmpeg.org/download.html and add to PATH
```

#### "Audio extraction failed: no audio track found"

The video file has no audio track. PlanOpticon will skip transcription and continue with frame analysis only.

#### "Frame extraction memory error"

For very long videos, frame extraction can use significant memory. Use the `--max-memory-mb` safety valve:

```bash
planopticon analyze -i long-video.mp4 -o ./output --max-memory-mb 2048
```

Or reduce the sampling rate:

```bash
planopticon analyze -i long-video.mp4 -o ./output --sampling-rate 0.25
```

#### Batch processing — one video fails

Individual video failures don't stop the batch. Failed videos are logged in the batch manifest with error details. Check `batch_manifest.json` for the specific error.
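
A failed entry looks roughly like this (illustrative sketch; the exact fields depend on your version, and the file names here are made up):

```json
{
  "videos": [
    { "input": "standup-monday.mp4", "status": "completed" },
    {
      "input": "standup-tuesday.mp4",
      "status": "failed",
      "error": "Audio extraction failed: no audio track found"
    }
  ]
}
```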

### Knowledge graph issues

#### "No knowledge graph loaded" in companion

The companion auto-discovers knowledge graphs by looking for `knowledge_graph.db` or `knowledge_graph.json` in the current directory and parent directories. Either:

1. `cd` to the directory containing your knowledge graph
2. Specify the path explicitly: `planopticon companion --kb ./path/to/kb`

#### Empty or sparse knowledge graph

Common causes:

1. **Too few entities extracted**: Try `--depth comprehensive` for deeper analysis
2. **Short or low-quality transcript**: Check `transcript/transcript.txt` — poor audio produces poor transcription
3. **Wrong provider**: Some models extract entities better than others. Try `--provider openai --chat-model gpt-4o` for higher quality

#### Duplicate entities after merge

The fuzzy matching threshold is 0.85 (SequenceMatcher ratio). If you're getting duplicates, the names are too different for automatic matching. You can manually inspect and merge:

```bash
planopticon kg inspect ./knowledge_graph.db
planopticon query "entities --name python"
```

### Companion / REPL issues

#### Chat gives generic advice instead of project-specific answers

The companion needs both a knowledge graph and an LLM provider. Check:

```
planopticon> /status
```

If it says "KG: not loaded" or "Provider: none", fix those first:

```
planopticon> /provider openai
planopticon> /model gpt-4o-mini
```

#### Companion is slow

The companion makes LLM API calls for chat messages. To speed things up:

1. Use a faster model: `/model gpt-4o-mini` or `/model claude-haiku-4-5-20251001`
2. Use direct queries instead of chat: `/entities`, `/search`, `/neighbors` don't need an LLM
3. Use Ollama locally for lower latency: `/provider ollama`

### Export issues

#### Obsidian export has broken links

Make sure your Obsidian vault has wiki-links enabled (Settings > Files & Links > Use [[Wikilinks]]). PlanOpticon exports use wiki-link syntax by default.

#### PDF export fails

PDF export requires the `pdf` extra:

```bash
pip install "planopticon[pdf]"
```

This installs WeasyPrint, which has system dependencies. On macOS:

```bash
brew install pango
```

On Ubuntu:

```bash
sudo apt-get install libpango1.0-dev
```