# PlanOpticon

[![CI](https://github.com/ConflictHQ/PlanOpticon/actions/workflows/ci.yml/badge.svg)](https://github.com/ConflictHQ/PlanOpticon/actions/workflows/ci.yml)
[![PyPI](https://img.shields.io/pypi/v/planopticon)](https://pypi.org/project/planopticon/)
[![Python](https://img.shields.io/pypi/pyversions/planopticon)](https://pypi.org/project/planopticon/)
[![License](https://img.shields.io/github/license/ConflictHQ/PlanOpticon)](LICENSE)
[![Docs](https://img.shields.io/badge/docs-planopticon.dev-blue)](https://planopticon.dev)

**AI-powered video analysis, knowledge extraction, and planning.**

PlanOpticon processes video recordings, documents, and 20+ online sources into structured knowledge graphs, then helps you plan with an AI agent and an interactive companion. It auto-discovers models across 15+ AI providers, runs fully offline with Ollama, and produces rich multi-format output.

## Features

- **15+ AI providers** -- OpenAI, Anthropic, Gemini, Ollama, Azure, Together, Fireworks, Cerebras, xAI, Bedrock, Vertex, Mistral, Cohere, AI21, HuggingFace, Qianfan, and LiteLLM. Defaults to cheap models (Haiku, GPT-4o-mini, Gemini Flash).
- **20+ source connectors** -- YouTube, web pages, GitHub, Reddit, HackerNews, RSS, podcasts, arXiv, S3, Google Workspace, Microsoft 365, Obsidian, Notion, Apple Notes, Zoom, Teams, Google Meet, and more.
- **Planning agent** -- 11 skills including project plans, PRDs, roadmaps, task breakdowns, and GitHub integration.
- **Interactive companion** -- Chat REPL with 16 slash commands, auto-discovery of workspace knowledge, and runtime provider/model switching.
- **Knowledge graphs** -- SQLite-backed (zero external deps), entity extraction with planning taxonomy (goals, requirements, risks, tasks, milestones), merge and dedup across sources.
- **Smart video analysis** -- Change-detection frame extraction, face filtering, diagram classification, action item detection, checkpoint/resume.
- **Document ingestion** -- PDF, Markdown, and plaintext pipelines feed the same knowledge graph.
- **Export everywhere** -- Markdown docs (7 types, no LLM required), Obsidian vaults, Notion markdown, GitHub wiki with push, PlanOpticonExchange JSON interchange, HTML/PDF reports, Mermaid diagrams.
- **OAuth-first auth** -- Unified OAuth manager for Google, Dropbox, Zoom, Notion, GitHub, and Microsoft with saved-token / PKCE / API-key fallback chain.
- **Batch processing** -- Process entire folders with merged knowledge graphs and cross-referencing.

## Quick Start

```bash
# Install
pip install planopticon

# Analyze a video
planopticon analyze -i meeting.mp4 -o ./output

# Ingest a document
planopticon ingest -i spec.pdf -o ./output

# Fetch from a source
planopticon fetch youtube "https://youtube.com/watch?v=..." -o ./output

# Process a folder of videos
planopticon batch -i ./recordings -o ./output --title "Weekly Meetings"

# Query the knowledge graph
planopticon query
planopticon query "entities --type technology"

# See available AI models
planopticon list-models
```

## Planning Agent

Run AI-powered planning skills against your knowledge base:

```bash
# Generate a project plan from extracted knowledge
planopticon agent "Create a project plan" --kb ./results

# Build a PRD
planopticon agent "Write a PRD for the authentication system" --kb ./results

# Break down tasks
planopticon agent "Break this into tasks and estimate effort" --kb ./results
```

11 skills: `project_plan`, `prd`, `roadmap`, `task_breakdown`, `github_integration`, `requirements_chat`, `doc_generator`, `artifact_export`, `cli_adapter`, `notes_export`, `wiki_generator`.

## Interactive Companion

A chat REPL that auto-discovers knowledge graphs, videos, and docs in your workspace:

```bash
# Launch the companion
planopticon companion
# or
planopticon --chat
```

16 slash commands: `/help`, `/status`, `/skills`, `/entities`, `/search`, `/neighbors`, `/export`, `/analyze`, `/ingest`, `/auth`, `/provider`, `/model`, `/run`, `/plan`, `/prd`, `/tasks`.

Switch providers and models at runtime, explore your knowledge graph interactively, or chat with any configured LLM.

## Source Connectors

| Category | Sources |
|----------|---------|
| Media | YouTube, Web, Podcasts, RSS |
| Code & Community | GitHub, Reddit, HackerNews, arXiv |
| Cloud Storage | S3, Google Drive, Dropbox |
| Google Workspace | Docs, Sheets, Slides (via gws CLI) |
| Microsoft 365 | SharePoint, OneDrive (via m365 CLI) |
| Notes | Obsidian, Notion, Apple Notes, OneNote, Google Keep, Logseq |
| Meetings | Zoom (OAuth), Teams, Google Meet |

## Export & Documents

Generate documents from your knowledge graph without an LLM:

```bash
planopticon export summary -o ./docs
planopticon export meeting-notes -o ./docs
planopticon export glossary -o ./docs
```

7 document types: `summary`, `meeting-notes`, `glossary`, `relationship-map`, `status-report`, `entity-index`, `csv`.

Additional export targets:
- **Obsidian** -- YAML frontmatter + wiki-links vault
- **Notion** -- Compatible markdown
- **GitHub Wiki** -- Generate and push directly
- **PlanOpticonExchange** -- Canonical JSON interchange with merge/dedup
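
To illustrate the merge/dedup idea, here is a minimal sketch that merges two graph dicts and deduplicates entities case-insensitively by name. The top-level `entities` list and its `name`/`type` fields are assumptions for illustration, not the actual PlanOpticonExchange schema:

```python
import json

def merge_exchange(graphs):
    """Merge knowledge-graph dicts, deduplicating entities by name.

    Assumes a hypothetical schema: a top-level "entities" list of
    {"name": ..., "type": ...} dicts. The real PlanOpticonExchange
    format may differ.
    """
    seen = {}
    for graph in graphs:
        for entity in graph.get("entities", []):
            key = entity["name"].lower()
            # Keep the first occurrence; later case-variant duplicates are dropped
            seen.setdefault(key, entity)
    return {"entities": list(seen.values())}

a = {"entities": [{"name": "OAuth", "type": "technology"}]}
b = {"entities": [{"name": "oauth", "type": "technology"},
                  {"name": "SQLite", "type": "technology"}]}
merged = merge_exchange([a, b])
print(json.dumps(merged))
```

A real interchange merge would also need to reconcile relationships and source provenance, not just entity names.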

## Local Run

PlanOpticon runs entirely offline with Ollama -- no API keys, no cloud, no cost.

> **13.2 hours of video content analyzed, knowledge-graphed, and summarized in ~25 hours of processing time, entirely on local hardware, for free.**

18 meeting recordings processed on a single machine using `llava` (vision), `qwen3:30b` (chat), and `whisper-large` (transcription via Apple Silicon GPU):

| Metric | Value |
|--------|-------|
| Recordings | 18 |
| Video duration | 13.2 hours |
| Processing time | 24.9 hours |
| Frames extracted | 1,783 |
| API calls (local) | 1,841 |
| Tokens processed | 4.87M |
| Total cost | **$0.00** |
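
For scale, the table's figures work out to just under 2 hours of compute per hour of video:

```python
# Throughput implied by the run stats above
video_hours = 13.2
processing_hours = 24.9
tokens = 4.87e6

ratio = processing_hours / video_hours       # compute time per hour of video
tokens_per_hour = tokens / processing_hours  # local token throughput

print(f"{ratio:.2f}x real time, {tokens_per_hour / 1000:.0f}k tokens/hour")
# -> 1.89x real time, 196k tokens/hour
```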

```bash
# Fully local analysis -- no API keys needed, just Ollama running
planopticon analyze -i meeting.mp4 -o ./output \
  --provider ollama \
  --vision-model llava:latest \
  --chat-model qwen3:30b
```

## Installation

### From PyPI

```bash
pip install planopticon

# With all extras (PDF, cloud sources, GPU)
pip install "planopticon[all]"
```

### From Source

```bash
git clone https://github.com/ConflictHQ/PlanOpticon.git
cd PlanOpticon
pip install -e ".[dev]"
```

### Binary Download

Download standalone binaries (no Python required) from [GitHub Releases](https://github.com/ConflictHQ/PlanOpticon/releases).

### Requirements

- Python 3.10+
- FFmpeg (`brew install ffmpeg` / `apt install ffmpeg`)
- At least one API key (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GEMINI_API_KEY`) **or** [Ollama](https://ollama.com) running locally

## Output Structure

```
output/
├── manifest.json             # Single source of truth
├── transcript/
│   ├── transcript.json       # Full transcript with timestamps
│   ├── transcript.txt        # Plain text
│   └── transcript.srt        # Subtitles
├── frames/                   # Content frames (people filtered out)
├── diagrams/                 # Detected diagrams + mermaid code
├── captures/                 # Screengrab fallbacks
└── results/
    ├── analysis.md           # Markdown report
    ├── analysis.html         # HTML report
    ├── analysis.pdf          # PDF report
    ├── knowledge_graph.db    # SQLite knowledge graph
    ├── knowledge_graph.json  # JSON export
    ├── key_points.json       # Extracted key points
    └── action_items.json     # Tasks and follow-ups
```
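
If you want to consume these artifacts programmatically, here is a minimal sketch, assuming `key_points.json` and `action_items.json` each hold a JSON list -- an assumption about the file layout, not something this README documents:

```python
import json
import tempfile
from pathlib import Path

def load_results(output_dir):
    """Load JSON artifacts from an analysis output folder.

    Assumes each file contains a JSON list; the actual schema
    may differ from this sketch.
    """
    results = Path(output_dir) / "results"
    key_points = json.loads((results / "key_points.json").read_text())
    actions = json.loads((results / "action_items.json").read_text())
    return key_points, actions

# Demo against a synthetic output folder (stand-in for a real run)
with tempfile.TemporaryDirectory() as tmp:
    results = Path(tmp) / "results"
    results.mkdir()
    (results / "key_points.json").write_text(json.dumps(["decide on OAuth flow"]))
    (results / "action_items.json").write_text(json.dumps(["file GitHub issue"]))
    points, actions = load_results(tmp)
    print(points, actions)
```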

## Processing Depth

| Depth | What you get |
|-------|-------------|
| `basic` | Transcription, key points, action items |
| `standard` | + Diagram extraction (10 frames), knowledge graph, full reports |
| `comprehensive` | + More frames analyzed (20), deeper extraction |

## Documentation

Full documentation is available at [planopticon.dev](https://planopticon.dev).

## License

MIT License -- Copyright (c) 2026 CONFLICT LLC