# Contributing

## Development setup

```bash
git clone https://github.com/ConflictHQ/PlanOpticon.git
cd PlanOpticon
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
```

## Running tests

PlanOpticon has 822+ tests covering providers, pipeline stages, document processors, knowledge graph operations, exporters, skills, and CLI commands.

```bash
# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=video_processor --cov-report=html

# Run a specific test file
pytest tests/test_models.py -v

# Run tests matching a keyword
pytest tests/ -k "test_knowledge_graph" -v

# Run only fast tests (skip slow integration tests)
pytest tests/ -m "not slow" -v
```
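
The `slow` marker used above must be registered with pytest to avoid unknown-marker warnings. An illustrative `pyproject.toml` fragment (the project's actual marker declarations may differ):

```toml
[tool.pytest.ini_options]
markers = [
    "slow: long-running integration tests",
]
```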

### Test conventions

- All tests live in the `tests/` directory, mirroring the `video_processor/` package structure
- Test files are named `test_<module>.py`
- Use `pytest` as the test runner -- do not use `unittest.TestCase` unless necessary for specific setup/teardown patterns
- Mock external API calls. Never make real API calls in tests. Use `unittest.mock.patch` or `pytest-mock` fixtures to mock provider responses.
- Use the `tmp_path` pytest fixture for any tests that write files to disk
- Fixtures shared across test files go in `conftest.py`
- For testing CLI commands, use `click.testing.CliRunner`
- For testing provider implementations, mock at the HTTP client level (e.g., patch `requests.post` or the provider's SDK client)
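
For example, the `tmp_path` convention looks like this in practice (illustrative sketch; `save_transcript` is a hypothetical helper, not part of the codebase):

```python
from pathlib import Path


def save_transcript(text: str, out_dir: Path) -> Path:
    """Hypothetical helper that writes a transcript file to out_dir."""
    out_file = out_dir / "transcript.txt"
    out_file.write_text(text)
    return out_file


def test_save_transcript(tmp_path):
    # pytest injects tmp_path as a fresh per-test directory, so the test
    # never writes into the working directory or the repo checkout.
    result = save_transcript("hello", tmp_path)
    assert result.read_text() == "hello"
    assert result.parent == tmp_path
```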

### Mocking patterns

```python
# Mocking a provider's chat method
from unittest.mock import MagicMock, patch


def test_key_point_extraction():
    pm = MagicMock()
    pm.chat.return_value = '["Point 1", "Point 2"]'
    result = extract_key_points(pm, "transcript text")
    assert len(result) == 2


# Mocking an external API at the HTTP level
@patch("requests.post")
def test_provider_chat(mock_post):
    mock_post.return_value.json.return_value = {
        "choices": [{"message": {"content": "response"}}]
    }
    provider = OpenAIProvider(api_key="test")
    result = provider.chat([{"role": "user", "content": "hello"}])
    assert result == "response"
```

## Code style

We use:

- **Ruff** for both linting and formatting (100 char line length)
- **mypy** for type checking

Ruff handles all linting (error, warning, pyflakes, and import sorting rules) and formatting in a single tool. There is no need to run Black or isort separately.

```bash
# Lint
ruff check video_processor/

# Format
ruff format video_processor/

# Auto-fix lint issues
ruff check video_processor/ --fix

# Type check
mypy video_processor/ --ignore-missing-imports
```

### Ruff configuration

The project's `pyproject.toml` configures ruff as follows:

```toml
[tool.ruff]
line-length = 100
target-version = "py310"

[tool.ruff.lint]
select = ["E", "F", "W", "I"]
```

The `I` rule set covers import sorting (equivalent to isort), so imports are automatically organized by ruff.

## Project structure

```
PlanOpticon/
├── video_processor/
│   ├── cli/                      # Click CLI commands
│   │   └── commands.py
│   ├── providers/                # LLM/API provider implementations
│   │   ├── base.py               # BaseProvider, ProviderRegistry
│   │   ├── manager.py            # ProviderManager
│   │   ├── discovery.py          # Auto-discovery of available providers
│   │   ├── openai_provider.py
│   │   ├── anthropic_provider.py
│   │   ├── gemini_provider.py
│   │   └── ...                   # 15+ provider implementations
│   ├── sources/                  # Cloud and web source connectors
│   │   ├── base.py               # BaseSource, SourceFile
│   │   ├── google_drive.py
│   │   ├── zoom_source.py
│   │   └── ...                   # 20+ source implementations
│   ├── processors/               # Document processors
│   │   ├── base.py               # DocumentProcessor, registry
│   │   ├── ingest.py             # File/directory ingestion
│   │   ├── markdown_processor.py
│   │   ├── pdf_processor.py
│   │   └── __init__.py           # Auto-registration of built-in processors
│   ├── integrators/              # Knowledge graph and analysis
│   │   ├── knowledge_graph.py    # KnowledgeGraph class
│   │   ├── graph_store.py        # SQLite graph storage
│   │   ├── graph_query.py        # GraphQueryEngine
│   │   ├── graph_discovery.py    # Auto-find knowledge_graph.db
│   │   └── taxonomy.py           # Planning taxonomy classifier
│   ├── agent/                    # Planning agent
│   │   ├── orchestrator.py       # Agent orchestration
│   │   └── skills/               # Skill implementations
│   │       ├── base.py           # Skill ABC, registry, Artifact
│   │       ├── project_plan.py
│   │       ├── prd.py
│   │       ├── roadmap.py
│   │       ├── task_breakdown.py
│   │       ├── doc_generator.py
│   │       ├── wiki_generator.py
│   │       ├── notes_export.py
│   │       ├── artifact_export.py
│   │       ├── github_integration.py
│   │       ├── requirements_chat.py
│   │       ├── cli_adapter.py
│   │       └── __init__.py       # Auto-registration of skills
│   ├── exporters/                # Output format exporters
│   │   ├── __init__.py
│   │   └── markdown.py           # Template-based markdown generation
│   ├── utils/                    # Shared utilities
│   │   ├── export.py             # Multi-format export orchestration
│   │   ├── rendering.py          # Mermaid/chart rendering
│   │   ├── prompt_templates.py
│   │   ├── callbacks.py          # Progress callback helpers
│   │   └── ...
│   ├── exchange.py               # PlanOpticonExchange format
│   ├── pipeline.py               # Main video processing pipeline
│   ├── models.py                 # Pydantic data models
│   └── output_structure.py       # Output directory helpers
├── tests/                        # 822+ tests
├── knowledge-base/               # Local-first graph tools
│   ├── viewer.html               # Self-contained D3.js graph viewer
│   └── query.py                  # Python query script (NetworkX)
├── docs/                         # MkDocs documentation
└── pyproject.toml                # Project configuration
```

See [Architecture Overview](architecture/overview.md) for a more detailed breakdown of module responsibilities.

## Adding a new provider

Providers self-register via `ProviderRegistry.register()` at module level. When the provider module is imported, it registers itself automatically.
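
The mechanism behind this can be sketched in plain Python (an illustrative stand-in, not the actual `ProviderRegistry` implementation, which stores extra metadata such as env vars and model prefixes):

```python
class Registry:
    """Minimal registry: a class-level dict that modules populate on import."""

    _providers: dict = {}

    @classmethod
    def register(cls, name, provider_cls):
        cls._providers[name] = provider_cls

    @classmethod
    def get(cls, name):
        return cls._providers[name]


class DummyProvider:
    provider_name = "dummy"


# Runs at module import time -- merely importing this module
# makes the provider visible to the rest of the application.
Registry.register("dummy", DummyProvider)
```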

1. Create `video_processor/providers/your_provider.py`
2. Extend `BaseProvider` from `video_processor/providers/base.py`
3. Implement the four required methods: `chat()`, `analyze_image()`, `transcribe_audio()`, `list_models()`
4. Call `ProviderRegistry.register()` at module level
5. Add the import to `video_processor/providers/manager.py` in the lazy-import block
6. Add tests in `tests/test_providers.py`

### Example provider skeleton

```python
"""Your provider implementation."""

from video_processor.providers.base import BaseProvider, ModelInfo, ProviderRegistry


class YourProvider(BaseProvider):
    provider_name = "yourprovider"

    def __init__(self, api_key: str | None = None):
        import os

        self.api_key = api_key or os.environ.get("YOUR_API_KEY", "")

    def chat(self, messages, max_tokens=4096, temperature=0.7, model=None):
        # Implement chat completion
        ...

    def analyze_image(self, image_bytes, prompt, max_tokens=4096, model=None):
        # Implement image analysis
        ...

    def transcribe_audio(self, audio_path, language=None, model=None):
        # Implement audio transcription (or raise NotImplementedError)
        ...

    def list_models(self):
        return [ModelInfo(id="your-model", provider="yourprovider", capabilities=["chat"])]


# Self-registration at import time
ProviderRegistry.register(
    "yourprovider",
    YourProvider,
    env_var="YOUR_API_KEY",
    model_prefixes=["your-"],
    default_models={"chat": "your-model"},
)
```

### OpenAI-compatible providers

For providers that use the OpenAI API format, extend `OpenAICompatibleProvider` instead of `BaseProvider`. This provides default implementations of `chat()`, `analyze_image()`, and `list_models()` -- you only need to configure the base URL and model mappings.

```python
from video_processor.providers.base import OpenAICompatibleProvider, ProviderRegistry


class YourProvider(OpenAICompatibleProvider):
    provider_name = "yourprovider"
    base_url = "https://api.yourprovider.com/v1"
    env_var = "YOUR_API_KEY"


ProviderRegistry.register("yourprovider", YourProvider, env_var="YOUR_API_KEY")
```

## Adding a new cloud source

Source connectors implement the `BaseSource` ABC from `video_processor/sources/base.py`. Authentication is handled per-source, typically via environment variables.

1. Create `video_processor/sources/your_source.py`
2. Extend `BaseSource`
3. Implement `authenticate()`, `list_videos()`, and `download()`
4. Add the class to the lazy-import map in `video_processor/sources/__init__.py`
5. Add CLI commands in `video_processor/cli/commands.py` if needed
6. Add tests and documentation

### Example source skeleton

```python
"""Your source integration."""

import logging
import os
from pathlib import Path
from typing import List, Optional

from video_processor.sources.base import BaseSource, SourceFile

logger = logging.getLogger(__name__)


class YourSource(BaseSource):
    def __init__(self, api_key: Optional[str] = None):
        self.api_key = api_key or os.environ.get("YOUR_SOURCE_KEY", "")

    def authenticate(self) -> bool:
        """Validate credentials. Return True on success."""
        if not self.api_key:
            logger.error("API key not set. Set YOUR_SOURCE_KEY env var.")
            return False
        # Make a test API call to verify credentials
        ...
        return True

    def list_videos(
        self,
        folder_id: Optional[str] = None,
        folder_path: Optional[str] = None,
        patterns: Optional[List[str]] = None,
    ) -> List[SourceFile]:
        """List available video files."""
        ...

    def download(self, file: SourceFile, destination: Path) -> Path:
        """Download a single file. Return the local path."""
        destination.parent.mkdir(parents=True, exist_ok=True)
        # Download file content to destination
        ...
        return destination
```

### Registering in `__init__.py`

Add your source to the `__all__` list and the `_lazy_map` dictionary in `video_processor/sources/__init__.py`:

```python
__all__ = [
    ...
    "YourSource",
]

_lazy_map = {
    ...
    "YourSource": "video_processor.sources.your_source",
}
```

## Adding a new skill

Agent skills extend the `Skill` ABC from `video_processor/agent/skills/base.py` and self-register via `register_skill()`.

1. Create `video_processor/agent/skills/your_skill.py`
2. Extend `Skill` and set `name` and `description` class attributes
3. Implement `execute()` to return an `Artifact`
4. Optionally override `can_execute()` for custom precondition checks
5. Call `register_skill()` at module level
6. Add the import to `video_processor/agent/skills/__init__.py`
7. Add tests
328
### Example skill skeleton
329
330
```python
331
"""Your custom skill."""
332
333
from video_processor.agent.skills.base import AgentContext, Artifact, Skill, register_skill
334
335
336
class YourSkill(Skill):
337
name = "your_skill"
338
description = "Generates a custom artifact from the knowledge graph."
339
340
def execute(self, context: AgentContext, **kwargs) -> Artifact:
341
"""Generate the artifact."""
342
kg_data = context.knowledge_graph.to_dict()
343
# Build content from knowledge graph data
344
content = f"# Your Artifact\n\n{len(kg_data.get('entities', []))} entities found."
345
return Artifact(
346
name="your_artifact",
347
content=content,
348
artifact_type="document",
349
format="markdown",
350
)
351
352
def can_execute(self, context: AgentContext) -> bool:
353
"""Check prerequisites (default requires KG + provider)."""
354
return context.knowledge_graph is not None
355
356
357
# Self-registration at import time
358
register_skill(YourSkill())
359
```

### Registering in `__init__.py`

Add the import to `video_processor/agent/skills/__init__.py` so the skill is loaded (and self-registered) when the skills package is imported:

```python
from video_processor.agent.skills import (
    ...
    your_skill,  # noqa: F401
)
```

## Adding a new document processor

Document processors extend the `DocumentProcessor` ABC from `video_processor/processors/base.py` and are registered via `register_processor()`.

1. Create `video_processor/processors/your_processor.py`
2. Extend `DocumentProcessor`
3. Set `supported_extensions` class attribute
4. Implement `process()` (returns `List[DocumentChunk]`) and `can_process()`
5. Call `register_processor()` at module level
6. Add the import to `video_processor/processors/__init__.py`
7. Add tests

### Example processor skeleton

```python
"""Your document processor."""

from pathlib import Path
from typing import List

from video_processor.processors.base import (
    DocumentChunk,
    DocumentProcessor,
    register_processor,
)


class YourProcessor(DocumentProcessor):
    supported_extensions = [".xyz", ".abc"]

    def can_process(self, path: Path) -> bool:
        return path.suffix.lower() in self.supported_extensions

    def process(self, path: Path) -> List[DocumentChunk]:
        text = path.read_text()
        # Split into chunks as appropriate for your format
        return [
            DocumentChunk(
                text=text,
                source_file=str(path),
                chunk_index=0,
                metadata={"format": "xyz"},
            )
        ]


# Self-registration at import time
register_processor([".xyz", ".abc"], YourProcessor)
```
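
How to split is format-specific. For plain-text formats, a simple paragraph-based splitter might look like this (illustrative stdlib-only sketch; `split_paragraphs` is hypothetical, not a project utility):

```python
from typing import List


def split_paragraphs(text: str, max_chars: int = 2000) -> List[str]:
    """Split on blank lines, packing paragraphs into chunks of at most max_chars."""
    chunks: List[str] = []
    current = ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) > max_chars and current:
            # Current chunk is full; start a new one with this paragraph
            chunks.append(current)
            current = para
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each returned string would then become one `DocumentChunk`, with `chunk_index` set from its position in the list.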

### Registering in `__init__.py`

Add the import to `video_processor/processors/__init__.py`:

```python
from video_processor.processors import (
    markdown_processor,  # noqa: F401, E402
    pdf_processor,  # noqa: F401, E402
    your_processor,  # noqa: F401, E402
)
```

## Adding a new exporter

Exporters live in `video_processor/exporters/` and are typically called from CLI commands. There is no strict ABC for exporters -- they are plain functions that accept knowledge graph data and an output directory.

1. Create `video_processor/exporters/your_exporter.py`
2. Implement one or more export functions that accept KG data (as a dict) and an output path
3. Add CLI integration in `video_processor/cli/commands.py` under the `export` group
4. Add tests

### Example exporter skeleton

```python
"""Your exporter."""

import json
from pathlib import Path
from typing import List


def export_your_format(kg_data: dict, output_dir: Path) -> List[Path]:
    """Export knowledge graph data in your format.

    Args:
        kg_data: Knowledge graph as a dict (from KnowledgeGraph.to_dict()).
        output_dir: Directory to write output files.

    Returns:
        List of created file paths.
    """
    output_dir.mkdir(parents=True, exist_ok=True)
    created = []

    output_file = output_dir / "export.xyz"
    output_file.write_text(json.dumps(kg_data, indent=2))
    created.append(output_file)

    return created
```

### Adding the CLI command

Add a subcommand under the `export` group in `video_processor/cli/commands.py`:

```python
@export.command("your-format")
@click.argument("db_path", type=click.Path(exists=True))
@click.option("-o", "--output", type=click.Path(), default=None)
def export_your_format_cmd(db_path, output):
    """Export knowledge graph in your format."""
    from video_processor.exporters.your_exporter import export_your_format
    from video_processor.integrators.knowledge_graph import KnowledgeGraph

    kg = KnowledgeGraph(db_path=Path(db_path))
    out_dir = Path(output) if output else Path.cwd() / "your-export"
    created = export_your_format(kg.to_dict(), out_dir)
    click.echo(f"Exported {len(created)} files to {out_dir}/")
```

## License

MIT License -- Copyright (c) 2026 CONFLICT LLC. All rights reserved.