# Installation

## From PyPI

```bash
pip install planopticon
```

### Optional extras

Quote the package name so the square brackets survive shells (such as zsh) that treat them as glob patterns:

```bash
# PDF export support
pip install "planopticon[pdf]"

# Google Drive + Dropbox integration
pip install "planopticon[cloud]"

# GPU acceleration
pip install "planopticon[gpu]"

# Everything
pip install "planopticon[all]"
```

## From source

```bash
git clone https://github.com/ConflictHQ/PlanOpticon.git
cd PlanOpticon
pip install -e ".[dev]"
```

## Binary download

Download standalone binaries (no Python required) from
[GitHub Releases](https://github.com/ConflictHQ/PlanOpticon/releases):

| Platform | Download |
|----------|----------|
| macOS (Apple Silicon) | `planopticon-macos-arm64` |
| macOS (Intel) | `planopticon-macos-x86_64` |
| Linux (x86_64) | `planopticon-linux-x86_64` |
| Windows | `planopticon-windows-x86_64.exe` |
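
On macOS and Linux, the downloaded file needs the executable bit before it will run. A minimal sketch using the Linux filename from the table (substitute your platform's file; the guard just keeps the snippet safe to paste before the download finishes):

```bash
# Mark the downloaded binary executable and put it on your PATH.
if [ -f planopticon-linux-x86_64 ]; then
  chmod +x planopticon-linux-x86_64
  mkdir -p ~/.local/bin
  mv planopticon-linux-x86_64 ~/.local/bin/planopticon
else
  echo "download the binary for your platform first"
fi
```

On macOS, Gatekeeper may additionally quarantine files downloaded from a browser; `xattr -d com.apple.quarantine <file>` clears the flag.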

## System dependencies

PlanOpticon requires **FFmpeg** for audio extraction:

=== "macOS"

    ```bash
    brew install ffmpeg
    ```

=== "Ubuntu/Debian"

    ```bash
    sudo apt-get install ffmpeg libsndfile1
    ```

=== "Windows"

    Download from [ffmpeg.org](https://ffmpeg.org/download.html) and add to PATH.
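
After installing, confirm that FFmpeg is actually on your `PATH` (this is what PlanOpticon will see):

```bash
# Print the installed FFmpeg version, or a hint if it is missing.
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -version | head -n 1
else
  echo "ffmpeg not found on PATH"
fi
```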

## API keys

You need at least one AI provider API key **or** a running Ollama server.

### Cloud providers

Set API keys as environment variables:

```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="AI..."
```
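
To check which keys your current shell actually exports (without printing the secrets themselves):

```bash
# Report SET or UNSET for each supported key; never echoes the values.
for var in OPENAI_API_KEY ANTHROPIC_API_KEY GEMINI_API_KEY; do
  if [ -n "$(printenv "$var")" ]; then
    echo "$var: SET"
  else
    echo "$var: UNSET"
  fi
done
```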

Or create a `.env` file in your project directory:

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=AI...
```
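
PlanOpticon reads the `.env` file itself. If you also want those variables in your interactive shell (for example, to run other tools against the same keys), one portable way to source it, assuming simple `KEY=VALUE` lines without spaces or quoting:

```bash
# Auto-export every variable assigned while sourcing .env.
if [ -f .env ]; then
  set -a
  . ./.env
  set +a
fi
```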

### Ollama (fully offline)

No API keys needed — just install and run [Ollama](https://ollama.com):

```bash
# Install Ollama, then pull models
ollama pull llama3.2   # Chat/analysis
ollama pull llava      # Vision (diagram detection)

# Start the server (if not already running)
ollama serve
```

PlanOpticon auto-detects Ollama and uses it as a fallback when no cloud API keys are set. For a fully offline pipeline, pair Ollama with local Whisper transcription (`pip install "planopticon[gpu]"`).
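
To verify that the fallback will engage, check whether the server answers on its default port (11434); the `/api/tags` endpoint lists the models you have pulled:

```bash
# Succeeds only if an Ollama server answers on the default port.
if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "Ollama is reachable"
else
  echo "Ollama is not reachable; start it with: ollama serve"
fi
```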

PlanOpticon will automatically discover which providers are available and route to the best model for each task.