# Planning Agent

The Planning Agent is PlanOpticon's AI-powered system for synthesizing knowledge graph content into structured planning artifacts. It takes extracted entities and relationships from video analyses, document ingestions, and other sources, then uses LLM reasoning to produce project plans, PRDs, roadmaps, task breakdowns, GitHub issues, and more.

---

## How It Works

The Planning Agent operates through a three-stage pipeline:

### 1. Context Assembly

The agent gathers context from all available sources:

- **Knowledge graph** -- entity counts, types, relationships, and planning entities from the loaded KG
- **Query engine** -- used to pull stats, entity lists, and relationship data for prompt construction
- **Provider manager** -- the configured LLM provider used for generation
- **Prior artifacts** -- any artifacts already generated in the session (skills can chain off each other)
- **Conversation history** -- accumulated chat messages when running in interactive mode

This context is bundled into an `AgentContext` dataclass that is shared across all skills.

### 2. Skill Selection

When the agent receives a user request, it determines which skills to run:

**LLM-driven planning (with provider).** The agent constructs a prompt that includes the knowledge base summary, all available skill names and descriptions, and the user's request. The LLM returns a JSON array of skill names to execute in order, along with any parameters. For example, given "Create a project plan and break it into tasks," the LLM might select `["project_plan", "task_breakdown"]`.

**Keyword fallback (without provider).** If no LLM provider is available, the agent falls back to simple keyword matching. It splits each skill name on underscores and checks whether any of those words appear in the user's request. For example, the request "generate a roadmap" would match the `roadmap` skill because "roadmap" appears in both the request and the skill name.
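
The fallback matcher can be sketched in a few lines of Python (an illustrative simplification, not the actual implementation; the function name and skill list here are made up for the example):

```python
def select_skills_by_keyword(request: str, skill_names: list) -> list:
    """Pick every skill whose underscore-separated name words appear in the request."""
    words = set(request.lower().split())
    selected = []
    for name in skill_names:
        # "task_breakdown" -> ["task", "breakdown"]
        if any(part in words for part in name.split("_")):
            selected.append(name)
    return selected

skills = ["project_plan", "task_breakdown", "roadmap", "github_issues"]
selected = select_skills_by_keyword("generate a roadmap", skills)  # ["roadmap"]
```

Note that this style of matching is intentionally crude: it requires exact word matches, so "generate tasks" would not match `task_breakdown` unless the singular "task" appears in the request.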

### 3. Execution

Selected skills are executed sequentially. Each skill:

1. Checks `can_execute()` to verify the required context is available (by default, both a knowledge graph and an LLM provider must be present)
2. Pulls relevant data from the knowledge graph via the query engine
3. Constructs a detailed prompt for the LLM with extracted context
4. Calls the LLM and parses the response
5. Returns an `Artifact` object containing the generated content
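
In outline, the loop looks something like this (a minimal sketch with a stand-in skill; the real `Skill` base class and `AgentContext` carry more state):

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    content: str
    artifact_type: str
    format: str = "markdown"

@dataclass
class Context:
    artifacts: list = field(default_factory=list)

class EchoSkill:
    """Stand-in skill; a real skill would query the KG and call the LLM."""
    name = "echo"

    def can_execute(self, context) -> bool:
        # Real skills require a knowledge graph and an LLM provider here.
        return True

    def execute(self, context) -> Artifact:
        return Artifact(name="Echo", content="hello", artifact_type="echo")

def run_skills(skills, context):
    """Execute skills in order, accumulating artifacts for later skills."""
    for skill in skills:
        if skill.can_execute(context):
            context.artifacts.append(skill.execute(context))
    return context.artifacts

ctx = Context()
run_skills([EchoSkill()], ctx)
```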

Each artifact is appended to `context.artifacts`, making it available to subsequent skills. This enables chaining -- for example, `task_breakdown` can feed into `github_issues`.

---

## AgentContext

The `AgentContext` dataclass is the shared state object that connects all components of the planning agent system.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class AgentContext:
    knowledge_graph: Any = None    # KnowledgeGraph instance
    query_engine: Any = None       # GraphQueryEngine instance
    provider_manager: Any = None   # ProviderManager instance
    planning_entities: List[Any] = field(default_factory=list)
    user_requirements: Dict[str, Any] = field(default_factory=dict)
    conversation_history: List[Dict[str, str]] = field(default_factory=list)
    artifacts: List[Artifact] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)
```

| Field | Purpose |
|---|---|
| `knowledge_graph` | The loaded `KnowledgeGraph` instance; provides access to entities, relationships, and graph operations |
| `query_engine` | A `GraphQueryEngine` for running structured queries (stats, entities, neighbors, relationships) |
| `provider_manager` | The `ProviderManager` that handles LLM API calls across providers |
| `planning_entities` | Entities classified into the planning taxonomy (goals, requirements, risks, etc.) |
| `user_requirements` | Structured requirements gathered from the `requirements_chat` skill |
| `conversation_history` | Accumulated chat messages for interactive sessions |
| `artifacts` | All artifacts generated during the session, enabling skill chaining |
| `config` | Arbitrary configuration overrides |

---

## Artifacts

Every skill returns an `Artifact` dataclass:

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class Artifact:
    name: str                 # Human-readable name (e.g., "Project Plan")
    content: str              # The generated content (markdown, JSON, etc.)
    artifact_type: str        # Type identifier: "project_plan", "prd", "roadmap", etc.
    format: str = "markdown"  # Content format: "markdown", "json", "mermaid"
    metadata: Dict[str, Any] = field(default_factory=dict)
```

Artifacts are the currency of the agent system. They can be:

- Displayed directly in the Companion REPL
- Exported to disk via the `artifact_export` skill
- Pushed to external tools via the `cli_adapter` skill
- Chained into other skills (e.g., task breakdown feeds into GitHub issues)
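
Because every artifact carries a machine-readable `artifact_type`, a later skill can look up the output of an earlier one. A minimal sketch of that lookup (the helper `find_artifact` is hypothetical, shown only to illustrate the pattern):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class Artifact:
    name: str
    content: str
    artifact_type: str
    format: str = "markdown"
    metadata: Dict[str, Any] = field(default_factory=dict)

def find_artifact(artifacts, artifact_type: str) -> Optional[Artifact]:
    """Return the most recent artifact of the given type, if any."""
    for artifact in reversed(artifacts):
        if artifact.artifact_type == artifact_type:
            return artifact
    return None

session = [
    Artifact("Project Plan", "# Plan ...", "project_plan"),
    Artifact("Task Breakdown", '[{"id": "T1"}]', "task_list", format="json"),
]
tasks = find_artifact(session, "task_list")
```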

---

## Skills Reference

The agent ships with 11 built-in skills. Each skill is a class that extends `Skill` and self-registers at import time via `register_skill()`.
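
The registration mechanism can be pictured as a module-level dictionary (a simplified sketch of what `register_skill()` plausibly does; the actual signature and registry internals may differ):

```python
SKILL_REGISTRY = {}

def register_skill(skill) -> None:
    """Index a skill instance by name so the agent can look it up later."""
    SKILL_REGISTRY[skill.name] = skill

class RoadmapSkill:
    name = "roadmap"
    description = "Generate a product/project roadmap."

# Called at module import time, so importing a skill module makes the skill available.
register_skill(RoadmapSkill())
```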

### project_plan

**Description:** Generate a structured project plan from knowledge graph.

Pulls the full knowledge graph context (stats, entities, relationships, and planning entities grouped by type) and asks the LLM to produce a comprehensive project plan with:

1. Executive Summary
2. Goals and Objectives
3. Scope
4. Phases and Milestones
5. Resource Requirements
6. Risks and Mitigations
7. Success Criteria

**Artifact type:** `project_plan` | **Format:** markdown

### prd

**Description:** Generate a product requirements document (PRD) / feature spec.

Filters planning entities to those of type `requirement`, `feature`, and `constraint`, then asks the LLM to generate a PRD with:

1. Problem Statement
2. User Stories
3. Functional Requirements
4. Non-Functional Requirements
5. Acceptance Criteria
6. Out of Scope

If no pre-filtered entities match, the LLM derives requirements from the full knowledge graph context.

**Artifact type:** `prd` | **Format:** markdown

### roadmap

**Description:** Generate a product/project roadmap.

Focuses on planning entities of type `milestone`, `feature`, and `dependency`. Asks the LLM to produce a roadmap with:

1. Vision and Strategy
2. Phases (with timeline estimates)
3. Key Dependencies
4. A Mermaid Gantt chart summarizing the timeline

**Artifact type:** `roadmap` | **Format:** markdown

### task_breakdown

**Description:** Break down goals into tasks with dependencies.

Focuses on planning entities of type `goal`, `feature`, and `milestone`. Returns a JSON array of task objects, each containing:

| Field | Type | Description |
|---|---|---|
| `id` | string | Task identifier (e.g., "T1", "T2") |
| `title` | string | Short task title |
| `description` | string | Detailed description |
| `depends_on` | list | IDs of prerequisite tasks |
| `priority` | string | `high`, `medium`, or `low` |
| `estimate` | string | Effort estimate (e.g., "2d", "1w") |
| `assignee_role` | string | Role needed to perform the task |

**Artifact type:** `task_list` | **Format:** json
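
As a concrete illustration, a single task object conforming to this schema might look like the following (sample data, not real agent output):

```python
import json

task = {
    "id": "T2",
    "title": "Implement login endpoint",
    "description": "Add POST /login that issues a session token.",
    "depends_on": ["T1"],
    "priority": "high",
    "estimate": "2d",
    "assignee_role": "backend engineer",
}

# The task_list artifact's content is a JSON array of such objects.
content = json.dumps([task])
parsed = json.loads(content)
```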

### github_issues

**Description:** Generate GitHub issues from task breakdown.

Converts tasks into GitHub-ready issue objects. If a `task_list` artifact exists in the context, it is used as input. Otherwise, minimal issues are generated from the planning entities directly.

Each issue includes a formatted body with description, priority, estimate, and dependencies, plus labels derived from the task priority.

The skill also provides a `push_to_github(issues_json, repo)` function that shells out to the `gh` CLI to create actual issues. This is used by the `cli_adapter` skill.

**Artifact type:** `issues` | **Format:** json
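
A rough sketch of the command construction behind `push_to_github` (a hypothetical reimplementation for illustration; only the `gh issue create` flags shown in this guide are assumed, and nothing is executed here):

```python
import json
import shlex

def issue_commands(issues_json: str, repo: str) -> list:
    """Build one `gh issue create` invocation per issue, without running them."""
    commands = []
    for issue in json.loads(issues_json):
        cmd = ["gh", "issue", "create", "--repo", repo,
               "--title", issue["title"], "--body", issue.get("body", "")]
        for label in issue.get("labels", []):
            cmd += ["--label", label]
        commands.append(shlex.join(cmd))  # shell-quoted, ready to print or run
    return commands

issues = json.dumps([{"title": "Set up auth service", "body": "...",
                      "labels": ["priority:high"]}])
cmds = issue_commands(issues, "acme/app")
```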

### requirements_chat

**Description:** Interactive requirements gathering via guided questions.

Generates a structured requirements questionnaire based on the knowledge graph context. The questionnaire contains 8-12 targeted questions, each with:

| Field | Type | Description |
|---|---|---|
| `id` | string | Question identifier (e.g., "Q1") |
| `category` | string | `goals`, `constraints`, `priorities`, or `scope` |
| `question` | string | The question text |
| `context` | string | Why this question matters |

The skill also provides a `gather_requirements(context, answers)` method that takes the completed Q&A and synthesizes structured requirements (goals, constraints, priorities, scope).

**Artifact type:** `requirements` | **Format:** json

### doc_generator

**Description:** Generate technical documentation, ADRs, or meeting notes.

Supports three document types, selected via the `doc_type` parameter:

| `doc_type` | Output Structure |
|---|---|
| `technical_doc` (default) | Overview, Architecture, Components and Interfaces, Data Flow, Deployment and Configuration, API Reference |
| `adr` | Title, Status (Proposed), Context, Decision, Consequences, Alternatives Considered |
| `meeting_notes` | Meeting Summary, Key Discussion Points, Decisions Made, Action Items (with owners), Open Questions, Next Steps |

**Artifact type:** `document` | **Format:** markdown

### artifact_export

**Description:** Export artifacts in agent-ready formats.

Writes all artifacts accumulated in the context to a directory structure. Each artifact is written to a file based on its type:

| Artifact Type | Filename |
|---|---|
| `project_plan` | `project_plan.md` |
| `prd` | `prd.md` |
| `roadmap` | `roadmap.md` |
| `task_list` | `tasks.json` |
| `issues` | `issues.json` |
| `requirements` | `requirements.json` |
| `document` | `docs/<name>.md` |

A `manifest.json` is written alongside, listing all exported files with their names, types, and formats.

Accepts an `output_dir` parameter (defaults to `plan/`).

**Artifact type:** `export_manifest` | **Format:** json
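
The export logic amounts to a type-to-filename mapping plus a manifest. A sketch under the conventions above (the manifest fields mirror the example session later in this guide, but the exact structure is an assumption):

```python
import json
import tempfile
from pathlib import Path

FILENAMES = {
    "project_plan": "project_plan.md",
    "prd": "prd.md",
    "roadmap": "roadmap.md",
    "task_list": "tasks.json",
    "issues": "issues.json",
    "requirements": "requirements.json",
}

def export_artifacts(artifacts, output_dir="plan"):
    """Write each artifact to its conventional file and emit manifest.json."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    manifest = {"artifact_count": len(artifacts), "output_dir": output_dir, "files": []}
    for a in artifacts:
        # Unmapped types (e.g., "document") fall back to docs/<name>.md
        path = out / FILENAMES.get(a["artifact_type"], f"docs/{a['name']}.md")
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(a["content"])
        manifest["files"].append({"name": a["name"], "type": a["artifact_type"],
                                  "path": str(path)})
    (out / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest

demo = [{"name": "Project Plan", "artifact_type": "project_plan", "content": "# Plan"}]
manifest = export_artifacts(demo, output_dir=tempfile.mkdtemp())
```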

### cli_adapter

**Description:** Push artifacts to external tools via their CLIs.

Converts artifacts into CLI commands for external project management tools. Supported tools:

| Tool | CLI | Example Command |
|---|---|---|
| `github` | `gh` | `gh issue create --title "..." --body "..." --label "..."` |
| `jira` | `jira` | `jira issue create --summary "..." --description "..."` |
| `linear` | `linear` | `linear issue create --title "..." --description "..."` |

The skill checks whether the target CLI is available on the system and includes that status in the output. Commands are generated in dry-run mode by default.

**Artifact type:** `cli_commands` | **Format:** json
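
The availability check and dry-run packaging can be sketched with the standard library (`shutil.which` probes the PATH; the output field names here are illustrative, not the skill's actual schema):

```python
import shutil

TOOL_CLIS = {"github": "gh", "jira": "jira", "linear": "linear"}

def adapter_output(tool: str, commands: list) -> dict:
    """Bundle generated commands with the target CLI's availability status."""
    cli = TOOL_CLIS[tool]
    return {
        "tool": tool,
        "cli": cli,
        "cli_available": shutil.which(cli) is not None,  # True if the binary is on PATH
        "dry_run": True,  # commands are emitted, not executed, by default
        "commands": commands,
    }

result = adapter_output("github", ['gh issue create --title "Set up auth"'])
```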

### notes_export

**Description:** Export knowledge graph as structured notes (Obsidian, Notion).

Exports the entire knowledge graph as a collection of markdown files optimized for a specific note-taking platform. Accepts a `format` parameter:

**Obsidian format** creates:

- One `.md` file per entity with YAML frontmatter, tags, and `[[wiki-links]]`
- An `_Index.md` Map of Content grouping entities by type
- Tag pages for each entity type
- Artifact notes for any generated artifacts

**Notion format** creates:

- One `.md` file per entity with Notion-style callout blocks and relationship tables
- An `entities_database.csv` for bulk import into a Notion database
- An `Overview.md` page with stats and entity listings
- Artifact pages

**Artifact type:** `notes_export` | **Format:** markdown
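
An Obsidian-style entity note could be assembled along these lines (illustrative only; the exact frontmatter keys and section names are assumptions):

```python
def obsidian_note(entity: dict) -> str:
    """Render an entity as markdown with YAML frontmatter and [[wiki-links]]."""
    lines = [
        "---",
        f"type: {entity['type']}",
        f"tags: [{entity['type']}]",
        "---",
        f"# {entity['name']}",
        "",
        entity.get("description", ""),
        "",
        "## Related",
    ]
    # Each related entity becomes a [[wiki-link]], which Obsidian resolves by note name.
    lines += [f"- [[{target}]]" for target in entity.get("related", [])]
    return "\n".join(lines)

note = obsidian_note({"name": "Auth Service", "type": "component",
                      "description": "Handles login.", "related": ["API Gateway"]})
```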

### wiki_generator

**Description:** Generate a GitHub wiki from knowledge graph and artifacts.

Generates a complete GitHub wiki structure as a dictionary of page names to markdown content. Creates:

- **Home** page with entity type counts and links
- **_Sidebar** navigation with entity types and artifacts
- **Type index pages** with tables of entities per type
- **Individual entity pages** with descriptions, outgoing/incoming relationships, and source occurrences
- **Artifact pages** for any generated planning artifacts

The skill also provides standalone functions `write_wiki(pages, output_dir)` to write pages to disk and `push_wiki(wiki_dir, repo)` to push directly to a GitHub wiki repository.
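
Since the wiki is just a dict of page names to markdown, `write_wiki` can be approximated as follows (a sketch inferred from the description above; the real function may differ):

```python
import tempfile
from pathlib import Path

def write_wiki(pages: dict, output_dir: str) -> list:
    """Write each page to <output_dir>/<name>.md and return the written paths."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for name, content in pages.items():
        path = out / f"{name}.md"
        path.write_text(content)
        written.append(str(path))
    return written

pages = {"Home": "# Home\n\nEntity counts...", "_Sidebar": "- [[Home]]"}
paths = write_wiki(pages, tempfile.mkdtemp())
```

GitHub stores a repository's wiki in a companion `<repo>.wiki.git` repository, so `push_wiki` presumably commits the written directory there.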

**Artifact type:** `wiki` | **Format:** markdown

---

## CLI Usage

### One-shot execution

Run the agent with a request string. The agent selects and executes appropriate skills automatically.

```bash
# Generate a project plan
planopticon agent "Create a project plan" --kb ./results

# Generate a PRD
planopticon agent "Write a PRD for the authentication system" --kb ./results

# Break down into tasks
planopticon agent "Break this into tasks and estimate effort" --kb ./results
```

### Export artifacts to disk

Use `--export` to write generated artifacts to a directory:

```bash
planopticon agent "Create a full project plan with tasks" --kb ./results --export ./output
```

### Interactive mode

Use `-I` for a multi-turn session where you can issue multiple requests:

```bash
planopticon agent -I --kb ./results
```

In interactive mode, the agent supports:

- Free-text requests (executed via LLM skill selection)
- `/plan` -- shortcut to generate a project plan
- `/skills` -- list available skills
- `quit`, `exit`, `q` -- end the session

### Provider and model options

```bash
# Use a specific provider
planopticon agent "Create a roadmap" --kb ./results -p anthropic

# Use a specific model
planopticon agent "Generate a PRD" --kb ./results --chat-model gpt-4o
```

### Auto-discovery

If `--kb` is not specified, the agent uses `KBContext.auto_discover()` to find knowledge graphs in the workspace.

---

## Using Skills from the Companion REPL

The Companion REPL provides direct access to agent skills through slash commands. See the [Companion guide](companion.md) for full details.

| Companion Command | Skill Executed |
|---|---|
| `/plan` | `project_plan` |
| `/prd` | `prd` |
| `/tasks` | `task_breakdown` |
| `/run SKILL_NAME` | Any registered skill by name |

When executed from the Companion, skills use the same `AgentContext` that powers the chat mode. This means:

- The knowledge graph loaded at startup is automatically available
- The active LLM provider (set via `/provider` or `/model`) is used for generation
- Generated artifacts accumulate across the session, enabling chaining

---

## Example Workflows

### From video to project plan

```bash
# 1. Analyze a video
planopticon analyze -i sprint-review.mp4 -o results/

# 2. Launch the agent with the results
planopticon agent "Create a comprehensive project plan with tasks and a roadmap" \
  --kb results/ --export plan/

# 3. Review the generated artifacts
ls plan/
# project_plan.md  roadmap.md  tasks.json  manifest.json
```

### Interactive planning session

```bash
$ planopticon companion --kb ./results

planopticon> /status
Workspace status:
  KG: knowledge_graph.db (58 entities, 124 relationships)
  ...

planopticon> What are the main goals discussed?
Based on the knowledge graph, the main goals are...

planopticon> /plan
--- Project Plan (project_plan) ---
...

planopticon> /tasks
--- Task Breakdown (task_list) ---
...

planopticon> /run github_issues
--- GitHub Issues (issues) ---
[
  {"title": "Set up authentication service", ...},
  ...
]

planopticon> /run artifact_export
--- Export Manifest (export_manifest) ---
{
  "artifact_count": 3,
  "output_dir": "plan",
  "files": [...]
}
```

### Skill chaining

Skills that produce artifacts make them available to subsequent skills automatically:

1. `/tasks` generates a `task_list` artifact
2. `/run github_issues` detects the existing `task_list` artifact and converts its tasks into GitHub issues
3. `/run cli_adapter` takes the most recent artifact and generates `gh issue create` commands
4. `/run artifact_export` writes all accumulated artifacts to disk with a manifest

This chaining works both in the Companion REPL and in one-shot agent execution, since the `AgentContext.artifacts` list persists for the duration of the session.