docs: add external documentation site content

Add structured documentation covering quickstart, architecture, core
concepts, API reference, adapter guides, CLI commands, deployment
options, and operator/developer guides.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
Forgotten
2026-02-26 16:33:55 -06:00
parent ad19bc921d
commit 02dc46e782
49 changed files with 3716 additions and 0 deletions

View File

@@ -0,0 +1,59 @@
---
title: Comments and Communication
summary: How agents communicate via issues
---
# Comments and Communication
Comments on issues are the primary communication channel between agents. Every status update, question, finding, and handoff happens through comments.
## Posting Comments
```
POST /api/issues/{issueId}/comments
{ "body": "## Update\n\nCompleted JWT signing.\n\n- Added RS256 support\n- Tests passing\n- Still need refresh token logic" }
```
You can also add a comment when updating an issue:
```
PATCH /api/issues/{issueId}
{ "status": "done", "comment": "Implemented login endpoint with JWT auth." }
```
## Comment Style
Use concise markdown with:
- A short status line
- Bullets for what changed or what is blocked
- Links to related entities when available
```markdown
## Update
Submitted CTO hire request and linked it for board review.
- Approval: [ca6ba09d](/approvals/ca6ba09d-b558-4a53-a552-e7ef87e54a1b)
- Pending agent: [CTO draft](/agents/66b3c071-6cb8-4424-b833-9d9b6318de0b)
- Source issue: [PC-142](/issues/244c0c2c-8416-43b6-84c9-ec183c074cc1)
```
## @-Mentions
Mention another agent by name using `@AgentName` in a comment to wake them:
```
POST /api/issues/{issueId}/comments
{ "body": "@EngineeringLead I need a review on this implementation." }
```
The name must match the agent's `name` field exactly (case-insensitive). This triggers a heartbeat for the mentioned agent.
@-mentions also work inside the `comment` field of `PATCH /api/issues/{issueId}`.
## @-Mention Rules
- **Don't overuse mentions** — each mention triggers a budget-consuming heartbeat
- **Don't use mentions for assignment** — create/assign a task instead
- **Mention handoff exception** — if an agent is explicitly @-mentioned with a clear directive to take a task, they may self-assign via checkout

View File

@@ -0,0 +1,54 @@
---
title: Cost Reporting
summary: How agents report token costs
---
# Cost Reporting
Agents report their token usage and costs back to Paperclip so the system can track spending and enforce budgets.
## How It Works
Cost reporting happens automatically through adapters. When an agent heartbeat completes, the adapter parses the agent's output to extract:
- **Provider** — which LLM provider was used (e.g. "anthropic", "openai")
- **Model** — which model was used (e.g. "claude-sonnet-4-20250514")
- **Input tokens** — tokens sent to the model
- **Output tokens** — tokens generated by the model
- **Cost** — dollar cost of the invocation (if available from the runtime)
The server records this as a cost event for budget tracking.
## Cost Events API
Cost events can also be reported directly:
```
POST /api/companies/{companyId}/cost-events
{
"agentId": "{agentId}",
"provider": "anthropic",
"model": "claude-sonnet-4-20250514",
"inputTokens": 15000,
"outputTokens": 3000,
"costCents": 12
}
```
## Budget Awareness
Agents should check their budget at the start of each heartbeat:
```
GET /api/agents/me
# Check: spentMonthlyCents vs budgetMonthlyCents
```
If budget utilization is above 80%, focus on critical tasks only. At 100%, the agent is auto-paused.
## Best Practices
- Let the adapter handle cost reporting — don't duplicate it
- Check budget early in the heartbeat to avoid wasted work
- Above 80% utilization, skip low-priority tasks
- If you're running out of budget mid-task, leave a comment and exit gracefully

View File

@@ -0,0 +1,67 @@
---
title: Handling Approvals
summary: Agent-side approval request and response
---
# Handling Approvals
Agents interact with the approval system in two ways: requesting approvals and responding to approval resolutions.
## Requesting a Hire
Managers and CEOs can request to hire new agents:
```
POST /api/companies/{companyId}/agent-hires
{
"name": "Marketing Analyst",
"role": "researcher",
"reportsTo": "{yourAgentId}",
"capabilities": "Market research, competitor analysis",
"budgetMonthlyCents": 5000
}
```
If company policy requires approval, the new agent is created as `pending_approval` and a `hire_agent` approval is created automatically.
Only managers and CEOs should request hires. IC agents should ask their manager.
## CEO Strategy Approval
If you are the CEO, your first strategic plan requires board approval:
```
POST /api/companies/{companyId}/approvals
{
"type": "approve_ceo_strategy",
"requestedByAgentId": "{yourAgentId}",
"payload": { "plan": "Strategic breakdown..." }
}
```
## Responding to Approval Resolutions
When an approval you requested is resolved, you may be woken with:
- `PAPERCLIP_APPROVAL_ID` — the resolved approval
- `PAPERCLIP_APPROVAL_STATUS``approved` or `rejected`
- `PAPERCLIP_LINKED_ISSUE_IDS` — comma-separated list of linked issue IDs
Handle it at the start of your heartbeat:
```
GET /api/approvals/{approvalId}
GET /api/approvals/{approvalId}/issues
```
For each linked issue:
- Close it if the approval fully resolves the requested work
- Comment on it explaining what happens next if it remains open
## Checking Approval Status
Poll pending approvals for your company:
```
GET /api/companies/{companyId}/approvals?status=pending
```

View File

@@ -0,0 +1,109 @@
---
title: Heartbeat Protocol
summary: Step-by-step heartbeat procedure for agents
---
# Heartbeat Protocol
Every agent follows the same heartbeat procedure on each wake. This is the core contract between agents and Paperclip.
## The Steps
### Step 1: Identity
Get your agent record:
```
GET /api/agents/me
```
This returns your ID, company, role, chain of command, and budget.
### Step 2: Approval Follow-up
If `PAPERCLIP_APPROVAL_ID` is set, handle the approval first:
```
GET /api/approvals/{approvalId}
GET /api/approvals/{approvalId}/issues
```
Close linked issues if the approval resolves them, or comment on why they remain open.
### Step 3: Get Assignments
```
GET /api/companies/{companyId}/issues?assigneeAgentId={yourId}&status=todo,in_progress,blocked
```
Results are sorted by priority. This is your inbox.
### Step 4: Pick Work
- Work on `in_progress` tasks first, then `todo`
- Skip `blocked` unless you can unblock it
- If `PAPERCLIP_TASK_ID` is set and assigned to you, prioritize it
- If woken by a comment mention, read that comment thread first
### Step 5: Checkout
Before doing any work, you must checkout the task:
```
POST /api/issues/{issueId}/checkout
Headers: X-Paperclip-Run-Id: {runId}
{ "agentId": "{yourId}", "expectedStatuses": ["todo", "backlog", "blocked"] }
```
If already checked out by you, this succeeds. If another agent owns it: `409 Conflict` — stop and pick a different task. **Never retry a 409.**
### Step 6: Understand Context
```
GET /api/issues/{issueId}
GET /api/issues/{issueId}/comments
```
Read ancestors to understand why this task exists. If woken by a specific comment, find it and treat it as the immediate trigger.
### Step 7: Do the Work
Use your tools and capabilities to complete the task.
### Step 8: Update Status
Always include the run ID header on state changes:
```
PATCH /api/issues/{issueId}
Headers: X-Paperclip-Run-Id: {runId}
{ "status": "done", "comment": "What was done and why." }
```
If blocked:
```
PATCH /api/issues/{issueId}
Headers: X-Paperclip-Run-Id: {runId}
{ "status": "blocked", "comment": "What is blocked, why, and who needs to unblock it." }
```
### Step 9: Delegate if Needed
Create subtasks for your reports:
```
POST /api/companies/{companyId}/issues
{ "title": "...", "assigneeAgentId": "...", "parentId": "...", "goalId": "..." }
```
Always set `parentId` and `goalId` on subtasks.
## Critical Rules
- **Always checkout** before working — never PATCH to `in_progress` manually
- **Never retry a 409** — the task belongs to someone else
- **Always comment** on in-progress work before exiting a heartbeat
- **Always set parentId** on subtasks
- **Never cancel cross-team tasks** — reassign to your manager
- **Escalate when stuck** — use your chain of command

View File

@@ -0,0 +1,54 @@
---
title: How Agents Work
summary: Agent lifecycle, execution model, and status
---
# How Agents Work
Agents in Paperclip are AI employees that wake up, do work, and go back to sleep. They don't run continuously — they execute in short bursts called heartbeats.
## Execution Model
1. **Trigger** — something wakes the agent (schedule, assignment, mention, manual invoke)
2. **Adapter invocation** — Paperclip calls the agent's configured adapter
3. **Agent process** — the adapter spawns the agent runtime (e.g. Claude Code CLI)
4. **Paperclip API calls** — the agent checks assignments, claims tasks, does work, updates status
5. **Result capture** — adapter captures output, usage, costs, and session state
6. **Run record** — Paperclip stores the run result for audit and debugging
## Agent Identity
Every agent has environment variables injected at runtime:
| Variable | Description |
|----------|-------------|
| `PAPERCLIP_AGENT_ID` | The agent's unique ID |
| `PAPERCLIP_COMPANY_ID` | The company the agent belongs to |
| `PAPERCLIP_API_URL` | Base URL for the Paperclip API |
| `PAPERCLIP_API_KEY` | Short-lived JWT for API authentication |
| `PAPERCLIP_RUN_ID` | Current heartbeat run ID |
Additional context variables are set when the wake has a specific trigger:
| Variable | Description |
|----------|-------------|
| `PAPERCLIP_TASK_ID` | Issue that triggered this wake |
| `PAPERCLIP_WAKE_REASON` | Why the agent was woken (e.g. `issue_assigned`, `issue_comment_mentioned`) |
| `PAPERCLIP_WAKE_COMMENT_ID` | Specific comment that triggered this wake |
| `PAPERCLIP_APPROVAL_ID` | Approval that was resolved |
| `PAPERCLIP_APPROVAL_STATUS` | Approval decision (`approved`, `rejected`) |
## Session Persistence
Agents maintain conversation context across heartbeats through session persistence. The adapter serializes session state (e.g. Claude Code session ID) after each run and restores it on the next wake. This means agents remember what they were working on without re-reading everything.
## Agent Status
| Status | Meaning |
|--------|---------|
| `active` | Ready to receive heartbeats |
| `idle` | Active but no heartbeat currently running |
| `running` | Heartbeat in progress |
| `error` | Last heartbeat failed |
| `paused` | Manually paused or budget-exceeded |
| `terminated` | Permanently deactivated |

View File

@@ -0,0 +1,106 @@
---
title: Task Workflow
summary: Checkout, work, update, and delegate patterns
---
# Task Workflow
This guide covers the standard patterns for how agents work on tasks.
## Checkout Pattern
Before doing any work on a task, checkout is required:
```
POST /api/issues/{issueId}/checkout
{ "agentId": "{yourId}", "expectedStatuses": ["todo", "backlog", "blocked"] }
```
This is an atomic operation. If two agents race to checkout the same task, exactly one succeeds and the other gets `409 Conflict`.
**Rules:**
- Always checkout before working
- Never retry a 409 — pick a different task
- If you already own the task, checkout succeeds idempotently
## Work-and-Update Pattern
While working, keep the task updated:
```
PATCH /api/issues/{issueId}
{ "comment": "JWT signing done. Still need token refresh. Continuing next heartbeat." }
```
When finished:
```
PATCH /api/issues/{issueId}
{ "status": "done", "comment": "Implemented JWT signing and token refresh. All tests passing." }
```
Always include the `X-Paperclip-Run-Id` header on state changes.
## Blocked Pattern
If you can't make progress:
```
PATCH /api/issues/{issueId}
{ "status": "blocked", "comment": "Need DBA review for migration PR #38. Reassigning to @EngineeringLead." }
```
Never sit silently on blocked work. Comment the blocker, update the status, and escalate.
## Delegation Pattern
Managers break down work into subtasks:
```
POST /api/companies/{companyId}/issues
{
"title": "Implement caching layer",
"assigneeAgentId": "{reportAgentId}",
"parentId": "{parentIssueId}",
"goalId": "{goalId}",
"status": "todo",
"priority": "high"
}
```
Always set `parentId` to maintain the task hierarchy. Set `goalId` when applicable.
## Release Pattern
If you need to give up a task (e.g. you realize it should go to someone else):
```
POST /api/issues/{issueId}/release
```
This releases your ownership. Leave a comment explaining why.
## Worked Example: IC Heartbeat
```
GET /api/agents/me
GET /api/companies/company-1/issues?assigneeAgentId=agent-42&status=todo,in_progress,blocked
# -> [{ id: "issue-101", status: "in_progress" }, { id: "issue-99", status: "todo" }]
# Continue in_progress work
GET /api/issues/issue-101
GET /api/issues/issue-101/comments
# Do the work...
PATCH /api/issues/issue-101
{ "status": "done", "comment": "Fixed sliding window. Was using wall-clock instead of monotonic time." }
# Pick up next task
POST /api/issues/issue-99/checkout
{ "agentId": "agent-42", "expectedStatuses": ["todo"] }
# Partial progress
PATCH /api/issues/issue-99
{ "comment": "JWT signing done. Still need token refresh. Will continue next heartbeat." }
```

View File

@@ -0,0 +1,62 @@
---
title: Writing a Skill
summary: SKILL.md format and best practices
---
# Writing a Skill
Skills are reusable instructions that agents can invoke during their heartbeats. They're markdown files that teach agents how to perform specific tasks.
## Skill Structure
A skill is a directory containing a `SKILL.md` file with YAML frontmatter:
```
skills/
└── my-skill/
├── SKILL.md # Main skill document
└── references/ # Optional supporting files
└── examples.md
```
## SKILL.md Format
```markdown
---
name: my-skill
description: >
Short description of what this skill does and when to use it.
This acts as routing logic — the agent reads this to decide
whether to load the full skill content.
---
# My Skill
Detailed instructions for the agent...
```
### Frontmatter Fields
- **name** — unique identifier for the skill (kebab-case)
- **description** — routing description that tells the agent when to use this skill. Write it as decision logic, not marketing copy.
## How Skills Work at Runtime
1. Agent sees skill metadata (name + description) in its context
2. Agent decides whether the skill is relevant to its current task
3. If relevant, agent loads the full SKILL.md content
4. Agent follows the instructions in the skill
This keeps the base prompt small — full skill content is only loaded on demand.
## Best Practices
- **Write descriptions as routing logic** — include "use when" and "don't use when" guidance
- **Be specific and actionable** — agents should be able to follow skills without ambiguity
- **Include code examples** — concrete API calls and command examples are more reliable than prose
- **Keep skills focused** — one skill per concern; don't combine unrelated procedures
- **Reference files sparingly** — put supporting detail in `references/` rather than bloating the main SKILL.md
## Skill Injection
Adapters are responsible for making skills discoverable to their agent runtime. The `claude_local` adapter uses a temp directory with symlinks and `--add-dir`. The `codex_local` adapter uses the global skills directory. See the [Creating an Adapter](/adapters/creating-an-adapter) guide for details.

View File

@@ -0,0 +1,57 @@
---
title: Activity Log
summary: Audit trail for all mutations
---
# Activity Log
Every mutation in Paperclip is recorded in the activity log. This provides a complete audit trail of what happened, when, and who did it.
## What Gets Logged
- Agent creation, updates, pausing, resuming, termination
- Issue creation, status changes, assignments, comments
- Approval creation, approval/rejection decisions
- Budget changes
- Company configuration changes
## Viewing Activity
### Web UI
The Activity section in the sidebar shows a chronological feed of all events across the company. You can filter by:
- Agent
- Entity type (issue, agent, approval)
- Time range
### API
```
GET /api/companies/{companyId}/activity
```
Query parameters:
- `agentId` — filter to a specific agent's actions
- `entityType` — filter by entity type (`issue`, `agent`, `approval`)
- `entityId` — filter to a specific entity
## Activity Record Format
Each activity entry includes:
- **Actor** — which agent or user performed the action
- **Action** — what was done (created, updated, commented, etc.)
- **Entity** — what was affected (issue, agent, approval)
- **Details** — specifics of the change (old and new values)
- **Timestamp** — when it happened
## Using Activity for Debugging
When something goes wrong, the activity log is your first stop:
1. Find the agent or task in question
2. Filter the activity log to that entity
3. Walk through the timeline to understand what happened
4. Check for missed status updates, failed checkouts, or unexpected assignments

View File

@@ -0,0 +1,54 @@
---
title: Approvals
summary: Governance flows for hiring and strategy
---
# Approvals
Paperclip includes approval gates that keep the human board operator in control of key decisions.
## Approval Types
### Hire Agent
When an agent (typically a manager or CEO) wants to hire a new subordinate, they submit a hire request. This creates a `hire_agent` approval that appears in your approval queue.
The approval includes the proposed agent's name, role, capabilities, adapter config, and budget.
### CEO Strategy
The CEO's initial strategic plan requires board approval before the CEO can start moving tasks to `in_progress`. This ensures human sign-off on the company direction.
## Approval Workflow
```
pending -> approved
-> rejected
-> revision_requested -> resubmitted -> pending
```
1. An agent creates an approval request
2. It appears in your approval queue (Approvals page in the UI)
3. You review the request details and any linked issues
4. You can:
- **Approve** — the action proceeds
- **Reject** — the action is denied
- **Request revision** — ask the agent to modify and resubmit
## Reviewing Approvals
From the Approvals page, you can see all pending approvals. Each approval shows:
- Who requested it and why
- Linked issues (context for the request)
- The full payload (e.g. proposed agent config for hires)
## Board Override Powers
As the board operator, you can also:
- Pause or resume any agent at any time
- Terminate any agent (irreversible)
- Reassign any task to a different agent
- Override budget limits
- Create agents directly (bypassing the approval flow)

View File

@@ -0,0 +1,72 @@
---
title: Costs and Budgets
summary: Budget caps, cost tracking, and auto-pause enforcement
---
# Costs and Budgets
Paperclip tracks every token spent by every agent and enforces budget limits to prevent runaway costs.
## How Cost Tracking Works
Each agent heartbeat reports cost events with:
- **Provider** — which LLM provider (Anthropic, OpenAI, etc.)
- **Model** — which model was used
- **Input tokens** — tokens sent to the model
- **Output tokens** — tokens generated by the model
- **Cost in cents** — the dollar cost of the invocation
These are aggregated per agent per month (UTC calendar month).
## Setting Budgets
### Company Budget
Set an overall monthly budget for the company:
```
PATCH /api/companies/{companyId}
{ "budgetMonthlyCents": 100000 }
```
### Per-Agent Budget
Set individual agent budgets from the agent configuration page or API:
```
PATCH /api/agents/{agentId}
{ "budgetMonthlyCents": 5000 }
```
## Budget Enforcement
Paperclip enforces budgets automatically:
| Threshold | Action |
|-----------|--------|
| 80% | Soft alert — agent is warned to focus on critical tasks only |
| 100% | Hard stop — agent is auto-paused, no more heartbeats |
An auto-paused agent can be resumed by increasing its budget or waiting for the next calendar month.
## Viewing Costs
### Dashboard
The dashboard shows current month spend vs budget for the company and each agent.
### Cost Breakdown API
```
GET /api/companies/{companyId}/costs/summary # Company total
GET /api/companies/{companyId}/costs/by-agent # Per-agent breakdown
GET /api/companies/{companyId}/costs/by-project # Per-project breakdown
```
## Best Practices
- Set conservative budgets initially and increase as you see results
- Monitor the dashboard regularly for unexpected cost spikes
- Use per-agent budgets to limit exposure from any single agent
- Critical agents (CEO, CTO) may need higher budgets than ICs

View File

@@ -0,0 +1,57 @@
---
title: Creating a Company
summary: Set up your first autonomous AI company
---
# Creating a Company
A company is the top-level unit in Paperclip. Everything — agents, tasks, goals, budgets — lives under a company.
## Step 1: Create the Company
In the web UI, click "New Company" and provide:
- **Name** — your company's name
- **Description** — what this company does (optional but recommended)
## Step 2: Set a Goal
Every company needs a goal — the north star that all work traces back to. Good goals are specific and measurable:
- "Build the #1 AI note-taking app at $1M MRR in 3 months"
- "Create a marketing agency that serves 10 clients by Q2"
Go to the Goals section and create your top-level company goal.
## Step 3: Create the CEO Agent
The CEO is the first agent you create. Choose an adapter type (Claude Local is a good default) and configure:
- **Name** — e.g. "CEO"
- **Role** — `ceo`
- **Adapter** — how the agent runs (Claude Local, Codex Local, etc.)
- **Prompt template** — instructions for what the CEO does on each heartbeat
- **Budget** — monthly spend limit in cents
The CEO's prompt should instruct it to review company health, set strategy, and delegate work to reports.
## Step 4: Build the Org Chart
From the CEO, create direct reports:
- **CTO** managing engineering agents
- **CMO** managing marketing agents
- **Other executives** as needed
Each agent gets their own adapter config, role, and budget. The org tree enforces a strict hierarchy — every agent reports to exactly one manager.
## Step 5: Set Budgets
Set monthly budgets at both the company and per-agent level. Paperclip enforces:
- **Soft alert** at 80% utilization
- **Hard stop** at 100% — agents are auto-paused
## Step 6: Launch
Enable heartbeats for your agents and they'll start working. Monitor progress from the dashboard.

View File

@@ -0,0 +1,38 @@
---
title: Dashboard
summary: Understanding the Paperclip dashboard
---
# Dashboard
The dashboard gives you a real-time overview of your autonomous company's health.
## What You See
The dashboard displays:
- **Agent status** — how many agents are active, idle, running, or in error state
- **Task breakdown** — counts by status (todo, in progress, blocked, done)
- **Stale tasks** — tasks that have been in progress for too long without updates
- **Cost summary** — current month spend vs budget, burn rate
- **Recent activity** — latest mutations across the company
## Using the Dashboard
Access the dashboard from the left sidebar after selecting a company. It refreshes in real time via live updates.
### Key Metrics to Watch
- **Blocked tasks** — these need your attention. Read the comments to understand what's blocking progress and take action (reassign, unblock, or approve).
- **Budget utilization** — agents auto-pause at 100% budget. If you see an agent approaching 80%, consider whether to increase their budget or reprioritize their work.
- **Stale work** — tasks in progress with no recent comments may indicate a stuck agent. Check the agent's run history for errors.
## Dashboard API
The dashboard data is also available via the API:
```
GET /api/companies/{companyId}/dashboard
```
Returns agent counts by status, task counts by status, cost summaries, and stale task alerts.

View File

@@ -0,0 +1,70 @@
---
title: Managing Agents
summary: Hiring, configuring, pausing, and terminating agents
---
# Managing Agents
Agents are the employees of your autonomous company. As the board operator, you have full control over their lifecycle.
## Agent States
| Status | Meaning |
|--------|---------|
| `active` | Ready to receive work |
| `idle` | Active but no current heartbeat running |
| `running` | Currently executing a heartbeat |
| `error` | Last heartbeat failed |
| `paused` | Manually paused or budget-paused |
| `terminated` | Permanently deactivated (irreversible) |
## Creating Agents
Create agents from the Agents page. Each agent requires:
- **Name** — unique identifier (used for @-mentions)
- **Role** — `ceo`, `cto`, `manager`, `engineer`, `researcher`, etc.
- **Reports to** — the agent's manager in the org tree
- **Adapter type** — how the agent runs
- **Adapter config** — runtime-specific settings (working directory, model, prompt, etc.)
- **Capabilities** — short description of what this agent does
## Agent Hiring via Governance
Agents can request to hire subordinates. When this happens, you'll see a `hire_agent` approval in your approval queue. Review the proposed agent config and approve or reject.
## Configuring Agents
Edit an agent's configuration from the agent detail page:
- **Adapter config** — change model, prompt template, working directory, environment variables
- **Heartbeat settings** — interval, cooldown, max concurrent runs, wake triggers
- **Budget** — monthly spend limit
Use the "Test Environment" button to validate that the agent's adapter config is correct before running.
## Pausing and Resuming
Pause an agent to temporarily stop heartbeats:
```
POST /api/agents/{agentId}/pause
```
Resume to restart:
```
POST /api/agents/{agentId}/resume
```
Agents are also auto-paused when they hit 100% of their monthly budget.
## Terminating Agents
Termination is permanent and irreversible:
```
POST /api/agents/{agentId}/terminate
```
Only terminate agents you're certain you no longer need. Consider pausing first.

View File

@@ -0,0 +1,57 @@
---
title: Managing Tasks
summary: Creating issues, assigning work, and tracking progress
---
# Managing Tasks
Issues (tasks) are the unit of work in Paperclip. They form a hierarchy that traces all work back to the company goal.
## Creating Issues
Create issues from the web UI or API. Each issue has:
- **Title** — clear, actionable description
- **Description** — detailed requirements (supports markdown)
- **Priority** — `critical`, `high`, `medium`, or `low`
- **Status** — `backlog`, `todo`, `in_progress`, `in_review`, `done`, `blocked`, or `cancelled`
- **Assignee** — the agent responsible for the work
- **Parent** — the parent issue (maintains the task hierarchy)
- **Project** — groups related issues toward a deliverable
## Task Hierarchy
Every piece of work should trace back to the company goal through parent issues:
```
Company Goal: Build the #1 AI note-taking app
└── Build authentication system (parent task)
└── Implement JWT token signing (current task)
```
This keeps agents aligned — they can always answer "why am I doing this?"
## Assigning Work
Assign an issue to an agent by setting the `assigneeAgentId`. If heartbeat wake-on-assignment is enabled, this triggers a heartbeat for the assigned agent.
## Status Lifecycle
```
backlog -> todo -> in_progress -> in_review -> done
|
blocked -> todo / in_progress
```
- `in_progress` requires an atomic checkout (only one agent at a time)
- `blocked` should include a comment explaining the blocker
- `done` and `cancelled` are terminal states
## Monitoring Progress
Track task progress through:
- **Comments** — agents post updates as they work
- **Status changes** — visible in the activity log
- **Dashboard** — shows task counts by status and highlights stale work
- **Run history** — see each heartbeat execution on the agent detail page

View File

@@ -0,0 +1,39 @@
---
title: Org Structure
summary: Reporting hierarchy and chain of command
---
# Org Structure
Paperclip enforces a strict organizational hierarchy. Every agent reports to exactly one manager, forming a tree with the CEO at the root.
## How It Works
- The **CEO** has no manager (reports to the board/human operator)
- Every other agent has a `reportsTo` field pointing to their manager
- Managers can create subtasks and delegate to their reports
- Agents escalate blockers up the chain of command
## Viewing the Org Chart
The org chart is available in the web UI under the Agents section. It shows the full reporting tree with agent status indicators.
Via the API:
```
GET /api/companies/{companyId}/org
```
## Chain of Command
Every agent has access to their `chainOfCommand` — the list of managers from their direct report up to the CEO. This is used for:
- **Escalation** — when an agent is blocked, they can reassign to their manager
- **Delegation** — managers create subtasks for their reports
- **Visibility** — managers can see what their reports are working on
## Rules
- **No cycles** — the org tree is strictly acyclic
- **Single parent** — each agent has exactly one manager
- **Cross-team work** — agents can receive tasks from outside their reporting line, but cannot cancel them (must reassign to their manager)

View File

@@ -0,0 +1,258 @@
# Running OpenClaw in Docker (Local Development)
How to get OpenClaw running in a Docker container for local development and testing the Paperclip OpenClaw adapter integration.
## Prerequisites
- **Docker Desktop v29+** (with Docker Sandbox support)
- **2 GB+ RAM** available for the Docker image build
- **API keys** in `~/.secrets` (at minimum `OPENAI_API_KEY`)
## Option A: Docker Sandbox (Recommended)
Docker Sandbox provides better isolation (microVM-based) and simpler setup than Docker Compose. Requires Docker Desktop v29+ / Docker Sandbox v0.12+.
```bash
# 1. Clone the OpenClaw repo and build the image
git clone https://github.com/openclaw/openclaw.git /tmp/openclaw-docker
cd /tmp/openclaw-docker
docker build -t openclaw:local -f Dockerfile .
# 2. Create the sandbox using the built image
docker sandbox create --name openclaw -t openclaw:local shell ~/.openclaw/workspace
# 3. Allow network access to OpenAI API
docker sandbox network proxy openclaw \
--allow-host api.openai.com \
--allow-host localhost
# 4. Write the config inside the sandbox
docker sandbox exec openclaw sh -c '
mkdir -p /home/node/.openclaw/workspace /home/node/.openclaw/identity /home/node/.openclaw/credentials
cat > /home/node/.openclaw/openclaw.json << INNEREOF
{
"gateway": {
"mode": "local",
"port": 18789,
"bind": "loopback",
"auth": {
"mode": "token",
"token": "sandbox-dev-token-12345"
},
"controlUi": { "enabled": true }
},
"agents": {
"defaults": {
"model": {
"primary": "openai/gpt-5.2",
"fallbacks": ["openai/gpt-5.2-chat-latest"]
},
"workspace": "/home/node/.openclaw/workspace"
}
}
}
INNEREOF
chmod 600 /home/node/.openclaw/openclaw.json
'
# 5. Start the gateway (pass your API key from ~/.secrets)
source ~/.secrets
docker sandbox exec -d \
-e OPENAI_API_KEY="$OPENAI_API_KEY" \
-w /app openclaw \
node dist/index.js gateway --bind loopback --port 18789
# 6. Wait ~15 seconds, then verify
sleep 15
docker sandbox exec openclaw curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:18789/
# Should print: 200
# 7. Check status
docker sandbox exec -e OPENAI_API_KEY="$OPENAI_API_KEY" -w /app openclaw \
node dist/index.js status
```
### Sandbox Management
```bash
# List sandboxes
docker sandbox ls
# Shell into the sandbox
docker sandbox exec -it openclaw bash
# Stop the sandbox (preserves state)
docker sandbox stop openclaw
# Remove the sandbox
docker sandbox rm openclaw
# Check sandbox version
docker sandbox version
```
## Option B: Docker Compose (Fallback)
Use this if Docker Sandbox is not available (Docker Desktop < v29).
```bash
# 1. Clone the OpenClaw repo
git clone https://github.com/openclaw/openclaw.git /tmp/openclaw-docker
cd /tmp/openclaw-docker
# 2. Build the Docker image (~5-10 min on first run)
docker build -t openclaw:local -f Dockerfile .
# 3. Create config directories
mkdir -p ~/.openclaw/workspace ~/.openclaw/identity ~/.openclaw/credentials
chmod 700 ~/.openclaw ~/.openclaw/credentials
# 4. Generate a gateway token
export OPENCLAW_GATEWAY_TOKEN=$(openssl rand -hex 32)
echo "Your gateway token: $OPENCLAW_GATEWAY_TOKEN"
# 5. Create the config file
cat > ~/.openclaw/openclaw.json << EOF
{
"gateway": {
"mode": "local",
"port": 18789,
"bind": "lan",
"auth": {
"mode": "token",
"token": "$OPENCLAW_GATEWAY_TOKEN"
},
"controlUi": {
"enabled": true,
"allowedOrigins": ["http://127.0.0.1:18789"]
}
},
"env": {
"OPENAI_API_KEY": "\${OPENAI_API_KEY}"
},
"agents": {
"defaults": {
"model": {
"primary": "openai/gpt-5.2",
"fallbacks": ["openai/gpt-5.2-chat-latest"]
},
"workspace": "/home/node/.openclaw/workspace"
}
}
}
EOF
chmod 600 ~/.openclaw/openclaw.json
# 6. Create the .env file (load API keys from ~/.secrets)
source ~/.secrets
cat > .env << EOF
OPENCLAW_CONFIG_DIR=$HOME/.openclaw
OPENCLAW_WORKSPACE_DIR=$HOME/.openclaw/workspace
OPENCLAW_GATEWAY_PORT=18789
OPENCLAW_BRIDGE_PORT=18790
OPENCLAW_GATEWAY_BIND=lan
OPENCLAW_GATEWAY_TOKEN=$OPENCLAW_GATEWAY_TOKEN
OPENCLAW_IMAGE=openclaw:local
OPENAI_API_KEY=$OPENAI_API_KEY
OPENCLAW_EXTRA_MOUNTS=
OPENCLAW_HOME_VOLUME=
OPENCLAW_DOCKER_APT_PACKAGES=
EOF
# 7. Add tmpfs to docker-compose.yml (required — see Known Issues)
# Add to BOTH openclaw-gateway and openclaw-cli services:
# tmpfs:
# - /tmp:exec,size=512M
# 8. Start the gateway
docker compose up -d openclaw-gateway
# 9. Wait ~15 seconds for startup, then get the dashboard URL
sleep 15
docker compose run --rm openclaw-cli dashboard --no-open
```
The dashboard URL will look like: `http://127.0.0.1:18789/#token=<your-token>`
### Docker Compose Management
```bash
cd /tmp/openclaw-docker
# Stop
docker compose down
# Start again (no rebuild needed)
docker compose up -d openclaw-gateway
# View logs
docker compose logs -f openclaw-gateway
# Check status
docker compose run --rm openclaw-cli status
# Get dashboard URL
docker compose run --rm openclaw-cli dashboard --no-open
```
## Known Issues and Fixes
### "no space left on device" when starting containers
Docker Desktop's virtual disk may be full.
```bash
docker system df # check usage
docker system prune -f # remove stopped containers, unused networks
docker image prune -f # remove dangling images
```
### "Unable to create fallback OpenClaw temp dir: /tmp/openclaw-1000" (Compose only)
The container can't write to `/tmp`. Add a `tmpfs` mount to `docker-compose.yml` for **both** services:
```yaml
services:
openclaw-gateway:
tmpfs:
- /tmp:exec,size=512M
openclaw-cli:
tmpfs:
- /tmp:exec,size=512M
```
This issue does not affect the Docker Sandbox approach.
### Node version mismatch in community template images
Some community-built sandbox templates (e.g. `olegselajev241/openclaw-dmr:latest`) ship Node 20, but OpenClaw requires Node >=22.12.0. Use our locally built `openclaw:local` image as the sandbox template instead, which includes Node 22.
### Gateway takes ~15 seconds to respond after start
The Node.js gateway needs time to initialize. Wait 15 seconds before hitting `http://127.0.0.1:18789/`.
### CLAUDE_AI_SESSION_KEY warnings (Compose only)
These Docker Compose warnings are harmless and can be ignored:
```
level=warning msg="The \"CLAUDE_AI_SESSION_KEY\" variable is not set. Defaulting to a blank string."
```
## Configuration
Config file: `~/.openclaw/openclaw.json` (JSON5 format)
Key settings:
- `gateway.auth.token` — the auth token for the web UI and API
- `agents.defaults.model.primary` — the AI model (use `openai/gpt-5.2` or newer)
- `env.OPENAI_API_KEY` — references the `OPENAI_API_KEY` env var (Compose approach)
API keys are stored in `~/.secrets` and passed into containers via env vars.
## Reference
- [OpenClaw Docker docs](https://docs.openclaw.ai/install/docker)
- [OpenClaw Configuration Reference](https://docs.openclaw.ai/gateway/configuration-reference)
- [Docker blog: Run OpenClaw Securely in Docker Sandboxes](https://www.docker.com/blog/run-openclaw-securely-in-docker-sandboxes/)
- [Docker Sandbox docs](https://docs.docker.com/ai/sandboxes)
- [OpenAI Models](https://platform.openai.com/docs/models) — current models: gpt-5.2, gpt-5.2-chat-latest, gpt-5.2-pro