# Playbooks
Conversations are for exploring. Playbooks are for repeating.
A playbook is a markdown document that defines a multi-step AI workflow. You write it once, then run it many times with different inputs. Each run executes the steps sequentially, feeding context forward so later steps build on earlier results.
Playbooks are useful when you have a process that follows the same structure every time but operates on different data – code reviews, content briefs, research synthesis, feature specs, incident postmortems.
## When to Use What
| | Prompts | Conversations | Playbooks |
|---|---|---|---|
| Structure | Single prompt, one shot | Free-form multi-turn chat | Defined multi-step workflow |
| Reusability | Template variables | Model and system prompt carry over | Full workflow with inputs |
| AI Calls | 0 (prompt library only) | 1 per message | 1 per step, automatic |
| Branching | None | Manual (you steer) | Conditional (rules-based) |
| Human Input | Template fill-in before use | Every message | Optional, at specific steps |
| Best For | Storing and sharing prompts | Exploration and iteration | Repeatable multi-step processes |
Use a prompt when you have a single reusable piece of text, optionally with template variables.
Use a conversation when you want to explore a topic interactively, steer the AI in real time, or don’t know the shape of the output in advance.
Use a playbook when you have a defined process with multiple steps, where each step’s output feeds the next, and you want to run that process repeatedly with different inputs.
## Anatomy of a Playbook
A playbook is written in markdown with specific section headings that the parser recognizes. Here is the high-level structure:
```markdown
# Title

Description of what this playbook does.

## SYSTEM

Optional system prompt that applies to every step.

## INPUTS

- `variable_name` (type: default): Description
- `another_var` (enum: option1, option2, option3): Description

## STEP 1: First Step Title

Prompt text with {{variable_name}} placeholders.
Context from previous steps is included automatically.

## STEP 2: Second Step Title

This step can reference {{another_var}} and builds on Step 1's output.

## ARTIFACTS

type: markdown
```

| Section | Required | Purpose |
|---|---|---|
| `# Title` | Yes | The playbook name (first `#` heading) |
| Description | No | Text between the title and the first `##` section |
| `## SYSTEM` | No | System prompt sent to the AI model on every step |
| `## INPUTS` | No | Declares input variables the user fills in before running |
| `## STEP N: Title` | Yes (at least one) | Individual workflow steps, numbered sequentially |
| `## ARTIFACTS` | No | Declares the output format (markdown, json, mermaid, chartjs, html_css, javascript, typescript) |
Steps are the core of a playbook. Each step sends a prompt to the AI model, receives a response, and passes that context forward to the next step.
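Putting the pieces together, here is a minimal sketch of a complete playbook. The title, step prompts, and variable names are illustrative, and the exact input type annotations follow the `(type: default)` pattern shown above:

```markdown
# Summarize and Critique

Summarizes a document, then critiques the summary.

## INPUTS

- `document` (string): The text to process
- `tone` (enum: formal, casual): Writing tone for the summary

## STEP 1: Summarize

Summarize the following document in a {{tone}} tone:

{{document}}

## STEP 2: Critique

Review the summary produced in Step 1. List anything important
from the original document that it omits or distorts.

## ARTIFACTS

type: markdown
```

Step 2 never mentions `{{document}}` directly; it relies on Step 1's output being carried forward as context.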
## Getting Started
Every new account starts with 8 starter playbooks:
| Playbook | What It Does |
|---|---|
| Code Review Pipeline | Systematic code review across quality, security, and maintainability |
| Content Brief Generator | Generates a content brief from a topic and audience |
| Research Synthesis | Analyzes a topic from multiple angles, produces a balanced synthesis |
| Technical Decision Matrix | Evaluates technology choices with structured scoring and branching |
| Multi-Audience Content Adapter | Transforms content for different audiences with branching per audience type |
| Full Stack Feature Spec | Generates feature specs with architecture design, schema, API, and test plan |
| Interview Prep Coach | Practice questions, model answers, and personalized feedback with elicitation |
| Incident Postmortem Generator | Structured blameless postmortem with timeline, root cause, and action items |
To run a starter playbook:
- Navigate to Playbooks in the sidebar
- Open any starter playbook (tagged with `example`)
- Click Run
- Select an AI model and fill in the required inputs
- Click Execute
The execution streams results step by step. You see each step’s output as tokens arrive.
## Key Capabilities

### Branching
Steps can branch based on input values or outputs from previous steps. The playbook takes different paths depending on the data:
```markdown
## STEP 2: Analysis

if architecture_style == "microservices"

### STEP 2a: Service Design

Design the service boundaries and inter-service communication...

elif architecture_style == "serverless"

### STEP 2b: Function Design

Design the serverless functions and triggers...

else

### STEP 2c: Module Design

Design the monolith modules and dependency graph...

endif
```

Only the matching branch executes. Sub-steps within branches use letter suffixes (2a, 2b, 2c).
### Output Capture
Steps can capture their output into named variables for use in downstream branching conditions or step prompts:
```
@output(requirements_summary, extract:"priority_level")
```

The `extract` option pulls a specific field from structured JSON output. Without it, the full step output is stored.
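Captured variables can then drive downstream branches. A hedged sketch combining the capture and branching syntax shown on this page (step titles and variable names are illustrative):

```markdown
## STEP 1: Triage

Classify the incident and respond with JSON that includes a
"priority_level" field.

@output(triage_result, extract:"priority_level")

## STEP 2: Response Plan

if triage_result == "critical"

### STEP 2a: Emergency Plan

Draft an immediate mitigation plan with an on-call escalation path...

endif
```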
### Elicitation (Human-in-the-Loop)
Steps can pause execution and ask the user a question before continuing:
```
@elicit(confirm, "Does this architecture approach look right?")
@elicit(select, "Which framework?", "React", "Vue", "Svelte")
@elicit(input, "Describe your requirements:")
```

Three elicitation types are supported:
| Type | Behavior |
|---|---|
| `input` | Free-text input field |
| `confirm` | Yes/no confirmation |
| `select` | Pick from a list of options |
When execution reaches an elicitation step, it pauses and presents the question. After the user responds, execution resumes from that step.
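In context, an elicitation directive sits alongside a step's prompt text. A minimal sketch (the step title and placeholder variable are illustrative):

```markdown
## STEP 3: Architecture Proposal

Propose an architecture for {{feature_description}}, including the
major components and how they communicate.

@elicit(confirm, "Does this architecture approach look right?")
```

Execution pauses after this step's question is presented, and the user's answer becomes part of the context for the steps that follow.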
### Breakpoints
You can set breakpoints on specific steps before running a playbook. When execution reaches a breakpoint, it pauses after completing that step. You can inspect the output, optionally modify input variables, and resume execution.
Breakpoints are useful for debugging playbooks or for workflows where you want to review intermediate results before proceeding.
### Streaming Execution
Playbook execution streams output token by token via Server-Sent Events. The execution view shows:
- A step rail on the left tracking progress through each step
- The main output panel in the center showing the current step’s streaming output
- A variable tracker on the right showing input values and named outputs as they update
Each completed step’s output is available with Copy, Raw, and Meta views. The meta view shows token counts, cost estimates, and latency for that step.
### Prompt References
Steps can reference prompts from your library instead of inline content:
```
@prompt(library:abc-123-def)
```

This pulls the prompt content at execution time, so the playbook always uses the latest version of the referenced prompt.
### Artifacts
The `## ARTIFACTS` section declares what the final output should be treated as. Supported types: `markdown`, `json`, `mermaid`, `chartjs`, `html_css`, `javascript`, `typescript`. This enables structured artifact downloads from completed executions.
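For example, a playbook whose final step produces a diagram could end with a declaration like this, so the completed execution's output is downloadable as a Mermaid artifact:

```markdown
## ARTIFACTS

type: mermaid
```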
## Execution History
Every run is recorded. Navigate to a playbook’s Executions tab to see past runs with:
- Status (completed, failed, cancelled, paused)
- Token counts and cost estimates
- Duration
- Step-by-step results
You can restart a completed execution from any step, creating a new execution that inherits the context up to that point.
## MCP Integration
Playbooks are fully manageable via MCP tools. You can create, list, update, delete, execute, and interact with paused executions – all from an AI assistant.
| Tool | Description |
|---|---|
| `list_playbooks` | List playbooks with search, tag filtering, and pagination |
| `get_playbook` | Get a playbook with its parsed structure |
| `create_playbook` | Create a new playbook from markdown content |
| `update_playbook` | Update a playbook (auto-snapshots before changes) |
| `delete_playbook` | Soft-delete a playbook |
| `validate_playbook` | Validate playbook markdown without saving |
| `execute_playbook` | Execute a playbook with inputs, model, and optional breakpoints |
| `get_playbook_execution` | Get execution details with all step results |
| `list_playbook_executions` | List executions for a playbook |
| `get_playbook_versions` | Get version history |
| `restore_playbook_version` | Restore to a previous version |
| `resume_playbook_execution` | Resume a paused execution (from breakpoint) |
| `respond_to_elicitation` | Answer an elicitation prompt in a paused execution |
The `execute_playbook` MCP tool returns step results synchronously (non-streaming). It supports breakpoints via a comma-separated list of step numbers. Paused executions can be resumed with `resume_playbook_execution` or answered with `respond_to_elicitation`.
For full MCP tool schemas, see the MCP Reference.
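As an illustration, the arguments for an `execute_playbook` call might look like the following. Apart from breakpoints being a comma-separated list of step numbers, the exact field names (`playbook_id`, `model`, `inputs`, `breakpoints`) and values here are assumptions; consult the MCP Reference for the actual schema:

```json
{
  "playbook_id": "abc-123-def",
  "model": "claude-sonnet-4",
  "inputs": {
    "topic": "edge caching",
    "audience": "backend engineers"
  },
  "breakpoints": "2,4"
}
```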
## The PLAYBOOK.md Standard
Promptmark’s playbook format is based on the PLAYBOOK.md open standard – a specification for defining AI workflows in markdown.
PLAYBOOK.md defines a portable format for multi-step AI workflows with inputs, branching, output capture, elicitation, prompt references, and artifacts. The goal is interoperability: a playbook written in one tool can be read and executed by any other tool that implements the spec.
Promptmark is a full implementation of the PLAYBOOK.md specification. Every syntax feature documented in the Playbook Syntax Reference conforms to the standard, including:
- Input declarations with type constraints and defaults
- Sequential and branched step execution
- `@output()` capture with optional `extract` for structured data
- `@elicit()` for human-in-the-loop checkpoints
- `@prompt()` references with multiple resolution schemes
- `## ARTIFACTS` section for typed output formats
Playbooks you write in Promptmark are portable. Other tools that adopt the PLAYBOOK.md standard can read and execute the same files.
Resources:
- PLAYBOOK.md Specification – The canonical specification
- PLAYBOOK-md on GitHub – The GitHub organization for the standard
## What’s Next
- Playbook Syntax Reference – Complete specification for inputs, steps, branches, outputs, elicitation, and artifacts
- Running Playbooks – Detailed guide to preflight, execution, breakpoints, and results
- Triggers and Delivery – Trigger URLs, output delivery (webhook, email, GitHub), and automation
- Playbook MCP Tools – Full schemas and examples for MCP integration