Playbook Syntax Reference

Playbooks are multi-step AI workflows written in an extended markdown format. Each playbook chains prompts together with variable interpolation, conditional branching, human-in-the-loop checkpoints, and structured output capture.

This page is the complete DSL specification. For a tutorial-style introduction, see the Playbooks guide.

Document Structure

A playbook is a markdown document with this overall structure:

```
# Title                       (required)
Description text              (optional)

## SYSTEM                     (optional)
## INPUTS                     (optional)
## STEP 1: Title              (at least one required)
## STEP 2: Title
...
## ARTIFACTS                  (optional)
```

The ## sections can appear in any order, but steps must be numbered sequentially starting from 1.


Title

The first # Heading in the document is the playbook title. This is required – parsing fails without it.

```markdown
# Code Review Pipeline
```

Constraints:

  • Must be a level-1 heading (# , not ## or ###)
  • Must appear before any ## section headings
  • Blank lines before the title are skipped

Common mistakes:

  • Starting with ## INPUTS before defining a # Title – this produces a parse error
  • Using ## instead of # for the title

Description

Any text between the title and the first ## section heading becomes the description. The description is not sent to the AI; it appears only in the playbook's metadata.

```markdown
# Code Review Pipeline

Review code changes systematically for quality, security, and maintainability.

## INPUTS
...
```

The description is trimmed of leading and trailing whitespace. If there is no text between the title and the first section, the description is empty.


System Prompt

The ## SYSTEM section defines an optional system prompt sent to the AI for every step in the playbook.

```markdown
## SYSTEM

You are a senior technical architect. Provide balanced, evidence-based
technology evaluations. Consider trade-offs, ecosystem maturity, team
skills, and long-term maintenance costs.
```

Constraints:

  • Heading is case-insensitive: ## SYSTEM and ## System Prompt both work
  • ## SYSTEM PROMPT is also accepted as an alternative heading
  • Content ends at the next ## heading
  • Only one system prompt per playbook (the last one wins if duplicated)

Inputs

The ## INPUTS section declares variables that the user provides before execution. Each input is a list item with a specific format.

Syntax

```markdown
- `name` (type): Description
- `name` (type: default): Description with a default value
- `name` (enum: opt1, opt2, opt3): Description for enum
```

Line Format

Each input must match this exact pattern:

```markdown
- `variable_name` (type_spec): description text
```

| Component | Format | Required |
|---|---|---|
| List marker | `-` | Yes |
| Name | `` `name` `` (backtick-wrapped) | Yes |
| Type spec | `(type)` or `(type: value)` | Yes |
| Description | `: text` after parentheses | No |
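The pattern above can be sketched as a regular expression. This is a hypothetical illustration of the documented format, not the parser's actual code:

```python
import re

# Illustrative sketch of the input-line format described above.
INPUT_LINE = re.compile(
    r'^-\s+`([A-Za-z][A-Za-z0-9_]*)`'   # backtick-wrapped variable name
    r'\s+\(([^)]*)\)'                   # type spec, e.g. "string" or "number: 5"
    r'(?::\s*(.*))?$'                   # optional description after the colon
)

def parse_input_line(line: str):
    m = INPUT_LINE.match(line.strip())
    if not m:
        return None
    name, type_spec, desc = m.groups()
    type_name, _, value = type_spec.partition(":")
    return {
        "name": name,
        "type": type_name.strip() or "string",
        "value": value.strip() or None,   # default value or enum option list
        "description": (desc or "").strip(),
    }
```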

Variable Names

Names must start with a letter and contain only letters, digits, and underscores.

Valid: `topic`, `max_count`, `userInput2`

Invalid: `_reserved`, `2fast`, `my-var`, `has spaces`
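The naming rule can be expressed as a one-line check (a sketch, not the engine's implementation):

```python
import re

# A letter first, then only letters, digits, and underscores.
VALID_NAME = re.compile(r'^[A-Za-z][A-Za-z0-9_]*$')

def is_valid_name(name: str) -> bool:
    return bool(VALID_NAME.match(name))
```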

Types

| Type | Aliases | Renders As |
|---|---|---|
| `string` | (default if unrecognized) | Single-line text input |
| `text` | | Multi-line textarea |
| `number` | `num`, `int`, `float` | Numeric input |
| `boolean` | `bool` | Toggle / checkbox |
| `enum` | `select`, `choice` | Dropdown with fixed options |

If no type is specified or the type is not recognized, it defaults to string.

Defaults

Provide a default value after a colon inside the type parentheses:

```markdown
- `language` (string: Go): Programming language
- `count` (number: 5): How many items to generate
```

An input with a default is optional – it can be left blank during execution and the default will be used. An input without a default is required.

Enum Options

For enum types, the value after the colon is parsed as a comma-separated list of options:

```markdown
- `tone` (enum: formal, casual, academic): Writing tone
- `focus` (select: security, performance, readability, all): Review focus
```

Whitespace around options is trimmed. Empty options (from trailing commas) are discarded.
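The option-splitting rule above amounts to a comma split with trimming. A minimal sketch (illustrative, not the parser's code):

```python
# Split on commas, trim whitespace, and drop empty entries left by
# trailing commas, per the rule described above.
def parse_enum_options(value: str) -> list[str]:
    return [opt.strip() for opt in value.split(",") if opt.strip()]
```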

> **Info:** Enum inputs do not support a separate default value. The user must select one of the listed options.

Duplicate Detection

Declaring two inputs with the same name is a fatal parse error:

```markdown
## INPUTS

- `topic` (string): First declaration
- `topic` (text): Duplicate -- parse error
```

Complete Example

```markdown
## INPUTS

- `code` (text): The code or diff to review
- `language` (string: Go): Programming language
- `focus` (enum: security, performance, readability, all): Review focus area
- `depth` (enum: quick, standard, deep): Analysis depth
- `verbose` (boolean): Enable verbose output
- `max_issues` (number: 10): Maximum issues to report
```

Steps

Steps are the core of a playbook. Each step is a prompt sent to the AI, executed sequentially with context accumulation – every step receives the outputs of all previous steps.

Syntax

```markdown
## STEP 1: Title Goes Here

This is the prompt content for step 1.
It can span multiple lines.

Use {{variable_name}} to interpolate input values.

## STEP 2: Next Step

This step automatically has access to Step 1's output.
```

Numbering

Steps must use ## STEP N: Title format where N is a sequential integer starting from 1. The parser issues a warning if numbering is not sequential (e.g., skipping from step 1 to step 3).

At least one step is required – a playbook with no steps produces a fatal parse error.
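The numbering rules above could be checked like this (a hypothetical sketch; the real parser's warning text is not specified here):

```python
# Warn on non-sequential step numbers; fail when there are no steps at all.
def check_step_numbers(numbers: list[int]) -> list[str]:
    if not numbers:
        raise ValueError("playbook has no steps")   # fatal parse error
    warnings = []
    for expected, n in enumerate(numbers, start=1):
        if n != expected:
            warnings.append(f"step {n} is out of sequence (expected {expected})")
    return warnings
```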

Variable Interpolation

Use {{variable_name}} anywhere in step content to insert the value of an input variable:

```markdown
## STEP 1: Research

Research the topic "{{topic}}" and identify key themes
relevant to a {{audience}} audience.
```

Variables are resolved against the playbook’s input values at execution time. If a variable has no value and no default, the placeholder remains in the rendered text.
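The resolution rule above, including leaving unresolved placeholders in place, can be sketched as (an illustration, not the engine's code):

```python
import re

# {{name}} placeholders: replace when a value exists, otherwise leave
# the placeholder text intact, as described above.
PLACEHOLDER = re.compile(r'\{\{([A-Za-z][A-Za-z0-9_]*)\}\}')

def render(content: str, values: dict) -> str:
    def substitute(m):
        name = m.group(1)
        return str(values[name]) if name in values else m.group(0)
    return PLACEHOLDER.sub(substitute, content)
```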

Context Accumulation

Each step automatically receives the outputs of all previous steps as context. The AI sees:

  1. The system prompt (if defined)
  2. A summary of all previous step outputs
  3. The current step’s rendered content

You do not need to manually reference previous step outputs – they are injected into the system message automatically.
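The assembly order above can be sketched as a message builder. This is an assumed shape (function and field names are hypothetical), showing only the ordering the spec describes:

```python
# System prompt, then prior step outputs injected into the system message,
# then the current step's rendered content as the user message.
def build_messages(system_prompt, prior_outputs, step_content):
    context = "\n\n".join(
        f"Output of step {n}:\n{out}" for n, out in prior_outputs
    )
    system = "\n\n".join(part for part in (system_prompt, context) if part)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": step_content},
    ]
```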


Prompt References

Use @prompt(library:ID) to pull a prompt from your Promptmark library into a step. The referenced prompt’s content is prepended to the step’s own content.

Syntax

```markdown
## STEP 1: Review

@prompt(library:abc123-def456)

Apply the review criteria above to this code: {{code}}
```

Constraints:

  • Must appear on its own line within a step
  • The ID is the prompt’s UUID from your library
  • Only one @prompt reference per step (last one wins)
  • The prompt’s content is prepended before the step content in the AI call
  • If the referenced prompt does not exist, the step executes with only its own content

Output Capture

Use @output(varname) to capture a step’s output as a named variable. Named outputs can be referenced in branch conditions and are available to downstream steps.

Basic Capture

```markdown
## STEP 1: Analyze

Analyze the project requirements.

@output(analysis_result)
```

After step 1 completes, the full AI response is stored in the analysis_result variable.

Extracted Fields

Use @output(varname, extract:"field") to extract a specific JSON field from the AI’s response:

```markdown
## STEP 1: Classify

Classify the severity of this issue.

@output(severity_info, extract:"level")
```

When extract is specified, the engine:

  1. Instructs the AI to include a JSON object at the end of its response (e.g., {"level": "high"})
  2. After the response completes, scans from the bottom for a JSON object containing the specified field
  3. Stores the extracted value in the named variable
  4. Strips the JSON line from the displayed output

If extraction fails (no JSON found, or the field is missing), the full response text is stored as a fallback.
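The bottom-up scan and fallback can be sketched as follows, assuming the JSON object sits on a single line (a hypothetical reconstruction, not the engine's code):

```python
import json

def extract_field(response: str, field: str):
    """Scan from the bottom for a JSON object containing `field`.

    Returns (captured_value, display_text). On failure, the full
    response is stored as a fallback, as described above.
    """
    lines = response.rstrip().splitlines()
    for i in range(len(lines) - 1, -1, -1):
        line = lines[i].strip()
        if not (line.startswith("{") and line.endswith("}")):
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue
        if isinstance(obj, dict) and field in obj:
            # Strip the JSON line from the displayed output.
            display = "\n".join(lines[:i] + lines[i + 1:]).rstrip()
            return obj[field], display
    return response, response
```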

Constraints:

  • Must appear on its own line within a step
  • Variable name follows the same rules as input names (letters, digits, underscores; starts with a letter)
  • Extract field name must be a single word (\w+)
  • One @output directive per step

Elicitation

Use @elicit(type, "prompt") to pause execution and collect input from the user mid-playbook. The playbook resumes after the user responds.

Types

| Type | Renders As | User Action |
|---|---|---|
| `input` | Text field | Type free-form text |
| `confirm` | Yes/No buttons | Click Yes or No |
| `select` | Dropdown menu | Pick from options |

Syntax

Text input:

```markdown
@elicit(input, "What additional context should we consider?")
```

Confirmation:

```markdown
@elicit(confirm, "Does this architecture look right? Proceed with the detailed spec?")
```

Selection (with options):

```markdown
@elicit(select, "Which framework?", "React", "Vue", "Svelte", "Angular")
```

For select, options are additional quoted strings after the prompt.
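The directive's argument list, including the double-quote requirement, can be sketched with a regex (an illustration of the documented rules, not the parser itself):

```python
import re

# Type keyword, then one or more double-quoted strings; single quotes
# are not recognized, per the rules above.
ELICIT = re.compile(r'^@elicit\((input|confirm|select)\s*((?:,\s*"[^"]*")+)\)$')

def parse_elicit(line: str):
    m = ELICIT.match(line.strip())
    if not m:
        return None
    etype, rest = m.groups()
    args = re.findall(r'"([^"]*)"', rest)
    return {"type": etype, "prompt": args[0], "options": args[1:]}
```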

Behavior

  • A step with @elicit pauses execution with status awaiting_input
  • The user’s response is stored internally as __elicit_step_N (where N is the step number)
  • If the step has no other content besides the @elicit directive, the user’s response becomes the step output directly – no AI call is made
  • If the step has additional prompt content, execution resumes with the AI call after the user responds
  • The @elicit directive is stripped from the step content – it does not appear in the prompt sent to the AI

Combined with Output Capture

You can use @elicit and @output in the same step:

```markdown
## STEP 3: Choose Framework

@elicit(select, "Which framework?", "React", "Vue", "Svelte")
@output(chosen_framework)
```

If this is an elicit-only step (no other content), the user’s selection is stored directly as chosen_framework.

Common mistakes:

  • Forgetting to quote the prompt and options – all arguments must be double-quoted strings
  • Using single quotes instead of double quotes – only "double quotes" are recognized

Branching

Branches let a playbook take different paths based on input values or named outputs from previous steps.

Syntax

Branches use triple-backtick fenced markers:

````markdown
## STEP 2: Evaluate

Evaluate {{technology}} against the criteria.

```if evaluation_depth == "thorough"```

### STEP 2a: Deep Analysis

Perform a detailed analysis including benchmarks and security review.

```elif evaluation_depth == "quick"```

### STEP 2b: Quick SWOT

Provide a concise SWOT analysis.

```else```

### STEP 2c: Standard Review

Provide a standard evaluation.

```endif```
````

> **Warning:** The branch markers must appear on their own line with no other content. Each marker is a line that starts and ends with triple backticks: ```if ... ```, ```elif ... ```, ```else```, ```endif```.

Operators

The parser supports two comparison operators:

| Operator | Meaning |
|---|---|
| `==` | Equals (exact string match) |
| `!=` | Not equals |

All comparisons are string-based. The value on the right side must be double-quoted.

Condition Format

````markdown
```if variable_name == "value"```
```if variable_name != "value"```
````

The variable can reference either an input variable or a named output from a previous step. The parser automatically detects the source:

  • If the variable name matches a declared input, it resolves from inputs
  • If it matches a named output (from @output), it resolves from step outputs
  • If neither, the parser issues a warning about an undeclared variable
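The resolution order above can be sketched as a small evaluator. This is hypothetical; in particular, treating an undeclared variable as an empty string is an assumption, since the spec only says the parser warns:

```python
# Resolve from inputs first, then named outputs, then compare as strings.
def evaluate_condition(variable, op, value, inputs, outputs):
    if variable in inputs:
        actual = inputs[variable]
    elif variable in outputs:
        actual = outputs[variable]
    else:
        actual = ""   # assumption: undeclared variables compare as empty
    return actual == value if op == "==" else actual != value
```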

Sub-steps

Steps inside branches use ### headings with letter suffixes:

```markdown
### STEP 2a: Deep Analysis
### STEP 2b: Quick Review
```

The label format is `Na`, where N is the parent step number and the letter (a, b, c, and so on) distinguishes the branch arms. Sub-steps support all the same features as regular steps: @prompt, @output, @elicit, and variable interpolation.

Else Branch

The else branch matches when no if or elif condition is true. It has no condition:

````markdown
```else```

### STEP 2c: Fallback

Default behavior when no condition matches.

```endif```
````

No Match Behavior

If no branch matches and there is no else block, the step is skipped entirely (status: skipped).

Branching on Named Outputs

You can branch on values captured by @output in earlier steps:

````markdown
## STEP 1: Classify

Classify the input as "technical" or "general".

@output(content_type, extract:"category")

## STEP 2: Adapt

```if content_type == "technical"```

### STEP 2a: Technical Adaptation
Adapt for a technical audience with code examples.

```else```

### STEP 2b: General Adaptation
Adapt for a general audience with analogies.

```endif```
````

Common mistakes:

  • Forgetting the ```endif``` marker – the parser will consume subsequent steps into the branch
  • Using >, <, or contains operators – only == and != are supported
  • Referencing a variable that is not declared in ## INPUTS or captured by @output

Artifacts

The ## ARTIFACTS section declares the expected output format of the final step’s result.

Syntax

```markdown
## ARTIFACTS

type: markdown
```

Valid Types

| Type | Description |
|---|---|
| `markdown` | Markdown document |
| `json` | JSON data |
| `mermaid` | Mermaid diagram |
| `chartjs` | Chart.js configuration |
| `html_css` | HTML + CSS |
| `javascript` | JavaScript code |
| `typescript` | TypeScript code |

The type is case-insensitive. An unrecognized type produces a parser warning.

> **Info:** `## OUTPUT` is accepted as an alias for `## ARTIFACTS`.
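The validation rule above, with case-insensitive matching, can be sketched as (hypothetical; the warning mechanism itself is not shown):

```python
VALID_ARTIFACT_TYPES = {
    "markdown", "json", "mermaid", "chartjs",
    "html_css", "javascript", "typescript",
}

# Lowercase before checking; an unrecognized type yields None,
# which would trigger a parser warning.
def check_artifact_type(value: str):
    t = value.strip().lower()
    return t if t in VALID_ARTIFACT_TYPES else None
```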

Limits

| Limit | Value |
|---|---|
| Maximum playbook content | 200 KB |
| Variable name format | Letters, digits, underscores (starts with letter) |
| Minimum steps | 1 |
| Branch operators | `==`, `!=` only |

Complete Annotated Example

This example uses every feature documented above.

````markdown
# Technical Decision Matrix

Evaluate technology choices with a structured decision framework.

## SYSTEM

You are a senior technical architect. Provide balanced, evidence-based
technology evaluations. Consider trade-offs, ecosystem maturity, team
skills, and long-term maintenance costs.

## INPUTS

- `technology` (string): The technology or approach to evaluate
- `criteria` (string): Key evaluation criteria (comma-separated)
- `constraints` (text): Project constraints and requirements
- `evaluation_depth` (enum: quick, thorough): Level of analysis depth

## STEP 1: Requirements Analysis

Analyze the following project constraints and extract the key
technical requirements:

{{constraints}}

Focus on: scalability needs, team expertise, integration requirements,
and timeline pressure.

@output(requirements_summary, extract:"priority_level")

## STEP 2: Technology Assessment

Evaluate {{technology}} against these criteria: {{criteria}}

Consider the requirements analysis from the previous step.

Provide ratings (1-5) for each criterion with justification.

```if evaluation_depth == "thorough"```

### STEP 2a: Deep Dive Analysis

Perform a detailed analysis of {{technology}} including:
- Community health and contributor trends
- Security vulnerability history
- Performance benchmarks vs alternatives
- Migration complexity from current stack

```else```

### STEP 2b: Quick Assessment

Provide a concise SWOT analysis of {{technology}} for the given criteria.

```endif```

## STEP 3: Validate Direction

@elicit(confirm, "Does the assessment look right? Proceed to recommendation?")

## STEP 4: Recommendation

Based on the assessment, provide a final recommendation with:
1. Go/No-Go decision with confidence level
2. Key risks and mitigations
3. Implementation timeline estimate
4. Alternative options if No-Go

## ARTIFACTS

type: markdown
````

What happens when this playbook runs:

  1. Inputs collected – The user fills in technology, criteria, constraints, and selects evaluation_depth
  2. Step 1 executes – The AI analyzes constraints. The @output directive captures the response and extracts the priority_level JSON field
  3. Step 2 executes – The AI evaluates the technology. The branch checks evaluation_depth: if "thorough", sub-step 2a runs a deep dive; otherwise, sub-step 2b runs a quick SWOT
  4. Step 3 pauses – The @elicit(confirm) displays a Yes/No prompt. Since this step has no other content, the user’s response becomes the step output directly (no AI call)
  5. Step 4 executes – The AI synthesizes everything into a final recommendation, with all previous step outputs available as context
  6. Artifact – The result is expected as a markdown document