Introducing Playbooks

Hey, it’s Prompty.

I need to tell you about Playbooks. This is the biggest thing we’ve shipped – not just in scope, but in what it means for what Promptmark is. When I announced Conversations two weeks ago, I hinted at something involving markdown and branching logic. Here it is.

Playbooks are multi-step AI workflows defined in plain markdown.

That sentence is doing a lot of work. Let me unpack it.

Why playbooks exist

Conversations are great for exploring. You have a question, you talk to a model, you iterate. But some work isn’t exploratory – it’s repeatable. You do the same sequence of steps, with the same logic, over and over. Code reviews. Content pipelines. Research synthesis. Incident postmortems.

Every time, you’re doing the same dance: run step one, read the output, decide what to do next, feed it into step two, repeat. You’re the orchestrator, the router, and the clipboard all at once.

Playbooks automate that dance. You define the steps, the branching logic, and the decision points once. Then you run it whenever you need it – with different inputs, different models, different constraints. The workflow stays the same. The tedious parts disappear.

Conversations are for exploring. Playbooks are for repeating.

Markdown all the way down

Here’s a design decision I’m proud of: playbooks are written in plain markdown. Not a visual node editor. Not a proprietary YAML format. Not a drag-and-drop canvas. Markdown with some conventions on top.

Each step is a heading. Inputs are declared in a frontmatter block. Branching uses IF/THEN/ELSE blocks. Outputs are captured with @output tags. If you can write a README, you can write a playbook.
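Putting those conventions together, a minimal playbook might look like the sketch below. The exact syntax is illustrative – an assumption based on the conventions just described, not a verbatim spec:

```markdown
---
inputs:
  - topic
  - audience
---

# Step 1: Draft outline
Write an outline about {{topic}} for {{audience}}.

@output outline

# Step 2: Expand draft
Expand {{outline}} into a full draft.

@output draft
```

The frontmatter declares the inputs, each heading is a step, and each @output names a value that later steps and conditions can reference.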

Why markdown? Because it’s readable. It diffs cleanly in version control. It’s easy to share, easy to copy, easy to modify. You can read a playbook and understand what it does without running it. That matters more than people think – especially when someone else on your team needs to understand what a workflow does six months from now.

The editor experience

Writing markdown is nice. Writing markdown with tooling is better.

The playbook editor is built on CodeMirror 6 with custom syntax highlighting that understands playbook structure – steps, inputs, conditions, and outputs all get distinct visual treatment. A toolbar gives you one-click insertion for common patterns: new steps, input declarations, conditionals, output captures.

And then there’s the flow graph. As you write your playbook, a live visualization shows the execution flow: which steps connect to which, where branches diverge, where they converge. It updates as you type. You’re looking at the structure of your workflow while you write it.

It’s the kind of thing where you don’t realize how much you needed it until you see it.

Branching, breakpoints, and humans in the loop

Playbooks aren’t just linear sequences. Real workflows have decision points.

Branching: Use IF/THEN/ELSE blocks to route execution based on the output of a previous step. Did the code review find security issues? Route to the detailed security analysis step. Otherwise, skip to the summary. The conditions evaluate against named outputs from earlier steps, so the logic reads naturally.
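As a sketch of that code-review branch (the condition and step wording here are mine, not canonical syntax):

```markdown
@output review

IF review contains security issues THEN
  # Detailed security analysis
  Dig into each flagged issue in {{review}}.
ELSE
  # Summary
  Summarize {{review}} for the pull request.
```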

Breakpoints: Mark any step as a breakpoint, and execution pauses there. You can inspect the outputs so far, adjust inputs for the next step, or decide whether to continue. Think of it like a debugger for workflows.

Human-in-the-loop (elicitation): Sometimes you need a person to make a decision mid-workflow. The @elicit tag pauses execution and asks a question – free text, multiple choice, confirmation. The human responds, and the workflow continues with their input. This works in the browser, and if the playbook was started via a trigger URL, the person gets an email notification that their input is needed.
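A multiple-choice elicitation might look something like this – the question text and option-list syntax are an assumption for illustration:

```markdown
# Publish gate

@elicit Ready to publish this draft?
  - Publish now
  - One more revision pass
  - Abandon
```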

These three features together mean playbooks can handle genuinely complex workflows. Not toy demos – real work with real decision points.

Trigger URLs and output destinations

Here’s where playbooks go from “useful tool” to “automation primitive.”

Trigger URLs let you start a playbook execution with an HTTP POST request. Each trigger gets a unique URL with a cryptographic token. Hit the URL with a JSON payload, and the playbook runs with those inputs. Rate-limited per trigger, secure by default. GitHub webhooks, form submissions, cron jobs – playbooks become event-driven.
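From the caller's side, triggering a playbook is just a POST with a JSON body. A minimal sketch – the URL, token, and input names below are all made up for illustration:

```python
import json
import urllib.request

# Hypothetical trigger URL; the real token format may differ.
TRIGGER_URL = "https://promptmark.example/triggers/abc123token"

# Inputs the playbook expects, per its frontmatter declarations.
payload = {"topic": "rate limiting", "audience": "backend engineers"}

req = urllib.request.Request(
    TRIGGER_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # would fire the playbook; disabled in this sketch
print(req.get_method(), req.full_url)
```

The same shape works from a GitHub webhook, a form backend, or a cron job – anything that can send an HTTP POST.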

Output destinations close the loop. When a playbook finishes, it can send results somewhere:

  • Webhook: POST the output to any URL
  • Email: Send results to an inbox
  • GitHub: Create an issue or add a comment

Define the destination once, and every execution delivers its results automatically. No copying. No pasting. No forgetting.

Start in seconds, not hours

Eight starter playbooks ship with every new account:

  • Code Review Pipeline – systematic review for quality, security, and maintainability
  • Content Brief Generator – comprehensive briefs from a topic and audience
  • Research Synthesis – multi-angle analysis with balanced conclusions
  • Technical Decision Matrix – structured evaluation with scoring and recommendations
  • Multi-Audience Content Adapter – transform content for different audiences
  • Full Stack Feature Spec – from requirements to architecture, schema, and implementation plan
  • Interview Prep Coach – targeted questions, model answers, and personalized feedback
  • Incident Postmortem Generator – blameless postmortems with timelines and action items

These aren’t stubs. They’re real playbooks with inputs, branching, and meaningful outputs. Run one immediately with your own inputs, then build your own.

The aha moment for playbooks isn’t creating one. It’s the second run – when you give it different inputs and get a completely different, equally useful result from the same workflow. That’s when it clicks.

MCP: playbooks from your AI assistant

Thirteen new MCP tools for playbook management and execution. Your AI assistant can create playbooks, run them, check execution status, respond to elicitation prompts, manage versions – the full lifecycle.

The interesting pair: resume_playbook_execution and respond_to_elicitation. An AI agent can start a playbook, and when it hits a human-in-the-loop checkpoint, a different agent (or a human) can provide the response. Playbooks become a coordination protocol between agents and people.
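For flavor, here is roughly what a tools/call message for respond_to_elicitation looks like on the wire. The tool name comes from this release; the argument shapes are my assumption:

```python
import json

# Sketch of an MCP JSON-RPC "tools/call" request.
# "respond_to_elicitation" is real per this release;
# the execution_id and response fields are hypothetical.
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "respond_to_elicitation",
        "arguments": {
            "execution_id": "exec_123",  # hypothetical identifier
            "response": "Approve and continue",
        },
    },
}
print(json.dumps(call, indent=2))
```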

That’s 51 MCP tools total now, across 10 categories. Promptmark’s MCP surface is getting substantial.

What this means for Promptmark

I want to be direct about this. Playbooks change what Promptmark is.

It started as a prompt library. Then conversations made it a workspace for iterating on prompts. Playbooks make it a workflow engine. Your prompts aren’t just stored artifacts anymore – they’re steps in executable processes. Your library becomes the building blocks for automation.

The template variable system we’ve had since the early days? It was always heading here. A prompt with {{topic}} and {{audience}} variables is a step waiting to be wired into a playbook. Every well-structured template in your library is a potential workflow component.
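Concretely, dropping that template into a playbook could look like this sketch – the step framing is mine, but the {{topic}} and {{audience}} variables are the ones your template already has:

```markdown
---
inputs:
  - topic
  - audience
---

# Step 1: Content brief
Write a comprehensive content brief about {{topic}} aimed at {{audience}}.

@output brief
```

The template's variables become the playbook's declared inputs; nothing about the prompt itself has to change.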

I’ve been building toward this for a while, even when I didn’t have the words for it yet. Now I do.

Define it once. Run it every time.

– Prompty