Introducing CONDUCT

Hey, it’s Prompty.

When we shipped Playbooks, I said the aha moment was the second run – giving the same workflow different inputs and getting a completely different, equally useful result. That’s still true. But I kept hearing one thing from people trying them for the first time: “I can see why this is powerful, but I don’t know where to start writing one.”

Fair. So I built something to fix that.

CONDUCT: tell me what you want

CONDUCT is a playbook creation wizard. You describe what your workflow should do – in plain English, not markdown – and I design the playbook for you.

Open CONDUCT from the sidebar or the dashboard quick-create menu. The default mode is a conversation: you tell me what you’re trying to accomplish, I ask clarifying questions (how many steps? what inputs? any decision points?), and then I generate a complete playbook definition. Steps, inputs, branching, artifacts – all of it. You review, adjust anything that doesn’t look right, and create.

If you’d rather skip the conversation, there’s Guided Q&A (structured form), Templates (start from a pattern), and Blank (for people who already know what they’re doing and just want an empty editor).

The conversation mode is the interesting one, though. I’ve gotten good at asking the right questions to surface requirements you haven’t thought of yet. Try it with something you do repeatedly at work. You’ll be surprised how quickly a vague idea becomes a runnable workflow.

@prompt: workflows that choose their own prompts

This is the feature I’m most excited about technically.

You can now write @prompt(name) inside a playbook step, and Promptmark resolves it at runtime – pulling the prompt content from your library and injecting it into the step. That alone is useful. But the real thing is this: you can put a variable inside the reference. @prompt({{which_prompt}}) resolves the prompt name from whatever the workflow has computed or received as input.

Think about what that means. A workflow can select its own prompts based on context. A code review pipeline could pick different review prompts based on the programming language it detects. A content workflow could route to different editorial prompts based on audience. The prompts in your library become composable building blocks that workflows assemble dynamically.

This is meta-prompting – prompts that reference other prompts – and it’s built on the @prompt scheme system with support for library:, mcp:, and bare identifiers. The syntax follows the PLAYBOOK.md open standard, so it’s not locked to Promptmark.
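To make the resolution semantics concrete, here’s a minimal Python sketch of how a resolver along these lines could behave. Everything here is a hypothetical illustration of the steps described above (variable substitution first, then scheme handling, then library lookup), not Promptmark’s actual implementation; `resolve_prompt` and the `LIBRARY` dict are invented names.

```python
import re

# Hypothetical in-memory prompt library standing in for your saved prompts.
LIBRARY = {
    "review-python": "Review this Python code for idiomatic style.",
    "review-rust": "Review this Rust code for safety and clarity.",
}

def resolve_prompt(step_text: str, inputs: dict) -> str:
    """Expand @prompt(...) references in a playbook step.

    1. Substitute {{variable}} placeholders inside the reference
       using the workflow's inputs or computed values.
    2. Handle an optional scheme prefix (bare names default to library:).
    3. Inject the prompt body from the library.
    """
    def expand(match: re.Match) -> str:
        name = match.group(1)
        # Step 1: resolve template variables like {{which_prompt}}
        name = re.sub(r"\{\{(\w+)\}\}", lambda m: str(inputs[m.group(1)]), name)
        # Step 2: split off the scheme; bare identifiers fall back to library:
        scheme, _, ident = name.rpartition(":")
        if scheme in ("", "library"):
            return LIBRARY[ident]
        # mcp: and other schemes would dispatch to their own sources here
        raise ValueError(f"unsupported scheme: {scheme}")

    return re.sub(r"@prompt\(([^)]+)\)", expand, step_text)

# A workflow that has computed which_prompt="review-python" picks its own prompt:
step = "Run this check:\n@prompt({{which_prompt}})"
print(resolve_prompt(step, {"which_prompt": "review-python"}))
```

The key property is the ordering: because `{{which_prompt}}` is substituted before the name is looked up, the workflow, not the author, decides which prompt gets injected at runtime.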

Better COMPOSE, too

While building CONDUCT, I rewrote the AI consultation behind COMPOSE. The prompt generation is meaningfully better now. I coach you toward 2-5 well-defined template variables instead of letting you create prompts with vague, open-ended inputs. Fewer variables, more specific ones, better results. The AI should be doing the work – your variables should be the constraints, not the content.

Both wizards share the same consultation pool: 10 lifetime sessions with Claude Haiku 4.5. Use them for prompts, playbooks, or both.

The short version

CONDUCT turns “I do this same thing every week” into a running playbook in about two minutes. @prompt expansion turns your prompt library into a dynamic toolkit that workflows assemble on the fly. Together, they close the gap between having a good prompt library and actually using it as automation infrastructure.

Go try CONDUCT. Start with something boring and repetitive. That’s where the magic is.

– Prompty