Prompts are infrastructure. Treat them that way.
You version your code, test your APIs, and deploy through CI. Your prompts deserve the same rigor — not a JSON blob buried in a config file.
How Promptmark fits
66 MCP tools and a REST API
Your AI agents and dev tools can read, write, render, and test prompts without leaving the IDE. Connect Claude Code, Cursor, Windsurf, or any MCP client — 66 tools across 13 categories, authenticated via OAuth 2.0 with device flow. No web UI required.
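Connecting an MCP client usually comes down to a small config entry. The snippet below is only a sketch of that shape — the server URL is a placeholder, and Promptmark's actual connection details may differ:

```json
{
  "mcpServers": {
    "promptmark": {
      "url": "https://api.promptmark.example/mcp"
    }
  }
}
```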
Playbooks with Trigger URLs
Define multi-step AI workflows in markdown and expose them as trigger URLs. Call a playbook from a webhook, a cron job, or your CI pipeline. Branch on conditions, capture outputs between steps, and deliver results to a webhook, email, or GitHub repo.
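As a rough sketch of the shape such a workflow takes — illustrative only, not Promptmark's actual playbook syntax — a triage playbook might read:

```markdown
<!-- sketch: hypothetical syntax, step and prompt names are placeholders -->
# Triage incoming bug reports

1. Summarize the report with the `summarize-issue` prompt.
2. Branch: if severity is "critical", run `draft-incident-update`;
   otherwise run `draft-reply`. Capture the output as `reply_text`.
3. Deliver `reply_text` to the support webhook and open a GitHub issue.
```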
CONDUCT wizard
Describe the workflow you need in plain language. CONDUCT shapes it into a runnable playbook with steps, branches, and delivery targets. Skip the syntax — iterate on the logic.
BYOK with 300+ models
Bring your own API keys. Test prompts against OpenAI, Anthropic, Google, Meta, and dozens more. Track token usage and cost per request. Your billing, your choice of model.
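Per-request cost tracking is simple arithmetic over token counts and a rate table. A minimal sketch — the model names and per-million-token rates below are placeholders, not real provider pricing:

```python
# Illustrative rate table: (input, output) USD per million tokens.
# These values are made up for the example, not actual pricing.
RATES_PER_1M = {
    "model-a": (2.50, 10.00),
    "model-b": (3.00, 15.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request, given token counts and the rate table."""
    rate_in, rate_out = RATES_PER_1M[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

cost = request_cost("model-a", 1200, 300)  # 1200 in, 300 out
```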
Version-controlled prompt deployments
Every prompt edit creates an automatic snapshot. Diff any two versions to isolate what changed between deployments.
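What a version diff surfaces can be sketched with a standard unified diff over two snapshots (the prompt text here is invented for illustration):

```python
import difflib

# Two hypothetical snapshots of the same prompt.
v1 = "Summarize the ticket in two sentences.\nTone: neutral.\n"
v2 = "Summarize the ticket in two sentences.\nTone: friendly.\nInclude the ticket ID.\n"

# Unified diff isolates exactly what changed between versions.
diff = list(difflib.unified_diff(
    v1.splitlines(), v2.splitlines(),
    fromfile="v1", tofile="v2", lineterm="",
))
```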
Template variables as API parameters
Define typed inputs on any prompt. Your application passes values through the trigger URL or MCP call. Schema validation rejects bad inputs before the model sees them.
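The validation step amounts to checking each declared variable's presence and type before the request is dispatched. A minimal sketch — the schema shape and variable names are illustrative, not Promptmark's actual format:

```python
# Hypothetical typed-input schema for one prompt.
SCHEMA = {
    "customer_name": str,
    "ticket_count": int,
}

def validate(inputs: dict) -> dict:
    """Reject missing or mistyped variables before the model sees them."""
    for key, expected in SCHEMA.items():
        if key not in inputs:
            raise ValueError(f"missing variable: {key}")
        if not isinstance(inputs[key], expected):
            raise ValueError(f"{key} must be {expected.__name__}")
    return inputs

ok = validate({"customer_name": "Ada", "ticket_count": 3})
```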
Connect to external MCP servers
Your playbooks can call tools from external MCP servers — query a database, hit an internal API, or trigger actions in other services.
Example workflow
Write the prompt in your editor
Use Claude Code or Cursor with the Promptmark MCP connection. Create a new prompt, add template variables for the dynamic parts, and save it — all from the terminal or editor.
Test across models
Run the prompt against GPT-4o, Claude Opus 4, and Gemini 2.5 Pro. Compare responses, check token costs, pick the best performer. Save the results.
Build the playbook
Chain multiple prompts into a multi-step workflow. Add branching logic for edge cases. Expose it as a trigger URL.
Deploy via API
Your application calls the trigger URL with input variables. The playbook runs, streams results, and delivers output to your webhook. When you need to update a prompt, edit it in Promptmark — your production endpoint stays the same.
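From application code, the trigger call is an ordinary authenticated POST. The sketch below only builds the request — the URL, header names, and payload shape are placeholders, not Promptmark's real API, and the send itself is left out to keep the example offline:

```python
import json
import urllib.request

# Hypothetical payload: template variables keyed under "inputs".
payload = {"inputs": {"customer_name": "Ada", "ticket_count": 3}}

req = urllib.request.Request(
    "https://api.promptmark.example/playbooks/triage/trigger",  # placeholder URL
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer <token>",  # placeholder credential
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted in this sketch.
```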
Your prompts belong in your infrastructure
Connect your dev tools to Promptmark in 30 seconds. Manage prompts alongside your code, not instead of it.
Connect your first MCP client — free