Documentation

MCP Setup

Promptmark includes a built-in MCP server that lets AI assistants manage your prompt library directly. Connect any MCP-compatible client over Streamable HTTP.

Connection Details

| Field | Value |
|---|---|
| URL | https://promptmark.ai/mcp |
| Protocol | Streamable HTTP |
| Auth | OAuth (automatic) or Bearer token (manual) |

Client Configuration

Most MCP clients handle authentication automatically. You provide the server URL, and the client walks you through a one-time login via your browser. No tokens to copy.

Claude.ai

Claude.ai connects to MCP servers through its built-in Connector UI.

  1. Open Claude.ai
  2. Go to Customize → Connectors → Add custom connector
  3. Paste the URL: https://promptmark.ai/mcp
  4. Claude.ai opens a browser window for you to log in with Civic Auth
  5. Approve the connection, and you’re done

Claude.ai handles OAuth automatically — no tokens to copy or config files to edit.

ChatGPT

  1. Go to Settings → Connectors
  2. Enable Developer Mode
  3. Click Create and paste the URL: https://promptmark.ai/mcp
  4. ChatGPT opens a browser window for you to log in with Civic Auth

Claude Code

Run from your terminal:

```bash
claude mcp add --transport http promptmark https://promptmark.ai/mcp
```

On first use, Claude Code opens a browser window for you to log in with Civic Auth and approve the connection.

Or add manually to .mcp.json in your project root (per-project) or ~/.claude/.mcp.json (global):

```json
{
  "mcpServers": {
    "promptmark": {
      "type": "url",
      "url": "https://promptmark.ai/mcp"
    }
  }
}
```

Config changes are detected automatically.

Cursor

  1. Go to Settings → Tools & MCP → Add New MCP Server
  2. Select streamable-http
  3. Enter URL: https://promptmark.ai/mcp
  4. Cursor opens a browser window for you to log in with Civic Auth

Or add directly to your Cursor MCP config file:

```json
{
  "mcpServers": {
    "promptmark": {
      "url": "https://promptmark.ai/mcp"
    }
  }
}
```

VS Code

Add to .vscode/mcp.json in your project or your VS Code user settings:

```json
{
  "servers": {
    "promptmark": {
      "type": "http",
      "url": "https://promptmark.ai/mcp"
    }
  }
}
```

On first use, VS Code opens a browser window for you to log in with Civic Auth. Works with GitHub Copilot’s MCP support.

Warp

  1. Go to Settings → MCP Servers
  2. Click + Add and select the Streamable HTTP or SSE Server (URL) tab
  3. Paste the configuration:
```json
{
  "promptmark": {
    "url": "https://promptmark.ai/mcp"
  }
}
```

Warp opens a browser window for you to log in with Civic Auth on first use.

OpenCode

An open-source AI coding CLI with MCP support.

Add to opencode.json in your project:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "promptmark": {
      "type": "remote",
      "url": "https://promptmark.ai/mcp",
      "enabled": true
    }
  }
}
```

OpenCode handles OAuth automatically — it detects the auth requirement on first connection and opens a browser window for you to log in with Civic Auth.

Other Clients

For any client that supports the MCP protocol over HTTP:

| Field | Value |
|---|---|
| Endpoint | https://promptmark.ai/mcp |
| Protocol | Streamable HTTP |
| Auth | OAuth 2.0 (automatic) or Authorization: Bearer <token> |

Most clients handle OAuth automatically when given the URL. If your client requires a token instead, see Bearer Tokens below.

For agent frameworks (CrewAI, LangChain, Pydantic AI, OpenClaw), see Agent Frameworks below.

How OAuth Works

When you add the Promptmark URL to your MCP client, authentication happens automatically behind the scenes. Here is what the client does:

  1. The client discovers Promptmark’s auth endpoints via:

    • GET /.well-known/oauth-protected-resource (RFC 9728)
    • GET /.well-known/oauth-authorization-server (RFC 8414)
  2. The client registers itself dynamically via POST /oauth/register (RFC 7591) and initiates an authorization code flow with PKCE.

  3. Your browser opens to the Promptmark login page. You log in with Civic Auth and approve the connection.

  4. The client exchanges the authorization code for a token via POST /oauth/token and uses it for all subsequent requests.
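The PKCE part of step 2 can be illustrated in a few lines of Python. This is a standalone sketch of RFC 7636's S256 method, not part of any client or of Promptmark; `make_pkce_pair` is a hypothetical helper name:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    # RFC 7636 S256: a random verifier and its base64url(SHA-256)
    # challenge, both without '=' padding.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends code_challenge with the authorization request and
# code_verifier with the POST /oauth/token exchange; the server
# recomputes the hash to tie the two requests together.
```

Because only the hash travels with the authorization request, an attacker who intercepts the authorization code still cannot redeem it without the verifier.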

Info
Tokens are valid for 24 hours. Most clients handle token refresh automatically. If your session expires, the client will prompt you to log in again.

Device Flow (Alternative)

Some clients use the OAuth Device Flow instead of the browser redirect flow. The result is the same — you log in and approve — but the mechanism differs:

  1. The client requests a device code:

```
POST /api/oauth/device/code
Auth: None
```

Returns a device code and verification URL.

Response:

```json
{
  "device_code": "abc123...",
  "user_code": "ABCD-1234",
  "verification_uri": "https://promptmark.ai/auth/device",
  "expires_in": 900,
  "interval": 5
}
```
  2. You visit the verification_uri in your browser, log in, and enter the user_code to approve.

  3. The client polls for a token:

```
POST /api/oauth/device/token
Auth: None
```

The client polls with the device_code until you approve; the endpoint returns authorization_pending while waiting.

Request body: device_code=abc123... (form-encoded or JSON)

Response (on approval):

```json
{
  "access_token": "eyJhbG...",
  "token_type": "Bearer",
  "expires_in": 86400
}
```
  4. The client uses the returned token as Authorization: Bearer <token> for all subsequent requests.

Info
Device codes expire after 15 minutes. The polling interval is 5 seconds. Tokens are valid for 24 hours.
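The polling step above can be sketched in Python. This is a hedged illustration: `request_token` is an injected stand-in for the HTTP POST to /api/oauth/device/token, so the loop itself can be shown without a network:

```python
import time

def poll_for_token(device_code, request_token, interval=5, expires_in=900,
                   sleep=time.sleep):
    # Poll until the user approves, the device code expires, or the
    # server returns a terminal error.
    deadline = time.monotonic() + expires_in
    while time.monotonic() < deadline:
        resp = request_token(device_code)
        if resp.get("error") == "authorization_pending":
            sleep(interval)  # honor the server's advertised interval
            continue
        if "access_token" in resp:
            return resp["access_token"]
        raise RuntimeError(f"device flow failed: {resp}")
    raise TimeoutError("device code expired")
```

With a real transport, `request_token` would POST the device_code (form-encoded or JSON) and decode the JSON response body.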

Bearer Tokens

For agents, scripts, CI/CD pipelines, and clients that don’t support OAuth, you can authenticate with a static bearer token.

  1. Log in to Promptmark at promptmark.ai
  2. Go to Settings > Developer
  3. Copy your Bearer token from the Bearer Token section
  4. Pass it in the Authorization header: Authorization: Bearer YOUR_TOKEN

Warning
Your token grants full access to your prompt library. Keep it secret. If compromised, regenerate it from the same Settings page.
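For a quick smoke test outside any client, the token can be attached to a raw request. A minimal sketch of the payload shape only, assuming JSON-RPC 2.0 framing as used by MCP over Streamable HTTP; a full MCP session would begin with an initialize handshake, and the token value is a placeholder:

```python
import json

TOKEN = "YOUR_TOKEN"  # copied from Settings > Developer

# Headers a raw Streamable HTTP request to the MCP endpoint would carry.
headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}
# JSON-RPC 2.0 request body asking the server for its tool list.
body = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# e.g. httpx.post("https://promptmark.ai/mcp", headers=headers, json=body)
print(json.dumps(body))
```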

To use a bearer token in a client config file, add the headers field:

```json
{
  "mcpServers": {
    "promptmark": {
      "type": "url",
      "url": "https://promptmark.ai/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_TOKEN"
      }
    }
  }
}
```

Agent Frameworks

Agent frameworks connect to MCP servers programmatically. These use bearer tokens rather than OAuth — generate a token from Settings > Developer in Promptmark before proceeding.

CrewAI

```python
from crewai.mcp import MCPServerHTTP

mcps = [
    MCPServerHTTP(
        url="https://promptmark.ai/mcp",
        headers={"Authorization": "Bearer YOUR_TOKEN"},
        streamable=True,
    ),
]
```

LangChain / LangGraph

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "promptmark": {
        "transport": "http",
        "url": "https://promptmark.ai/mcp",
        "headers": {
            "Authorization": "Bearer YOUR_TOKEN"
        },
    }
})
tools = await client.get_tools()
```
tools = await client.get_tools()

Requires langchain-mcp-adapters v0.2.0+.

OpenClaw

```bash
openclaw mcp set promptmark '{"url":"https://promptmark.ai/mcp","transport":"streamable-http","headers":{"Authorization":"Bearer YOUR_TOKEN"}}'
```

Or in config under mcp.servers. Manage with openclaw mcp list, openclaw mcp show, openclaw mcp unset.

Pydantic AI

```python
import asyncio

import httpx
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

server = MCPServerStreamableHTTP(
    "https://promptmark.ai/mcp",
    http_client=httpx.AsyncClient(
        headers={"Authorization": "Bearer YOUR_TOKEN"},
    ),
)

agent = Agent("openai:gpt-4o", toolsets=[server])

async def main():
    result = await agent.run("List my prompts in Promptmark")
    print(result.output)

asyncio.run(main())
```

Requires pydantic-ai v0.2.0+ and httpx.

Other Frameworks

These frameworks also support Streamable HTTP MCP connections with bearer token auth:

  • Google ADK — McpToolset with StreamableHTTPConnectionParams (docs)
  • Vercel AI SDK — createMCPClient() in @ai-sdk/mcp v6+ (docs)
  • Microsoft AutoGen — StreamableHttpMcpToolAdapter in autogen-ext (docs)
  • Mastra — MCPClient from @mastra/mcp (docs)
  • n8n — MCP Client Tool node with HTTP Streamable transport (docs)

Verifying the Connection

Once connected, ask your AI assistant to list your prompts:

“List my prompts in Promptmark”

The assistant should call the list_prompts tool and return your prompt library. If you see an authentication error, check that your OAuth session is active or verify your token in Settings > Developer.

Available Tools

After connecting, your AI assistant has access to 74 tools across these categories:

| Category | Count | Operations |
|---|---|---|
| Prompts | 13 | CRUD, search, list tags, schema, render, validate inputs, get/update/remove tools |
| Collections | 6 | CRUD, assign prompt to collection |
| Tags | 4 | Rename, add, remove, bulk add |
| Remix | 1 | List remixes |
| Versions | 3 | List, get, restore |
| Responses | 4 | Capture, list, get, delete |
| Scans | 1 | Acknowledge scan issues |
| Conversations | 11 | CRUD, send message, import, list messages, link prompt, update/remove tools |
| Playbooks | 13 | CRUD, validate, execute, get/list executions, versions, restore, resume, elicit |
| MCP Connections | 10 | CRUD connections, refresh tools, list/get cached tools, execution history, favorite tools |
| Skills | 6 | CRUD, search, get full SKILL.md (agentskills.io spec) |
| PostHog | 2 | Configure BYOK analytics, get configuration status |

See MCP Reference for detailed documentation of every tool, including parameters, types, and examples.