# MCP Setup
Promptmark includes a built-in MCP server that lets AI assistants manage your prompt library directly. Connect any MCP-compatible client over Streamable HTTP.
## Connection Details

| Setting | Value |
|---|---|
| URL | `https://promptmark.ai/mcp` |
| Protocol | Streamable HTTP |
| Auth | OAuth (automatic) or Bearer token (manual) |
## Client Configuration
Most MCP clients handle authentication automatically. You provide the server URL, and the client walks you through a one-time login via your browser. No tokens to copy.
### Claude.ai

Claude.ai connects to MCP servers through its built-in Connector UI.

- Open Claude.ai
- Go to Customize → Connectors → Add custom connector
- Paste the URL: `https://promptmark.ai/mcp`
- Claude.ai opens a browser window for you to log in with Civic Auth
- Approve the connection, and you’re done

Claude.ai handles OAuth automatically — no tokens to copy or config files to edit.
### ChatGPT

- Go to Settings → Connectors
- Enable Developer Mode
- Click Create and paste the URL: `https://promptmark.ai/mcp`
- ChatGPT opens a browser window for you to log in with Civic Auth
### Claude Code

Run from your terminal:

```shell
claude mcp add --transport http promptmark https://promptmark.ai/mcp
```

On first use, Claude Code opens a browser window for you to log in with Civic Auth and approve the connection.

Or add manually to `.mcp.json` in your project root (per-project) or `~/.claude/.mcp.json` (global):

```json
{
  "mcpServers": {
    "promptmark": {
      "type": "url",
      "url": "https://promptmark.ai/mcp"
    }
  }
}
```

Config changes are detected automatically.
### Cursor

- Go to Settings → Tools & MCP → Add New MCP Server
- Select streamable-http
- Enter URL: `https://promptmark.ai/mcp`
- Cursor opens a browser window for you to log in with Civic Auth

Or add directly to your Cursor MCP config file:

```json
{
  "mcpServers": {
    "promptmark": {
      "url": "https://promptmark.ai/mcp"
    }
  }
}
```

### VS Code

Add to `.vscode/mcp.json` in your project or your VS Code user settings:

```json
{
  "servers": {
    "promptmark": {
      "type": "http",
      "url": "https://promptmark.ai/mcp"
    }
  }
}
```

On first use, VS Code opens a browser window for you to log in with Civic Auth. Works with GitHub Copilot’s MCP support.
### Warp

- Go to Settings → MCP Servers
- Click + Add and select the Streamable HTTP or SSE Server (URL) tab
- Paste the configuration:

```json
{
  "promptmark": {
    "url": "https://promptmark.ai/mcp"
  }
}
```

Warp opens a browser window for you to log in with Civic Auth on first use.
### OpenCode

Open-source AI coding CLI with MCP support. Add to `opencode.json` in your project:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "promptmark": {
      "type": "remote",
      "url": "https://promptmark.ai/mcp",
      "enabled": true
    }
  }
}
```

OpenCode handles OAuth automatically — it detects the auth requirement on first connection and opens a browser window for you to log in with Civic Auth.
### Other Clients

For any client that supports the MCP protocol over HTTP:

| Setting | Value |
|---|---|
| Endpoint | `https://promptmark.ai/mcp` |
| Protocol | Streamable HTTP |
| Auth | OAuth 2.0 (automatic) or `Authorization: Bearer <token>` |

Most clients handle OAuth automatically when given the URL. If your client requires a token instead, see Bearer Tokens below.

For agent frameworks (CrewAI, LangChain, Pydantic AI, OpenClaw), see Agent Frameworks below.
## How OAuth Works

When you add the Promptmark URL to your MCP client, authentication happens automatically behind the scenes. Here is what the client does:

1. The client discovers Promptmark’s auth endpoints via `GET /.well-known/oauth-protected-resource` (RFC 9728) and `GET /.well-known/oauth-authorization-server` (RFC 8414).
2. The client registers itself dynamically via `POST /oauth/register` (RFC 7591) and initiates an authorization code flow with PKCE.
3. Your browser opens to the Promptmark login page. You log in with Civic Auth and approve the connection.
4. The client exchanges the authorization code for a token via `POST /oauth/token` and uses it for all subsequent requests.
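The PKCE part of step 2 can be sketched in Python. This is a generic illustration of RFC 7636 — the verifier/challenge dance any MCP client performs under the hood — not Promptmark-specific code:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char URL-safe verifier (allowed length: 43-128 chars)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # challenge = BASE64URL(SHA-256(verifier)), unpadded
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` (with code_challenge_method=S256) in the
# authorization request, then proves possession by sending `verifier`
# in the POST /oauth/token exchange.
```

Because the server only ever sees the hash until the final exchange, an intercepted authorization code is useless without the verifier.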
## Device Flow (Alternative)

Some clients use the OAuth Device Flow instead of the browser redirect flow. The result is the same — you log in and approve — but the mechanism differs:

1. The client requests a device code from `/api/oauth/device/code`. No auth required. The response includes a device code and verification URL:

   ```json
   {
     "device_code": "abc123...",
     "user_code": "ABCD-1234",
     "verification_uri": "https://promptmark.ai/auth/device",
     "expires_in": 900,
     "interval": 5
   }
   ```

2. You visit the `verification_uri` in your browser, log in, and enter the `user_code` to approve.

3. The client polls `/api/oauth/device/token` with the `device_code` (form-encoded or JSON) until the user approves. The endpoint returns `authorization_pending` while waiting. Response on approval:

   ```json
   {
     "access_token": "eyJhbG...",
     "token_type": "Bearer",
     "expires_in": 86400
   }
   ```

4. The client uses the returned token as `Authorization: Bearer <token>` for all subsequent requests.
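The polling in steps 3–4 can be sketched as follows. Here `request_token` is a stand-in for an HTTP POST to `/api/oauth/device/token`; it is simulated so the sketch runs offline:

```python
import time

def poll_for_token(request_token, device_code: str,
                   interval: int = 5, expires_in: int = 900) -> str:
    """Poll the device-token endpoint until the user approves or the code expires.

    `request_token(device_code)` stands in for the HTTP call and returns the
    decoded JSON response as a dict.
    """
    deadline = time.monotonic() + expires_in
    while time.monotonic() < deadline:
        resp = request_token(device_code)
        if "access_token" in resp:
            return resp["access_token"]
        if resp.get("error") != "authorization_pending":
            raise RuntimeError(f"device flow failed: {resp}")
        time.sleep(interval)  # respect the server-suggested polling interval
    raise TimeoutError("device code expired before approval")

# Offline simulation: pending twice, then approved.
responses = iter([
    {"error": "authorization_pending"},
    {"error": "authorization_pending"},
    {"access_token": "eyJhbG...", "token_type": "Bearer", "expires_in": 86400},
])
token = poll_for_token(lambda code: next(responses), "abc123...", interval=0)
print(token)  # → eyJhbG...
```

A real client would also handle the `slow_down` error defined by RFC 8628 by increasing the interval, if the server emits it.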
## Bearer Tokens

For agents, scripts, CI/CD pipelines, and clients that don’t support OAuth, you can authenticate with a static bearer token.

- Log in to Promptmark at promptmark.ai
- Go to Settings > Developer
- Copy your Bearer token from the Bearer Token section
- Pass it in the `Authorization` header: `Authorization: Bearer YOUR_TOKEN`
To use a bearer token in a client config file, add the `headers` field:

```json
{
  "mcpServers": {
    "promptmark": {
      "type": "url",
      "url": "https://promptmark.ai/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_TOKEN"
      }
    }
  }
}
```

## Agent Frameworks
Agent frameworks connect to MCP servers programmatically. These use bearer tokens rather than OAuth — generate a token from Settings > Developer in Promptmark before proceeding.
### CrewAI

```python
from crewai.mcp import MCPServerHTTP

mcps = [
    MCPServerHTTP(
        url="https://promptmark.ai/mcp",
        headers={"Authorization": "Bearer YOUR_TOKEN"},
        streamable=True,
    ),
]
```

### LangChain / LangGraph
```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "promptmark": {
        "transport": "http",
        "url": "https://promptmark.ai/mcp",
        "headers": {
            "Authorization": "Bearer YOUR_TOKEN"
        },
    }
})
tools = await client.get_tools()
```

Requires langchain-mcp-adapters v0.2.0+.
### OpenClaw

```shell
openclaw mcp set promptmark '{"url":"https://promptmark.ai/mcp","transport":"streamable-http","headers":{"Authorization":"Bearer YOUR_TOKEN"}}'
```

Or in config under `mcp.servers`. Manage with `openclaw mcp list`, `openclaw mcp show`, `openclaw mcp unset`.
### Pydantic AI

```python
import httpx
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

server = MCPServerStreamableHTTP(
    "https://promptmark.ai/mcp",
    http_client=httpx.AsyncClient(
        headers={"Authorization": "Bearer YOUR_TOKEN"},
    ),
)
agent = Agent("openai:gpt-4o", toolsets=[server])

async def main():
    result = await agent.run("List my prompts in Promptmark")
    print(result.output)
```

Requires pydantic-ai v0.2.0+ and httpx.
### Other Frameworks

These frameworks also support Streamable HTTP MCP connections with bearer token auth:

- Google ADK — `McpToolset` with `StreamableHTTPConnectionParams` (docs)
- Vercel AI SDK — `createMCPClient()` in `@ai-sdk/mcp` v6+ (docs)
- Microsoft AutoGen — `StreamableHttpMcpToolAdapter` in `autogen-ext` (docs)
- Mastra — `MCPClient` from `@mastra/mcp` (docs)
- n8n — MCP Client Tool node with HTTP Streamable transport (docs)
## Verifying the Connection

Once connected, ask your AI assistant to list your prompts:

> “List my prompts in Promptmark”

The assistant should call the `list_prompts` tool and return your prompt library. If you see an authentication error, check that your OAuth session is active or verify your token in Settings > Developer.
## Available Tools
After connecting, your AI assistant has access to 74 tools across these categories:
| Category | Count | Operations |
|---|---|---|
| Prompts | 13 | CRUD, search, list tags, schema, render, validate inputs, get/update/remove tools |
| Collections | 6 | CRUD, assign prompt to collection |
| Tags | 4 | Rename, add, remove, bulk add |
| Remix | 1 | List remixes |
| Versions | 3 | List, get, restore |
| Responses | 4 | Capture, list, get, delete |
| Scans | 1 | Acknowledge scan issues |
| Conversations | 11 | CRUD, send message, import, list messages, link prompt, update/remove tools |
| Playbooks | 13 | CRUD, validate, execute, get/list executions, versions, restore, resume, elicit |
| MCP Connections | 10 | CRUD connections, refresh tools, list/get cached tools, execution history, favorite tools |
| Skills | 6 | CRUD, search, get full SKILL.md (agentskills.io spec) |
| PostHog | 2 | Configure BYOK analytics, get configuration status |
See MCP Reference for detailed documentation of every tool, including parameters, types, and examples.