Testing
Promptmark’s testing feature lets you send prompts to AI models and capture responses — directly from the prompt detail page or the dedicated test interface.
Running a Test
- Open a prompt and click Test (or navigate to Tests > New)
- Select an AI model from the model picker
- If the prompt has template variables, fill in the values
- Click Run
- The response streams in via SSE (Server-Sent Events)
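Before a test runs, template variables must be substituted into the prompt. As a rough illustration of that step, here is a minimal sketch assuming a `{{variable}}` placeholder syntax (the actual syntax Promptmark uses may differ, and `fill_template` is a hypothetical helper, not part of Promptmark):

```python
import re

def fill_template(prompt: str, values: dict) -> str:
    """Replace {{variable}} placeholders with supplied values."""
    def sub(match):
        name = match.group(1)
        if name not in values:
            raise KeyError(f"missing value for template variable: {name}")
        return str(values[name])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", sub, prompt)

filled = fill_template("Summarize {{doc}} in {{n}} bullet points.",
                       {"doc": "the report", "n": 3})
print(filled)  # Summarize the report in 3 bullet points.
```

Unfilled variables raise an error rather than passing through silently, which mirrors the UI requiring values before Run.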
Model Selection
Promptmark supports models through multiple providers. When you run a test, Promptmark automatically selects the best available provider for the model you chose.
Direct Provider Keys (Recommended)
Connect your OpenAI or Anthropic account for direct API access. Direct connections skip the middleman, giving you lower latency and direct billing to your own account.
OpenAI — Paste your OpenAI API key in Settings > Connections. Get one from platform.openai.com/api-keys.
Anthropic — Paste your Anthropic API key in Settings > Connections. Get one from console.anthropic.com.
For example, if you have an OpenAI key connected and select openai/gpt-4o, the request goes directly to OpenAI’s API – not through OpenRouter.
OpenRouter (Fallback)
OpenRouter provides access to hundreds of models from many providers (Google, Meta, Mistral, and more) through a single integration. If you don’t have a direct key for a model’s provider, Promptmark routes the request through OpenRouter.
Connect your OpenRouter account via OAuth in Settings > Connections.
Provider Priority
When you select a model, Promptmark resolves the provider in this order:
- Direct key – If the model belongs to OpenAI or Anthropic and you have that provider connected, the request goes direct.
- OpenRouter – For all other providers, or if no direct key is available.
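The two-step resolution above can be sketched as follows. This is a simplified illustration of the documented order, not Promptmark’s actual implementation; the function and variable names are hypothetical:

```python
# Providers that support direct API keys, per the docs above.
DIRECT_PROVIDERS = {"openai", "anthropic"}

def resolve_provider(model_id: str, connected_keys: set) -> str:
    """model_id is 'provider/model', e.g. 'openai/gpt-4o'."""
    provider = model_id.split("/", 1)[0]
    # 1. Direct key: the model's provider supports direct access
    #    and a key for it is connected.
    if provider in DIRECT_PROVIDERS and provider in connected_keys:
        return provider
    # 2. Otherwise, route through OpenRouter.
    return "openrouter"

print(resolve_provider("openai/gpt-4o", {"openai"}))      # openai
print(resolve_provider("google/gemini-pro", {"openai"}))  # openrouter
```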
After a streaming response completes, you’ll see a “via OpenAI”, “via Anthropic”, or “via OpenRouter” indicator showing which provider served the request.
Key Storage
All API keys and OAuth tokens are encrypted with AES-GCM before storage. Keys are never exposed in the UI after saving – only the last 4 characters are shown for identification.
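The last-4-characters masking behaves roughly like this (a minimal sketch of the display rule only; the AES-GCM encryption itself happens server-side and is not shown, and `mask_key` is a hypothetical helper):

```python
def mask_key(api_key: str) -> str:
    """Show only the last 4 characters, as the UI does after saving."""
    if len(api_key) <= 4:
        return "*" * len(api_key)
    return "*" * (len(api_key) - 4) + api_key[-4:]

print(mask_key("sk-test1234"))  # *******1234
```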
SSE Streaming
Test responses stream in real-time using Server-Sent Events:
- POST /api/test/start — Initiates the test, returns a test ID
- GET /api/test/stream?id={testID} — Opens an SSE connection for the response
- Tokens stream as they’re generated
- The stream closes when the response is complete
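On the client side, each SSE event arrives as one or more `data:` lines followed by a blank line. A minimal parser for that wire format looks like this (an illustrative sketch only; a real client would typically use an SSE library and connect to the stream endpoint above):

```python
def parse_sse(raw: str):
    """Yield the data payload of each SSE event in a raw stream chunk."""
    for block in raw.split("\n\n"):
        data_lines = [line[5:].lstrip()
                      for line in block.split("\n")
                      if line.startswith("data:")]
        if data_lines:
            yield "\n".join(data_lines)

stream = "data: Hello\n\ndata: world\n\n"
print(list(parse_sse(stream)))  # ['Hello', 'world']
```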
Response Capture
After a test completes, you can capture the response for later reference:
- View captured responses on the prompt detail page
- Compare responses across different models
- Add feedback (thumbs up/down, notes)
Via MCP
{
"tool": "capture_response",
"arguments": {
"prompt_id": "abc123",
"model_id": "anthropic/claude-sonnet-4",
"content": "The AI-generated response text...",
"metadata": {
"tokens": 1500,
"latency_ms": 3200,
"temperature": 0.7
}
}
}
Test Feedback
After viewing a test response, you can submit feedback:
- Thumbs up/down — Quick quality signal
- Notes — Detailed feedback about the response
Feedback is stored with the test response for future reference.
Test History
View all past tests:
- Per-prompt: On the prompt detail page, see all tests for that prompt
- All tests: Navigate to Tests in the sidebar for a global test list
- Tests are paginated and sorted by creation date (newest first)
MCP Tools for Responses
| Tool | Description |
|---|---|
| capture_response | Save an AI response with optional metadata |
| list_captured_responses | List responses (filter by prompt, paginated) |
| get_captured_response | Get full response content by ID |
| delete_captured_response | Permanently delete a captured response |
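For example, listing a prompt’s captured responses might look like the following (the `prompt_id`, `page`, and `page_size` argument names are assumptions, following the capture_response example above):

```json
{
  "tool": "list_captured_responses",
  "arguments": {
    "prompt_id": "abc123",
    "page": 1,
    "page_size": 20
  }
}
```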