Documentation

Testing

Promptmark’s testing feature lets you send prompts to AI models and capture responses — directly from the prompt detail page or the dedicated test interface.

Running a Test

  1. Open a prompt and click Test (or navigate to Tests > New)
  2. Select an AI model from the model picker
  3. If the prompt has template variables, fill in the values
  4. Click Run
  5. The response streams in via SSE (Server-Sent Events)

Model Selection

Promptmark supports models through two mechanisms:

Platform OpenRouter Key

If the instance has OPENROUTER_API_KEY configured, users can connect their OpenRouter account via OAuth in Settings. This gives access to hundreds of models through a single integration.

Bring Your Own Key (BYOK)

Users can add their own API keys for individual providers in Settings:

  • OpenAI
  • Anthropic
  • Google (Gemini)
  • And other providers

Keys are encrypted with AES-GCM before storage.
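
As a rough sketch of what AES-GCM encryption of a stored key can look like, here is a minimal example using Node's built-in crypto module. The function names, the stored field layout (iv/tag/data), and the in-memory master key are illustrative assumptions, not Promptmark's actual implementation:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Hypothetical 256-bit master key; a real deployment would load this
// from configuration, not generate it at startup.
const masterKey = randomBytes(32);

function encryptApiKey(plaintext: string): { iv: string; tag: string; data: string } {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    iv: iv.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"), // integrity/authentication tag
    data: data.toString("base64"),
  };
}

function decryptApiKey(enc: { iv: string; tag: string; data: string }): string {
  const decipher = createDecipheriv("aes-256-gcm", masterKey, Buffer.from(enc.iv, "base64"));
  decipher.setAuthTag(Buffer.from(enc.tag, "base64")); // verified during final()
  return Buffer.concat([
    decipher.update(Buffer.from(enc.data, "base64")),
    decipher.final(),
  ]).toString("utf8");
}
```

GCM's authentication tag means a tampered ciphertext fails to decrypt rather than silently yielding garbage, which is why it is a common choice for secrets at rest.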

SSE Streaming

Test responses stream in real time using Server-Sent Events:

  1. POST /api/test/start — Initiates the test, returns a test ID
  2. GET /api/test/stream?id={testID} — Opens an SSE connection for the response
  3. Tokens stream as they’re generated
  4. The stream closes when the response is complete
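
The flow above can be sketched as a small TypeScript client. The endpoint paths come from the steps listed; the request body fields, the JSON field holding the test ID, and the SSE payload shape are assumptions for illustration, not the documented wire format:

```typescript
// Pure helper: split a decoded SSE buffer into complete "data:" payloads,
// returning any trailing partial event so the caller can keep buffering.
function drainSSE(buffer: string): { tokens: string[]; rest: string } {
  const events = buffer.split("\n\n"); // blank line separates SSE events
  const rest = events.pop() ?? ""; // last piece may be incomplete
  const tokens = events.flatMap((event) =>
    event
      .split("\n")
      .filter((line) => line.startsWith("data: "))
      .map((line) => line.slice("data: ".length))
  );
  return { tokens, rest };
}

async function runTest(promptId: string, modelId: string): Promise<string> {
  // 1. Initiate the test; assume the response JSON carries the test ID as "id".
  const start = await fetch("/api/test/start", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt_id: promptId, model_id: modelId }),
  });
  const { id } = await start.json();

  // 2. Open the SSE connection and accumulate tokens as they arrive.
  const res = await fetch(`/api/test/stream?id=${id}`);
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  let response = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break; // 4. the stream closes when the response is complete
    buffer += decoder.decode(value, { stream: true });
    const drained = drainSSE(buffer);
    buffer = drained.rest;
    response += drained.tokens.join(""); // 3. tokens as they're generated
  }
  return response;
}
```

Keeping the partial-event remainder in a buffer matters because network chunks rarely align with SSE event boundaries.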

Response Capture

After a test completes, you can capture the response for later reference:

  • View captured responses on the prompt detail page
  • Compare responses across different models
  • Add feedback (thumbs up/down, notes)

Via MCP

Responses can also be captured programmatically through the capture_response MCP tool:

{
  "tool": "capture_response",
  "arguments": {
    "prompt_id": "abc123",
    "model_id": "claude-3-opus",
    "content": "The AI-generated response text...",
    "metadata": {
      "tokens": 1500,
      "latency_ms": 3200,
      "temperature": 0.7
    }
  }
}

Test Feedback

After viewing a test response, you can submit feedback:

  • Thumbs up/down — Quick quality signal
  • Notes — Detailed feedback about the response

Feedback is stored with the test response for future reference.

Test History

View all past tests:

  • Per-prompt: On the prompt detail page, see all tests for that prompt
  • All tests: Navigate to Tests in the sidebar for a global test list
  • Tests are paginated and sorted by creation date (newest first)

MCP Tools for Responses

Tool                        Description
capture_response            Save an AI response with optional metadata
list_captured_responses     List responses (filter by prompt, paginated)
get_captured_response       Get full response content by ID
delete_captured_response    Permanently delete a captured response