Documentation

Conversations

Conversations let you chat with AI models directly inside Promptmark. Each conversation maintains full message history, tracks token usage, and can be linked to prompts from your library.

Starting a Conversation

  1. Navigate to Chats in the sidebar
  2. Click New Conversation
  3. Fill in the form:
    • Title – A name for the conversation (max 200 characters)
    • Model – Pick from your favorite models or free models
    • System Prompt – Optional instructions for the AI (or link a prompt instead)
  4. Click Create

You land in the conversation thread, ready to send messages.

Quick Start from a Prompt

You can start a conversation directly from any prompt in your library. The prompt’s type determines how it’s used:

  • System or Agent type prompts become the conversation’s system prompt
  • All other types are injected as the first user message

Linked Prompts

A linked prompt connects one of your saved prompts to a conversation. This is useful when you want to reuse a carefully crafted system prompt across multiple conversations.

Linking at Creation

When creating a new conversation, you can select a prompt to link. If the prompt is a System or Agent type, its content is automatically set as the system prompt. Otherwise, it’s sent as the first user message.
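The placement rule can be sketched as a small helper. This is illustrative only — the type names and return fields are not Promptmark's actual schema:

```python
def place_linked_prompt(prompt_type: str, content: str) -> dict:
    """Decide where a linked prompt's content goes in a new conversation.

    System- and Agent-type prompts become the system prompt; any
    other type is sent as the first user message.
    """
    if prompt_type.lower() in ("system", "agent"):
        return {"system_prompt": content, "first_user_message": None}
    return {"system_prompt": None, "first_user_message": content}
```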

Linking Later

You can link or change a prompt on an existing conversation by editing the conversation’s metadata. If the conversation doesn’t already have a system prompt and the linked prompt has content, that content becomes the system prompt.

Sending Messages

Type your message and send it. The AI response streams in real time – you see tokens appear as the model generates them.

How Streaming Works

When you send a message:

  1. Your message is saved to the conversation
  2. A placeholder assistant message is created
  3. An SSE (Server-Sent Events) stream opens to the AI provider
  4. Tokens stream into the page as they arrive
  5. Once complete, the full response is saved with token counts, cost estimate, and latency
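The five steps above can be sketched in Python. This is a simulation only — function and field names are illustrative, and a real client consumes SSE events from the provider rather than an in-memory iterable:

```python
import time

def send_message(conversation: list, user_text: str, token_source) -> dict:
    """Sketch of the send-message lifecycle (steps 1-5 above)."""
    # 1. Save the user message to the conversation
    conversation.append({"role": "user", "content": user_text})

    # 2. Create a placeholder assistant message
    reply = {"role": "assistant", "content": "", "status": "streaming"}
    conversation.append(reply)

    # 3-4. Stream tokens in as they arrive (SSE in production;
    #      any iterable of strings stands in for the stream here)
    start = time.monotonic()
    tokens = 0
    for token in token_source:
        reply["content"] += token
        tokens += 1

    # 5. Finalize with token count and latency once the stream closes
    reply["status"] = "complete"
    reply["completion_tokens"] = tokens
    reply["latency_ms"] = int((time.monotonic() - start) * 1000)
    return reply
```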

Token Tracking

Every assistant response records:

  • Prompt tokens – How many tokens your message history consumed
  • Completion tokens – How many tokens the AI generated
  • Cost estimate – Estimated cost based on the model’s pricing
  • Latency – Response generation time in milliseconds

These roll up into conversation-level totals (total tokens, total cost) visible in the stats panel.
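The per-response records and the conversation-level rollup can be sketched like this. Field names and the per-token pricing model are assumptions for illustration, not Promptmark's actual billing logic:

```python
def usage_record(prompt_tokens, completion_tokens, price_in, price_out, latency_ms):
    """One assistant response's usage. Prices are USD per token (illustrative)."""
    return {
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "cost_estimate": prompt_tokens * price_in + completion_tokens * price_out,
        "latency_ms": latency_ms,
    }

def conversation_totals(records):
    """Roll per-message usage up into conversation-level stats."""
    return {
        "total_tokens": sum(r["prompt_tokens"] + r["completion_tokens"] for r in records),
        "total_cost": sum(r["cost_estimate"] for r in records),
    }
```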

API Key Requirement

Conversations require an OpenRouter API key. Connect your OpenRouter account via OAuth or add a BYOK (Bring Your Own Key) API key in Settings. Without a configured key, you can still view and manage conversations, but you cannot send messages.

Managing Conversations

Conversation List

The Chats page shows all your conversations sorted by last update, with infinite scroll pagination. Each card shows the title, model, message count, and a preview of the last message.

Use the search bar to filter conversations by title.

Filter by Source

Filter conversations by where they came from:

  • Native – Created in the Promptmark web UI
  • MCP – Created via MCP tools
  • Claude – Imported from Claude
  • ChatGPT – Imported from ChatGPT
  • JSON – Imported from generic JSON
  • Markdown – Imported from markdown

Renaming

Edit a conversation’s title from the thread view. Titles are capped at 200 characters.

Changing the Model

You can switch the AI model mid-conversation. New messages will use the updated model while previous messages remain unchanged.

Deleting

Delete a conversation from the list or thread view. Deletions are soft – the conversation is hidden but not permanently destroyed.

Save Message as Prompt

You can save any message from a conversation as a new prompt in your library.

Viewing Messages

Conversations load the most recent 50 messages. If the conversation has more, you can load older messages.
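Loading the newest page and paging backward can be sketched with a simple cursor. The integer cursor here is an illustration — real cursors are typically opaque message IDs:

```python
def load_messages(messages, limit=50, before=None):
    """Return the newest `limit` messages older than the `before` cursor.

    `messages` is ordered oldest-to-newest; the cursor is an index into
    that order (illustrative only).
    """
    end = before if before is not None else len(messages)
    start = max(0, end - limit)
    return {
        "messages": messages[start:end],
        "next_cursor": start if start > 0 else None,  # pass as `before` to load older
    }
```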

Assistant messages are rendered as markdown with syntax-highlighted code blocks. You can toggle between the rendered view and raw text.

Import and Export

Importing Conversations

Promptmark can import conversations from other AI platforms. Navigate to Chats and use the import dialog.

Supported formats:

  • Claude – Claude export (conversations.json); JSON file
  • ChatGPT – ChatGPT export; JSON file
  • Generic JSON – Any structured JSON; JSON file
  • Markdown – Markdown-formatted conversations; text file

How to import:

  1. Select the source format
  2. Upload a file (up to 10MB) or paste content
  3. If the file contains multiple conversations, pick which ones to import (or import all)
  4. Review the preview showing title, message count, and first/last messages
  5. Confirm the import

Info
Imported conversations preserve original metadata for lossless round-tripping. If you import a Claude conversation and export it back to Claude format, the metadata (UUIDs, content blocks, attachments) is restored.

Import limits:

  • Maximum file size: 10MB
  • Maximum messages per conversation: 10,000
  • Rate limit: 5 conversations per hour
  • Message limit: 10,000 messages per day
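The per-file limits above can be sketched as a validation helper (function and message strings are illustrative; the rate limits are enforced server-side and are not modeled here):

```python
MAX_FILE_BYTES = 10 * 1024 * 1024   # 10MB upload cap
MAX_MESSAGES = 10_000               # per-conversation message cap

def check_import(raw: bytes, messages: list) -> list:
    """Return the list of limit violations for an import attempt."""
    errors = []
    if len(raw) > MAX_FILE_BYTES:
        errors.append("file exceeds 10MB")
    if len(messages) > MAX_MESSAGES:
        errors.append("conversation exceeds 10,000 messages")
    return errors
```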

Exporting Conversations

Export any conversation from the thread view. Two formats are available:

  • Claude – Claude-compatible JSON (array of conversations with chat_messages)
  • ChatGPT – ChatGPT-compatible JSON (mapping tree structure)

The export downloads as a JSON file named after the conversation title.
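As a rough sketch of the structural difference, a linear message list maps onto a ChatGPT-style parent/child mapping tree like this. Field names are simplified, not the exact export schema:

```python
import uuid

def to_mapping_tree(messages):
    """Convert a linear message list into a ChatGPT-style mapping:
    each node records its parent and children, forming a single chain."""
    mapping, parent_id = {}, None
    for msg in messages:
        node_id = str(uuid.uuid4())
        mapping[node_id] = {
            "id": node_id,
            "message": msg,
            "parent": parent_id,
            "children": [],
        }
        if parent_id is not None:
            mapping[parent_id]["children"].append(node_id)
        parent_id = node_id
    return mapping
```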

Stats

Open the stats panel from a conversation thread to see aggregated metrics:

  • Token breakdown – Prompt tokens, completion tokens, total tokens
  • Specialized tokens – Cached tokens, reasoning tokens (when available)
  • Context usage – Percentage of the model’s context window used
  • Cost – Total estimated cost across all messages
  • Latency – Average, minimum, and maximum response time
  • Message counts – User messages, assistant messages, system messages
  • Duration – Time span from first to last message
  • Source – Whether the conversation is native, imported, or created via MCP
  • Truncated responses – Count of responses that hit the model’s max output length
  • Model mismatches – Cases where the actual model used differed from the requested model
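A subset of these aggregations can be sketched from per-message usage records. Field names are illustrative, not Promptmark's stored schema:

```python
def conversation_stats(messages):
    """Aggregate per-message usage into stats-panel metrics (a sketch)."""
    latencies = [m["latency_ms"] for m in messages if "latency_ms" in m]
    return {
        "total_tokens": sum(m.get("prompt_tokens", 0) + m.get("completion_tokens", 0)
                            for m in messages),
        "total_cost": sum(m.get("cost_estimate", 0.0) for m in messages),
        "avg_latency_ms": sum(latencies) / len(latencies) if latencies else None,
        "min_latency_ms": min(latencies, default=None),
        "max_latency_ms": max(latencies, default=None),
        "user_messages": sum(m["role"] == "user" for m in messages),
        "assistant_messages": sum(m["role"] == "assistant" for m in messages),
    }
```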

MCP Tools

Conversations are fully manageable via MCP tools. You can create, list, update, and delete conversations, send messages, import conversations, and link prompts – all from an AI assistant.

  • list_conversations – List conversations with search and source filtering
  • get_conversation – Get conversation details with optional messages
  • create_conversation – Create a conversation with model and optional system prompt
  • update_conversation – Update title, model, linked prompt, or system prompt
  • delete_conversation – Soft-delete a conversation
  • send_message – Send a message and get the AI response (non-streaming)
  • import_conversation – Import a conversation with messages
  • list_conversation_messages – List messages with cursor-based pagination
  • link_conversation_prompt – Link a prompt to a conversation for system prompt injection

Info
The send_message MCP tool returns the full AI response in a single call (non-streaming). This is because the MCP protocol does not support SSE streaming. An OpenRouter API key is required.
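For orientation, a tools/call request for send_message might look like the following. The JSON-RPC envelope and method name follow the MCP specification; the argument names are an assumption, not Promptmark's exact tool schema:

```python
import json

# JSON-RPC request for the send_message MCP tool.
# Argument names are illustrative -- see the MCP Reference for the real schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "send_message",
        "arguments": {
            "conversation_id": "conv_123",   # hypothetical conversation ID
            "content": "Summarize our discussion so far.",
        },
    },
}
print(json.dumps(request, indent=2))
```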

For full MCP tool schemas, see the MCP Reference.