Prompts
Version-controlled AI prompt templates with variable substitution, input validation, and execution tracking.
What is a Prompt?
A Prompt in Prompt Forge is a reusable, version-controlled template that defines how to interact with Large Language Models (LLMs). Each prompt consists of:
- Template Content - The actual prompt text with variable placeholders
- Input Schema - JSON Schema defining required and optional variables
- Model Configuration - Provider, model, temperature, max tokens, etc.
- Version History - Track changes over time with incremental versions
- Execution Metrics - Latency, token usage, and cost tracking
Prompt Structure
Basic Prompt Anatomy
```
You are a sentiment analysis expert. Analyze the sentiment of the following text.

Text: {{text}}

Provide your analysis in the following format:
- Sentiment: (Positive, Negative, or Neutral)
- Confidence: (0.0 to 1.0)
- Key Phrases: List the phrases that influenced your decision
```

Variables are denoted using double curly braces: `{{variableName}}`. These will be replaced with actual values when the prompt is executed.
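To make the substitution concrete, here is a minimal sketch in TypeScript. It is illustrative only: `renderTemplate` is a hypothetical helper, not part of Prompt Forge, which performs this substitution for you at execution time.

```typescript
// Hypothetical helper illustrating {{variable}} substitution;
// Prompt Forge does this for you when a prompt is executed.
function renderTemplate(template: string, variables: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, name: string) => {
    if (!(name in variables)) {
      throw new Error(`Missing value for variable: ${name}`);
    }
    return variables[name];
  });
}

// Fills {{text}} in the template above.
console.log(renderTemplate("Text: {{text}}", { text: "I love this product!" }));
// => "Text: I love this product!"
```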
Input Schema
The input schema defines which variables are required and their types. It follows the JSON Schema specification:
```json
{
  "type": "object",
  "properties": {
    "text": {
      "type": "string",
      "description": "The text to analyze for sentiment"
    }
  },
  "required": ["text"]
}
```
Model Parameters
Each prompt version includes configuration for the AI model:
| Parameter | Type | Description |
|---|---|---|
| `model` (required) | string | Model identifier (e.g., "claude-3-5-sonnet-20241022", "gpt-4") |
| `temperature` | number | Randomness in output (0.0 = deterministic, 2.0 = very random) |
| `max_tokens` | number | Maximum tokens in the response |
```json
{
  "model": "claude-3-5-sonnet-20241022",
  "temperature": 0.3,
  "max_tokens": 500
}
```

Creating Prompts
Via Web Interface
Navigate to Prompts → New Prompt in the dashboard:
- Enter a unique name (it will be converted to kebab-case; see the sketch after this list)
- Add a description of what the prompt does
- Select a category (Summarization, Analysis, Generation, etc.)
- Write your template with `{{variables}}`
- Define the input schema for your variables
- Configure model parameters
- Test with sample inputs
- Publish when ready
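As a rough illustration of the kebab-case conversion mentioned in the first step (the exact normalization rules Prompt Forge applies are an assumption here), the idea is:

```typescript
// Illustrative only; Prompt Forge's exact normalization rules may differ.
function toKebabCase(name: string): string {
  return name
    .trim()
    .replace(/([a-z])([A-Z])/g, "$1-$2") // split camelCase boundaries
    .replace(/[\s_]+/g, "-")             // spaces and underscores become hyphens
    .toLowerCase();
}

console.log(toKebabCase("Sentiment Analyzer")); // "sentiment-analyzer"
```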
Via GraphQL API
```graphql
# Step 1: Create the prompt
mutation CreatePrompt {
  createPrompt(input: {
    name: "sentiment-analyzer"
    description: "Analyzes sentiment of text with confidence scores"
    category: ANALYSIS
    isPublic: false
  }) {
    id
    name
  }
}

# Step 2: Add first version with content
mutation CreatePromptVersion {
  createPromptVersion(input: {
    promptId: "cm1234567890"
    content: """
    You are a sentiment analysis expert. Analyze the sentiment of: {{text}}

    Format:
    - Sentiment: (Positive/Negative/Neutral)
    - Confidence: (0.0-1.0)
    """
    params: {
      model: "claude-3-5-sonnet-20241022"
      temperature: 0.3
      max_tokens: 300
    }
    inputSchema: {
      type: "object"
      properties: {
        text: {
          type: "string"
          description: "Text to analyze"
        }
      }
      required: ["text"]
    }
  }) {
    id
    version
  }
}
```
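From application code, these mutations are ordinary GraphQL POST requests. A minimal TypeScript sketch follows; the `/graphql` path, the bearer-token header, and the `CreatePromptInput` type name are assumptions, so verify them against your deployment.

```typescript
// Endpoint, auth scheme, and input type name are assumptions; check them
// against your Prompt Forge deployment before relying on them.
const ENDPOINT = "https://your-prompt-forge-host/graphql";

async function createPrompt(apiKey: string): Promise<{ id: string; name: string }> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      query: `
        mutation CreatePrompt($input: CreatePromptInput!) {
          createPrompt(input: $input) { id name }
        }
      `,
      variables: {
        input: {
          name: "sentiment-analyzer",
          description: "Analyzes sentiment of text with confidence scores",
          category: "ANALYSIS",
          isPublic: false,
        },
      },
    }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data.createPrompt;
}
```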
Versioning
Prompts support versioning to track changes over time. Each version is immutable once created. This allows you to:
- Safely iterate on prompts without breaking existing integrations
- Roll back to previous versions if needed
- A/B test different prompt variations
- Track performance improvements across versions
Creating New Versions
When you update a prompt through the web interface, you're creating a new version. The version number automatically increments:
```
v1 → Initial version
v2 → Improved clarity of instructions
v3 → Added examples for better consistency
v4 → Optimized for lower token usage
```

Executing Prompts
Execute a prompt by providing values for all required variables:
```graphql
mutation ExecutePrompt {
  executePrompt(
    promptId: "cm1234567890"
    input: {
      text: "I absolutely love this product! Best purchase ever!"
    }
  ) {
    output
    latencyMs
    tokenIn
    tokenOut
    costUsd
    executionId
  }
}
```

Response Structure
```json
{
  "data": {
    "executePrompt": {
      "output": "- Sentiment: Positive\n- Confidence: 0.95",
      "latencyMs": 1250,
      "tokenIn": 45,
      "tokenOut": 28,
      "costUsd": "0.0012",
      "executionId": "exec_abc123"
    }
  }
}
```

| Field | Type | Description |
|---|---|---|
| `output` | string | The AI-generated response text |
| `latencyMs` | number | Execution time in milliseconds |
| `tokenIn` | number | Number of input tokens consumed |
| `tokenOut` | number | Number of output tokens generated |
| `costUsd` | string | Cost in USD for this execution |
| `executionId` | string | Unique identifier for this execution |
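Tying execution and metrics together, here is a hedged TypeScript sketch that runs a prompt and reads these fields back. It reuses the assumed endpoint and bearer auth from the earlier creation example; the `JSON!` variable type in the operation signature is also an assumption.

```typescript
// Assumes the same ENDPOINT and bearer auth sketched under "Via GraphQL API".
async function runSentiment(apiKey: string, promptId: string, text: string) {
  const res = await fetch("https://your-prompt-forge-host/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      query: `
        mutation ExecutePrompt($promptId: ID!, $input: JSON!) {
          executePrompt(promptId: $promptId, input: $input) {
            output latencyMs tokenIn tokenOut costUsd executionId
          }
        }
      `,
      variables: { promptId, input: { text } },
    }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));

  const r = data.executePrompt;
  // costUsd arrives as a string to avoid floating-point drift; parse it for math.
  console.log(`${r.latencyMs} ms, ${r.tokenIn}+${r.tokenOut} tokens, $${r.costUsd}`);
  return r;
}
```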
Categories
Prompts can be organized into categories to make them easier to find:
- GENERAL - General purpose prompts
- SUMMARIZATION - Text summarization and condensation
- ANALYSIS - Content analysis and evaluation
- GENERATION - Content creation and generation
- CLASSIFICATION - Categorization and labeling
- EXTRACTION - Data extraction and parsing
- ROUTING - Intent detection and routing
- CUSTOM - Custom use cases
Best Practices
Forking Prompts
You can fork any prompt (including public prompts from other users) to create your own copy:
```graphql
mutation ForkPrompt {
  forkPrompt(id: "cm1234567890") {
    id
    name
    versions {
      id
      version
    }
  }
}
```

Forking creates a complete copy with all versions, which you can then modify independently.