Prompts

Version-controlled AI prompt templates with variable substitution, input validation, and execution tracking.

What is a Prompt?

A Prompt in Prompt Forge is a reusable, version-controlled template that defines how to interact with Large Language Models (LLMs). Each prompt consists of:

  • Template Content - The actual prompt text with variable placeholders
  • Input Schema - JSON Schema defining required and optional variables
  • Model Configuration - Provider, model, temperature, max tokens, etc.
  • Version History - Track changes over time with incremental versions
  • Execution Metrics - Latency, token usage, and cost tracking

Prompt Structure

Basic Prompt Anatomy

Example Prompt Template
text
You are a sentiment analysis expert. Analyze the sentiment of the following text.

Text: {{text}}

Provide your analysis in the following format:
- Sentiment: (Positive, Negative, or Neutral)
- Confidence: (0.0 to 1.0)
- Key Phrases: List the phrases that influenced your decision

Variables are denoted using double curly braces: {{variableName}}. These will be replaced with actual values when the prompt is executed.
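The substitution step can be sketched in a few lines of Python; the `render` function below is illustrative, not part of the Prompt Forge API:

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values; fail loudly on missing ones."""
    def sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

print(render("Text: {{text}}", {"text": "Great service!"}))
# -> Text: Great service!
```

Raising on a missing variable (rather than leaving the placeholder in place) mirrors the input-validation behavior described below: a prompt should never reach the model with unfilled slots.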

Input Schema

The input schema defines which variables are required and their types. It follows the JSON Schema specification:

{
  "type": "object",
  "properties": {
    "text": {
      "type": "string",
      "description": "The text to analyze for sentiment"
    }
  },
  "required": ["text"]
}
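To illustrate how a schema like this gates execution, here is a minimal validator covering only the `required` list and basic type checks (a sketch of the idea, not Prompt Forge's actual validation, which supports the full JSON Schema specification):

```python
def validate_input(schema: dict, data: dict) -> list[str]:
    """Check required keys and basic types against a JSON Schema subset."""
    type_map = {"string": str, "number": (int, float), "boolean": bool, "object": dict}
    errors = []
    for key in schema.get("required", []):
        if key not in data:
            errors.append(f"missing required variable: {key}")
    for key, spec in schema.get("properties", {}).items():
        expected = type_map.get(spec.get("type"))
        if key in data and expected and not isinstance(data[key], expected):
            errors.append(f"{key}: expected {spec['type']}")
    return errors

schema = {
    "type": "object",
    "properties": {"text": {"type": "string"}},
    "required": ["text"],
}
print(validate_input(schema, {}))              # -> ['missing required variable: text']
print(validate_input(schema, {"text": "hi"}))  # -> []
```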

Model Parameters

Each prompt version includes configuration for the AI model:

Parameter     Type     Description
model         string   (required) Model identifier (e.g., "claude-3-5-sonnet-20241022", "gpt-4")
temperature   number   Randomness in output (0.0 = deterministic, 2.0 = very random)
max_tokens    number   Maximum tokens in the response
{
  "model": "claude-3-5-sonnet-20241022",
  "temperature": 0.3,
  "max_tokens": 500
}

Creating Prompts

Via Web Interface

Navigate to Prompts → New Prompt in the dashboard:

  1. Enter a unique name (will be converted to kebab-case)
  2. Add a description of what the prompt does
  3. Select a category (Summarization, Analysis, Generation, etc.)
  4. Write your template with {{variables}}
  5. Define input schema for variables
  6. Configure model parameters
  7. Test with sample inputs
  8. Publish when ready
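The kebab-case conversion in step 1 can be approximated as follows; the exact rules Prompt Forge applies may differ:

```python
import re

def to_kebab_case(name: str) -> str:
    """Lowercase a name, splitting camelCase and replacing non-alphanumerics with hyphens."""
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "-", name)  # camelCase boundaries
    name = re.sub(r"[^A-Za-z0-9]+", "-", name)           # spaces and punctuation
    return name.strip("-").lower()

print(to_kebab_case("Sentiment Analyzer v2"))  # -> sentiment-analyzer-v2
```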

Via GraphQL API

Create Prompt + First Version
graphql
# Step 1: Create the prompt
mutation CreatePrompt {
  createPrompt(input: {
    name: "sentiment-analyzer"
    description: "Analyzes sentiment of text with confidence scores"
    category: ANALYSIS
    isPublic: false
  }) {
    id
    name
  }
}

# Step 2: Add first version with content
mutation CreatePromptVersion {
  createPromptVersion(input: {
    promptId: "cm1234567890"
    content: """
You are a sentiment analysis expert. Analyze the sentiment of: {{text}}

Format:
- Sentiment: (Positive/Negative/Neutral)
- Confidence: (0.0-1.0)
"""
    params: {
      model: "claude-3-5-sonnet-20241022"
      temperature: 0.3
      max_tokens: 300
    }
    inputSchema: {
      type: "object"
      properties: {
        text: {
          type: "string"
          description: "Text to analyze"
        }
      }
      required: ["text"]
    }
  }) {
    id
    version
  }
}

Versioning

Prompts support versioning to track changes over time. Each version is immutable once created. This allows you to:

  • Safely iterate on prompts without breaking existing integrations
  • Roll back to previous versions if needed
  • A/B test different prompt variations
  • Track performance improvements across versions

By default, executing a prompt uses its latest version; you can pin a specific version number if needed.

Creating New Versions

When you update a prompt through the web interface, you're creating a new version. The version number automatically increments:

v1 → Initial version
v2 → Improved clarity of instructions
v3 → Added examples for better consistency
v4 → Optimized for lower token usage

Executing Prompts

Execute a prompt by providing values for all required variables:

Execute via API
graphql
mutation ExecutePrompt {
  executePrompt(
    promptId: "cm1234567890"
    input: {
      text: "I absolutely love this product! Best purchase ever!"
    }
  ) {
    output
    latencyMs
    tokenIn
    tokenOut
    costUsd
    executionId
  }
}

Response Structure

{
  "data": {
    "executePrompt": {
      "output": "- Sentiment: Positive\n- Confidence: 0.95",
      "latencyMs": 1250,
      "tokenIn": 45,
      "tokenOut": 28,
      "costUsd": "0.0012",
      "executionId": "exec_abc123"
    }
  }
}

Field         Type     Description
output        string   The AI-generated response text
latencyMs     number   Execution time in milliseconds
tokenIn       number   Number of input tokens consumed
tokenOut      number   Number of output tokens generated
costUsd       string   Cost in USD for this execution
executionId   string   Unique identifier for this execution
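Note that costUsd arrives as a string, most likely to avoid floating-point precision loss in JSON; when aggregating metrics across executions, parse it with a decimal type. The records below are illustrative, shaped like executePrompt responses:

```python
from decimal import Decimal

executions = [
    {"tokenIn": 45, "tokenOut": 28, "costUsd": "0.0012"},
    {"tokenIn": 52, "tokenOut": 31, "costUsd": "0.0014"},
]

total_cost = sum(Decimal(e["costUsd"]) for e in executions)
total_tokens = sum(e["tokenIn"] + e["tokenOut"] for e in executions)
print(total_cost)    # -> 0.0026
print(total_tokens)  # -> 156
```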

Categories

Prompts can be grouped into categories to keep your library organized:

GENERAL

General purpose prompts

SUMMARIZATION

Text summarization and condensation

ANALYSIS

Content analysis and evaluation

GENERATION

Content creation and generation

CLASSIFICATION

Categorization and labeling

EXTRACTION

Data extraction and parsing

ROUTING

Intent detection and routing

CUSTOM

Custom use cases

Best Practices

Be Specific - Clear, detailed instructions produce better results than vague ones.
Use Examples - Include examples in your prompt template to guide the AI's output format.
Test Thoroughly - Use the built-in test functionality to validate your prompt with various inputs.
Version Strategically - Create a new version when making significant changes, but avoid over-versioning for minor tweaks.
Monitor Costs - Review execution metrics to optimize token usage and costs.

Forking Prompts

You can fork any prompt (including public prompts from other users) to create your own copy:

mutation ForkPrompt {
  forkPrompt(id: "cm1234567890") {
    id
    name
    versions {
      id
      version
    }
  }
}

Forking creates a complete copy with all versions, which you can then modify independently.

Next Steps

Build Chains

Combine multiple prompts into workflows

Learn about Chains →

View API Reference

Complete GraphQL API for prompts

Prompts API →