# Simple Prompt Example

Complete walkthrough of creating, testing, and executing a sentiment analysis prompt.

## Overview

This example demonstrates creating a sentiment analysis prompt from scratch, including version management, input validation, and execution monitoring.

We'll build a prompt that analyzes customer feedback and returns structured sentiment data with confidence scores.
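For reference, the structured result we're aiming for can be typed up front. Here's a sketch using Python's `TypedDict` - the field names match the output format we'll define in the prompt, but the type itself is our own convenience, not part of any SDK:

```python
from typing import List, TypedDict

class SentimentResult(TypedDict):
    """Shape of the JSON the prompt will be instructed to return."""
    sentiment: str        # "Positive" | "Negative" | "Neutral"
    confidence: float     # 0.0 to 1.0
    primaryEmotion: str
    keyThemes: List[str]
    actionable: bool
    summary: str

# Example value in the expected shape
example: SentimentResult = {
    "sentiment": "Positive",
    "confidence": 0.98,
    "primaryEmotion": "Delight",
    "keyThemes": ["Product Quality", "Customer Service"],
    "actionable": False,
    "summary": "Highly satisfied customer praising product quality.",
}
```

Declaring the shape once makes it obvious downstream which fields callers can rely on.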
## Step 1: Create the Prompt

First, create the prompt metadata:

**Create Prompt**
```graphql
mutation CreatePrompt {
  createPrompt(input: {
    name: "customer-feedback-sentiment"
    description: "Analyzes customer feedback to determine sentiment, confidence, and key themes"
    category: ANALYSIS
    isPublic: false
  }) {
    id
    name
    createdAt
  }
}
```

**Response**
```json
{
  "data": {
    "createPrompt": {
      "id": "cm5xk8z9a0001",
      "name": "customer-feedback-sentiment",
      "createdAt": "2025-01-21T14:30:00Z"
    }
  }
}
```

Save the `id` value - you'll need it to create versions and execute the prompt.

## Step 2: Create First Version
Now add the actual prompt content and configuration:
**Create Prompt Version**

```graphql
mutation CreatePromptVersion {
  createPromptVersion(input: {
    promptId: "cm5xk8z9a0001"
    content: """
You are a customer feedback analysis expert. Analyze the sentiment of the following customer feedback.

Customer Feedback:
{{feedback}}

Provide your analysis in the following JSON format:
{
  "sentiment": "Positive" | "Negative" | "Neutral",
  "confidence": 0.0 to 1.0,
  "primaryEmotion": "string",
  "keyThemes": ["theme1", "theme2"],
  "actionable": true | false,
  "summary": "Brief one-sentence summary of the feedback"
}

Be accurate and consider context carefully.
    """
    params: {
      model: "claude-3-5-sonnet-20241022"
      temperature: 0.2
      max_tokens: 500
    }
    inputSchema: {
      type: "object"
      properties: {
        feedback: {
          type: "string"
          description: "Customer feedback text to analyze"
          minLength: 1
        }
      }
      required: ["feedback"]
    }
  }) {
    id
    version
    createdAt
  }
}
```

**Response**
```json
{
  "data": {
    "createPromptVersion": {
      "id": "ver_abc123",
      "version": 1,
      "createdAt": "2025-01-21T14:32:00Z"
    }
  }
}
```

We use a low temperature (0.2) for consistent, near-deterministic analysis. The structured JSON output format makes the results easy to consume in downstream processing.
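The `inputSchema` guards what goes in, but the model's reply is ordinary text that merely *should* be JSON, so it pays to parse defensively on the client side as well. A minimal sketch - these helper names are our own, not part of the PromptForge API:

```python
import json

# Keys the prompt instructs the model to return
REQUIRED_KEYS = {"sentiment", "confidence", "primaryEmotion",
                 "keyThemes", "actionable", "summary"}

def validate_feedback_input(feedback: object) -> None:
    """Mirror the inputSchema: feedback must be a non-empty string."""
    if not isinstance(feedback, str) or len(feedback) < 1:
        raise ValueError("feedback must be a non-empty string")

def parse_sentiment_output(output: str) -> dict:
    """Parse the model's JSON reply and verify the expected keys exist."""
    try:
        result = json.loads(output)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model did not return valid JSON: {exc}") from exc
    missing = REQUIRED_KEYS - result.keys()
    if missing:
        raise ValueError(f"Missing keys in model output: {sorted(missing)}")
    return result
```

Running these checks client-side catches drift early: a prompt revision that renames a field fails loudly instead of silently propagating `None` values.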
## Step 3: Test the Prompt

Execute the prompt with sample customer feedback:

**Execute Prompt**

```graphql
mutation ExecutePrompt {
  executePrompt(
    promptId: "cm5xk8z9a0001"
    input: {
      feedback: "I absolutely love this product! The quality exceeded my expectations and the customer service team was incredibly helpful when I had questions about setup. Will definitely recommend to friends."
    }
  ) {
    output
    latencyMs
    tokenIn
    tokenOut
    costUsd
    executionId
  }
}
```

**Response**
```json
{
  "data": {
    "executePrompt": {
      "output": "{\n  \"sentiment\": \"Positive\",\n  \"confidence\": 0.98,\n  \"primaryEmotion\": \"Delight\",\n  \"keyThemes\": [\"Product Quality\", \"Customer Service\", \"Recommendation\"],\n  \"actionable\": false,\n  \"summary\": \"Highly satisfied customer praising product quality and excellent customer service support.\"\n}",
      "latencyMs": 1850,
      "tokenIn": 125,
      "tokenOut": 95,
      "costUsd": "0.0018",
      "executionId": "exec_xyz789"
    }
  }
}
```

Parse the JSON output to use in your application:
**Parsed Result**

```json
{
  "sentiment": "Positive",
  "confidence": 0.98,
  "primaryEmotion": "Delight",
  "keyThemes": ["Product Quality", "Customer Service", "Recommendation"],
  "actionable": false,
  "summary": "Highly satisfied customer praising product quality and excellent customer service support."
}
```

## Step 4: Test with Negative Feedback
**Execute with Negative Feedback**

```graphql
mutation ExecutePrompt {
  executePrompt(
    promptId: "cm5xk8z9a0001"
    input: {
      feedback: "This is the worst experience I've ever had. The product arrived damaged, customer service was unhelpful, and I still haven't received my refund after 3 weeks. Do not buy from this company."
    }
  ) {
    output
    latencyMs
    costUsd
  }
}
```

**Response**
```json
{
  "data": {
    "executePrompt": {
      "output": "{\n  \"sentiment\": \"Negative\",\n  \"confidence\": 0.99,\n  \"primaryEmotion\": \"Anger\",\n  \"keyThemes\": [\"Damaged Product\", \"Poor Customer Service\", \"Refund Issues\"],\n  \"actionable\": true,\n  \"summary\": \"Extremely dissatisfied customer reporting damaged product, unhelpful service, and unresolved refund after 3 weeks.\"\n}",
      "latencyMs": 1920,
      "costUsd": "0.0019"
    }
  }
}
```

Parsed result:
```json
{
  "sentiment": "Negative",
  "confidence": 0.99,
  "primaryEmotion": "Anger",
  "keyThemes": ["Damaged Product", "Poor Customer Service", "Refund Issues"],
  "actionable": true,
  "summary": "Extremely dissatisfied customer reporting damaged product, unhelpful service, and unresolved refund after 3 weeks."
}
```

Notice the `actionable` field is true for negative feedback, indicating urgent attention is needed. This can trigger automated workflows.

## Step 5: Integrate into Application
### JavaScript Example

**Sentiment Analysis Function**

```typescript
interface SentimentResult {
  sentiment: 'Positive' | 'Negative' | 'Neutral'
  confidence: number
  primaryEmotion: string
  keyThemes: string[]
  actionable: boolean
  summary: string
}

async function analyzeFeedback(feedback: string): Promise<SentimentResult> {
  const apiKey = process.env.PROMPTFORGE_API_KEY
  if (!apiKey) {
    throw new Error('PROMPTFORGE_API_KEY is not set')
  }
  const promptId = 'cm5xk8z9a0001'

  const response = await fetch('https://api.promptforge.sh/graphql', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey,
    },
    body: JSON.stringify({
      query: `
        mutation ExecutePrompt($promptId: ID!, $input: JSON!) {
          executePrompt(promptId: $promptId, input: $input) {
            output
            costUsd
          }
        }
      `,
      variables: {
        promptId,
        input: { feedback },
      },
    }),
  })

  const { data, errors } = await response.json()

  if (errors) {
    throw new Error(`Analysis failed: ${errors[0].message}`)
  }

  return JSON.parse(data.executePrompt.output) as SentimentResult
}

// Usage
const result = await analyzeFeedback(
  "Great product, but shipping took too long."
)

console.log(`Sentiment: ${result.sentiment}`)
console.log(`Confidence: ${result.confidence}`)
console.log(`Themes: ${result.keyThemes.join(', ')}`)

if (result.actionable) {
  console.log('⚠️ Requires immediate attention!')
  // Trigger alert workflow
}
```

### Python Example
**Batch Processing**

```python
import requests
import json
from typing import Dict, List

def analyze_feedback_batch(feedback_list: List[str]) -> List[Dict]:
    api_key = 'your-api-key'
    prompt_id = 'cm5xk8z9a0001'
    endpoint = 'https://api.promptforge.sh/graphql'

    results = []
    for feedback in feedback_list:
        response = requests.post(
            endpoint,
            headers={
                'Content-Type': 'application/json',
                'X-API-Key': api_key,
            },
            json={
                'query': """
                    mutation ExecutePrompt($promptId: ID!, $input: JSON!) {
                        executePrompt(promptId: $promptId, input: $input) {
                            output
                            costUsd
                        }
                    }
                """,
                'variables': {
                    'promptId': prompt_id,
                    'input': {'feedback': feedback},
                },
            },
            timeout=30,
        )
        data = response.json()

        if 'errors' in data:
            results.append({'error': data['errors'][0]['message']})
        else:
            output = json.loads(data['data']['executePrompt']['output'])
            results.append(output)

    return results

# Usage
feedbacks = [
    "Excellent service, very happy!",
    "Product broke after one day of use.",
    "It's okay, nothing special.",
]

results = analyze_feedback_batch(feedbacks)

for i, result in enumerate(results):
    if 'error' in result:
        print(f"Feedback {i+1}: Error - {result['error']}")
    else:
        print(f"Feedback {i+1}: {result['sentiment']} ({result['confidence']})")
        if result['actionable']:
            print(f"  ⚠️ Action required: {result['summary']}")
```

## Step 6: Monitor Performance
Query execution history to track performance and costs:

**Get Execution Stats**

```graphql
query GetPromptExecutions {
  prompt(id: "cm5xk8z9a0001") {
    name
    versions {
      version
      executions {
        id
        status
        latencyMs
        tokenIn
        tokenOut
        costUsd
        createdAt
      }
    }
  }
}
```

**Calculate Metrics**
```typescript
// Calculate average metrics
const executions = data.prompt.versions[0].executions

const stats = {
  totalExecutions: executions.length,
  successRate:
    (executions.filter(e => e.status === 'success').length / executions.length) * 100,
  avgLatency:
    executions.reduce((sum, e) => sum + e.latencyMs, 0) / executions.length,
  avgCost:
    executions.reduce((sum, e) => sum + parseFloat(e.costUsd), 0) / executions.length,
  totalCost:
    executions.reduce((sum, e) => sum + parseFloat(e.costUsd), 0),
}

console.log(`Total executions: ${stats.totalExecutions}`)
console.log(`Success rate: ${stats.successRate.toFixed(1)}%`)
console.log(`Average latency: ${stats.avgLatency.toFixed(0)}ms`)
console.log(`Average cost: $${stats.avgCost.toFixed(4)}`)
console.log(`Total cost: $${stats.totalCost.toFixed(2)}`)
```

## Step 7: Iterate and Improve
After analyzing execution patterns, create an improved version:
**Create Version 2**

```graphql
mutation CreatePromptVersion {
  createPromptVersion(input: {
    promptId: "cm5xk8z9a0001"
    content: """
You are a customer feedback analysis expert. Analyze the sentiment of the following customer feedback.

Customer Feedback:
{{feedback}}

Provide your analysis in the following JSON format:
{
  "sentiment": "Positive" | "Negative" | "Neutral",
  "confidence": 0.0 to 1.0,
  "primaryEmotion": "string",
  "secondaryEmotions": ["emotion1", "emotion2"],
  "keyThemes": ["theme1", "theme2"],
  "urgencyLevel": "low" | "medium" | "high" | "critical",
  "actionable": true | false,
  "suggestedResponse": "Brief suggested response approach",
  "summary": "Brief one-sentence summary"
}

Consider intensity of language, specific issues mentioned, and customer intent.
    """
    params: {
      model: "claude-3-5-sonnet-20241022"
      temperature: 0.2
      max_tokens: 600
    }
    inputSchema: {
      type: "object"
      properties: {
        feedback: {
          type: "string"
          description: "Customer feedback text to analyze"
          minLength: 1
        }
      }
      required: ["feedback"]
    }
  }) {
    id
    version
  }
}
```

Version 2 adds `secondaryEmotions`, `urgencyLevel`, and `suggestedResponse` fields for richer analysis. The original version remains available for comparison.

## Key Takeaways
- **Structured Output** - Request JSON format for easy parsing and integration
- **Low Temperature** - Use 0.1-0.3 for analytical tasks requiring consistency
- **Input Validation** - Define a JSON Schema to validate inputs before execution
- **Versioning** - Create new versions for improvements while keeping old ones
- **Monitor Metrics** - Track latency and costs to optimize performance
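If you monitor from Python rather than TypeScript, the metric calculation from Step 6 translates directly. A sketch operating on the `executions` list returned by `GetPromptExecutions` (`summarize_executions` is our own helper, with field names taken from that query):

```python
from typing import Dict, List

def summarize_executions(executions: List[Dict]) -> dict:
    """Aggregate success rate, latency, and cost across executions."""
    n = len(executions)
    if n == 0:
        raise ValueError("no executions to summarize")
    successes = sum(1 for e in executions if e["status"] == "success")
    total_cost = sum(float(e["costUsd"]) for e in executions)
    return {
        "totalExecutions": n,
        "successRate": successes / n * 100,
        "avgLatencyMs": sum(e["latencyMs"] for e in executions) / n,
        "avgCostUsd": total_cost / n,
        "totalCostUsd": total_cost,
    }

# Usage with the two sample executions from Steps 3 and 4
sample = [
    {"status": "success", "latencyMs": 1850, "costUsd": "0.0018"},
    {"status": "success", "latencyMs": 1920, "costUsd": "0.0019"},
]
stats = summarize_executions(sample)
print(f"Success rate: {stats['successRate']:.1f}%")       # Success rate: 100.0%
print(f"Average latency: {stats['avgLatencyMs']:.0f}ms")  # Average latency: 1885ms
```

Note that `costUsd` arrives as a string (as in the responses above), so it is converted with `float()` before summing.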