Prompt Management Guide

Overview

Kitten Stack's Prompt Management system enables you to store, version, test, and optimize prompts for your LLM applications. Our AI-powered optimization helps you create high-quality prompts that deliver consistent, effective results with minimal effort on your part.

Key Features

  • Versioned Prompt Storage: Track changes to prompts over time and roll back when needed
  • AI-Powered Optimization: Let our AI analyze and improve your prompts automatically
  • Dynamic Prompt Injection: Insert variables and context into prompts at runtime
  • Token-Efficient Formatting: Reduce token usage while maintaining or improving output quality

Getting Started

1. Creating a Prompt Template

Begin by creating a reusable prompt template with variable placeholders:

// Using the JavaScript SDK
const promptManager = new KittenStack.PromptManager(client);

// Create a prompt template
const templateId = await promptManager.createTemplate({
  name: "Customer Service Response",
  description: "Template for responding to customer inquiries",
  content: "You are a helpful customer service agent for {{company_name}}. {{rules}}" +
           "Respond to the following customer inquiry: {{customer_message}}",
  tags: ["customer-service", "support"]
});

2. Using a Prompt Template

Inject variables into your template at runtime:

// Fill the template with specific values
const filledPrompt = await promptManager.fillTemplate(templateId, {
  company_name: "Acme Corporation",
  rules: "Always be polite and professional. Don't make up information. ",
  customer_message: "I've been waiting for my order for two weeks. Can you help me track it?"
});

// Use the filled prompt with a model
const response = await client.chat.completions.create({
  model: "openai/gpt-4-turbo",
  messages: [
    { role: "system", content: filledPrompt },
    { role: "user", content: "Can you check on order #12345?" }
  ]
});
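Under the hood, template filling is `{{variable}}` substitution. If you want to preview locally how a template will resolve before calling the API, here is a minimal sketch — an illustration of the concept, not the SDK's actual implementation:

```javascript
// Minimal illustration of {{variable}} substitution (not the SDK's real logic).
// Unknown placeholders are left intact so missing variables are easy to spot.
function fillTemplateLocally(content, variables) {
  return content.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in variables ? variables[name] : match
  );
}

const preview = fillTemplateLocally(
  "You are a helpful customer service agent for {{company_name}}. {{customer_message}}",
  { company_name: "Acme Corporation", customer_message: "Where is my order?" }
);
// preview === "You are a helpful customer service agent for Acme Corporation. Where is my order?"
```

Leaving unresolved placeholders visible (rather than replacing them with an empty string) makes it obvious during testing when a variable was forgotten.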

3. Optimizing Prompts with AI

Let our AI analyze and improve your prompts:

// Get AI-powered optimization suggestions
const optimizationResult = await promptManager.optimizePrompt(templateId);

// Review the suggestions
console.log(optimizationResult.optimized_content);
console.log(optimizationResult.optimization_notes);

// Apply the optimization if you like it
await promptManager.applyOptimization(templateId, optimizationResult.optimized_content);
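To sanity-check whether an optimization actually saved tokens before applying it, you can compare rough estimates of the two versions. The ratio below (about 4 characters per token for English text) is a crude heuristic, not the model's real tokenizer:

```javascript
// Rough token estimate: ~4 characters per token for English text.
// Use the model's actual tokenizer when you need exact counts.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function tokenSavings(original, optimized) {
  const before = estimateTokens(original);
  const after = estimateTokens(optimized);
  return { before, after, savedPercent: Math.round(((before - after) / before) * 100) };
}

// Example: a verbose instruction vs. a tightened one
const savings = tokenSavings(
  "Please make absolutely sure that you always respond in a polite manner.",
  "Always respond politely."
);
// savings.savedPercent is positive when the optimized prompt is shorter
```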

4. Versioning Prompts

Track changes to your prompts over time:

// Update an existing template (creates a new version)
const newVersionId = await promptManager.updateTemplate(templateId, {
  content: "You are a helpful customer service agent for {{company_name}}. {{rules}}" +
           "Focus on resolving the customer's issue efficiently. " +
           "Respond to the following customer inquiry: {{customer_message}}",
});

// List all versions of a template
const versions = await promptManager.listTemplateVersions(templateId);

// Revert to a previous version by passing its id
await promptManager.setActiveVersion(templateId, versions[0].id);
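Conceptually, each `updateTemplate` call appends an immutable version, and `setActiveVersion` just moves a pointer — no history is ever lost. A toy in-memory model of that behavior, for intuition only (the real version store lives server-side):

```javascript
// Toy in-memory model of append-only prompt versioning (illustration only).
class VersionedTemplate {
  constructor(content) {
    this.versions = [{ id: 1, content }];
    this.activeId = 1;
  }
  update(content) {
    const id = this.versions.length + 1;
    this.versions.push({ id, content }); // old versions are never mutated
    this.activeId = id;                  // the new version becomes active
    return id;
  }
  setActiveVersion(id) {
    if (!this.versions.some((v) => v.id === id)) throw new Error("unknown version");
    this.activeId = id;                  // rollback = moving the pointer
  }
  get activeContent() {
    return this.versions.find((v) => v.id === this.activeId).content;
  }
}

const tpl = new VersionedTemplate("v1 content");
tpl.update("v2 content");
tpl.setActiveVersion(1); // roll back: both versions still exist
// tpl.activeContent === "v1 content"
```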

Token Optimization Techniques

Our AI-powered prompt optimization includes several techniques for reducing token usage:

Compression

Remove unnecessary words and phrases while maintaining the prompt's intent:

// Get a compression optimization
const compressionResult = await promptManager.optimizePrompt(templateId, {
  optimization_type: "compression",
  target_reduction: 0.3 // Try to reduce by 30%
});

Smart Truncation

Prioritize important information when shortening prompts:

// Get a truncation optimization
const truncationResult = await promptManager.optimizePrompt(templateId, {
  optimization_type: "truncation",
  max_tokens: 2000,
  preserve_sections: ["conclusion", "key findings"]
});
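The idea behind smart truncation can be illustrated locally: sections you name in `preserve_sections` are kept in full, and everything else is trimmed to whatever budget remains. The sketch below uses a character budget and a pre-split section list — both simplifications; the service's actual algorithm is token-aware:

```javascript
// Illustration of preserve-sections truncation (character-based, not token-based).
// Preserved sections are emitted first, in full; the rest fill the leftover budget.
function smartTruncate(sections, maxChars, preserve) {
  const kept = [];
  let budget = maxChars;
  // Pass 1: preserved sections are always kept whole.
  for (const s of sections) {
    if (preserve.includes(s.title)) {
      kept.push(s);
      budget -= s.body.length;
    }
  }
  // Pass 2: remaining sections are trimmed to the leftover budget.
  for (const s of sections) {
    if (preserve.includes(s.title)) continue;
    if (budget <= 0) break;
    const body = s.body.slice(0, budget);
    kept.push({ title: s.title, body });
    budget -= body.length;
  }
  return kept;
}

const result = smartTruncate(
  [
    { title: "background", body: "a".repeat(100) },
    { title: "key findings", body: "b".repeat(50) },
  ],
  80,
  ["key findings"]
);
// "key findings" survives in full; "background" is cut to the remaining budget
```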

Clarity Improvements

Make prompts more explicit and less ambiguous:

// Get a clarity optimization
const clarityResult = await promptManager.optimizePrompt(templateId, {
  optimization_type: "clarity"
});

API Reference

Endpoints

| Endpoint | Description | Methods |
| --- | --- | --- |
| /prompts/templates | Create and manage prompt templates | POST, GET |
| /prompts/templates/{id} | Get, update, or delete a specific template | GET, PUT, DELETE |
| /prompts/templates/{id}/versions | List versions for a template | GET |
| /prompts/templates/{id}/fill | Fill a template with variables | POST |
| /prompts/templates/{id}/optimize | Get AI optimization suggestions | POST |
| /prompts/optimize/compress | Compress a prompt to use fewer tokens | POST |
| /prompts/optimize/truncate | Intelligently truncate content | POST |
| /prompts/chains | Create and manage prompt chains | POST, GET |
| /prompts/chains/{id}/execute | Execute a prompt chain | POST |
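If you are calling the HTTP API directly rather than through the SDK, a request to the fill endpoint looks roughly like the sketch below. The base URL, the Bearer-token `Authorization` header, and the `{ variables: ... }` body shape are assumptions for illustration — check your account's API settings and the full reference for the real values:

```javascript
// Build a request for POST /prompts/templates/{id}/fill.
// BASE_URL and the auth scheme are placeholder assumptions, not confirmed values.
const BASE_URL = "https://api.example.com/v1";

function buildFillRequest(apiKey, templateId, variables) {
  return {
    url: `${BASE_URL}/prompts/templates/${encodeURIComponent(templateId)}/fill`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ variables }),
    },
  };
}

// Usage with fetch:
// const { url, options } = buildFillRequest(apiKey, "tmpl_123", { company_name: "Acme" });
// const res = await fetch(url, options);
```

Separating request construction from the `fetch` call keeps the builder easy to unit-test without network access.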

Best Practices

Effective Prompt Design

  1. Be Specific: Clearly define the task and expected output format
  2. Provide Examples: Include examples of desired outputs when possible
  3. Use Consistent Formatting: Maintain consistent structure across prompt templates
  4. Layer Instructions: Present instructions in a logical order of importance
  5. Leverage AI Optimization: Regularly use our AI optimization to improve your prompts

Template Organization

  1. Use Tags: Tag templates for easy filtering and organization
  2. Document Purpose: Include clear descriptions for each template
  3. Group Related Templates: Use folders or namespaces to organize related templates
  4. Track Performance: Monitor which templates perform best for different tasks

Common Use Cases

Content Generation

Create templates for consistent content generation across various formats:

// Blog post outline template
const blogOutlineTemplate = await promptManager.createTemplate({
  name: "Blog Outline Generator",
  content: "Create a detailed outline for a blog post about {{topic}}. " +
           "The post should be targeted at {{audience}} and include {{num_sections}} main sections. " +
           "Each section should have a clear heading and 3-5 bullet points of content to cover."
});

Customer Support

Standardize support responses while allowing for personalization:

// Support response template
const supportTemplate = await promptManager.createTemplate({
  name: "Support Response Generator",
  content: "You are a support agent for {{company}}. " +
           "The customer has the following issue: {{issue_description}} " +
           "Their account status is: {{account_status}} " +
           "Respond helpfully, addressing them by name ({{customer_name}}) and " +
           "offering specific solutions based on their account status and issue."
});

Data Analysis

Create templates for standardized data analysis workflows:

// Data analysis template
const analysisTemplate = await promptManager.createTemplate({
  name: "Data Trend Analysis",
  content: "Analyze the following data: {{data_points}} " +
           "Identify the top {{num_trends}} trends in this data. " +
           "For each trend, provide: " +
           "1. A clear name for the trend " +
           "2. Quantitative evidence from the data " +
           "3. Potential business implications"
});

Integration with Other Kitten Stack Features

Prompt management works seamlessly with other Kitten Stack features:

  • Knowledge & Memory: Inject retrieved document snippets into prompts
  • Model Gateway: Test the same prompts across different models
  • Monitoring & Optimization: Track prompt performance metrics

Next Steps

Explore these related guides to build more powerful LLM applications: