Kitten Stack's Prompt Management system enables you to store, version, test, and optimize prompts for your LLM applications. Our AI-powered optimization helps you create high-quality prompts that deliver consistent, effective results with minimal effort on your part.
Begin by creating a reusable prompt template with variable placeholders:
```javascript
// Using the JavaScript SDK
const promptManager = new KittenStack.PromptManager(client);

// Create a prompt template
const templateId = await promptManager.createTemplate({
  name: "Customer Service Response",
  description: "Template for responding to customer inquiries",
  content: "You are a helpful customer service agent for {{company_name}}. {{rules}}" +
    "Respond to the following customer inquiry: {{customer_message}}",
  tags: ["customer-service", "support"]
});
```
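Placeholder names between double braces become the variables you supply later. If you want to sanity-check a template client-side before storing it, a small helper (illustrative only, not part of the Kitten Stack SDK) can list the placeholders it contains:

```javascript
// Extract {{variable}} names from a template string.
// Illustrative helper, not part of the Kitten Stack SDK.
function extractVariables(template) {
  const matches = template.matchAll(/\{\{\s*(\w+)\s*\}\}/g);
  return [...new Set([...matches].map((m) => m[1]))];
}

const template =
  "You are a helpful customer service agent for {{company_name}}. {{rules}}" +
  "Respond to the following customer inquiry: {{customer_message}}";

console.log(extractVariables(template));
// ["company_name", "rules", "customer_message"]
```

Comparing this list against the keys you plan to pass at fill time catches missing variables before any request is made.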
Inject variables into your template at runtime:
```javascript
// Fill the template with specific values
const filledPrompt = await promptManager.fillTemplate(templateId, {
  company_name: "Acme Corporation",
  rules: "Always be polite and professional. Don't make up information. ",
  customer_message: "I've been waiting for my order for two weeks. Can you help me track it?"
});

// Use the filled prompt with a model
const response = await client.chat.completions.create({
  model: "openai/gpt-4-turbo",
  messages: [
    { role: "system", content: filledPrompt },
    { role: "user", content: "Can you check on order #12345?" }
  ]
});
```
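Variable substitution happens on Kitten Stack's side, but its semantics are simple. As a minimal sketch (an illustration of the behavior, not the SDK's actual implementation), filling a template amounts to:

```javascript
// Illustrative only: approximates what fillTemplate does with the stored template.
function fillLocally(template, variables) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in variables ? String(variables[name]) : match // leave unknown placeholders intact
  );
}

const template = "You are a helpful customer service agent for {{company_name}}.";
console.log(fillLocally(template, { company_name: "Acme Corporation" }));
// "You are a helpful customer service agent for Acme Corporation."
```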
Let our AI analyze and improve your prompts:
```javascript
// Get AI-powered optimization suggestions
const optimizationResult = await promptManager.optimizePrompt(templateId);

// Review the suggestions
console.log(optimizationResult.optimized_content);
console.log(optimizationResult.optimization_notes);

// Apply the optimization if you like it
await promptManager.applyOptimization(templateId, optimizationResult.optimized_content);
```
Track changes to your prompts over time:
```javascript
// Update an existing template (creates a new version)
const newVersionId = await promptManager.updateTemplate(templateId, {
  content: "You are a helpful customer service agent for {{company_name}}. {{rules}}" +
    "Focus on resolving the customer's issue efficiently. " +
    "Respond to the following customer inquiry: {{customer_message}}"
});

// List all versions of a template
const versions = await promptManager.listTemplateVersions(templateId);

// Revert to a previous version
await promptManager.setActiveVersion(templateId, versions[0].id);
```
Our AI-powered prompt optimization offers several techniques for reducing token usage:
Remove unnecessary words and phrases while maintaining the prompt's intent:
```javascript
// Get a compression optimization
const compressionResult = await promptManager.optimizePrompt(templateId, {
  optimization_type: "compression",
  target_reduction: 0.3 // Try to reduce by 30%
});
```
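`target_reduction` is a goal rather than a guarantee, so it can be worth measuring what you actually saved. A rough local check (character counts stand in for tokens here; use your model's tokenizer for an accurate figure):

```javascript
// Rough check of how much a compressed prompt shrank.
// Character counts approximate token counts; use a real tokenizer for accuracy.
function reductionRatio(original, compressed) {
  return 1 - compressed.length / original.length;
}

const original = "Please make absolutely sure that you always respond politely.";
const compressed = "Always respond politely.";
console.log(reductionRatio(original, compressed).toFixed(2));
// "0.61"
```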
Prioritize important information when shortening prompts:
```javascript
// Get a truncation optimization
const truncationResult = await promptManager.optimizePrompt(templateId, {
  optimization_type: "truncation",
  max_tokens: 2000,
  preserve_sections: ["conclusion", "key findings"]
});
```
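`preserve_sections` tells the optimizer which parts of the prompt must survive truncation. To make the idea concrete, here is a naive local sketch of section-aware truncation (an illustration of the concept, not the service's actual algorithm):

```javascript
// Illustrative sketch of section-aware truncation, not the actual service logic.
// `sections` maps section names to their text; preserved sections get budget first.
function truncateSections(sections, preserve, maxChars) {
  const ordered = [
    ...Object.entries(sections).filter(([name]) => preserve.includes(name)),
    ...Object.entries(sections).filter(([name]) => !preserve.includes(name)),
  ];
  const kept = [];
  let used = 0;
  for (const [name, text] of ordered) {
    if (used + text.length <= maxChars) {
      kept.push(name);
      used += text.length;
    }
  }
  return kept; // names of sections that fit within the budget
}

console.log(
  truncateSections(
    { intro: "a".repeat(50), "key findings": "b".repeat(40), conclusion: "c".repeat(30) },
    ["conclusion", "key findings"],
    80
  )
);
// ["key findings", "conclusion"]
```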
Make prompts more explicit and less ambiguous:
```javascript
// Get a clarity optimization
const clarityResult = await promptManager.optimizePrompt(templateId, {
  optimization_type: "clarity"
});
```
| Endpoint | Description | Method |
|---|---|---|
| `/prompts/templates` | Create and manage prompt templates | POST, GET |
| `/prompts/templates/{id}` | Get, update, or delete a specific template | GET, PUT, DELETE |
| `/prompts/templates/{id}/versions` | List versions for a template | GET |
| `/prompts/templates/{id}/fill` | Fill a template with variables | POST |
| `/prompts/templates/{id}/optimize` | Get AI optimization suggestions | POST |
| `/prompts/optimize/compress` | Compress a prompt to use fewer tokens | POST |
| `/prompts/optimize/truncate` | Intelligently truncate content | POST |
| `/prompts/chains` | Create and manage prompt chains | POST, GET |
| `/prompts/chains/{id}/execute` | Execute a prompt chain | POST |
Create templates for consistent content generation across various formats:
```javascript
// Blog post outline template
const blogOutlineTemplate = await promptManager.createTemplate({
  name: "Blog Outline Generator",
  content: "Create a detailed outline for a blog post about {{topic}}. " +
    "The post should be targeted at {{audience}} and include {{num_sections}} main sections. " +
    "Each section should have a clear heading and 3-5 bullet points of content to cover."
});
```
Standardize support responses while allowing for personalization:
```javascript
// Support response template
const supportTemplate = await promptManager.createTemplate({
  name: "Support Response Generator",
  content: "You are a support agent for {{company}}. " +
    "The customer has the following issue: {{issue_description}} " +
    "Their account status is: {{account_status}} " +
    "Respond helpfully, addressing them by name ({{customer_name}}) and " +
    "offering specific solutions based on their account status and issue."
});
```
Create templates for standardized data analysis workflows:
```javascript
// Data analysis template
const analysisTemplate = await promptManager.createTemplate({
  name: "Data Trend Analysis",
  content: "Analyze the following data: {{data_points}} " +
    "Identify the top {{num_trends}} trends in this data. " +
    "For each trend, provide: " +
    "1. A clear name for the trend " +
    "2. Quantitative evidence from the data " +
    "3. Potential business implications"
});
```
Prompt management works seamlessly with other Kitten Stack features:
Explore these related guides to build more powerful LLM applications: