Kitten Stack's API is designed as a drop-in replacement for OpenAI's Chat Completions API.
This means you can use your existing OpenAI client code with minimal changes to start
building applications with enhanced context capabilities.
1. **Obtain an API key.** Sign up at [www.kittenstack.com/signup](https://www.kittenstack.com/signup) and get your API key from the dashboard.
2. **Update the base URL.** Change your OpenAI client's base URL to `https://api.kittenstack.com/v1`.
3. **Use your API key.** Replace your OpenAI API key with your Kitten Stack API key.
4. **Start making requests.** Your chat completions will now automatically include context from your uploaded documents and can be integrated into any application you build.
```javascript
import OpenAI from 'openai';

// Initialize the client with Kitten Stack's base URL and your API key
const openai = new OpenAI({
  baseURL: 'https://api.kittenstack.com/v1',
  apiKey: 'your_kitten_stack_api_key'
});

// Use the client as you normally would with OpenAI
async function getCompletion() {
  const response = await openai.chat.completions.create({
    model: 'openai/gpt-4-turbo',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What does our Q4 report say about revenue growth?' }
    ]
  });
  console.log(response.choices[0].message.content);
}

getCompletion();
```
```python
from openai import OpenAI

# Initialize with Kitten Stack's base URL and your API key
client = OpenAI(
    base_url="https://api.kittenstack.com/v1",
    api_key="your_kitten_stack_api_key"
)

# Use the client as you would with OpenAI
response = client.chat.completions.create(
    model="openai/gpt-4-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What does our Q4 report say about revenue growth?"}
    ]
)

print(response.choices[0].message.content)
```
Kitten Stack automatically retrieves relevant context from your document repository, but
you can customize how context is selected and used in your applications.
Include a `search_options` object in your request to fine-tune how context is retrieved and integrated:

```javascript
// JavaScript example with search options
const response = await openai.chat.completions.create({
  model: 'openai/gpt-4-turbo',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What does our Q4 report say about revenue growth?' }
  ],
  // Customize context retrieval
  search_options: {
    use_documents: true,             // Include document context (default: true)
    similarity_threshold: 0.8,       // Minimum similarity score (0-1)
    document_types: ["pdf", "docx"], // Filter by file types
    max_documents: 5                 // Maximum number of documents to include
  }
});
```
Kitten Stack supports models from various providers through a unified interface. Specify the provider and model in the format `provider/model-name`:

| Provider | Model Examples |
| --- | --- |
| openai | `openai/gpt-4-turbo`, `openai/gpt-3.5-turbo` |
| anthropic | `anthropic/claude-3-opus`, `anthropic/claude-3-sonnet`, `anthropic/claude-3-haiku` |
| google | `google/gemini-pro`, `google/gemini-ultra` |
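Because the interface is unified, switching providers only means changing the model string; the rest of the request body stays the same. A small sketch of that idea (the helper names here are illustrative, not part of any SDK):

```javascript
// Build a provider-qualified model ID in Kitten Stack's "provider/model-name" format.
function qualifiedModel(provider, modelName) {
  return `${provider}/${modelName}`;
}

// The same request shape works for any provider; only `model` changes.
function buildChatRequest(provider, modelName, userPrompt) {
  return {
    model: qualifiedModel(provider, modelName),
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: userPrompt }
    ]
  };
}
```

For example, `openai.chat.completions.create(buildChatRequest('anthropic', 'claude-3-sonnet', 'Summarize our Q4 report.'))` targets Claude with the same code path used for GPT models.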
Kitten Stack supports streaming responses for a better user experience with long-form content:
```javascript
// JavaScript streaming example
const stream = await openai.chat.completions.create({
  model: 'openai/gpt-4-turbo',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Write a detailed summary of our Q1 performance.' }
  ],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
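Each chunk carries only a delta, so if you also need the complete text (for logging, caching, or rendering after the stream ends), accumulate the pieces as they arrive. A small collector sketch that works with any async iterable of OpenAI-style chunks:

```javascript
// Accumulate streamed delta chunks into the complete response text.
// Expects OpenAI-style chunks: { choices: [{ delta: { content } }] }.
async function collectStream(stream, onToken = () => {}) {
  let text = '';
  for await (const chunk of stream) {
    const piece = chunk.choices[0]?.delta?.content || '';
    onToken(piece); // e.g. write each token to stdout as it arrives
    text += piece;
  }
  return text;
}
```

Usage: `const fullText = await collectStream(stream, t => process.stdout.write(t));` streams tokens to the terminal while still returning the assembled response.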
Kitten Stack uses standard HTTP status codes. Here's how to handle common errors:
```javascript
// JavaScript error handling example
try {
  const response = await openai.chat.completions.create({
    model: 'openai/gpt-4-turbo',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What does our annual report say?' }
    ]
  });
  console.log(response.choices[0].message.content);
} catch (error) {
  if (error.status === 401) {
    console.error('Authentication failed. Check your API key.');
  } else if (error.status === 402) {
    console.error('Insufficient credits. Please top up your account.');
  } else if (error.status === 429) {
    console.error('Rate limit exceeded. Please slow down requests.');
  } else {
    console.error('An error occurred:', error.message);
  }
}
```
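Rate-limit errors (429) are usually transient, so in production it is often worth retrying with exponential backoff rather than failing outright. A minimal wrapper sketch, assuming the thrown error exposes a numeric `status` field as in the example above:

```javascript
// Retry an async request on 429, with exponential backoff between attempts.
// `request` is any async function, e.g. () => openai.chat.completions.create({...}).
async function withRetry(request, maxAttempts = 3, baseDelayMs = 500) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await request();
    } catch (error) {
      const retryable = error.status === 429 && attempt < maxAttempts - 1;
      if (!retryable) throw error; // non-rate-limit errors, or out of attempts
      // Wait baseDelayMs, 2x, 4x, ... before the next attempt.
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Usage: `const response = await withRetry(() => openai.chat.completions.create({ /* ... */ }));`. Errors other than 429 (such as 401 authentication failures) are rethrown immediately for the surrounding handler to classify.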
Once you've integrated the chat API, explore these additional components to build complete LLM applications: