Browse our entire catalog of AI models with detailed pricing information. Find the perfect model for your use case based on capabilities, pricing, and context window.
Model pricing is based on the number of tokens processed. Input tokens (the prompts you send) and output tokens (the completions the model generates) are priced separately, with output tokens typically costing more. All prices are listed per 1 million tokens.
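As a minimal sketch of how per-1M-token pricing translates into a per-request cost, the snippet below multiplies each token count by its rate. The model prices and token counts used here are illustrative placeholders, not values from the catalog.

```python
# Minimal sketch of per-request cost from per-1M-token prices.
# The prices and token counts below are placeholders, not real catalog values.

def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the cost in dollars for one request."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: 2,000 prompt tokens and 500 completion tokens at
# $3.00 / 1M input tokens and $15.00 / 1M output tokens (illustrative only).
cost = request_cost(2_000, 500, 3.00, 15.00)
print(f"${cost:.4f}")  # $0.0135
```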
The context window is the maximum number of tokens a model can process in a single request, counting both the input prompt and the generated response. Larger context windows allow for more detailed prompts and longer conversations, but because every token is billed, filling a large window also increases cost.
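Since the prompt and the completion share the same window, a request only succeeds if their combined token count stays within the limit. The sketch below checks that condition; the context-window size and token counts are hypothetical examples, not figures from any specific model.

```python
# Minimal sketch of a context-window check. The limit below is an
# illustrative figure, not a quote from the catalog.

def fits_context(prompt_tokens: int, max_output_tokens: int,
                 context_window: int) -> bool:
    """True if the prompt plus the requested completion fits in one request."""
    return prompt_tokens + max_output_tokens <= context_window

# Example: a 120,000-token prompt with up to 4,096 output tokens
# against a 128,000-token context window (hypothetical numbers).
print(fits_context(120_000, 4_096, 128_000))  # True
```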