Access multiple AI providers through a single OpenAI-compatible API with model aliasing, rate limiting, and comprehensive tracking.
```javascript
const fetchCompletion = async () => {
  const response = await fetch('https://api.teatree.chat/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-4o', // Use a model alias!
      messages: [{ role: 'user', content: 'Explain quantum computing' }],
      max_tokens: 150
    })
  });
  return await response.json();
};
```
TeaTree provides a unified interface to access models from multiple AI providers with powerful features for production deployments.
Access models from OpenAI, Anthropic, Fireworks, and XAI through a single, consistent API interface.
Use models with advanced reasoning ("thinking") for complex tasks such as coding and math.
No API key is required: using the API is as simple as sending a request from your app.
Monitor all requests with detailed logs. Track token usage, model selection, and success rates.
All endpoints support streaming responses, allowing for more responsive user interfaces.
Follows the OpenAI API specification, making it a drop-in replacement for existing applications.
Use these OpenAI-compatible endpoints to interact with various AI models across different providers.
Returns a list of available models and their aliases that can be used with the API.
{ "object": "list", "data": [ { "id": "gpt-4o", "object": "model", "created": 1677649963, "owned_by": "openai", "provider": "openai", "description": "Latest GPT model with improved capabilities", "context_window": 8192 }, { "id": "gpt-4o-mini", // This is an alias "object": "model", "created": 1677649963, "owned_by": "openai", "provider": "openai", "description": "Faster and cheaper version of GPT-4o", "context_window": 100000 } ... ] }
```bash
curl -X GET https://api.teatree.chat/v1/models \
  -H "Authorization: Bearer your-api-key"
```
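If you use the official `openai` Node.js SDK (as in the code examples later in this page), you can fetch the same list programmatically. This is a minimal sketch that assumes the SDK is pointed at the TeaTree base URL:

```javascript
const { OpenAI } = require('openai');

// Point the OpenAI SDK at the TeaTree endpoint
const openai = new OpenAI({
  apiKey: 'your-api-key',
  baseURL: 'https://api.teatree.chat/v1'
});

async function listModels() {
  // Calls GET /v1/models; each entry includes the model ID (or alias) and provider metadata
  const models = await openai.models.list();
  for await (const model of models) {
    console.log(model.id);
  }
}

listModels();
```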
This endpoint creates a completion for a chat conversation and is compatible with models from all supported providers.
| Parameter | Type | Required / Default | Description |
|---|---|---|---|
| `model` | string | required | ID of the model or alias to use for the completion. |
| `messages` | array | required | A list of messages comprising the conversation so far. |
| `max_tokens` | integer | default: 100 | Maximum number of tokens to generate in the completion. |
| `temperature` | number | default: 0.7 | What sampling temperature to use (higher = more creative). |
| `stream` | boolean | default: false | Whether to stream back partial progress. |

```bash
curl -X POST https://api.teatree.chat/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Tell me about quantum computing"}
    ],
    "max_tokens": 100,
    "temperature": 0.7
  }'
```
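Because the API follows the OpenAI specification, a successful non-streaming response is the standard chat completion object. The example below is a sketch of that shape; the `id`, timestamps, and token counts are illustrative and will vary per request:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing uses qubits to..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 87,
    "total_tokens": 99
  }
}
```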
TeaTree provides access to various models from different providers through a unified API.
The model catalog lists each Model ID along with its Provider, Description, Context Window, supported features, and Type; query the `/v1/models` endpoint above for the current list.
Here are code examples for interacting with the TeaTree API in various programming languages.
```javascript
const { OpenAI } = require('openai');

// Initialize with TeaTree API endpoint
const openai = new OpenAI({
  apiKey: 'your-api-key',
  baseURL: 'https://api.teatree.chat/v1'
});

async function main() {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o', // Use a model alias
    messages: [
      { role: 'user', content: 'Explain quantum computing' }
    ],
    max_tokens: 150
  });

  console.log(response.choices[0].message.content);
}

main();
```
```python
from openai import OpenAI

# Initialize with TeaTree API endpoint
client = OpenAI(
    api_key="your-api-key",
    base_url="https://api.teatree.chat/v1"
)

# Create a chat completion
response = client.chat.completions.create(
    model="gpt-4o-mini",  # Using a model alias
    messages=[
        {"role": "user", "content": "Write a short poem about AI"}
    ],
    max_tokens=100,
    temperature=0.7
)

# Print the response
print(response.choices[0].message.content)
```
```javascript
const { OpenAI } = require('openai');

const openai = new OpenAI({
  apiKey: 'your-api-key',
  baseURL: 'https://api.teatree.chat/v1'
});

async function streamResponse() {
  const stream = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'user', content: 'Write a short story' }
    ],
    stream: true,
    max_tokens: 200
  });

  // Stream the response
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
}

streamResponse();
```
```bash
# List available models
curl -X GET https://api.teatree.chat/v1/models \
  -H "Authorization: Bearer your-api-key"

# Create a chat completion
curl -X POST https://api.teatree.chat/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "What are the key features of TeaTree API?"}
    ],
    "max_tokens": 150
  }'
```
Enable your AI models to call functions in your application with structured inputs and outputs.
Function calling allows AI models to generate structured JSON that your application can use to call your own functions. This is useful for tasks such as fetching external data (for example, the weather lookup below) or querying a database with validated, structured arguments.
{ "model": "gpt-4o", "messages": [ { "role": "user", "content": "What's the weather in Paris?" } ], "tools": [ { "type": "function", "function": { "name": "get_weather", "description": "Get weather for a location", "parameters": { "type": "object", "properties": { "location": { "type": "string" } }, "required": ["location"] } } } ] }
```javascript
const weatherFunction = {
  "name": "get_weather",
  "description": "Get current weather in a location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "City and state or country"
      },
      "unit": {
        "type": "string",
        "enum": ["celsius", "fahrenheit"],
        "description": "Temperature unit"
      }
    },
    "required": ["location"]
  }
}
```
```javascript
const queryFunction = {
  "name": "query_database",
  "description": "Query product database",
  "parameters": {
    "type": "object",
    "properties": {
      "category": {
        "type": "string",
        "enum": ["electronics", "clothing", "books"],
        "description": "Product category"
      },
      "price_max": {
        "type": "number",
        "description": "Maximum price"
      },
      "in_stock": {
        "type": "boolean",
        "description": "Only show in-stock items"
      }
    }
  }
}
```
To enable function calling in your TeaTree API requests, pass your function definitions in the `tools` array of a chat completion call, as shown in the sketch below:
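Here is a minimal end-to-end sketch using the `openai` Node.js SDK and the `weatherFunction` schema defined above. The local `getWeather` implementation and the follow-up request are illustrative assumptions; the tool-call handling itself follows the standard OpenAI chat completions flow that TeaTree mirrors.

```javascript
const { OpenAI } = require('openai');

const openai = new OpenAI({
  apiKey: 'your-api-key',
  baseURL: 'https://api.teatree.chat/v1'
});

// Hypothetical local implementation of the get_weather function
async function getWeather({ location, unit = 'celsius' }) {
  return { location, temperature: 18, unit, conditions: 'partly cloudy' };
}

async function askWeather() {
  const messages = [{ role: 'user', content: "What's the weather in Paris?" }];

  // 1. Send the request with the function schema in the tools array
  const first = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages,
    tools: [{ type: 'function', function: weatherFunction }]
  });

  const message = first.choices[0].message;
  const toolCall = message.tool_calls?.[0];
  if (!toolCall) {
    console.log(message.content); // Model answered directly without calling the tool
    return;
  }

  // 2. Run the function with the model-provided arguments (a JSON string)
  const args = JSON.parse(toolCall.function.arguments);
  const result = await getWeather(args);

  // 3. Send the result back as a "tool" message so the model can produce a final answer
  messages.push(message);
  messages.push({
    role: 'tool',
    tool_call_id: toolCall.id,
    content: JSON.stringify(result)
  });

  const second = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages
  });

  console.log(second.choices[0].message.content);
}

askWeather();
```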