LLMHub API Installation and First API Call
Official API documentation for the LLMHub API (api.llmhub.dev)
What is LLMHub's differentiator?
LLMHub is an intelligent auto-routing platform that automatically selects the optimal AI model for each prompt. Instead of deciding which model to use for every task, simply connect to our API with `model="automatic"` and let our system determine the best model based on your prompt's requirements.
Your First API Call
The LLMHub API is compatible with the OpenAI API format. By adjusting two configuration values, you can use the OpenAI SDK, or any software compatible with the OpenAI API, to access LLMHub.
| PARAM | VALUE |
|---|---|
| base_url | https://api.llmhub.dev/v1 |
| api_key | Apply for an API key at llmhub.dev |
Note: When you use `model="automatic"`, LLMHub's intelligent routing system analyzes your prompt in real time and automatically selects the most appropriate model for your task.
Invoke The Chat API
Once you have obtained an API key, you can access the LLMHub API using the example scripts below. Set the `stream` parameter to `true` to receive streaming responses.
cURL
```shell
curl https://api.llmhub.dev/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -d '{
        "model": "automatic",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Hello!"}
        ],
        "stream": false
      }'
```
Python
```python
# Please install the OpenAI SDK first: `pip3 install openai`
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmhub.dev/v1",
    api_key="<LLMHub API Key>",
)

response = client.chat.completions.create(
    model="automatic",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)

print(response.choices[0].message.content)
```
Node.js
```javascript
// Please install the OpenAI SDK first: `npm install openai`
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://api.llmhub.dev/v1',
  apiKey: '<LLMHub API Key>',
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: 'automatic',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Hello!' },
    ],
  });
  console.log(completion.choices[0].message.content);
}

main();
```
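When `stream` is `true`, the SDK yields a sequence of chunks, each carrying a content delta, rather than a single response object. The chunk-handling loop can be illustrated offline; the simulated chunks below (built with plain namespaces, since a live API key is needed for a real call) mirror the shape the OpenAI SDK yields when `stream=True`:

```python
from types import SimpleNamespace

def collect_stream(stream) -> str:
    """Concatenate the content deltas from a chat-completion stream."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta content is None
            parts.append(delta)
    return "".join(parts)

# Simulated chunks with the same shape the SDK yields when stream=True.
fake_chunks = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in ("Hel", "lo!", None)
]
print(collect_stream(fake_chunks))  # -> Hello!
```

With a real connection, you would pass `stream=True` to `client.chat.completions.create(...)` and iterate the returned stream the same way.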
How It Works
When you send a request with `model="automatic"`, LLMHub:
- Analyzes your prompt's complexity, length, and requirements
- Determines which AI model is best suited for the specific task
- Routes your request to the optimal model
- Returns the response in a standard format
This intelligent routing ensures you always get the best possible response without having to manually select models for different use cases.
API Parameters
LLMHub supports all standard OpenAI API parameters, including:
| Parameter | Type | Description |
|---|---|---|
| model | string | Set to "automatic" to use LLMHub's intelligent routing |
| messages | array | Array of message objects with role and content |
| temperature | number | Controls randomness (0-2, default 1) |
| max_tokens | integer | Maximum tokens to generate |
| stream | boolean | Whether to stream the response |
| top_p | number | Controls diversity via nucleus sampling |
| frequency_penalty | number | Reduces repetition of token sequences |
| presence_penalty | number | Reduces repetition of topics |
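Combining these parameters in one request looks like the sketch below. The payload mirrors the OpenAI chat-completions request body; the specific values chosen here (temperature, token cap, penalties) are illustrative, not recommendations:

```python
# Illustrative request body using the parameters from the table above.
payload = {
    "model": "automatic",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this article."},
    ],
    "temperature": 0.7,       # below the default 1 for more focused output
    "max_tokens": 256,        # cap the length of the generated reply
    "stream": False,          # set True to receive incremental chunks
    "top_p": 0.9,             # nucleus sampling cutoff
    "frequency_penalty": 0.2, # discourage repeated token sequences
    "presence_penalty": 0.0,  # no extra penalty on revisiting topics
}
```

With the OpenAI SDK, this payload is passed as keyword arguments: `client.chat.completions.create(**payload)`.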
Rate Limits and Pricing
Visit our pricing page for information about rate limits and pricing tiers.
Support
For questions or support, contact our team at support@llmhub.dev or visit our documentation for more information.