LLMHub API Smart Autorouting

Official API documentation for the LLMHub API (api.llmhub.dev)

LLMHub's automatic routing feature directs each query to the most appropriate model for the prompt. When you specify model="automatic", LLMHub evaluates the content and complexity of your request and routes it as follows (a sketch for confirming which backend handled a given request appears after the list):

  • Very Hard Questions:
    Complex reasoning or advanced conceptual questions are routed to DeepSeek's reasoning model, hosted on Azure US East servers.

  • Coding Tasks:
    Programming-related queries are sent to gpt-4o-mini, a model optimized for generating and debugging code.

  • Regular Questions:
    Everyday queries and general knowledge questions are handled by Llama 3.3 70B, which is designed for fast and reliable responses.
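
Because routing happens server-side, it can help to confirm which backend actually answered. The OpenAI SDK's chat completion response includes a standard model field; the sketch below assumes the LLMHub gateway populates it with the identifier of the routed model.

from openai import OpenAI

llmhub_client = OpenAI(
    base_url="https://api.llmhub.dev/v1",
    api_key="YOUR_API_KEY",
)

response = llmhub_client.chat.completions.create(
    model="automatic",
    messages=[{"role": "user", "content": "Hello"}],
)

# response.model is part of the standard chat completion schema; whether it
# names the routed backend is an assumption about LLMHub's gateway behavior.
print(response.model)
print(response.choices[0].message.content)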


Examples of Automatic Routing

1. Very Hard Question Example

For challenging reasoning tasks, such as advanced scientific or philosophical questions, LLMHub routes your request to DeepSeek's reasoning model on Azure US East servers.

Example Prompt:

"What are the implications of Gödel's incompleteness theorem on modern computational complexity theory?"

Python Example:

from openai import OpenAI

# Point the OpenAI-compatible SDK at the LLMHub endpoint.
llmhub_client = OpenAI(
    base_url="https://api.llmhub.dev/v1",
    api_key="YOUR_API_KEY",
)

messages = [{"role": "user", "content": "What are the implications of Gödel's incompleteness theorem on modern computational complexity theory?"}]

# model="automatic" lets LLMHub choose the backend for this prompt.
response = llmhub_client.chat.completions.create(
    model="automatic",
    messages=messages,
)

print(response.choices[0].message.content)
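
Hard-coding credentials in source is easy to leak. A common alternative, sketched here, is to read the key from an environment variable; the name LLMHUB_API_KEY is illustrative, not an official LLMHub convention.

import os

from openai import OpenAI

# Read the key from the environment instead of embedding it in source.
# LLMHUB_API_KEY is a hypothetical variable name chosen for this sketch.
llmhub_client = OpenAI(
    base_url="https://api.llmhub.dev/v1",
    api_key=os.environ["LLMHUB_API_KEY"],
)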

2. Coding Task Example

For coding-related queries, such as generating or debugging code, the request is directed to gpt-4o-mini.

Example Prompt:

"Write a Python function to implement the quicksort algorithm."

Python Example:

from openai import OpenAI
 
llmhub_client = OpenAI(
    base_url="https://api.llmhub.dev/v1",
    api_key="YOUR_API_KEY",
)
 
messages = [{"role": "user", "content": "Write a Python function to implement the quicksort algorithm."}]
response = llmhub_client.chat.completions.create(
    model="automatic",
    messages=messages
)
 
print(response.choices[0].message.content)

3. Regular Question Example

For common queries, such as general knowledge questions, LLMHub routes the request to Llama 3.3 70B.

Example Prompt:

"What is the capital of France?"

Python Example:

from openai import OpenAI
 
llmhub_client = OpenAI(
    base_url="https://api.llmhub.dev/v1",
    api_key="YOUR_API_KEY",
)
 
messages = [{"role": "user", "content": "What is the capital of France?"}]
response = llmhub_client.chat.completions.create(
    model="automatic",
    messages=messages
)
 
print(response.choices[0].message.content)

Conclusion

By using model="automatic", you leverage LLMHub's intelligent autorouting: very hard questions are handled by DeepSeek's reasoning model, coding-related queries by gpt-4o-mini, and regular questions by Llama 3.3 70B, giving appropriate performance and response quality across diverse use cases. This setup simplifies development by eliminating the need to manually choose the best model for each type of query.
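
Because the endpoint is OpenAI-compatible, model="automatic" should compose with the rest of the SDK surface. As a hedged sketch, assuming the LLMHub gateway supports the standard streaming protocol, a routed response can be consumed incrementally:

from openai import OpenAI

llmhub_client = OpenAI(
    base_url="https://api.llmhub.dev/v1",
    api_key="YOUR_API_KEY",
)

# stream=True is the standard OpenAI-compatible streaming flag; whether
# LLMHub honors it with model="automatic" is an assumption of this sketch.
stream = llmhub_client.chat.completions.create(
    model="automatic",
    messages=[{"role": "user", "content": "Explain quicksort in two sentences."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()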