LLMHUB API Changelog

Official API documentation for the LLMHUB API (api.llmhub.dev)

All notable changes to this project will be documented in this file.
This project adheres to Semantic Versioning and follows the Keep a Changelog format.


[1.0.0] - 2025-03-24

Added

  • Initial Launch of LLMHUB:
    The official launch of LLMHUB, a smart autorouting system for large language models that automatically directs your prompts to the optimal model and returns high-quality responses.

  • RESTful, Streaming, and Realtime APIs:
    Implemented comprehensive API endpoints for chat completions and realtime interactions, making it easy to integrate LLMHUB in any environment that supports HTTP requests.
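
    The streaming endpoint returns OpenAI-style chunked responses. A minimal sketch of consuming such a stream with the Python client; the helper functions below are illustrative, not part of the LLMHUB API surface:

```python
def iter_deltas(stream):
    """Yield the non-empty text deltas from a streaming chat completion.

    Works with any iterable of OpenAI-style chunks, i.e. objects shaped
    like chunk.choices[0].delta.content.
    """
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            yield delta


def stream_chat(client, prompt):
    """Stream a completion and print it piece by piece (network call).

    `client` is an OpenAI client pointed at https://api.llmhub.dev/v1.
    """
    stream = client.chat.completions.create(
        model="automatic",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for text in iter_deltas(stream):
        print(text, end="", flush=True)
```
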

  • Automatic Routing:
    When you set model="automatic", LLMHUB routes each query to the backend best suited to its nature:

    • Very Hard Questions: Routed to DeepSeek's reasoning model hosted on Azure US East servers.
    • Coding Tasks: Handled by the specialized gpt-4o-mini model.
    • Regular Queries: Processed by Llama 3.3 70B for everyday questions and general knowledge.
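
    In client code, opting into routing is only a matter of the model name. A minimal sketch, assuming the Python client configuration shown in the Client Library Integration example; the prompt and helper names are illustrative:

```python
def build_chat_request(prompt: str) -> dict:
    """Build a chat-completion payload that opts into automatic routing."""
    return {
        "model": "automatic",  # LLMHUB chooses the backend model per query
        "messages": [{"role": "user", "content": prompt}],
    }


def route_prompt(prompt: str, api_key: str):
    """Send a prompt through the automatic router (requires network access)."""
    from openai import OpenAI  # deferred import so the helper above works without the SDK
    client = OpenAI(base_url="https://api.llmhub.dev/v1", api_key=api_key)
    return client.chat.completions.create(**build_chat_request(prompt))
```
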
  • Multi-Round Conversation Support:
    Designed a stateless API that allows developers to concatenate conversation history to maintain context across multiple turns.
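
    Because the API is stateless, conversational context lives entirely in the message list the client resends each turn. A sketch of that bookkeeping; the helper and sample dialogue are illustrative:

```python
def append_turn(history: list, role: str, content: str) -> list:
    """Return a new message list with one more turn appended.

    The server keeps no session state: each request must carry the full
    history, so context survives only through this accumulated list.
    """
    return history + [{"role": role, "content": content}]


# Round 1: the user asks, and the assistant's reply is folded back in.
history = append_turn([], "user", "What's the highest mountain in the world?")
history = append_turn(history, "assistant", "Mount Everest.")

# Round 2: the follow-up is sent along with every earlier turn, e.g.
# client.chat.completions.create(model="automatic", messages=history)
history = append_turn(history, "user", "And the second highest?")
```
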

  • Client Library Integration:
    Configured the OpenAI Python library to work with LLMHUB by updating the base URL and API key handling.
    Example:

    from openai import OpenAI

    llmhub_client = OpenAI(
        base_url="https://api.llmhub.dev/v1",  # point the SDK at LLMHUB instead of api.openai.com
        api_key="YOUR_API_KEY",                # your LLMHUB API key
    )
  • Extensive Documentation and Code Examples:
    Provided integration examples across multiple languages and tools (cURL, Python, Go, Node.js, Ruby, C#, PHP, Java, and PowerShell) to help developers get started quickly.

  • Debugging and Rate Limiting Features:
    Added debugging information and rate-limit data to HTTP response headers, enabling smoother troubleshooting and performance monitoring.
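
    With the OpenAI Python client, those headers can be inspected through the SDK's raw-response mode. The header names below are assumptions based on common x-ratelimit-* conventions, not confirmed LLMHUB names; check a live response for the actual keys:

```python
def read_rate_limits(headers) -> dict:
    """Pick conventional rate-limit fields out of a response-header mapping.

    The key names are assumed (x-ratelimit-* style); missing keys map to None.
    """
    keys = (
        "x-ratelimit-limit-requests",
        "x-ratelimit-remaining-requests",
    )
    return {key: headers.get(key) for key in keys}


def check_limits(client):
    """Issue a request in raw-response mode and report rate-limit headers.

    `client` is an OpenAI client pointed at https://api.llmhub.dev/v1;
    requires network access.
    """
    raw = client.chat.completions.with_raw_response.create(
        model="automatic",
        messages=[{"role": "user", "content": "ping"}],
    )
    print(read_rate_limits(raw.headers))
    return raw.parse()  # the usual ChatCompletion object
```
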

  • Backward Compatibility:
    Committed to a stable initial API surface so that upcoming enhancements can ship without breaking existing integrations.


This release marks the official launch of LLMHUB. We are excited to offer developers a robust, intelligent platform that simplifies the process of routing queries to the best available language model, ensuring optimal performance and quality of responses. Enjoy building with LLMHUB!