LLM Gateways for Linux

Browse free open source LLM Gateways and projects for Linux below.

  • 1
    Bifrost

    The Fastest LLM Gateway with built-in OTel observability

    Bifrost is an LLM gateway that provides a unified, OpenAI-compatible API in front of many different model providers. It abstracts away the complexity of working directly with multiple backends (OpenAI, Anthropic, AWS Bedrock, Google Vertex, etc.), letting you plug in providers and switch between them without touching your client code. It is built for high performance: in benchmark tests at 5,000 requests per second, it reportedly adds only microseconds of overhead while maintaining a 100% request success rate. Bifrost supports automatic fallback (failover between providers), load balancing across API keys and providers, and semantic caching to reduce latency and cost. It also ships with built-in metrics, tracing, and logging for observability, plus governance features such as rate limiting, access control, and cost budgeting. The architecture is modular, with a core engine, plugin layers, and transport layers (HTTP APIs).
    Downloads: 3 This Week
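
    A minimal usage sketch of the unified front described above, assuming Bifrost's OpenAI-compatible endpoint is reachable at an illustrative local address: the standard OpenAI Python SDK is pointed at the gateway, and switching providers becomes a data change (the model identifier) rather than a code change. The base URL, port, and model names are assumptions, not values taken from the project's documentation.

      # Sketch: calling Bifrost through the OpenAI Python SDK.
      # The base URL/port and the model identifiers are illustrative assumptions;
      # check the Bifrost docs for the real listen address and model naming scheme.
      from openai import OpenAI

      client = OpenAI(
          base_url="http://localhost:8080/v1",  # assumed local Bifrost endpoint
          api_key="gateway-managed",            # provider keys live in the gateway, not the client
      )

      # Only the model identifier differs between these two calls.
      for model in ("gpt-4o-mini", "claude-3-5-sonnet-20241022"):
          resp = client.chat.completions.create(
              model=model,
              messages=[{"role": "user", "content": "Say hello in one sentence."}],
          )
          print(model, "->", resp.choices[0].message.content)
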
  • 2
    Chat Nio

    Next Generation AI One-Stop Internationalization Solution

    Chat Nio is described as a next-generation, all-in-one AI platform that serves as an end-to-end solution for both B2B and B2C use cases. It supports dozens of underlying AI providers (OpenAI, Claude, Stable Diffusion, DALL·E, Midjourney, many Chinese models, and more), giving users flexibility in backend selection and switching. It offers a full stack: model management, channel/provider integration, a model marketplace, caching, subscription and billing support, dashboard analytics, and a web/admin UI. The platform supports model caching, so repeated or similar queries may be accelerated, and provides elastic billing and subscription mechanisms to monetize usage. The project, also known as CoAI, additionally supports “cloud sync,” allowing user settings or data to sync across deployments, and “preset” configurations to streamline the user experience.
    Downloads: 2 This Week
  • 3
    Kong

    The Cloud-Native API Gateway

    Kong is a next-generation cloud-native API platform for multi-cloud and hybrid organizations. When building for the web, mobile, or the Internet of Things, you’ll need common functionality to run your software, and Kong provides it. Kong acts as a gateway for microservice requests and APIs while also providing load balancing, logging, monitoring, authentication, rate limiting, and much more through plugins. Kong is highly extensible as well as platform agnostic, connecting APIs across different environments, platforms, and patterns. Achieve architectural freedom with Kong today.
    Downloads: 1 This Week
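
    As a hedged sketch of the plugin model, the snippet below uses Kong's Admin API to register an upstream service, expose it on a route, and attach a rate-limiting plugin. It assumes the Admin API is listening on its conventional local port (8001); the upstream URL, route path, and limit values are purely illustrative.

      # Sketch: configuring Kong through its Admin API (assumed at localhost:8001).
      # The upstream URL, route path, and rate limit below are illustrative values.
      import requests

      ADMIN = "http://localhost:8001"

      # Register an upstream service that Kong will proxy to.
      requests.post(f"{ADMIN}/services", json={
          "name": "llm-backend",
          "url": "http://example-llm-backend:9000",  # illustrative upstream
      }).raise_for_status()

      # Expose the service on a route.
      requests.post(f"{ADMIN}/services/llm-backend/routes", json={
          "name": "llm-route",
          "paths": ["/llm"],
      }).raise_for_status()

      # Attach a rate-limiting plugin to the service.
      requests.post(f"{ADMIN}/services/llm-backend/plugins", json={
          "name": "rate-limiting",
          "config": {"minute": 60},
      }).raise_for_status()
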
  • 4
    LLM Gateway

    Route, manage, and analyze your LLM requests across multiple providers

    LLM Gateway is open-source middleware that consolidates interactions with multiple LLM providers (such as OpenAI, Anthropic, and Google Vertex AI) behind a single, unified API compatible with OpenAI's spec. Designed for both self-hosted and cloud use, it enables developers to route requests dynamically, secure and manage API keys, monitor token usage and costs, and analyze performance metrics. With an optional UI, telemetry, and Docker deployment, it is well suited for teams aiming to centralize LLM orchestration and gain visibility into AI usage.
    Downloads: 1 This Week
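
    Because the gateway speaks OpenAI's API spec, a plain HTTP request is enough to exercise it. The sketch below assumes a self-hosted instance at an illustrative address with a bearer-token auth scheme; neither detail is taken from the project's documentation.

      # Sketch: a raw chat-completion request against an OpenAI-compatible endpoint.
      # The host/port and the bearer token are illustrative assumptions; consult the
      # LLM Gateway docs for the real listen address and authentication scheme.
      import requests

      resp = requests.post(
          "http://localhost:4000/v1/chat/completions",   # assumed self-hosted endpoint
          headers={"Authorization": "Bearer <gateway-api-key>"},
          json={
              "model": "gpt-4o-mini",                    # the gateway routes by model name
              "messages": [{"role": "user", "content": "One-line summary of HTTP/2?"}],
          },
          timeout=60,
      )
      resp.raise_for_status()
      data = resp.json()
      print(data["choices"][0]["message"]["content"])
      print("usage:", data.get("usage"))  # token counts the gateway can also attribute to cost
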
  • 5
    APIPark

    APIPark is the #1 open-source AI Gateway and Developer Portal

    APIPark is an open-source, all-in-one AI gateway and API developer portal that helps developers and enterprises easily manage, integrate, and deploy AI services. No matter which AI model you use, APIPark provides a one-stop integration solution: it unifies the management of authentication information, tracks the cost of API calls, and standardizes the request data format across all AI models. Because of this standardization, switching AI models or modifying prompts does not affect your app or microservices, which simplifies AI usage and reduces maintenance costs. You can also quickly combine AI models and prompts into new APIs; for example, using OpenAI GPT-4 and custom prompts, you can create sentiment analysis, translation, or data analysis APIs (see the sketch below). API lifecycle management standardizes how APIs are run, including traffic forwarding, load balancing, and versioning of publicly accessible APIs, which improves API quality and maintainability.
    Downloads: 0 This Week
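
    The sketch below shows what consuming such a published API might look like from a client, using the sentiment-analysis example above. Every identifier in it (host, path, header, payload and response fields) is hypothetical; the real shape is defined when the API is published through APIPark.

      # Hypothetical consumer of an API published through APIPark, e.g. a
      # sentiment-analysis endpoint built from GPT-4 plus a custom prompt.
      # Host, path, header, and payload/response fields are assumptions for
      # illustration only.
      import requests

      resp = requests.post(
          "http://apipark.example.internal/v1/sentiment",   # hypothetical published route
          headers={"Authorization": "Bearer <consumer-token>"},
          json={"text": "The new release fixed every bug I reported. Fantastic!"},
          timeout=30,
      )
      resp.raise_for_status()
      print(resp.json())  # e.g. {"label": "positive", "score": 0.97} in this sketch
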
  • 6
    LangDB AI Gateway

    Govern, secure, and optimize your AI traffic

    AI Gateway is a high-performance, open-source API gateway optimized for managing and monitoring LLM traffic at scale. Developed by the LangDB team, AI Gateway acts as an intermediary between clients and backend LLMs, providing advanced features like caching, rate limiting, prompt management, and observability. It helps teams secure and optimize their LLM deployments, whether using local models or external APIs like OpenAI or Anthropic. With native support for multi-tenant environments and low-latency inference routing, AI Gateway is an essential tool for companies building production-grade generative AI services.
    Downloads: 0 This Week
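
    To make the caching claim concrete, here is a sketch that sends the same prompt twice through the gateway and compares latency. It assumes the gateway exposes an OpenAI-compatible endpoint at an illustrative address and that response caching is enabled in the gateway's configuration.

      # Sketch: observing gateway-side caching by timing two identical requests.
      # Assumes (1) an OpenAI-compatible endpoint at the illustrative URL below and
      # (2) caching enabled in the gateway configuration.
      import time
      from openai import OpenAI

      client = OpenAI(
          base_url="http://localhost:9090/v1",  # illustrative gateway address
          api_key="<gateway-key>",
      )

      def timed_call() -> float:
          start = time.perf_counter()
          client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": "Define 'API gateway' in one sentence."}],
          )
          return time.perf_counter() - start

      cold, warm = timed_call(), timed_call()
      print(f"first call:  {cold:.2f}s")
      print(f"second call: {warm:.2f}s  (faster if the gateway answered from cache)")
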
  • 7
    MagicAPI AI Gateway

    Built for demanding AI workflows

    MagicAPI AI Gateway is an AI gateway proxy written in Rust and optimized for maximum performance; the project bills it as the world's fastest. It routes requests to various AI providers (OpenAI, Groq) with streaming support, making it well suited for developers who need reliable, low-latency access to AI APIs.
    Downloads: 0 This Week
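
    Since streaming support is the headline feature, the sketch below streams a chat completion through the proxy with the OpenAI Python SDK. The proxy address and the way a provider is selected are assumptions; the project defines its own listen address and routing convention.

      # Sketch: streaming a completion through an AI gateway proxy with the OpenAI SDK.
      # The proxy URL is an illustrative assumption; see the MagicAPI AI Gateway README
      # for its actual listen address and provider-routing convention.
      from openai import OpenAI

      client = OpenAI(
          base_url="http://localhost:3000/v1",  # assumed proxy endpoint
          api_key="<provider-or-proxy-key>",
      )

      stream = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user", "content": "Stream a haiku about gateways."}],
          stream=True,  # tokens arrive incrementally as the proxy relays provider chunks
      )
      for chunk in stream:
          if chunk.choices and chunk.choices[0].delta.content:
              print(chunk.choices[0].delta.content, end="", flush=True)
      print()
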
  • 8
    Portkey AI Gateway

    A blazing fast AI Gateway with integrated guardrails

    Portkey AI Gateway aims to offer a blazing fast, secure, and flexible gateway for interacting with a wide variety of models and enforcing guardrails. It presents a single, friendly API through which you can route to 200+ LLMs, while applying configurable input/output guardrails to enforce policies or restrict certain content. It supports automatic retries, fallbacks, load balancing across providers or keys, and request timeouts to avoid latency spikes. The gateway is multimodal: it can handle text, vision, audio, and image models under a common interface. It also offers governance features: role-based access, compliance with standards (SOC2, HIPAA, GDPR), secure key management, and logging/analytics of usage, latency, errors, and cost. The system integrates with agent frameworks like LangChain, Autogen, and others, making it easier to build more complex AI applications. It is lightweight and optimized for low latency with a small footprint.
    Downloads: 0 This Week
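
    A minimal routing sketch with the plain OpenAI SDK, assuming a locally run gateway on port 8787 and provider selection via an x-portkey-provider header; both should be verified against Portkey's documentation. Retries, fallbacks, load balancing, and guardrails are configured on the gateway side, so none of them appear in this client code.

      # Sketch: sending OpenAI-SDK traffic through a locally running Portkey gateway.
      # The port (8787) and the "x-portkey-provider" header are assumed defaults to
      # verify against the project docs; retries, fallbacks, and guardrails are
      # applied by the gateway rather than by this client.
      from openai import OpenAI

      client = OpenAI(
          base_url="http://localhost:8787/v1",               # assumed local gateway
          api_key="<openai-api-key>",                        # forwarded to the provider
          default_headers={"x-portkey-provider": "openai"},  # assumed provider selector
      )

      resp = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user", "content": "What does an AI gateway do?"}],
      )
      print(resp.choices[0].message.content)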