The fastest LLM gateway, with built-in OTel (OpenTelemetry) observability
Self-hosted, community-driven, local OpenAI-compatible API
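The point of an OpenAI-compatible API is that any standard OpenAI-style client can talk to it by swapping the base URL. A minimal sketch of that calling convention, using only the standard library; the `localhost:8080` address and the `llama-3` model name are placeholder assumptions, not details of any specific project:

```python
import json
from urllib import request

# Hypothetical local server; any OpenAI-compatible API exposes the
# same /v1/chat/completions route under its own base URL.
BASE_URL = "http://localhost:8080/v1"

def chat_request(model: str, prompt: str) -> request.Request:
    """Build a standard OpenAI-style chat completion POST request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("llama-3", "Hello!")
print(req.full_url)  # http://localhost:8080/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` (once a compatible server is actually running) returns the familiar OpenAI response shape, which is why existing OpenAI SDKs work against such servers unchanged.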
Private OpenAI on Kubernetes
Unofficial Go (Golang) bindings for the Hugging Face Inference API
Enterprise-grade API gateway that helps you monitor and impose cost limits
The Cloud-Native API Gateway and AI Gateway
A Go implementation of the Model Context Protocol (MCP)
Fast backend for long-term AI user memory via structured profiles
Convert an API into an MCP server in seconds
Aqueduct allows you to run LLM and ML workloads on any infrastructure