OpenAI-style API for open large language models
A list of free LLM inference resources accessible via API
Route, manage, and analyze your LLM requests across multiple providers
Seamlessly integrate LLMs into scikit-learn
The unofficial Python package that returns responses from Google Bard
OpenAI API client for Kotlin with multiplatform capabilities
The fastest LLM gateway with built-in OTel observability
Run Local LLMs on Any Device. Open-source
Self-hosted, community-driven, local OpenAI-compatible API (see the usage sketch after this list)
Evaluate and compare LLM outputs, catch regressions, improve prompts
Adding guardrails to large language models
The easiest and laziest way to build multi-agent LLM applications
An optimizing inference proxy for LLMs
LLM Frontend for Power Users
Operating LLMs in production
Replace OpenAI GPT with another LLM in your app
Lightweight package to simplify LLM API calls
Low-latency REST API for serving text-embeddings
LLM
Govern, secure, and optimize your AI traffic
Private OpenAI on Kubernetes
Semantic cache for LLMs. Fully integrated with LangChain
Distribute and run LLMs with a single file
Unofficial Go (Golang) bindings for the Hugging Face Inference API
Built for demanding AI workflows
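Several of the entries above expose OpenAI-compatible endpoints (the OpenAI-style API server, the self-hosted local OpenAI-compatible API, the private OpenAI deployment on Kubernetes). As a minimal sketch, assuming such a server is reachable at http://localhost:8080/v1 and serves a model registered as "local-model" (both are placeholder values, not taken from any specific project listed here), the official openai Python client (v1.x) can be pointed at it by overriding base_url:

```python
# Minimal sketch: reuse the standard openai client against a self-hosted,
# OpenAI-compatible endpoint. Base URL, API key, and model name are
# placeholder assumptions, not values documented by any project above.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local endpoint
    api_key="not-needed-locally",         # many local servers ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # hypothetical model name exposed by the server
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

The same pattern applies to the gateway and proxy entries: because they speak the OpenAI wire format, swapping providers is usually just a matter of changing base_url and the model name.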