OpenAI-style API for open large language models
Run LLMs locally on Cloud Workstations
A list of free LLM inference resources accessible via API
Frameworks for language model reinforcement learning environments
Accelerate local LLM inference and fine-tuning
Enhances Tesseract OCR output using LLMs (local or API)
A high-throughput and memory-efficient inference and serving engine
FlashInfer: Kernel Library for LLM Serving
Run local LLMs on any device; open-source
Add guardrails to large language models
Simple, Pythonic building blocks to evaluate LLM applications
Operating LLMs in production
Supercharge Your LLM Application Evaluations
Lightweight package to simplify LLM API calls
A Simple and Universal Swarm Intelligence Engine
Replace OpenAI GPT with another LLM in your app
Evaluation and Tracking for LLM Experiments
An optimizing inference proxy for LLMs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon)
The easiest and laziest way to build multi-agent LLM applications
Framework to easily create LLM powered bots over any dataset
OpenLIT is an open-source LLM observability tool
Uncover insights, surface problems, monitor, and fine-tune your LLM