Run local LLMs on any device, open source
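For the entry above (this is GPT4All's tagline), a minimal local-generation sketch, assuming the gpt4all Python bindings and a placeholder model file that is downloaded on first use:

```python
from gpt4all import GPT4All

# Placeholder model file; GPT4All fetches it on first run.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

# chat_session keeps conversational context between generate() calls.
with model.chat_session():
    print(model.generate("Name three uses of a local LLM.", max_tokens=128))
```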
An easy-to-use LLM quantization package with user-friendly APIs
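Assuming this entry refers to AutoGPTQ, a minimal quantization sketch along the lines of its README; the model id and the single calibration sentence are placeholders:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "facebook/opt-125m"  # placeholder; any supported causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A real run would use a proper calibration set, not one sentence.
examples = [tokenizer("Quantization trades a little accuracy for a lot of memory.")]

config = BaseQuantizeConfig(bits=4, group_size=128)  # 4-bit weights, group size 128
model = AutoGPTQForCausalLM.from_pretrained(model_id, config)
model.quantize(examples)
model.save_quantized("opt-125m-4bit-gptq")
```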
Large Language Model Text Generation Inference
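This is the tagline of Hugging Face's text-generation-inference (TGI). A client-side sketch, assuming a TGI server is already running locally and using the companion text_generation client package:

```python
from text_generation import Client

# Assumes text-generation-inference is serving a model at this address.
client = Client("http://127.0.0.1:8080")

response = client.generate("What is deep learning?", max_new_tokens=32)
print(response.generated_text)
```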
A high-throughput and memory-efficient inference and serving engine
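Assuming this entry refers to vLLM (whose tagline this is), a minimal offline-inference sketch with a placeholder model name:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder; any supported HF model
params = SamplingParams(temperature=0.8, max_tokens=64)

# generate() batches prompts and returns one RequestOutput per prompt.
for output in llm.generate(["The capital of France is"], params):
    print(output.outputs[0].text)
```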
An OpenAI-style API for open large language models
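Servers like this expose OpenAI-compatible endpoints, so the stock openai client can talk to them; the base_url, api_key, and model id below are placeholders for whatever the local server exposes:

```python
from openai import OpenAI

# Point the official client at a local, OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # placeholder id reported by the server
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```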
Sparsity-aware deep learning inference runtime for CPUs
Phi-3.5 for Mac: Locally-run Vision and Language Models
Visual Instruction Tuning: Large Language-and-Vision Assistant
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
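A minimal LMDeploy sketch using its high-level pipeline API; the model id is a placeholder pulled from the Hugging Face Hub:

```python
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2-chat-7b")  # placeholder model id

# The pipeline accepts a batch of prompts and returns one response each.
responses = pipe(["Explain weight quantization in one sentence."])
print(responses[0].text)
```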
Ready-to-use OCR with 80+ supported languages
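This is EasyOCR's tagline; a minimal sketch, assuming an image file sign.jpg on disk:

```python
import easyocr

# One reader per language set; pass more codes to mix scripts.
reader = easyocr.Reader(["en"])

# readtext() returns (bounding_box, text, confidence) triples.
for box, text, confidence in reader.readtext("sign.jpg"):
    print(f"{confidence:.2f}  {text}")
```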
State-of-the-art Parameter-Efficient Fine-Tuning
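This is the tagline of Hugging Face's PEFT library. A minimal LoRA sketch; the base model is a placeholder, and target modules are inferred automatically for known architectures (e.g., the q_proj/v_proj attention projections):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # placeholder

# Low-rank adapters: rank 8, scaled by lora_alpha, with light dropout.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable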
Operating LLMs in production
Replace OpenAI GPT with another LLM in your app
Open platform for training, serving, and evaluating language models
A high-performance ML model serving framework that offers dynamic batching
Efficient few-shot learning with Sentence Transformers
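This entry matches SetFit's tagline. A few-shot sketch with a toy dataset (real runs use a handful of labeled examples per class); SetFitTrainer is the classic API, and newer releases also ship a Trainer/TrainingArguments pair:

```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Toy few-shot dataset: 1 = positive, 0 = negative.
train_ds = Dataset.from_dict({
    "text": ["great movie", "terrible plot", "loved it", "waste of time"],
    "label": [1, 0, 1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()

print(model.predict(["a delightful film", "utterly boring"]))
```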
PyTorch library of curated Transformer models and their components
Neural Network Compression Framework for enhanced OpenVINO inference
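A post-training quantization sketch with NNCF's nncf.quantize API; the torchvision model and the random calibration tensors are stand-ins for a real trained model and dataset:

```python
import torch
import nncf
from torchvision import models

model = models.resnet18(weights=None)  # stand-in for a real trained model

# Random tensors as stand-in calibration data; use real samples in practice.
calibration = nncf.Dataset([torch.randn(1, 3, 224, 224) for _ in range(10)])

quantized = nncf.quantize(model, calibration)  # post-training INT8 quantization
```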
Libraries for applying sparsification recipes to neural networks
A Unified Library for Parameter-Efficient Learning
Bring the notion of Model-as-a-Service to life
LLM training code for MosaicML foundation models
Low-latency REST API for serving text embeddings
FlashInfer: Kernel Library for LLM Serving
An unofficial Python package that returns responses from Google Bard