Run local LLMs on any device; open-source
Operating LLMs in production
A high-throughput and memory-efficient inference and serving engine for LLMs (offline-generation sketch after this list)
Visual Instruction Tuning: Large Language-and-Vision Assistant
OpenAI-style API for open large language models
Large Language Model Text Generation Inference (HTTP client sketch after this list)
An easy-to-use LLM quantization package with user-friendly APIs
Sparsity-aware deep learning inference runtime for CPUs
Phi-3.5 for Mac: Locally-run Vision and Language Models
Ready-to-use OCR with 80+ supported languages (usage sketch after this list)
Database system for building simpler and faster AI-powered applications
Libraries for applying sparsification recipes to neural networks
A high-performance ML model serving framework that offers dynamic batching
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Neural Network Compression Framework for enhanced OpenVINO inference
Replace OpenAI GPT with another LLM in your app
An unofficial Python package that returns responses from Google Bard
State-of-the-art Parameter-Efficient Fine-Tuning (LoRA sketch after this list)
Efficient few-shot learning with Sentence Transformers (training sketch after this list)
A Unified Library for Parameter-Efficient Learning
Open platform for training, serving, and evaluating language models
A framework dedicated to making neural data processing pipelines simple and fast
FlashInfer: Kernel Library for LLM Serving
Bring the notion of Model-as-a-Service to life
DoWhy is a Python library for causal inference (worked example after this list)
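
The high-throughput inference and serving engine above matches vLLM's tagline; assuming vLLM, a minimal sketch of offline batched generation — the model id is an arbitrary small placeholder:

```python
# Offline batched generation, assuming the vLLM LLM/SamplingParams API.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")          # placeholder small model
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Explain dynamic batching in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)                # first completion per prompt
```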
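The Text Generation Inference entry refers to Hugging Face's TGI server; assuming a server is already running locally (e.g. via its Docker image), a client sketch against its `/generate` route — host, port, and prompt are placeholders:

```python
# Plain HTTP client for a locally running TGI server (non-streaming route).
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "What is speculative decoding?",
          "parameters": {"max_new_tokens": 64}},
    timeout=60,
)
print(resp.json()["generated_text"])          # single generated completion
```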
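The ready-to-use OCR entry matches EasyOCR's tagline; assuming EasyOCR, a usage sketch — the image path is a placeholder, and the first call downloads detection/recognition weights for the requested languages:

```python
# Read text from an image with EasyOCR; results are (bbox, text, confidence).
import easyocr

reader = easyocr.Reader(["en"])               # language list to load
for bbox, text, confidence in reader.readtext("receipt.png"):
    print(f"{text} ({confidence:.2f})")
```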
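The parameter-efficient fine-tuning entry matches Hugging Face PEFT's tagline; assuming PEFT, a LoRA sketch — the base model and target modules are illustrative choices (OPT exposes q_proj/v_proj attention projections):

```python
# Wrap a base causal LM with LoRA adapters so only adapter weights train.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
lora = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora)
model.print_trainable_parameters()            # tiny fraction of base params
```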
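The few-shot learning entry matches SetFit's tagline; a training sketch assuming the classic `SetFitTrainer` API (newer releases expose a `Trainer` class instead, so treat this as version-dependent):

```python
# Few-shot text classification: fine-tune a Sentence Transformer with SetFit.
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

train_ds = Dataset.from_dict({
    "text": ["great product", "awful support", "love it", "never again"],
    "label": [1, 0, 1, 0],
})
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-MiniLM-L3-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()
print(model(["would buy again"]))             # predicted label(s)
```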
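DoWhy names itself in its entry; its documented four-step flow (model, identify, estimate, refute) is shown here on the library's built-in synthetic dataset, so the example is self-contained:

```python
# Estimate a known causal effect (beta=10) and sanity-check it with a refuter.
from dowhy import CausalModel
import dowhy.datasets

data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_samples=1000, treatment_is_binary=True)
model = CausalModel(data=data["df"], treatment=data["treatment_name"],
                    outcome=data["outcome_name"], graph=data["gml_graph"])
estimand = model.identify_effect()
estimate = model.estimate_effect(estimand,
                                 method_name="backdoor.linear_regression")
print(estimate.value)                         # should be close to beta=10
refutation = model.refute_estimate(estimand, estimate,
                                   method_name="random_common_cause")
print(refutation)
```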