Run Local LLMs on Any Device. Open-source and available for commercial use
Port of Facebook's LLaMA model in C/C++
A high-throughput and memory-efficient inference and serving engine for LLMs (see the inference sketch after this list)
Sparsity-aware deep learning inference runtime for CPUs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Operating LLMs in production
An optimizing inference proxy for LLMs
Bring the notion of Model-as-a-Service to life
Large Language Model Text Generation Inference
C++ implementation of ChatGLM-6B, ChatGLM2-6B, ChatGLM3, and GLM4(V)
Visual Instruction Tuning: Large Language-and-Vision Assistant
Database system for building simpler and faster AI-powered applications
A general-purpose probabilistic programming system
An easy-to-use LLM quantization package with user-friendly APIs
Phi-3.5 for Mac: Locally Run Vision and Language Models
OpenAI-style API for open large language models
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
Libraries for applying sparsification recipes to neural networks
State-of-the-art Parameter-Efficient Fine-Tuning (see the fine-tuning sketch after this list)
DoWhy is a Python library for causal inference
Replace OpenAI GPT with another LLM in your app
An unofficial Python package that returns responses from Google Bard
Neural Network Compression Framework for enhanced OpenVINO inference
Efficient few-shot learning with Sentence Transformers
A Unified Library for Parameter-Efficient Learning
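The "high-throughput and memory-efficient inference and serving engine" entry above is vLLM's tagline. As a hedged illustration, here is a minimal offline batch-inference sketch assuming vLLM's Python API; the model name and sampling values are illustrative, not taken from the list:

```python
# A minimal sketch of offline batch inference with vLLM
# (assumes `vllm` is installed; model name is illustrative).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small model chosen for demonstration
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched generation: vLLM schedules all prompts together for throughput.
outputs = llm.generate(["What is KV caching?", "Explain batching."], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```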
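Similarly, the "State-of-the-art Parameter-Efficient Fine-Tuning" entry matches Hugging Face's PEFT library. A minimal LoRA setup sketch, assuming `peft` and `transformers` are installed; the base model and hyperparameters are illustrative:

```python
# A minimal sketch of LoRA-based parameter-efficient fine-tuning with PEFT
# (model name and hyperparameters are illustrative assumptions).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```

The appeal of this approach, as the taglines above suggest, is that only a small fraction of the weights is trained, so the full base model stays frozen and memory costs drop accordingly.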