Run local LLMs on any device; open source (usage sketch below)
Port of Facebook's LLaMA model in C/C++
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
A high-throughput and memory-efficient inference and serving engine for LLMs (offline inference sketch below)
Sparsity-aware deep learning inference runtime for CPUs
Phi-3.5 for Mac: Locally-run Vision and Language Models
FlashInfer: Kernel Library for LLM Serving
Operating LLMs in production
Large Language Model Text Generation Inference (client sketch below)
PyTorch library of curated Transformer models and their components
Run 100B+ language models at home, BitTorrent-style (swarm sketch below)
DoWhy is a Python library for causal inference (worked example below)
Database system for building simpler and faster AI-powered applications
State-of-the-art Parameter-Efficient Fine-Tuning (LoRA sketch below)
The easiest and laziest way to build multi-agent LLM applications
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
A high-performance ML model serving framework that offers dynamic batching
An easy-to-use LLM quantization package with user-friendly APIs (quantization sketch below)
LLMFlows - Simple, Explicit and Transparent LLM Apps
The unofficial Python package that returns the response of Google Bard
Libraries for applying sparsification recipes to neural networks
LLM training code for MosaicML foundation models
20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale
Replace OpenAI GPT with another LLM in your app
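
A few hedged usage sketches for some of the entries above follow. Model names, URLs, and paths are illustrative assumptions, not prescriptions from the projects' docs. First, local generation with the GPT4All Python bindings: the model file is downloaded on first use, and the filename shown is just one model from the catalog.

```python
from gpt4all import GPT4All

# The model file is fetched on first use; this name is one example from
# the GPT4All model catalog.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
with model.chat_session():
    print(model.generate("Why is the sky blue?", max_tokens=128))
```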
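
Offline batched inference with vLLM, the high-throughput serving engine listed above; the OPT-125M checkpoint is chosen only because it is small.

```python
from vllm import LLM, SamplingParams

# Any Hugging Face causal LM works; a tiny model keeps the demo cheap.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
for output in llm.generate(["The capital of France is"], params):
    print(output.outputs[0].text)
```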
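
Querying a running Text Generation Inference server through huggingface_hub's InferenceClient; the URL assumes you started the TGI container yourself and mapped it to port 8080.

```python
from huggingface_hub import InferenceClient

# Assumes a TGI server is already running and reachable at this address.
client = InferenceClient("http://localhost:8080")
print(client.text_generation("What is deep learning?", max_new_tokens=50))
```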
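
Distributed inference over the public swarm with Petals; this mirrors the shape of the project's quickstart, and generation is slower than local inference because transformer blocks are served by volunteers.

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Model layers are fetched from volunteer servers over the network
# ("BitTorrent-style"), so generation requires connectivity to the swarm.
model_name = "petals-team/StableBeluga2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```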
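
A minimal DoWhy workflow on synthetic data with a known effect: model the causal graph, identify the estimand, then estimate it and check the answer against the ground truth built into the dataset generator.

```python
import dowhy.datasets
from dowhy import CausalModel

# Synthetic data with a known true causal effect (beta=10).
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_samples=5000, treatment_is_binary=True
)
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)
estimand = model.identify_effect()
estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_matching"
)
print(estimate.value)  # should land near the true effect of 10
```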
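
Attaching a LoRA adapter with PEFT; GPT-2 and its fused `c_attn` projection are used purely because they make a small, self-contained example.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small model for illustration
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused query/key/value projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank adapter weights train
```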
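
4-bit quantization with AutoGPTQ, following the shape of its quickstart; the checkpoint and the single calibration sentence are placeholders (real calibration uses more data).

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name = "facebook/opt-125m"  # small model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)

# GPTQ calibrates on sample inputs; one sentence is enough for a demo only.
examples = [tokenizer("The quick brown fox jumps over the lazy dog.")]

quantize_config = BaseQuantizeConfig(bits=4, group_size=128)
model = AutoGPTQForCausalLM.from_pretrained(model_name, quantize_config)
model.quantize(examples)
model.save_quantized("opt-125m-4bit-gptq")  # hypothetical output directory
```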