Run local LLMs on any device; open-source
Port of Facebook's LLaMA model in C/C++
A high-throughput and memory-efficient inference and serving engine
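This description matches vLLM's tagline; a minimal offline-inference sketch of that style of API is below. The model name, prompt, and sampling values are illustrative placeholders.

```python
# Minimal vLLM-style offline inference; model and sampling values are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small model, for demonstration only
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)  # first completion for the first prompt
```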
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
FlashInfer: Kernel Library for LLM Serving
An easy-to-use LLM quantization package with user-friendly APIs
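This line matches AutoGPTQ's tagline. A hedged sketch of its quantize-and-save flow follows; the model name and single calibration sentence are placeholders (real calibration uses a larger sample set).

```python
# GPTQ-style 4-bit quantization, assuming an AutoGPTQ-like API.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name = "facebook/opt-125m"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
examples = [tokenizer("Calibration text: a few representative sentences go here.")]

config = BaseQuantizeConfig(bits=4, group_size=128)  # 4-bit weights, 128-column groups
model = AutoGPTQForCausalLM.from_pretrained(model_name, config)
model.quantize(examples)  # run GPTQ calibration
model.save_quantized("opt-125m-4bit-gptq")
```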
Efficient few-shot learning with Sentence Transformers
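That is SetFit's tagline; the sketch below uses its older SetFitTrainer API on a toy four-example dataset (the checkpoint and data are placeholders).

```python
# Few-shot text classification with SetFit; tiny dataset is illustrative only.
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

train_ds = Dataset.from_dict({
    "text": ["great movie", "terrible plot", "loved it", "boring and slow"],
    "label": [1, 0, 1, 0],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L6-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()  # contrastive fine-tuning of the encoder, then head fitting
print(model.predict(["an instant classic", "a waste of time"]))
```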
The easiest and laziest way to build multi-agent LLM applications
Operating LLMs in production
LLM training code for MosaicML foundation models
DoWhy is a Python library for causal inference
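DoWhy's four-step workflow (model, identify, estimate, refute) fits in a few lines; the sketch below runs it on one of the library's built-in synthetic datasets.

```python
# DoWhy's model -> identify -> estimate -> refute pipeline on synthetic data.
from dowhy import CausalModel
import dowhy.datasets

data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_samples=1000, treatment_is_binary=True
)
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)
estimand = model.identify_effect()
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print(estimate.value)  # should be close to the true effect (beta = 10)
print(model.refute_estimate(estimand, estimate, method_name="random_common_cause"))
```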
A general-purpose probabilistic programming system
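Several systems carry a description like this (Stan and Gen among them). As one concrete illustration in Python, here is the standard Bernoulli coin-flip toy model driven through Stan's CmdStanPy front end; the model and data are illustrative.

```python
# A Bernoulli toy model in Stan, compiled and sampled via CmdStanPy.
from cmdstanpy import CmdStanModel

stan_code = """
data { int<lower=0> N; array[N] int<lower=0, upper=1> y; }
parameters { real<lower=0, upper=1> theta; }
model { theta ~ beta(1, 1); y ~ bernoulli(theta); }
"""
with open("bernoulli.stan", "w") as f:
    f.write(stan_code)

model = CmdStanModel(stan_file="bernoulli.stan")  # compiles the Stan program
fit = model.sample(data={"N": 10, "y": [0, 1, 0, 0, 0, 0, 0, 0, 0, 1]})
print(fit.summary())  # posterior summary for theta
```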
Phi-3.5 for Mac: Locally-run Vision and Language Models
Neural Network Compression Framework for enhanced OpenVINO inference
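This matches NNCF's tagline. A hedged post-training-quantization sketch follows, assuming the nncf.quantize API; the ResNet model and random calibration tensors stand in for a real network and validation set.

```python
# 8-bit post-training quantization with NNCF; model and data are placeholders.
import torch
import torchvision
import nncf

model = torchvision.models.resnet18(weights=None).eval()
calibration_items = [torch.randn(1, 3, 224, 224) for _ in range(10)]  # stand-in data
calibration_dataset = nncf.Dataset(calibration_items)

quantized_model = nncf.quantize(model, calibration_dataset)
# The result can then be exported for OpenVINO inference.
```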
Large Language Model Text Generation Inference
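This is Hugging Face's Text Generation Inference (TGI). Client-side it exposes a simple REST endpoint; the sketch below assumes a server is already running on localhost.

```python
# Query a running TGI server's /generate endpoint; the URL is an assumption.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is deep learning?",
        "parameters": {"max_new_tokens": 64, "temperature": 0.7},
    },
)
print(resp.json()["generated_text"])
```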
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
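This matches LoRAX's tagline. Its request format is TGI-compatible, with one addition: the LoRA adapter to apply is chosen per request. A hedged sketch, with placeholder URL and adapter ID:

```python
# Per-request LoRA adapter routing, assuming a LoRAX-style REST API.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "Summarize: many LoRA adapters can share one base model.",
        "parameters": {"max_new_tokens": 64, "adapter_id": "some-org/my-finetune"},
    },
)
print(resp.json()["generated_text"])
```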
PyTorch library of curated Transformer models and their components
The unofficial Python package that returns responses from Google Bard
Sparsity-aware deep learning inference runtime for CPUs
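This matches DeepSparse's tagline. A hedged sketch of its Pipeline API follows; the model path is a placeholder for a sparsified ONNX file or SparseZoo stub.

```python
# CPU inference through DeepSparse's Pipeline API; model path is a placeholder.
from deepsparse import Pipeline

pipeline = Pipeline.create(
    task="sentiment-analysis",
    model_path="./sparse-model.onnx",  # placeholder: local ONNX or a SparseZoo stub
)
print(pipeline(sequences=["Sparse inference on CPUs can be surprisingly fast."]))
```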
Database system for building simpler and faster AI-powered applications
A high-performance ML model serving framework that offers dynamic batching
A framework dedicated to neural data processing
Tensor search for humans
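That is Marqo's tagline. End to end, indexing and searching look like the sketch below; it assumes the Python client with a local server on Marqo's default port, and the index name and documents are placeholders.

```python
# Index two documents and run a semantic search, assuming the Marqo client.
import marqo

mq = marqo.Client(url="http://localhost:8882")  # default local endpoint
mq.create_index("movies")
mq.index("movies").add_documents(
    [
        {"Title": "The Matrix", "Description": "A hacker discovers reality is simulated."},
        {"Title": "Inception", "Description": "Thieves plant ideas inside dreams."},
    ],
    tensor_fields=["Description"],  # fields embedded for vector search
)
results = mq.index("movies").search("dream heist movie")
print(results["hits"][0]["Title"])
```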
Replace OpenAI GPT with another LLM in your app
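The usual drop-in pattern behind taglines like this one: keep the OpenAI client, repoint its base URL at a local OpenAI-compatible server. The base URL, key, and model name below are placeholders.

```python
# Swap OpenAI for a local model by changing the client's base URL.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="my-local-model",  # placeholder: whatever the local server hosts
    messages=[{"role": "user", "content": "Say hello from a local LLM."}],
)
print(resp.choices[0].message.content)
```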
Open platform for training, serving, and evaluating language models