Run local LLMs on any device; open source
Port of Facebook's LLaMA model in C/C++
Ready-to-use OCR with 80+ supported languages
A high-throughput and memory-efficient inference and serving engine
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Library for OCR-related tasks powered by Deep Learning
An easy-to-use LLM quantization package with user-friendly APIs
Efficient few-shot learning with Sentence Transformers
The easiest and laziest way to build multi-agent LLM applications
Phi-3.5 for Mac: Locally-run Vision and Language Models
Operating LLMs in production
DoWhy is a Python library for causal inference
Neural Network Compression Framework for enhanced OpenVINO inference
Large Language Model Text Generation Inference
LLM training code for MosaicML foundation models
A general-purpose probabilistic programming system
FlashInfer: Kernel Library for LLM Serving
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
PyTorch library of curated Transformer models and their components
An unofficial Python package that returns responses from Google Bard
Database system for building simpler and faster AI-powered applications
A high-performance ML model serving framework that offers dynamic batching
Framework dedicated to making neural data processing pipelines simple and fast
Replace OpenAI GPT with another LLM in your app