Run local LLMs on any device; open-source
Port of Facebook's LLaMA model in C/C++
Operating LLMs in production
A high-throughput and memory-efficient inference and serving engine
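If the engine meant here is vLLM (whose tagline this matches), a minimal offline-generation sketch using its `LLM`/`SamplingParams` API might look like the following; the model name is illustrative.

```python
from vllm import LLM, SamplingParams

# Any Hugging Face causal LM supported by vLLM works here; opt-125m is just small.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched generation: one RequestOutput per prompt.
for out in llm.generate(["The capital of France is"], params):
    print(out.outputs[0].text)
```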
An easy-to-use LLM quantization package with user-friendly APIs
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
Large Language Model Text Generation Inference
Phi-3.5 for Mac: Locally-run Vision and Language Models
Sparsity-aware deep learning inference runtime for CPUs
Visual Instruction Tuning: Large Language-and-Vision Assistant
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Ready-to-use OCR with 80+ supported languages
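Assuming this entry refers to EasyOCR (whose tagline this matches), a minimal sketch of reading text from an image; the file path is illustrative.

```python
import easyocr

# Downloads the detection/recognition models on first use.
reader = easyocr.Reader(["en"])

# readtext returns (bounding_box, text, confidence) tuples.
for bbox, text, confidence in reader.readtext("receipt.png"):
    print(f"{confidence:.2f}  {text}")
```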
OpenAI-style API for open large language models
Database system for building simpler and faster AI-powered applications
Replace OpenAI GPT with another LLM in your app
Neural Network Compression Framework for enhanced OpenVINO inference
The unofficial Python package that returns the response of Google Bard
Framework dedicated to neural data processing
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Efficient few-shot learning with Sentence Transformers
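If this entry is SetFit (whose tagline this matches), inference with a trained checkpoint is a one-liner; the model id below is hypothetical, and the training API has varied across versions.

```python
from setfit import SetFitModel

# Hypothetical Hub id; substitute any trained SetFit checkpoint.
model = SetFitModel.from_pretrained("your-org/setfit-sst2-example")

# predict runs the sentence-transformer body plus the classification head.
preds = model.predict([
    "i loved the spiderman movie!",
    "pineapple on pizza is the worst",
])
print(preds)
```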
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Libraries for applying sparsification recipes to neural networks
PyTorch library of curated Transformer models and their components
DoWhy is a Python library for causal inference
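A minimal sketch of DoWhy's identify-then-estimate workflow, following its documented quickstart pattern on a simulated dataset with a known effect.

```python
import dowhy.datasets
from dowhy import CausalModel

# Simulated data where the true treatment effect is beta=10.
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_samples=5000, treatment_is_binary=True
)

model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)

# Identify the causal estimand from the graph, then estimate it.
estimand = model.identify_effect()
estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_matching"
)
print(estimate.value)  # should be close to 10
```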
Open platform for training, serving, and evaluating language models