Port of Facebook's LLaMA model in C/C++
Run local LLMs on any device. Open-source
TT-NN operator library and TT-Metalium low-level kernel programming
Emscripten: An LLVM-to-WebAssembly Compiler
Distribute and run LLMs with a single file
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
Production ready toolkit to run AI locally
Fast Multimodal LLM on Mobile Devices
An Easy-to-Use and High-Performance AI Deployment Framework
High-speed Large Language Model Serving for Local Deployment
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
A @ClickHouse fork that supports high-performance vector search
Alibaba's high-performance LLM inference engine for diverse apps
UCCL is an efficient communication library for GPUs
Mooncake is the serving platform for Kimi
Locally run an instruction-tuned chat-style LLM
Implements a reference architecture for creating information systems