CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
Run local LLMs on any device; open-source
Port of Facebook's LLaMA model in C/C++
Distribute and run LLMs with a single file
Emscripten: An LLVM-to-WebAssembly Compiler
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
Production ready toolkit to run AI locally
An Easy-to-Use and High-Performance AI Deployment Framework
TT-NN operator library and TT-Metalium low-level kernel programming model
High-speed Large Language Model Serving for Local Deployment
Alibaba's high-performance LLM inference engine for diverse apps
A @ClickHouse fork that supports high-performance vector search
Fast Multimodal LLM on Mobile Devices
Mooncake is the serving platform for Kimi
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
UCCL is an efficient communication library for GPUs
Locally run an Instruction-Tuned Chat-Style LLM
Implements a reference architecture for creating information systems