Fast inference engine for Transformer models
Pure C++ implementation of several models for real-time chat
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Port of OpenAI's Whisper model in C/C++ (see the usage sketch after this list)
A GPU-accelerated library containing highly optimized building blocks
MNN is a blazing fast, lightweight deep learning framework (see the C++ sketch after this list)
Easy-to-use deep learning framework with 3 key features
C++ implementation of ChatGLM-6B, ChatGLM2-6B, ChatGLM3, and GLM-4(V)
High-performance neural network inference framework optimized for mobile platforms
OpenVINO™ Toolkit for optimizing and deploying AI inference (see the C++ sketch after this list)
Deep learning API and server in C++14 with support for Caffe and PyTorch
C++ library for high-performance inference on NVIDIA GPUs (see the sketch after this list)
Lightweight, standalone C++ inference engine for Google's Gemma models
Deep learning inference framework optimized for mobile platforms
Fast and user-friendly runtime for transformer inference
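
The Whisper port above is, going by its description, whisper.cpp; assuming so, here is a minimal transcription sketch against its C API. The model path is a placeholder, and a one-second silent PCM buffer stands in for real 16 kHz mono audio:

```cpp
#include "whisper.h"
#include <cstdio>
#include <vector>

int main() {
    // Load a ggml-format Whisper model (path is a placeholder).
    struct whisper_context* ctx =
        whisper_init_from_file_with_params("models/ggml-base.en.bin",
                                           whisper_context_default_params());
    if (!ctx) return 1;

    // whisper.cpp expects 16 kHz mono float PCM; one second of silence as a stand-in.
    std::vector<float> pcm(16000, 0.0f);

    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    if (whisper_full(ctx, params, pcm.data(), (int)pcm.size()) != 0) return 1;

    // Print the transcribed segments.
    for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
        printf("%s\n", whisper_full_get_segment_text(ctx, i));
    }

    whisper_free(ctx);
}
```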
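A minimal sketch of MNN's C++ session API, assuming a converted `model.mnn` file (a placeholder name) with a single CPU-resident f32 input:

```cpp
#include <MNN/Interpreter.hpp>
#include <algorithm>
#include <iostream>
#include <memory>

int main() {
    // Load a converted .mnn model (path is a placeholder).
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("model.mnn"));

    MNN::ScheduleConfig config;  // defaults to the CPU backend
    MNN::Session* session = net->createSession(config);

    // First input tensor; zero-filled stand-in data (assumes an f32 CPU tensor).
    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    std::fill_n(input->host<float>(), input->elementSize(), 0.0f);

    net->runSession(session);

    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    std::cout << "output elements: " << output->elementSize() << "\n";
}
```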
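A minimal sketch of the OpenVINO 2.0 C++ API, assuming a single-input, single-output f32 model in IR format (`model.xml` is a placeholder):

```cpp
#include <openvino/openvino.hpp>
#include <algorithm>
#include <iostream>

int main() {
    ov::Core core;

    // Load a model in OpenVINO IR format (the path is a placeholder).
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");

    // Compile for the CPU plugin; other device strings ("GPU", "AUTO") also work.
    ov::CompiledModel compiled = core.compile_model(model, "CPU");

    // One inference request: fill the input tensor, run, read the output.
    ov::InferRequest request = compiled.create_infer_request();
    ov::Tensor input = request.get_input_tensor();
    std::fill_n(input.data<float>(), input.get_size(), 0.0f);  // dummy f32 input
    request.infer();
    ov::Tensor output = request.get_output_tensor();

    std::cout << "output elements: " << output.get_size() << "\n";
}
```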
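The NVIDIA GPU entry matches TensorRT's description; assuming so, a minimal sketch of loading a prebuilt serialized engine with its C++ runtime (`model.engine` is a placeholder; device buffer setup and the `enqueueV3` launch are omitted):

```cpp
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// TensorRT requires a logger implementation; print warnings and errors only.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << "\n";
    }
};

int main() {
    Logger logger;

    // Read an engine serialized offline (e.g. by trtexec); the path is a placeholder.
    std::ifstream file("model.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // Real inference would bind device buffers and call context->enqueueV3(stream);
    // that setup is out of scope for this sketch.

    delete context;
    delete engine;
    delete runtime;
}
```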