A library for accelerating Transformer models on NVIDIA GPUs
LM Studio Apple MLX engine
A real-time inference engine for temporal logic specifications
High-performance reactive message-passing based Bayesian engine
A high-throughput and memory-efficient inference and serving engine
Jlama is a modern LLM inference engine for Java
950-line, minimal, extensible LLM inference engine built from scratch
Alibaba's high-performance LLM inference engine for diverse apps
A high-performance inference engine for AI models
lightweight, standalone C++ inference engine for Google's Gemma models
High-performance inference framework for large language models
A lightweight vLLM implementation built from scratch
Code for running inference and finetuning with the SAM 3 model
Mooncake is the serving platform for Kimi
Low-latency AI inference engine optimized for mobile devices
RGBD video generation model conditioned on camera input
Pruna is a model optimization framework built for developers
Offline inference engine for art generation and real-time voice conversations
Fast, flexible LLM inference
Inference Llama 2 in one file of pure C
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
Fast Multimodal LLM on Mobile Devices
Parallax is a distributed model serving framework
Fast inference engine for Transformer models
Universal LLM Deployment Engine with ML Compilation