Open Source OCR Engine
The open big data serving engine
Alibaba's high-performance LLM inference engine for diverse apps
The AI-Native Search Database
Fast Multimodal LLM on Mobile Devices
Mooncake is the serving platform for Kimi
Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference
Offline OCR command-line program for Windows: recognizes text in images
Emscripten: An LLVM-to-WebAssembly Compiler
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
HeavyDB (formerly MapD/OmniSciDB)
A @ClickHouse fork that supports high-performance vector search
GPU accelerated decision optimization
QVAC Fabric: cross-platform LLM inference and fine-tuning
Lightweight, standalone C++ inference engine for Google's Gemma models
C++-based high-performance parallel environment execution engine
A GPU-accelerated library containing highly optimized building blocks
Doom-based AI research platform for reinforcement learning
High-speed Large Language Model Serving for Local Deployment
Fast inference engine for Transformer models
Graphical User Interface Face Anonymization Tool
Real-time behaviour synthesis with MuJoCo, using Predictive Control
Runtime extension of Proximus enabling deployment on AMD Ryzen™ AI
Lightweight inference library for ONNX files, written in C++