ONNX Runtime: cross-platform, high performance ML inferencing
NativeScript for Android using v8
Incredibly fast JavaScript runtime, bundler, test runner
Fast, small, safe, gradually typed embeddable scripting language
An API and runtime that allow access to VR hardware
LiteRT is the new name for TensorFlow Lite (TFLite)
A retargetable MLIR-based machine learning compiler runtime toolkit
A fast and reliable entity component system (ECS) and much more
Next generation AWS IoT Client SDK for C++ using AWS Common Runtime
Extensible WebAssembly runtime for cloud native applications
Injectable Lua scripting system, SDK generator, live property editor
Port of OpenAI's Whisper model in C/C++
MLX: An array framework for Apple silicon
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
Thin, unified, C++-flavored wrappers for the CUDA APIs
Port of Facebook's LLaMA model in C/C++
The official GitHub mirror of the Chromium source
Proxy: Next Generation Polymorphism in C++
A cross-platform C99 library to get CPU features at runtime
High-efficiency floating-point neural network inference operators
OpenVINO™ Toolkit repository
Clean and efficient FP8 GEMM kernels with fine-grained scaling
A high-performance, zero-overhead, extensible Python compiler
On-device AI across mobile, embedded and edge for PyTorch
Open source codebase for Scale Agentex