ONNX Runtime: cross-platform, high-performance ML inferencing (see the usage sketch after this list)
NativeScript for Android using V8
Fast, small, safe, gradually typed embeddable scripting language
Incredibly fast JavaScript runtime, bundler, test runner
API and runtime that allows access to VR hardware
LiteRT is the new name for TensorFlow Lite (TFLite); see the interpreter sketch after this list
MLX: An array framework for Apple silicon (see the usage sketch after this list)
Next generation AWS IoT Client SDK for C++ using AWS Common Runtime
Injectable Lua scripting system, SDK generator, live property editor
A retargetable MLIR-based machine learning compiler and runtime toolkit
Thin, unified, C++-flavored wrappers for the CUDA APIs
Extensible WebAssembly runtime for cloud native applications
Port of OpenAI's Whisper model in C/C++
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
Port of Facebook's LLaMA model in C/C++
A cross-platform C99 library to get CPU features at runtime
The official GitHub mirror of the Chromium source
A high-performance, zero-overhead, extensible Python compiler
Downloading files over HTTP/HTTPS at runtime
Proxy: Next Generation Polymorphism in C++
A fast and reliable entity component system (ECS) and much more
Compatibility tool for Steam Play based on Wine and other components
OpenVINO™ Toolkit repository
Application Kernel for Containers
High-efficiency floating-point neural network inference operators
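For the ONNX Runtime entry above, a minimal Python inference sketch. The model path and the (1, 3, 224, 224) float32 input are placeholder assumptions, not something the project prescribes:

```python
import numpy as np
import onnxruntime as ort

# Load a model on the CPU execution provider; "model.onnx" is a placeholder path.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's declared input so the fed tensor matches it by name.
input_meta = session.get_inputs()[0]
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed shape/dtype

# Passing None for output names returns every model output.
outputs = session.run(None, {input_meta.name: x})
print(outputs[0].shape)
```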
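For the LiteRT / TensorFlow Lite entry, a minimal interpreter invocation sketched with the long-standing tf.lite.Interpreter API; "model.tflite" is a placeholder path and the zero-filled input is only there to exercise the call sequence:

```python
import numpy as np
import tensorflow as tf

# "model.tflite" is a placeholder; any converted TFLite flatbuffer works here.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a zero tensor matching the model's declared input shape and dtype.
x = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()

y = interpreter.get_tensor(output_details[0]["index"])
print(y.shape)
```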
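For the MLX entry, a short sketch of its lazy array model: operations build a graph and mx.eval materializes the result. The shapes here are arbitrary:

```python
import mlx.core as mx

# Arrays and ops are lazy; nothing is computed until mx.eval (or .item()) is called.
a = mx.random.normal((1024, 1024))
b = mx.random.normal((1024, 1024))
c = (a @ b).sum()

mx.eval(c)       # force evaluation (runs on the GPU by default on Apple silicon)
print(c.item())  # scalar result as a Python float
```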