Port of Facebook's LLaMA model in C/C++
Low-latency AI inference engine optimized for mobile devices
AI video generator optimized for low VRAM and older GPUs
Pure C inference for the Flux 2 image generation model
Leading open-source visualization and observability platform
Run LLaMA and other large language models offline on iOS and macOS
AI macOS app for real-time coding interview coaching
Locally run an Instruction-Tuned Chat-Style LLM
mujoco-py allows using MuJoCo from Python 3
Retro Games in Gym
Multiagent simulator of road traffic in Qt/C++ using OpenStreetMap data