Kotlin Multiplatform bindings to Skia
Enables the best performance on NVIDIA RTX graphics cards
Lightweight, standalone C++ inference engine for Google's Gemma models
AI video generator optimized for low VRAM and older GPUs
World's fastest and most advanced password recovery utility
Official inference framework for 1-bit LLMs
Simplifies the local serving of AI models from any source
ArrayFire, a general purpose GPU library
Style-Bert-VITS2: Bert-VITS2 with more controllable voice styles
QVAC Fabric: cross-platform LLM inference and fine-tuning
Ollama Telegram bot, with advanced configuration
Python-free Rust inference server
Wan2.1: Open and Advanced Large-Scale Video Generative Model
Driver and tools for controlling Lenovo Legion laptops on Linux
Training neural networks on the Apple Neural Engine via APIs
A high-performance, zero-overhead, extensible Python compiler
A Python package for extending the official PyTorch
FlashMLA: Efficient Multi-head Latent Attention Kernels
Learn all about the A17 Pro, A16 Bionic, R1, M1-series
Text and image to video generation: CogVideoX and CogVideo
Khronos Vulkan, OpenGL, and OpenGL ES Conformance Tests
A 950-line, minimal, extensible LLM inference engine built from scratch
Metal programming in Julia
Bailing is a voice dialogue robot similar to GPT-4o
Lemonade helps users run local LLMs with high performance