Run local LLMs on any device. Open source
The developer-first platform for scaling complex Physical AI workloads
Fast Stable Diffusion on CPUs and AI PCs
LLM.swift is a simple and readable library
TTS with Kokoro and ONNX Runtime
YOLOv5 is the world's most loved vision AI
AirLLM: 70B inference with a single 4GB GPU
Clippy, now with some AI
Diffusion Bee is the easiest way to run Stable Diffusion locally
A 950-line, minimal, extensible LLM inference engine built from scratch
LLM training in simple, raw C/CUDA
Bringing large language models and chat to web browsers
A high-performance inference engine for AI models
Lemonade helps users run local LLMs with the highest performance
Implement a CPU from scratch and experiment with large model deployments
Course to get into Large Language Models (LLMs)
Clone a voice in 5 seconds to generate arbitrary speech in real-time
SkyPilot: Run AI and batch jobs on any infra
A voice cloning tool with a web interface that uses your own voice
Large-Scale Agentic RL for High-Performance CUDA Kernel Generation
A simple, performant and scalable JAX LLM
A state-of-the-art open-source image editing model
An engine-agnostic deep learning framework in Java
Local AI file organization with categorization and rename suggestions
Easily build, customize and control your own LLMs