Run a 1-billion-parameter LLM on a $10 board with 256MB of RAM
Fastest, smallest, and fully autonomous AI assistant infrastructure
3D reconstruction software
157 models, 30 providers, one command to find what runs on your hardware
Topic Modelling for Humans
InvokeAI is a leading creative engine for Stable Diffusion models
Hub of ready-to-use datasets for ML models
Real-time NVIDIA GPU dashboard
Lightweight inference library for ONNX files, written in C++
Open platform for training, serving, and evaluating language models
Open-source, large-language-model-based code completion engine
Explore large language models in 512MB of RAM
llama.go is like llama.cpp in pure Golang
Locally run an Instruction-Tuned Chat-Style LLM
Real-Time Object Detection for Windows and Linux
Simplifies the development of custom machine learning models
Snips Python library to extract meaning from text
Generate embeddings from large-scale graph-structured data
The IRC talking robot
Compact 3B-param multimodal model for efficient on-device reasoning
Lightweight 24B agentic coding model with vision and long context