Port of Facebook's LLaMA model in C/C++
Python bindings for llama.cpp
C#/.NET binding of llama.cpp, including LLaMA/GPT model inference
Maid is a cross-platform Flutter app for interfacing with GGUF models
Run Local LLMs on Any Device. Open-source
Personal AI, On Personal Devices
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
Clippy, now with some AI
React and Electron-based app that executes the FreedomGPT LLM locally
The free, Open Source alternative to OpenAI, Claude and others
Interface for OuteTTS models
Run AI models locally on your machine with node.js bindings for llama
Claude Code, but it runs on your Mac for free
Your Personal AI Assistant; easy to install, deploy locally or in the cloud
Inference Llama 2 in one file of pure C
Run a full local LLM stack with one command using Docker
Vim plugin for LLM-assisted code/text completion
VS Code extension for LLM-assisted code/text completion
Open-source LLM load balancer and serving platform for hosting LLMs
Distribute and run LLMs with a single file
QVAC Fabric: cross-platform LLM inference and fine-tuning
DevoxxGenie is a plugin for IntelliJ IDEA that uses local LLMs
An easy-to-understand framework for LLM samplers
Oobabooga - The definitive Web UI for local AI, with powerful features