Next-gen AI+IoT framework for T2/T3/T5AI/ESP32 and more
The easiest way to use Ollama in .NET
Access large language models from the command line
Query anything (GitHub, Notion, +40 more) with SQL and let LLMs
Run AI models locally on your machine with Node.js bindings for llama
A minimal LLM chat app that runs entirely in your browser
Web app for interacting with any LangGraph agent (Python & TypeScript) via a chat interface
WebAssembly bindings for llama.cpp, enabling in-browser LLM inference
Jlama is a modern LLM inference engine for Java
AI edge infrastructure for macOS. Run local or cloud models
Vim plugin for LLM-assisted code/text completion