The easiest way to use Ollama in .NET
Next-gen AI+IoT framework for T2, T3, T5AI, ESP32, and more
Web app for interacting with any LangGraph agent (Python & TypeScript) via a chat interface
Access large language models from the command line
A minimal LLM chat app that runs entirely in your browser
AI edge infrastructure for macOS. Run local or cloud models
WebAssembly binding for llama.cpp, enabling in-browser LLM inference