Bringing large-language models and chat to web browsers
AI assistant that supports knowledge bases, model APIs
Fast, local-first web content extraction for LLMs
ChatGLM3 series: Open Bilingual Chat LLMs
Fast, flexible LLM inference
Fully private LLM chatbot that runs entirely in your browser
Web app for interacting with any LangGraph agent (PY & TS) via a chat
AI search engine - self-host with local or cloud LLMs
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
Query anything (GitHub, Notion, +40 more) with SQL and let LLMs
High-speed Large Language Model Serving for Local Deployment
The free, Open Source alternative to OpenAI, Claude and others
Quick illustration of how one can easily read books together with LLMs
Fast and efficient unstructured data extraction
Automatic question answering for local knowledge bases based on LLM
A minimal LLM chat app that runs entirely in your browser
Run AI models locally on your machine with node.js bindings for llama
Universal LLM Deployment Engine with ML Compilation
LLocalSearch is a completely locally running search aggregator
Masks sensitive data and secrets before they reach AI
Local-first semantic code search engine
An elegant AI chat client. Full-featured, lightweight
Web-based tool that converts GitHub repository contents
LLM Council works together to answer your hardest questions
The terminal client for Ollama