Bringing large-language models and chat to web browsers
AI assistant that supports knowledge bases, model APIs
ChatGLM3 series: open-source bilingual chat LLMs
Fast, flexible LLM inference
Fully private LLM chatbot that runs entirely in your browser
AI search engine - self-host with local or cloud LLMs
Web app for interacting with any LangGraph agent (PY & TS) via a chat interface
Quick illustration of how one can easily read books together with LLMs
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
Query anything (GitHub, Notion, +40 more) with SQL and let LLMs
High-speed Large Language Model Serving for Local Deployment
The free, open-source alternative to OpenAI, Claude, and others
Fast and efficient unstructured data extraction
Universal LLM Deployment Engine with ML Compilation
Run AI models locally on your machine with Node.js bindings for llama
LLocalSearch is a search aggregator that runs completely locally
Masks sensitive data and secrets before they reach AI
A minimal LLM chat app that runs entirely in your browser
local-first semantic code search engine
LLM Council works together to answer your hardest questions
LLM-based automatic question answering for local knowledge bases
Web-based tool that converts GitHub repository contents
The terminal client for Ollama
An elegant AI chat client: full-featured and lightweight
ChatWiki: an AI knowledge-base workflow agent for WeChat official accounts