Bringing large language models and chat to web browsers
Fast, flexible LLM inference
The terminal client for Ollama
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
Query anything (GitHub, Notion, +40 more) with SQL and let LLMs
Request recommended movies, TV shows, and anime via Jellyseerr/Overseerr
AI assistant that supports knowledge bases, model APIs
Based on the LangChain/LangGraph framework
ChatGLM3 series: Open Bilingual Chat LLMs
Fully private LLM chatbot that runs entirely in the browser
Chat with your SQL database
The all-in-one Desktop & Docker AI application with full RAG and AI
Open-source enterprise-level AI knowledge base and MCP
Helps developers deploy LangChain runnables and chains as a REST API
Web app for interacting with any LangGraph agent (PY & TS) via a chat
Full-stack engineer agent. Built with Next.js, Claude, and shadcn/ui
AI search engine - self-host with local or cloud LLMs
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Retrieval Augmented Generation (RAG) framework
Did you say you like data?
Visual Instruction Tuning: Large Language-and-Vision Assistant
Auto-GPT in the browser
Chat with LLMs like Vicuna entirely in your browser with WebGPU