Bringing large-language models and chat to web browsers
Fast, flexible LLM inference
Fast, local-first web content extraction for LLMs
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
The terminal client for Ollama
Request recommended movies, TV shows, and anime via Jellyseerr/Overseerr
Based on the LangChain/LangGraph framework
ChatGLM3 series: Open Bilingual Chat LLMs
AI assistant that supports knowledge bases, model APIs
Chat with your SQL database
Fully private LLM chatbot that runs entirely in the browser
The all-in-one Desktop & Docker AI application with full RAG and AI
Query anything (GitHub, Notion, +40 more) with SQL and let LLMs
Open-source enterprise-level AI knowledge base and MCP
Full-stack Engineer Agent. Built with Next.js, Claude, shadcn/ui
Web app for interacting with any LangGraph agent (PY & TS) via a chat
Helps developers deploy LangChain runnables and chains as a REST API
AI search engine - self-host with local or cloud LLMs
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Retrieval Augmented Generation (RAG) framework
Did you say you like data?
Visual Instruction Tuning: Large Language-and-Vision Assistant
Auto-GPT in the browser
Chat with LLMs like Vicuna entirely in your browser with WebGPU