Bringing large language models and chat to web browsers
Fully private LLM chatbot that runs entirely in a browser
Fast, flexible LLM inference
Web app for interacting with any LangGraph agent (Python & TypeScript) via a chat interface
Quick illustration of how one can easily read books together with LLMs
Convert any URL to an LLM-friendly input with a simple prefix
ChatGLM3 series: open-source bilingual chat LLMs
Fast, local-first web content extraction for LLMs
The SOTA Open-Source Browser Agent
The all-in-one Desktop & Docker AI application with full RAG and AI agent capabilities
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
AI assistant that supports knowledge bases and model APIs
The terminal client for Ollama
Production-grade platform for building agentic IM bots
AI Coding agent for the terminal
Helps developers deploy LangChain runnables and chains as a REST API
AI search engine - self-host with local or cloud LLMs
Request recommended movies, TV shows, and anime via Jellyseerr/Overseerr
The powerful Conversational AI JavaScript Library
A modular and comprehensive solution for deploying Multi-LLM systems
MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT Training Pipeline
Did you say you like data?
Visual Instruction Tuning: Large Language-and-Vision Assistant
Auto-GPT in the browser
Chat with LLMs like Vicuna entirely in your browser, accelerated by WebGPU