Run LLMs locally on Cloud Workstations
csghub-server is the backend server for CSGHub
Fast and efficient unstructured data extraction
Fully private LLM chatbot that runs entirely in your browser
ChatGLM3 series: open-source bilingual chat LLMs
AI search engine - self-host with local or cloud LLMs
AI assistant that supports knowledge bases and model APIs
Masks sensitive data and secrets before they reach AI
Langchain-Chatchat (formerly langchain-ChatGLM): local knowledge-base Q&A
Official code repo for the O'Reilly Book
Interact with your documents using the power of GPT
Instant, controllable, local pre-trained AI models in Rust
Fast, local-first web content extraction for LLMs
LLocalSearch is a fully local search aggregator
AirLLM: 70B-model inference on a single 4GB GPU
Local CLI Copilot, powered by Ollama
A quick illustration of how to read books together with LLMs
Chinese Llama-3 LLMs developed from Meta Llama 3
Universal LLM Deployment Engine with ML Compilation
Query anything (GitHub, Notion, +40 more) with SQL
Plugin for JADX to integrate MCP server
High-speed Large Language Model Serving for Local Deployment
Open source libraries and APIs to build custom preprocessing pipelines
Chat with any codebase in under two minutes | Fully local
Zep: A long-term memory store for LLM / Chatbot applications