Fast stable diffusion on CPU and AI PC
Fast, flexible LLM inference
High-performance browser automation bridge and orchestrator
Browser action engine for AI agents. 10× faster, resilient by design
Dockerized FastAPI wrapper for the Kokoro-82M text-to-speech model
Socket-based MCP server for Ghidra
InvokeAI is a leading creative engine for Stable Diffusion models
Run local LLMs such as Llama, DeepSeek, and Kokoro inside your browser
Deep research framework combining language models with tools
Model Context Protocol server that integrates AgentQL's data
Mobile and Web client for Codex and Claude Code, with realtime voice
A clean web dashboard for OpenClaw
Java enterprise application development framework
Talk with Azure using MCP
Deploy OpenClaw with one click
ChatGLM3 series: open-source bilingual chat LLMs
DevoxxGenie is an IntelliJ IDEA plugin that uses local LLMs
WebAssembly binding for llama.cpp, enabling in-browser LLM inference
Full-stack AI Red Teaming platform
An SMS-forwarding robot running on your Android device
Python Telegram Bot API library
A TypeScript SSE proxy for MCP servers that use stdio transport
The open source coding agent
Stanford CoreNLP, a Java suite of core NLP tools
Google Flights MCP and Python Library