Fast Stable Diffusion on CPUs and AI PCs
MCP server for interacting with Manifest V2-compatible browsers
Plugin for JADX to integrate MCP server
Fast, flexible LLM inference
A clean web dashboard for OpenClaw
Run local LLMs such as Llama, DeepSeek, and Kokoro inside your browser
Browser action engine for AI agents: 10× faster and resilient by design
Dockerized FastAPI wrapper for Kokoro-82M text-to-speech model
High-performance browser automation bridge and orchestrator
Socket-based MCP server for Ghidra
Model Context Protocol server that integrates AgentQL's data
InvokeAI is a leading creative engine for Stable Diffusion models
Mobile and Web client for Codex and Claude Code, with realtime voice
Deploy OpenClaw with one click
ChatGLM3 series: open-source bilingual chat LLMs
Python Telegram Bot API
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
Full-stack AI Red Teaming platform
Java enterprise application development framework
Monitor browser logs directly from Cursor
An SMS-forwarding robot running on your Android device
The open source coding agent
Talk with Azure using MCP
A TypeScript SSE proxy for MCP servers that use stdio transport
A simple native web interface that uses ChatTTS to synthesize speech from text