Qwen3-Coder
Qwen3-Coder is an agentic code model available in multiple sizes, led by the 480B-parameter Mixture-of-Experts variant (35B active) that natively supports 256K-token contexts (extendable to 1M) and achieves state-of-the-art results on Agentic Coding, Browser-Use, and Tool-Use tasks, comparable to Claude Sonnet 4. Pre-training on 7.5T tokens (70% code), with synthetic data cleaned via Qwen2.5-Coder, optimized both coding proficiency and general abilities. Post-training employs large-scale, execution-driven reinforcement learning, scaling test-case generation for diverse coding challenges, and long-horizon RL across 20,000 parallel environments, which lets the model excel on multi-turn software-engineering benchmarks like SWE-Bench Verified without test-time scaling.
Learn more
Qwen Code
Qwen Code is an open-source CLI (forked from Gemini Code) that unleashes Qwen3-Coder in agentic workflows, with customized prompts, function-calling protocols, and seamless integration with Node.js, OpenAI SDKs, environment variables, and more.
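Since the CLI builds on OpenAI-compatible SDKs and environment variables, here is a minimal, hedged sketch of driving Qwen3-Coder from Node.js with the OpenAI SDK; the environment variable names, base URL, and model id are assumptions you would replace with your provider's values.

```ts
// Minimal sketch: calling Qwen3-Coder through the OpenAI Node SDK.
// The base URL and model id below are assumptions; point them at
// whichever OpenAI-compatible endpoint serves Qwen3-Coder for you.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.QWEN_API_KEY,   // key read from an environment variable
  baseURL: process.env.QWEN_BASE_URL, // e.g. a DashScope compatible-mode URL
});

const completion = await client.chat.completions.create({
  model: "qwen3-coder-plus", // assumed model id
  messages: [
    { role: "system", content: "You are a helpful coding assistant." },
    { role: "user", content: "Write a TypeScript function that removes duplicates from an array." },
  ],
});

console.log(completion.choices[0].message.content);
```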
Learn more
RabbitHoles AI
RabbitHoles AI is an app for having AI conversations on an infinite canvas. Each node on the canvas is a conversation. Multiple conversations can be connected to share context, along with other data sources such as PDF files and YouTube videos.
Key Features:
- Multiple Chats On Canvas: Have multiple connected chats with AI on the same canvas.
- Unlimited Canvases: Create as many canvases as you need.
- Latest Pro Models: Chat with all the popular LLMs, including ChatGPT, Claude, Perplexity, Gemini, and Grok (xAI).
Benefits
- No loss of context: Because chats branch, you control the length of each conversation, which prevents loss of context.
- Spatial Conversations: Learn and research faster on a whiteboard-like canvas.
- Non-linear chats: Our brains don't think or learn linearly, so why should our chatbots be linear?
Use Case
Advanced AI users can get what they want out of AI by having long, explorative conversations with different AI models on an infinite canvas.
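As a purely hypothetical illustration (not RabbitHoles' actual implementation), the context-sharing idea can be sketched as a graph of conversation nodes whose upstream histories are folded into the prompt of any node connected to them:

```ts
// Hypothetical sketch of connected canvas conversations sharing context.
// Each node holds its own messages; a node's prompt context is assembled
// from every upstream node wired into it. Not RabbitHoles' actual code.
type Message = { role: "user" | "assistant"; content: string };

interface CanvasNode {
  id: string;
  messages: Message[];
  parents: string[]; // ids of connected upstream conversations or sources
}

function buildContext(nodeId: string, nodes: Map<string, CanvasNode>): Message[] {
  const node = nodes.get(nodeId);
  if (!node) return [];
  // Gather upstream history first so it precedes this node's own messages.
  // (A real implementation would deduplicate ancestors shared by several paths.)
  const upstream = node.parents.flatMap((parentId) => buildContext(parentId, nodes));
  return [...upstream, ...node.messages];
}
```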
Learn more
Repo Prompt
Repo Prompt is a macOS-native AI coding assistant and context-engineering tool that helps developers interact with, refine, and modify codebases using large language models. Users select specific files or folders, build structured prompts containing exactly the relevant context, and review and apply AI-generated code changes as diffs rather than rewriting entire files, keeping modifications precise and auditable. It provides a visual file explorer for project navigation, an intelligent context builder, CodeMaps that reduce token usage and help models understand project structure, and multi-model support, so users can bring their own API keys for providers like OpenAI, Anthropic, Gemini, or Azure; all processing stays local and private unless the user explicitly sends code to an LLM. Repo Prompt works both as a standalone chat/workflow interface and as an MCP (Model Context Protocol) server for integration with AI editors.
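Because Repo Prompt can run as an MCP server, an AI editor or agent could connect to it with a standard MCP client. The sketch below uses the official TypeScript MCP SDK; the launch command and transport are assumptions, since how Repo Prompt exposes its server depends on your setup, so consult its documentation for the real connection details.

```ts
// Generic MCP client sketch (TypeScript SDK). The command/args used to reach
// Repo Prompt's MCP server are placeholders -- check Repo Prompt's docs for
// the actual way it exposes the server on your machine.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "repo-prompt-mcp", // placeholder command, not a documented binary
  args: [],
});

const client = new Client({ name: "example-editor", version: "1.0.0" });
await client.connect(transport);

// Discover whatever tools the server advertises (names will vary).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```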
Learn more