Best AI Tools for Model Context Protocol (MCP) - Page 2

Compare the Top AI Tools that integrate with Model Context Protocol (MCP) as of December 2025 - Page 2

This is a list of AI Tools that integrate with Model Context Protocol (MCP). Use the filters on the left to narrow the list to products with additional integrations. The products that work with Model Context Protocol (MCP) appear in the table below.

  • 1
    Rube

Rube is a universal MCP (Model Context Protocol) server that enables AI chat clients to perform real-world actions across 500+ applications, including Gmail, Slack, GitHub, Notion, and more. Users authenticate each app once; after that, they can instruct Rube in natural language from within their AI chat to execute tasks such as sending emails, creating tasks, updating databases, or posting updates. Rube manages authentication, API routing, and context handling behind the scenes, enabling seamless multi-step workflows, such as fetching data from one app and sending it to another, without manual setup. Rube supports both individual and team use: shared connections let teammates access apps through a single, unified interface, while integrations persist across different AI clients. Built on Composio's secure infrastructure, Rube provides encrypted OAuth flows and SOC 2-compliant practices, all wrapped in a streamlined, chat-first automation experience.
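MCP servers like Rube are typically registered in an AI client's configuration file so the client knows how to reach them. The sketch below shows what such an entry might look like in a Claude Desktop-style `mcpServers` config; the server URL and the `mcp-remote` bridge command are illustrative assumptions, not documented values for Rube.

```json
{
  "mcpServers": {
    "rube": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://example.com/mcp"]
    }
  }
}
```

After the client restarts and the one-time OAuth authentication completes, the server's tools become available to the chat session.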
  • 2
    Incredible

Incredible is a no-code automation platform powered by agentic AI models designed for real work across applications, letting users create AI "coworkers" that perform complex, multi-step workflows simply by describing tasks in plain English. These AI agents integrate with hundreds of productivity tools, CRMs, ERPs, and email systems, including Notion, HubSpot, OneDrive, Trello, Slack, and more, to perform actions like content repurposing, CRM health checks, contract reviews, and content calendar updates without writing any code. Its architecture supports parallel execution of hundreds of actions with low latency and handles large datasets efficiently, reducing the impact of token limits and minimizing hallucinations in data-critical tasks. The latest model, Incredible Small 1.0, is available in research preview and via API as a drop-in alternative to other LLM endpoints, offering high-precision data processing, near-zero hallucination, and enterprise-scale automation.
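A "drop-in alternative to other LLM endpoints" usually means the API accepts the widely used OpenAI-style chat-completions request shape. The sketch below assembles such a request; the base URL, model name, and API key are illustrative placeholders, not documented values for Incredible's API.

```python
import json

# Placeholder endpoint and credentials -- assumptions, not documented values.
BASE_URL = "https://api.example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"


def build_request(prompt: str, model: str = "incredible-small-1.0") -> dict:
    """Assemble headers and an OpenAI-style payload for a drop-in endpoint."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,  # hypothetical model identifier
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # deterministic output suits data-critical tasks
    }
    return {"url": BASE_URL, "headers": headers, "json": payload}


if __name__ == "__main__":
    req = build_request("Summarize this CRM export in three bullet points.")
    print(json.dumps(req["json"], indent=2))
```

Because the request shape matches the common chat-completions format, switching an existing integration to such an endpoint would only require changing the base URL, API key, and model name.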