Best Artificial Intelligence Software for Requesty

Compare the Top Artificial Intelligence Software that integrates with Requesty as of July 2025

This is a list of Artificial Intelligence software that integrates with Requesty. Use the filters on the left to narrow the results to products that have integrations with Requesty, and view the products that work with Requesty in the table below.

What is Artificial Intelligence Software for Requesty?

Artificial Intelligence (AI) software is computer technology designed to simulate human intelligence. It can be used to perform tasks that require cognitive abilities, such as problem-solving, data analysis, visual perception and language translation. AI applications range from voice recognition and virtual assistants to autonomous vehicles and medical diagnostics. Compare and read user reviews of the best Artificial Intelligence software for Requesty currently available using the table below. This list is updated regularly.

  • 1
    CodeGPT

    Discover the AI pair-programming extension for VS Code, create your own AI copilots with the Playground, and build new AI apps with the API. Unlock the power of your own AI agents by integrating personalized context and knowledge across all coding languages, and easily train your AI copilot with your own files on the Playground. Create and share a copilot in just five minutes, or build custom AI copilot solutions through the API. The always-free VS Code extension boosts coding abilities with a chat assistant and code completion; simply download it, add your own API key, and start AI-coding for free. An enhanced solution lets you create AI agents with specific context information, so you can design your own AI copilots and integrate them wherever you want. The API connection lets you develop AI-powered apps while handling the complexities of fine-tuning LLMs, so you can focus on creating without the technical issues.
    Starting Price: Free
  • 2
    Cursor

    Anysphere

    The AI-first Code Editor. Cursor is a next-generation AI-powered code editor designed to accelerate software development with powerful, real-time code assistance. The platform integrates seamlessly with your development workflow, offering advanced features such as natural language code editing, auto-completion, and context-aware suggestions. Cursor predicts your next move and adapts to your coding style, making it an indispensable tool for developers. It also supports a wide range of extensions and themes, ensuring familiarity while introducing cutting-edge AI capabilities. Designed to boost productivity, Cursor simplifies debugging and enhances coding efficiency, empowering developers to work smarter, not harder.
    Starting Price: $20 per month
  • 3
    GPT-4o

    OpenAI

    GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction. It accepts as input any combination of text, audio, image, and video, and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models. A minimal API-call sketch appears after this list.
    Starting Price: $5.00 / 1M tokens
  • 4
    Claude Sonnet 3.5
    Claude Sonnet 3.5 sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone. Claude Sonnet 3.5 operates at twice the speed of Claude Opus 3. This performance boost, combined with cost-effective pricing, makes Claude Sonnet 3.5 ideal for complex tasks such as context-sensitive customer support and orchestrating multi-step workflows. Claude Sonnet 3.5 is now available for free on Claude.ai and the Claude iOS app, while Claude Pro and Team plan subscribers can access it with significantly higher rate limits. It is also available via the Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI. The model costs $3 per million input tokens and $15 per million output tokens, with a 200K token context window. A minimal API-call sketch appears after this list.
    Starting Price: Free
  • 5
    Claude Sonnet 3.7
    Claude Sonnet 3.7, developed by Anthropic, is a cutting-edge AI model that combines rapid response with deep reflective reasoning. This innovative model allows users to toggle between quick, efficient responses and more thoughtful, reflective answers, making it ideal for complex problem-solving. By allowing Claude to self-reflect before answering, it excels at tasks that require high-level reasoning and nuanced understanding. With its ability to engage in deeper thought processes, Claude Sonnet 3.7 enhances tasks such as coding, natural language processing, and critical thinking applications. Available across various platforms, it offers a powerful tool for professionals and organizations seeking a high-performance, adaptable AI.
    Starting Price: Free
  • 6
    GPT-4o mini
    A small model with superior textual intelligence and multimodal reasoning. GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots). Today, GPT-4o mini supports text and vision in the API, with support for text, image, video and audio inputs and outputs coming in the future. The model has a context window of 128K tokens, supports up to 16K output tokens per request, and has knowledge up to October 2023. Thanks to the improved tokenizer shared with GPT-4o, handling non-English text is now even more cost effective. A short sketch of the parallel-call pattern appears after this list.
  • 7
    DeepSeek R1

    DeepSeek

    DeepSeek-R1 is an advanced open-source reasoning model developed by DeepSeek, designed to rival OpenAI's o1 model. Accessible via web, app, and API, it excels in complex tasks such as mathematics and coding, demonstrating superior performance on benchmarks like the American Invitational Mathematics Examination (AIME) and MATH. DeepSeek-R1 employs a mixture of experts (MoE) architecture with 671 billion total parameters, activating 37 billion parameters per token, enabling efficient and accurate reasoning capabilities. This model is part of DeepSeek's commitment to advancing artificial general intelligence (AGI) through open-source innovation.
    Starting Price: Free
  • 8
    OpenAI o1-mini
    OpenAI o1-mini is a new, cost-effective AI model designed for enhanced reasoning, particularly excelling in STEM fields like mathematics and coding. It's part of the o1 series, which focuses on solving complex problems by spending more time "thinking" through solutions. Despite being smaller and 80% cheaper than its sibling, the o1-preview, o1-mini performs competitively in coding tasks and mathematical reasoning, making it an accessible option for developers and enterprises looking for efficient AI solutions.
  • 9
    Gemini 2.0 Flash
    The Gemini 2.0 Flash AI model represents the next generation of high-speed, intelligent computing, designed to set new benchmarks in real-time language processing and decision-making. Building on the robust foundation of its predecessor, it incorporates enhanced neural architecture and breakthrough advancements in optimization, enabling even faster and more accurate responses. Gemini 2.0 Flash is designed for applications requiring instantaneous processing and adaptability, such as live virtual assistants, automated trading systems, and real-time analytics. Its lightweight, efficient design ensures seamless deployment across cloud, edge, and hybrid environments, while its improved contextual understanding and multitasking capabilities make it a versatile tool for tackling complex, dynamic workflows with precision and speed.
  • 10
    Gemini 1.5 Pro
    The Gemini 1.5 Pro AI model is a state-of-the-art language model designed to deliver highly accurate, context-aware, and human-like responses across a variety of applications. Built with cutting-edge neural architecture, it excels in natural language understanding, generation, and reasoning tasks. The model is fine-tuned for versatility, supporting tasks like content creation, code generation, data analysis, and complex problem-solving. Its advanced algorithms ensure nuanced comprehension, enabling it to adapt to different domains and conversational styles seamlessly. With a focus on scalability and efficiency, the Gemini 1.5 Pro is optimized for both small-scale implementations and enterprise-level integrations, making it a powerful tool for enhancing productivity and innovation.
  • 11
    Gemini 1.5 Flash
    The Gemini 1.5 Flash AI model is an advanced, high-speed language model engineered for lightning-fast processing and real-time responsiveness. Designed to excel in dynamic and time-sensitive applications, it combines streamlined neural architecture with cutting-edge optimization techniques to deliver exceptional performance without compromising on accuracy. Gemini 1.5 Flash is tailored for scenarios requiring rapid data processing, instant decision-making, and seamless multitasking, making it ideal for chatbots, customer support systems, and interactive applications. Its lightweight yet powerful design ensures it can be deployed efficiently across a range of platforms, from cloud-based environments to edge devices, enabling businesses to scale their operations with unmatched agility.
  • 12
    Llama 3.1
    The open source AI model you can fine-tune, distill, and deploy anywhere. Our latest instruction-tuned model is available in 8B, 70B, and 405B versions. Using our open ecosystem, build faster with a selection of differentiated product offerings that support your use cases. Choose from real-time inference or batch inference services, or download model weights to further optimize cost per token. Adapt the model for your application, improve it with synthetic data, and deploy on-prem or in the cloud. Use Llama system components and extend the model with zero-shot tool use and RAG to build agentic behaviors. Leverage the 405B model's high-quality data to improve specialized models for specific use cases.
    Starting Price: Free
  • 13
    Llama 3.2
    The open-source AI model you can fine-tune, distill, and deploy anywhere is now available in more versions. Choose from 1B, 3B, 11B, or 90B, or continue building with Llama 3.1. Llama 3.2 is a collection of large language models (LLMs), pretrained and fine-tuned in 1B and 3B sizes that are multilingual text only, and in 11B and 90B sizes that take both text and image inputs and output text. Develop highly performant and efficient applications from our latest release. Use the 1B or 3B models for on-device applications, such as summarizing a discussion from your phone or calling on-device tools like the calendar. Use the 11B or 90B models for image use cases, such as transforming an existing image into something new or getting more information from an image of your surroundings.
    Starting Price: Free
  • 14
    Cline

    Cline AI Coding Agent

    Autonomous coding agent right in your IDE, capable of creating/editing files, running commands, using the browser, and more, with your permission every step of the way. Cline can handle complex software development tasks step-by-step. With tools that let him create & edit files, explore large projects, use the browser, and execute terminal commands (after you grant permission), he can assist you in ways that go beyond code completion or tech support. While autonomous AI scripts traditionally run in sandboxed environments, this extension provides a human-in-the-loop GUI to approve every file change and terminal command, offering a safe and accessible way to explore the potential of agentic AI.
    Starting Price: Free
  • 15
    Llama 3.3
    Llama 3.3 is the latest iteration in the Llama series of language models, developed to push the boundaries of AI-powered understanding and communication. With enhanced contextual reasoning, improved language generation, and advanced fine-tuning capabilities, Llama 3.3 is designed to deliver highly accurate, human-like responses across diverse applications. This version features a larger training dataset, refined algorithms for nuanced comprehension, and reduced biases compared to its predecessors. Llama 3.3 excels in tasks such as natural language understanding, creative writing, technical explanation, and multilingual communication, making it an indispensable tool for businesses, developers, and researchers. Its modular architecture allows for customizable deployment in specialized domains, ensuring versatility and performance at scale.
    Starting Price: Free
  • 16
    Roo Code

    Roo Code, formerly known as Roo Cline, is an AI-powered autonomous coding agent integrated into Visual Studio Code, designed to enhance software development efficiency. It facilitates natural language interactions, enabling users to generate code, refactor existing code, debug, and update documentation seamlessly. It can read and write files directly within the workspace, execute terminal commands, and automate browser actions. It supports integration with any OpenAI-compatible or custom APIs/models and allows customization through various modes, including Code Mode for general coding tasks, Architect Mode for system design, Ask Mode for inquiries, Debug Mode for troubleshooting, and user-defined Custom Modes for specialized tasks. Roo Code also features the Model Context Protocol (MCP), extending its capabilities by integrating with external tools and APIs.
    Starting Price: Free
  • 17
    Continue

    The leading open-source AI code assistant. You can connect any models and any context to create custom autocomplete and chat experiences inside the IDE. Remain in flow while coding by removing the barriers that block productivity when building software. Accelerate development with a plug-and-play system that makes it easy to get started and integrates with your entire stack. Become a leader in AI by setting up your code assistant to evolve as new capabilities emerge. Continue autocompletes single lines or entire sections of code in any programming language as you type. Attach code or other context to ask questions about functions, files, the entire codebase, and more. Highlight code sections and press a keyboard shortcut to rewrite code from natural language.
    Starting Price: $0/developer/month
  • 18
    Gemini 2.0 Pro
    Gemini 2.0 Pro is Google DeepMind's most advanced AI model, designed to excel in complex tasks such as coding and intricate problem-solving. Currently in its experimental phase, it features an extensive context window of two million tokens, enabling it to process and analyze vast amounts of information efficiently. A standout feature of Gemini 2.0 Pro is its seamless integration with external tools like Google Search and code execution environments, enhancing its ability to provide accurate and comprehensive responses. This model represents a significant advancement in AI capabilities, offering developers and users a powerful resource for tackling sophisticated challenges.
  • 19
    OpenAI o3-mini
    OpenAI o3-mini is a lightweight version of the advanced o3 AI model, offering powerful reasoning capabilities in a more efficient and accessible package. Designed to break down complex instructions into smaller, manageable steps, o3-mini excels in coding tasks, competitive programming, and problem-solving in mathematics and science. This compact model provides the same high-level precision and logic as its larger counterpart but with reduced computational requirements, making it ideal for use in resource-constrained environments. With built-in deliberative alignment, o3-mini ensures safe, ethical, and context-aware decision-making, making it a versatile tool for developers, researchers, and businesses seeking a balance between performance and efficiency.
  • 20
    OpenAI o3-mini-high
    The o3-mini-high model from OpenAI advances AI reasoning by refining deep problem-solving in coding, mathematics, and complex tasks. It features adaptive thinking time with adjustable reasoning modes (low, medium, high) to optimize performance based on task complexity. Outperforming the o1 series by 200 Elo points on Codeforces, it delivers high efficiency at a lower cost while maintaining speed and accuracy. As part of the o3 family, it pushes AI problem-solving boundaries while remaining accessible, offering a free tier and expanded limits for Plus subscribers.
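
As noted in the GPT-4o entry above, the model is available through the API at per-token pricing. The following is a minimal sketch of a text-only call using the OpenAI Python SDK, not a Requesty-specific integration: it assumes the openai package is installed, an OPENAI_API_KEY environment variable is set, and the prompt is only a placeholder. When routing through a gateway such as Requesty, the client's base URL and model name would follow that gateway's documentation instead.

    # Minimal sketch: one text-only chat completion against GPT-4o.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Summarize what an LLM router does in one sentence."},
        ],
    )
    print(response.choices[0].message.content)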
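
The Claude Sonnet 3.5 entry above notes availability via the Anthropic API at $3 per million input tokens and $15 per million output tokens. Below is a minimal sketch using the Anthropic Python SDK; it assumes the anthropic package is installed, an ANTHROPIC_API_KEY environment variable is set, and that the mid-2024 snapshot ID shown is still what your account exposes.

    # Minimal sketch: one Messages API call to Claude Sonnet 3.5.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed snapshot ID; yours may differ
        max_tokens=1024,                     # the Messages API requires an output cap
        messages=[{"role": "user", "content": "Explain a 200K context window in two sentences."}],
    )
    print(message.content[0].text)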
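
The GPT-4o mini entry above highlights applications that chain or parallelize multiple model calls. The sketch below fans a few independent prompts out concurrently with the async OpenAI client; it assumes the openai package is installed and OPENAI_API_KEY is set, and the prompts are placeholders rather than anything Requesty-specific.

    # Minimal sketch: parallel GPT-4o mini calls with asyncio.gather.
    import asyncio
    from openai import AsyncOpenAI

    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

    async def ask(prompt: str) -> str:
        response = await client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    async def main() -> None:
        prompts = [
            "Classify the sentiment of: 'great docs'",
            "Classify the sentiment of: 'slow support'",
        ]
        # Fan the calls out concurrently instead of awaiting them one by one.
        results = await asyncio.gather(*(ask(p) for p in prompts))
        for prompt, result in zip(prompts, results):
            print(prompt, "->", result)

    asyncio.run(main())

Because each call is independent, asyncio.gather keeps total latency close to that of the slowest single call rather than the sum of all of them.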