50 Integrations with Groq

View a list of Groq integrations and software that integrates with Groq below. Compare the best Groq integrations as well as features, ratings, user reviews, and pricing of software that integrates with Groq. Here are the current Groq integrations in 2025:

  • 1
    Mistral AI

    Mistral AI

    Mistral AI is a pioneering artificial intelligence startup specializing in open-source generative AI. The company offers a range of customizable, enterprise-grade AI solutions deployable across various platforms, including on-premises, cloud, edge, and devices. Flagship products include "Le Chat," a multilingual AI assistant designed to enhance productivity in both personal and professional contexts, and "La Plateforme," a developer platform that enables the creation and deployment of AI-powered applications. Committed to transparency and innovation, Mistral AI positions itself as a leading independent AI lab, contributing significantly to open-source AI and policy development.
    Starting Price: Free
  • 2
    AiAssistWorks

    PT Visi Cerdas Digital

    AiAssistWorks simplifies Google Sheets™ and Docs™ with 100+ AI models, including GPT, Claude, Gemini, Llama, and Groq. No more manual work: automate content creation, data analysis, translation, and more. Whether you are filling thousands of rows, generating images, or refining text, AiAssistWorks makes AI effortless. Free forever: get 300 executions per month with your own API key. No formulas needed: AI handles data filling, formatting, and cleaning. Instant AI writing and editing lets you generate, rewrite, summarize, and translate in Docs™. Bulk-fill automation covers SEO, PPC ads, social posts, sentiment analysis, and more. Fine-tune models for free by training Gemini to fit your needs. AI Vision converts images to text instantly, and the Formula Assistant auto-generates and explains formulas. Bring your own API key for unlimited use. Supports OpenRouter, OpenAI, Google Gemini™, Anthropic Claude, Groq, and more. Smarter, faster, and cheaper than alternatives.
    Starting Price: $3/month
  • 3
    TensorFlow

    TensorFlow

    An end-to-end open source machine learning platform. TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML and lets developers easily build and deploy ML-powered applications. Build and train ML models easily using intuitive high-level APIs like Keras with eager execution, which makes for immediate model iteration and easy debugging (a minimal Keras sketch follows this entry). Easily train and deploy models in the cloud, on-prem, in the browser, or on-device, no matter what language you use. A simple and flexible architecture takes new ideas from concept to code, to state-of-the-art models, and to publication faster. Build, deploy, and experiment easily with TensorFlow.
    Starting Price: Free
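    The entry above highlights TensorFlow's high-level Keras API. Below is a minimal, illustrative training sketch; the dataset, layer sizes, and single epoch are arbitrary choices for demonstration, not recommendations.

    ```python
    # Minimal Keras example: train a small classifier on MNIST (illustrative settings only).
    import tensorflow as tf

    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train / 255.0  # scale pixel values to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1)
    ```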
  • 4
    OpenAI

    OpenAI

    OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. Apply our API to any language task — semantic search, summarization, sentiment analysis, content generation, translation, and more — with only a few examples or by specifying your task in English. One simple integration gives you access to our constantly improving AI technology. Explore how to integrate with the API with sample completions, such as the hedged sketch after this entry.
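    The OpenAI entry above describes applying the API to language tasks such as summarization. Here is a hedged sketch using the official Python SDK; the model name is only an example, and an OPENAI_API_KEY is assumed to be set in the environment.

    ```python
    # Summarization via the OpenAI Chat Completions API (model name is illustrative).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # substitute any available model
        messages=[
            {"role": "system", "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": "Groq builds LPU-based hardware for fast LLM inference."},
        ],
    )
    print(response.choices[0].message.content)
    ```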
  • 5
    BuildShip

    BuildShip

    BuildShip is a low-code visual backend builder that lets you ship APIs, scheduled jobs, AI workflows, and cloud jobs instantly. Connect with any database, tool, or AI model to create complete backend logic flows. Use prebuilt nodes or use AI to create custom logic nodes just for you. By combining the ease of no-code with the power of low-code extensibility, BuildShip gives you a scalable way to build your backend fast. Supports powerful use cases like: 💸 Collect payments and trigger Stripe, RevenueCat, or Lemon Squeezy for subscription workflows 🪄 Trigger AI workflows and build AI apps' backend 🔌 Create APIs for processing your data and CRUD ops on your database 📝 Send form submission data to third-party tools 🤖 Add an AI assistant chatbot with OpenAI, Azure, Claude, Groq, or any model 💌 Email users, generate leads, enrich data. The possibilities are endless with BuildShip ✨
    Starting Price: $25 per month
  • 6
    Bolt.diy

    StackBlitz Labs

    Bolt.diy is an AI-powered web development agent that allows you to prompt, run, edit, and deploy full-stack web applications directly from your browser, with no local setup required. It integrates cutting-edge AI models with an in-browser development environment powered by StackBlitz’s WebContainers, enabling you to install and run npm tools and libraries, run Node.js servers, interact with third-party APIs, and deploy to production from chat. Unlike traditional development environments where AI can only assist in code generation, Bolt.diy gives AI models complete control over the entire environment, including the filesystem, node server, package manager, terminal, and browser console, empowering AI agents to handle the entire app lifecycle, from creation to deployment. Whether you’re an experienced developer, a PM, or a designer, Bolt.diy allows you to build production-grade full-stack applications with ease.
    Starting Price: Free
  • 7
    DeepSeek R1

    DeepSeek

    DeepSeek-R1 is an advanced open-source reasoning model developed by DeepSeek, designed to rival OpenAI's o1 model. Accessible via web, app, and API, it excels in complex tasks such as mathematics and coding, demonstrating superior performance on benchmarks like the American Invitational Mathematics Examination (AIME) and MATH. DeepSeek-R1 employs a mixture-of-experts (MoE) architecture with 671 billion total parameters, activating 37 billion parameters per token, enabling efficient and accurate reasoning capabilities. This model is part of DeepSeek's commitment to advancing artificial general intelligence (AGI) through open-source innovation.
    Starting Price: Free
  • 8
    PyTorch

    PyTorch

    Transition seamlessly between eager and graph modes with TorchScript, and accelerate the path to production with TorchServe (a small scripting sketch follows this entry). Scalable distributed training and performance optimization in research and production is enabled by the torch.distributed backend. A rich ecosystem of tools and libraries extends PyTorch and supports development in computer vision, NLP, and more. PyTorch is well supported on major cloud platforms, providing frictionless development and easy scaling. Select your preferences and run the install command. Stable represents the most currently tested and supported version of PyTorch and should be suitable for most users. Preview builds are available if you want the latest, not fully tested and supported builds that are generated nightly. Please ensure that you have met the prerequisites (e.g., NumPy), depending on your package manager. Anaconda is our recommended package manager since it installs all dependencies.
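    As a small illustration of the eager-to-TorchScript path mentioned above, the sketch below scripts a toy module and saves an artifact that TorchServe or a C++ runtime could load; the module itself is purely illustrative.

    ```python
    # Script a toy eager-mode module with TorchScript and save it for serving.
    import torch
    import torch.nn as nn

    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(4, 2)

        def forward(self, x):
            return torch.relu(self.fc(x))

    model = TinyNet()
    scripted = torch.jit.script(model)   # compile the eager model to TorchScript
    scripted.save("tiny_net.pt")         # artifact loadable by TorchServe or libtorch
    print(scripted(torch.randn(1, 4)))
    ```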
  • 9
    Mistral 7B

    Mistral AI

    Mistral 7B is a 7.3-billion-parameter language model that outperforms larger models like Llama 2 13B across various benchmarks. It employs Grouped-Query Attention (GQA) for faster inference and Sliding Window Attention (SWA) to efficiently handle longer sequences. Released under the Apache 2.0 license, Mistral 7B is accessible for deployment across diverse platforms, including local environments and major cloud services. Additionally, a fine-tuned version, Mistral 7B Instruct, demonstrates enhanced performance in instruction-following tasks, surpassing models like Llama 2 13B Chat.
    Starting Price: Free
  • 10
    Codestral Mamba
    As a tribute to Cleopatra, whose glorious destiny ended in tragic snake circumstances, we are proud to release Codestral Mamba, a Mamba2 language model specialized in code generation, available under an Apache 2.0 license. Codestral Mamba is another step in our effort to study and provide new architectures. It is available for free use, modification, and distribution, and we hope it will open new perspectives in architecture research. Mamba models offer the advantage of linear-time inference and the theoretical ability to model sequences of infinite length. This allows users to engage with the model extensively, with quick responses irrespective of the input length. This efficiency is especially relevant for code productivity use cases, which is why we trained this model with advanced code and reasoning capabilities, enabling it to perform on par with SOTA transformer-based models.
    Starting Price: Free
  • 11
    Langtail

    Langtail

    Langtail is a cloud-based application development tool designed to help companies debug, test, deploy, and monitor LLM-powered apps with ease. The platform offers a no-code playground for debugging prompts, fine-tuning model parameters, and running LLM tests to prevent issues when models or prompts change. Langtail specializes in LLM testing, including chatbot testing and robust LLM test prompts. With its comprehensive features, Langtail enables teams to: • Test LLM models thoroughly to catch potential issues before they affect production environments. • Deploy prompts as API endpoints for seamless integration. • Monitor model performance in production to ensure consistent outcomes. • Use advanced AI firewall capabilities to safeguard and control AI interactions. Langtail is the ideal solution for teams looking to ensure the quality, stability, and security of their LLM and AI-powered applications.
    Starting Price: $99/month/unlimited users
  • 12
    Codestral

    Mistral AI

    We introduce Codestral, our first-ever code model. Codestral is an open-weight generative AI model explicitly designed for code generation tasks. It helps developers write and interact with code through a shared instruction and completion API endpoint. As it masters code and English, it can be used to design advanced AI applications for software developers. Codestral is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash. It also performs well on more specific ones like Swift and Fortran. This broad language base ensures Codestral can assist developers in various coding environments and projects.
    Starting Price: Free
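    A hedged sketch of calling Codestral through Mistral's Python client, as described above; the client class, method, and the "codestral-latest" model identifier are assumptions based on Mistral's public SDK and may differ by version, so check the official documentation.

    ```python
    # Ask Codestral to generate code via Mistral's chat endpoint (names are assumptions).
    from mistralai import Mistral

    client = Mistral(api_key="YOUR_MISTRAL_API_KEY")  # placeholder key

    response = client.chat.complete(
        model="codestral-latest",  # assumed identifier; verify on La Plateforme
        messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    )
    print(response.choices[0].message.content)
    ```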
  • 13
    Mistral Large

    Mistral AI

    Mistral Large is Mistral AI's flagship language model, designed for advanced text generation and complex multilingual reasoning tasks, including text comprehension, transformation, and code generation. It supports English, French, Spanish, German, and Italian, offering a nuanced understanding of grammar and cultural contexts. With a 32,000-token context window, it can accurately recall information from extensive documents. The model's precise instruction-following and native function-calling capabilities facilitate application development and tech stack modernization. Mistral Large is accessible through Mistral's platform, Azure AI Studio, and Azure Machine Learning, and can be self-deployed for sensitive use cases. Benchmark evaluations indicate that Mistral Large achieves strong results, making it the world's second-ranked model generally available through an API, next to GPT-4.
    Starting Price: Free
  • 14
    Mistral NeMo

    Mistral AI

    Mistral NeMo, our new best small model. A state-of-the-art 12B model with 128k context length, and released under the Apache 2.0 license. Mistral NeMo is a 12B model built in collaboration with NVIDIA. Mistral NeMo offers a large context window of up to 128k tokens. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. As it relies on standard architecture, Mistral NeMo is easy to use and a drop-in replacement in any system using Mistral 7B. We have released pre-trained base and instruction-tuned checkpoints under the Apache 2.0 license to promote adoption for researchers and enterprises. Mistral NeMo was trained with quantization awareness, enabling FP8 inference without any performance loss. The model is designed for global, multilingual applications. It is trained on function calling and has a large context window. Compared to Mistral 7B, it is much better at following precise instructions, reasoning, and handling multi-turn conversations.
    Starting Price: Free
  • 15
    Mixtral 8x22B

    Mistral AI

    Mixtral 8x22B is our latest open model. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. It is fluent in English, French, Italian, German, and Spanish. It has strong mathematics and coding capabilities. It is natively capable of function calling; along with the constrained output mode implemented on la Plateforme, this enables application development and tech stack modernization at scale. Its 64K tokens context window allows precise information recall from large documents. We build models that offer unmatched cost efficiency for their respective sizes, delivering the best performance-to-cost ratio within models provided by the community. Mixtral 8x22B is a natural continuation of our open model family. Its sparse activation patterns make it faster than any dense 70B model.
    Starting Price: Free
  • 16
    Mathstral

    Mistral AI

    As a tribute to Archimedes, whose 2311th anniversary we’re celebrating this year, we are proud to release our first Mathstral model, a specific 7B model designed for math reasoning and scientific discovery. The model has a 32k context window and is published under the Apache 2.0 license. We’re contributing Mathstral to the science community to bolster efforts on advanced mathematical problems requiring complex, multi-step logical reasoning. The Mathstral release is part of our broader effort to support academic projects; it was produced in the context of our collaboration with Project Numina. Akin to Isaac Newton in his time, Mathstral stands on the shoulders of Mistral 7B and specializes in STEM subjects. It achieves state-of-the-art reasoning capacities in its size category across various industry-standard benchmarks; in particular, it achieves 56.6% on MATH and 63.47% on MMLU, with MMLU improvements over Mistral 7B that vary by subject.
    Starting Price: Free
  • 17
    Ministral 3B

    Mistral AI

    Mistral AI introduced two state-of-the-art models for on-device computing and edge use cases, named "les Ministraux": Ministral 3B and Ministral 8B. These models set a new frontier in knowledge, commonsense reasoning, function-calling, and efficiency in the sub-10B category. They can be used or tuned for various applications, from orchestrating agentic workflows to creating specialist task workers. Both models support up to 128k context length (currently 32k on vLLM), and Ministral 8B features a special interleaved sliding-window attention pattern for faster and memory-efficient inference. These models were built to provide a compute-efficient and low-latency solution for scenarios such as on-device translation, internet-less smart assistants, local analytics, and autonomous robotics. Used in conjunction with larger language models like Mistral Large, les Ministraux also serve as efficient intermediaries for function-calling in multi-step agentic workflows.
    Starting Price: Free
  • 18
    Ministral 8B

    Mistral AI

    Mistral AI has introduced two advanced models for on-device computing and edge applications, named "les Ministraux": Ministral 3B and Ministral 8B. These models excel in knowledge, commonsense reasoning, function-calling, and efficiency within the sub-10B parameter range. They support up to 128k context length and are designed for various applications, including on-device translation, offline smart assistants, local analytics, and autonomous robotics. Ministral 8B features an interleaved sliding-window attention pattern for faster and more memory-efficient inference. Both models can function as intermediaries in multi-step agentic workflows, handling tasks like input parsing, task routing, and API calls based on user intent with low latency and cost. Benchmark evaluations indicate that les Ministraux consistently outperforms comparable models across multiple tasks. As of October 16, 2024, both models are available, with Ministral 8B priced at $0.1 per million tokens.
    Starting Price: Free
  • 19
    Mistral Small

    Mistral AI

    On September 17, 2024, Mistral AI announced several key updates to enhance the accessibility and performance of their AI offerings. They introduced a free tier on "La Plateforme," their serverless platform for tuning and deploying Mistral models as API endpoints, enabling developers to experiment and prototype at no cost. Additionally, Mistral AI reduced prices across their entire model lineup, with significant cuts such as a 50% reduction for Mistral Nemo and an 80% decrease for Mistral Small and Codestral, making advanced AI more cost-effective for users. The company also unveiled Mistral Small v24.09, a 22-billion-parameter model offering a balance between performance and efficiency, suitable for tasks like translation, summarization, and sentiment analysis. Furthermore, they made Pixtral 12B, a vision-capable model with image understanding capabilities, freely available on "Le Chat," allowing users to analyze and caption images without compromising text-based performance.
    Starting Price: Free
  • 20
    Portkey

    Portkey.ai

    Launch production-ready apps with the LMOps stack for monitoring, model management, and more. Replace your OpenAI or other provider APIs with the Portkey endpoint (a hedged configuration sketch follows this entry). Manage prompts, engines, parameters, and versions in Portkey. Switch, test, and upgrade models with confidence! View your app performance and user-level aggregate metrics to optimize usage and API costs. Keep your user data secure from attacks and inadvertent exposure. Get proactive alerts when things go bad. A/B test your models in the real world and deploy the best performers. We built apps on top of LLM APIs for the past two and a half years and realized that while building a PoC took a weekend, taking it to production and managing it was a pain! We're building Portkey to help you succeed in deploying large language model APIs in your applications. Whether or not you try Portkey, we're always happy to help!
    Starting Price: $49 per month
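    The configuration sketch below illustrates the "replace your provider API with the Portkey endpoint" idea from the entry above. The gateway URL and header names shown are assumptions for illustration only; consult Portkey's documentation for the exact values.

    ```python
    # Point the standard OpenAI client at a Portkey-style gateway (URL/headers are assumptions).
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.portkey.ai/v1",            # assumed gateway endpoint
        default_headers={
            "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",  # assumed header name
            "x-portkey-provider": "openai",               # assumed header name
        },
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "ping"}],
    )
    print(reply.choices[0].message.content)
    ```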
  • 21
    Mixtral 8x7B

    Mistral AI

    Mixtral 8x7B is a high-quality sparse mixture of experts model (SMoE) with open weights. Licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall regarding cost/performance trade-offs. In particular, it matches or outperforms GPT-3.5 on most standard benchmarks.
    Starting Price: Free
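    Since this page lists Groq integrations, here is a hedged sketch of querying Mixtral 8x7B through Groq's Python SDK; the "mixtral-8x7b-32768" model identifier is an example, and hosted model availability changes over time.

    ```python
    # Chat completion against a Mixtral model hosted on Groq (model ID is illustrative).
    from groq import Groq

    client = Groq()  # reads GROQ_API_KEY from the environment

    chat = client.chat.completions.create(
        model="mixtral-8x7b-32768",
        messages=[{"role": "user", "content": "Explain sparse mixture-of-experts in two sentences."}],
    )
    print(chat.choices[0].message.content)
    ```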
  • 22
    Kerlig

    Kerlig

    Kerlig for macOS brings AI to any app. Bring your own API key for OpenAI, Claude, Gemini Pro, and Groq. Never embarrass yourself with typos again. Fix spelling and grammar in any app before you hit send. Reply on the go with a perfectly crafted message in your tone of voice. Kerlig is your in-context AI writing assistant. Chat with up to 350 pages of documents with Claude models. When you select text in any app and launch Kerlig using a hotkey of your choice, it takes the selected text and allows you to perform various actions like fixing spelling, changing tone, writing a reply, answering questions, etc. Then you can paste the generated text directly into the original app, or copy it to the clipboard and paste it manually. You can chat with PDFs or other long-form documents using OpenAI models, which have a maximum input limit of 8, 16, or 32K tokens. Kerlig is blazing fast: it launches in approximately 150 milliseconds and uses only 60-140 MB of memory.
    Starting Price: $27 one-time payment
  • 23
    LibreChat

    LibreChat

    LibreChat is a free, open source AI chat platform. This web UI offers vast customization, supporting numerous AI providers, services, and integrations. It serves all AI conversations in one place with a familiar interface and innovative enhancements, for as many users as you need. LibreChat is an AI chat platform that empowers you to harness the capabilities of cutting-edge language models from multiple providers in a unified interface. With its vast customization options, innovative enhancements, and seamless integration of AI services, LibreChat offers an unparalleled conversational experience. It brings together the latest advancements in AI technology and serves as a centralized hub for all your AI conversations, providing a familiar, user-friendly interface enriched with advanced features and customization capabilities. LibreChat allows you to freely use, modify, and distribute the software without any restrictions or paid subscriptions.
    Starting Price: Free
  • 24
    PI Prompts

    PI Prompts

    An intuitive right-hand side panel for ChatGPT, Google Gemini, Claude.ai, Mistral, Groq, and Pi.ai. Reach your prompt library with a click. The PI Prompts Chrome extension is a powerful tool designed to enhance your experience with AI models. The extension simplifies your workflow by eliminating the need for constant copy-pasting of prompts. It comes with convenient options to download and upload prompts in JSON format, so you can share your collection with your friends or even create task-specific collections. As you start writing your prompt in the input box (as you normally would), the extension quickly filters the right panel to show matching prompts. You can download and upload your prompt list anytime, even adding external prompt lists in JSON format. You can edit and delete prompts directly in the panel. Your prompts are synced across the devices where you use Chrome. The panel works with both light and dark themes.
    Starting Price: Free
  • 25
    OpenLIT

    OpenLIT

    OpenLIT is an OpenTelemetry-native application observability tool designed to make integrating observability into AI projects as simple as adding a single line of code, whether you're working with popular LLM libraries such as OpenAI and HuggingFace. OpenLIT's native support makes adding it to your projects feel effortless and intuitive. Analyze LLM and GPU performance and costs to achieve maximum efficiency and scalability. It streams data so you can visualize it and make quick decisions and modifications, and it ensures data is processed quickly without affecting the performance of your application. The OpenLIT UI helps you explore LLM costs, token consumption, performance indicators, and user interactions in a straightforward interface. Connect to popular observability systems with ease, including Datadog and Grafana Cloud, to export data automatically. OpenLIT ensures your applications are monitored seamlessly (a minimal initialization sketch follows this entry).
    Starting Price: Free
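    A minimal sketch of the single-line initialization described above; it assumes the `openlit` package alongside an OpenAI-based app, and that an OpenTelemetry exporter endpoint is configured via environment variables.

    ```python
    # Initialize OpenLIT once, then make LLM calls as usual; traces and metrics are exported automatically.
    import openlit
    from openai import OpenAI

    openlit.init()  # the advertised one-line integration (exporter configured via env vars)

    client = OpenAI()
    client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Hello"}],
    )
    ```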
  • 26
    Langtrace

    Langtrace

    Langtrace is an open source observability tool that collects and analyzes traces and metrics to help you improve your LLM apps. Langtrace ensures the highest level of security. Our cloud platform is SOC 2 Type II certified, ensuring top-tier protection for your data. Supports popular LLMs, frameworks, and vector databases. Langtrace can be self-hosted and supports OpenTelemetry standard traces, which can be ingested by any observability tool of your choice, resulting in no vendor lock-in. Get visibility and insights into your entire ML pipeline, whether it is a RAG or a fine-tuned model with traces and logs that cut across the framework, vectorDB, and LLM requests. Annotate and create golden datasets with traced LLM interactions, and use them to continuously test and enhance your AI applications. Langtrace includes built-in heuristic, statistical, and model-based evaluations to support this process.
    Starting Price: Free
  • 27
    Pixtral Large

    Mistral AI

    Pixtral Large is a 124-billion-parameter open-weight multimodal model developed by Mistral AI, building upon their Mistral Large 2 architecture. It integrates a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, enabling advanced understanding of documents, charts, and natural images while maintaining leading text comprehension capabilities. With a context window of 128,000 tokens, Pixtral Large can process at least 30 high-resolution images simultaneously. The model has demonstrated state-of-the-art performance on benchmarks such as MathVista, DocVQA, and VQAv2, surpassing models like GPT-4o and Gemini-1.5 Pro. Pixtral Large is available under the Mistral Research License for research and educational use, and under the Mistral Commercial License for commercial applications.
    Starting Price: Free
  • 28
    ChatLabs

    ChatLabs

    Experience the power of the best AI models in one streamlined platform with ChatLabs. We've got everything from chatting, writing, and web searching to generating incredible art. You can choose the right AI for every task, whether you prefer GPT-4, Claude Opus, Gemini, or Llama 3. AI assistants and bots: unlock limitless possibilities with customizable AI assistants. Choose from our pre-built options or design your own, fine-tuning them with your specific files. The only limit is your imagination. Our AI Prompt Library keeps frequently used prompts well organized so that you can access them quickly and efficiently, with no need for repetition. AI art and image creation: generate breathtaking visuals using our advanced AI tools like FLUX.1, DALL-E 3, and Stable Diffusion 3. Whether it's for personal or professional use, the possibilities are endless.
    Starting Price: $9.99 per month
  • 29
    SWE-Kit

    Composio

    SWE-Kit lets you build PR agents to review code, suggest improvements, enforce coding standards, identify potential issues, automate merge approvals, and provide feedback on best practices, streamlining the review process and enhancing code quality. Automate writing new features, debugging complex issues, creating and running tests, optimizing code for performance, refactoring for maintainability, and ensuring best practices across the codebase, accelerating development and efficiency. Use highly optimized code analysis, advanced code indexing, and intelligent file navigation tools to explore and interact with large codebases effortlessly. Ask questions, trace dependencies, uncover logic flows, and gain instant insights, enabling seamless communication with complex code structures. Keep your documentation in sync with your code. Automatically update Mintlify documentation whenever changes are made to the codebase, ensuring that your docs stay accurate, up-to-date, and ready for your team and users.
    Starting Price: $49 per month
  • 30
    Llama 2
    The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters. Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests. Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2.
    Starting Price: Free
  • 31
    BlueGPT

    BlueGPT

    Enjoy the power of the best AI models on a single platform. Take advantage of generative AI's power with your team and stay ahead of the competition. Search for discussions, create folders, add tags, export data, and much more. Use the power of AI directly on the internet. Switch between models in the same chat for text, images, and web search. Enjoy exclusive prompts categorized by marketing, social media, HR, sales, and much more. Write your prompts once and reuse them endlessly directly in BlueGPT. Create your personal assistants and enjoy those from the community. Choose the interface that suits you and create a space where you feel comfortable. Upload any file and start asking questions about its content. Sync and back up your chat data across multiple devices. Access all AI models in one place, and create content faster at less cost.
    Starting Price: €24 per month
  • 32
    Databutton

    Databutton

    Ship your idea in days, not weeks, with Databutton, the world's first fully AI app developer. Describe what you want, using natural language, screenshots, or diagrams, to get React UIs built by AI. Power your product with any service. Connect your app to any API or model to realize its full potential. Prompt Databutton to build Python APIs that scrape websites, fetch data across systems, and more. Ship value to your customers continuously; we handle the security and infrastructure for you. Whether you're an indie hacker building a micro SaaS or an existing business delivering online, we have a plan that fits just what you need. We're always evaluating the best models available for the agentic framework you interact with in Databutton. If you want to build an app utilizing a model or service, you will either need to bring your own API key/secrets for use in your application or request an API key/secrets from your application’s users.
    Starting Price: $20 per month
  • 33
    Entry Point AI

    Entry Point AI

    Entry Point AI is the modern AI optimization platform for proprietary and open source language models. Manage prompts, fine-tunes, and evals all in one place. When you reach the limits of prompt engineering, it’s time to fine-tune a model, and we make it easy. Fine-tuning is showing a model how to behave, not telling. It works together with prompt engineering and retrieval-augmented generation (RAG) to leverage the full potential of AI models. Fine-tuning can help you to get better quality from your prompts. Think of it like an upgrade to few-shot learning that bakes the examples into the model itself. For simpler tasks, you can train a lighter model to perform at or above the level of a higher-quality model, greatly reducing latency and cost. Train your model not to respond in certain ways to users, for safety, to protect your brand, and to get the formatting right. Cover edge cases and steer model behavior by adding examples to your dataset.
    Starting Price: $49 per month
  • 34
    AnotherWrapper

    AnotherWrapper

    AnotherWrapper is an all-in-one Next.js AI starter kit designed to accelerate the development and launch of AI-powered applications. It offers over 10 ready-to-use AI demo apps, including chatbots, text and image generation tools, and audio transcription services, all integrated with state-of-the-art AI models like GPT-4, Claude 3, LLaMA 3, DALL·E, and SDXL. The platform provides pre-configured APIs, authentication, database management, payment processing, and analytics, enabling developers to focus on building their products without the complexities of setting up infrastructure. With customizable UI components and support for Tailwind CSS, daisyUI, and shading themes, AnotherWrapper facilitates the creation of responsive and visually appealing user interfaces. It also includes programmatic SEO features to enhance visibility and search engine rankings. By leveraging AnotherWrapper, developers can significantly reduce development time, launching AI applications in days.
    Starting Price: $229 per month
  • 35
    AgentAuth

    Composio

    AgentAuth is a specialized authentication platform designed to facilitate secure and seamless access for AI agents to over 250 third-party applications and services. It offers comprehensive support for various authentication protocols, ensuring reliable connections with automatic token refresh. The platform integrates seamlessly with leading agentic frameworks such as LangChain, CrewAI, and LlamaIndex, enhancing the capabilities of AI agents. AgentAuth provides a unified dashboard for complete visibility into user-connected accounts, enabling efficient monitoring and issue resolution. It also offers white-labeling options, allowing customization of the authentication process to align with product branding and OAuth developer applications. Committed to high-security standards, AgentAuth complies with SOC 2 Type II and GDPR, employing strong encryption for data protection.
    Starting Price: $99 per month
  • 36
    AgentForge

    AgentForge

    AgentForge is a comprehensive SaaS platform that streamlines the creation and customization of AI agents. It offers a fully integrated NextJS boilerplate, enabling users to build, deploy, and test AI applications efficiently. The platform includes pre-built AI agents, customizable graphs, reusable UI components, and an interactive playground for experimentation. AgentForge seamlessly integrates with popular AI tools such as LangChain, LangGraph, LangSmith, OpenAI, Groq, and Llama, providing all the necessary building blocks for AI application development. With features like observability through LangSmith and over 20 themes via daisyUI, it caters to both small projects and more advanced needs. The platform's straightforward pricing model involves a one-time payment for lifetime access to all features, updates, and improvements, eliminating recurring subscription fees. AgentForge is designed to simplify AI development, making it accessible for developers and businesses.
    Starting Price: $99 per month
  • 37
    AptlyStar.ai

    AptlyStar.ai

    AptlyStar.ai, powered by Aptly Technology Corporation, is an AI platform providing innovative solutions to enhance customer support and streamline workflow automation. With its intuitive tools, AptlyStar empowers businesses to build and deploy AI-powered agents, driving efficiency and productivity across teams.
    Starting Price: $20/month
  • 38
    AI Crypto-Kit
    AI Crypto-Kit empowers developers to build crypto agents by seamlessly integrating leading Web3 platforms like Coinbase, OpenSea, and more to automate real-world crypto/DeFi workflows. Developers can build AI-powered crypto automation in minutes, including applications such as trading agents, community reward systems, Coinbase wallet management, portfolio tracking, market analysis, and yield farming. The platform offers capabilities engineered for crypto agents, including fully managed agent authentication with support for OAuth, API keys, JWT, and automatic token refresh; optimization for LLM function calling to ensure enterprise-grade reliability; support for over 20 agentic frameworks like Pippin, LangChain, and LlamaIndex; integration with more than 30 Web3 platforms, including Binance, Aave, OpenSea, and Chainlink; and SDKs and APIs for agentic app interactions, available in Python and TypeScript.
  • 39
    ONNX

    ONNX

    ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers. Develop in your preferred framework without worrying about downstream inferencing implications. ONNX enables you to use your preferred framework with your chosen inference engine. ONNX makes it easier to access hardware optimizations. Use ONNX-compatible runtimes and libraries designed to maximize performance across hardware. Our active community thrives under our open governance structure, which provides transparency and inclusion. We encourage you to engage and contribute.
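    To make the framework-to-runtime flow concrete, the sketch below exports a toy PyTorch model to the common ONNX format and runs it with ONNX Runtime; the model, file name, and shapes are illustrative.

    ```python
    # Export a toy PyTorch model to ONNX, then run inference with ONNX Runtime.
    import numpy as np
    import torch
    import torch.nn as nn
    import onnxruntime as ort

    model = nn.Linear(3, 1)
    torch.onnx.export(model, torch.randn(1, 3), "linear.onnx",
                      input_names=["x"], output_names=["y"])

    session = ort.InferenceSession("linear.onnx")
    outputs = session.run(["y"], {"x": np.random.randn(1, 3).astype(np.float32)})
    print(outputs)
    ```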
  • 40
    Tune AI

    NimbleBox

    Leverage the power of custom models to build your competitive advantage. With our enterprise Gen AI stack, go beyond your imagination and offload manual tasks to powerful assistants instantly – the sky is the limit. For enterprises where data security is paramount, fine-tune and deploy generative AI models on your own cloud, securely.
  • 41
    Le Chat

    Mistral AI

    Le Chat is a conversational entry point to interact with the various models from Mistral AI. It offers a pedagogical and fun way to explore Mistral AI’s technology. Le Chat can use Mistral Large or Mistral Small under the hood, or a prototype model called Mistral Next, designed to be brief and concise. We are hard at work to make our models as useful and as little opinionated as possible, although much remains to be improved! Thanks to a tunable system-level moderation mechanism, Le Chat warns you in a non-invasive way when you’re pushing the conversation in directions where the assistant may produce sensitive or controversial content.
    Starting Price: Free
  • 42
    EvalsOne

    EvalsOne

    An intuitive yet comprehensive evaluation platform to iteratively optimize your AI-driven products. Streamline your LLMOps workflow, build confidence, and gain a competitive edge. EvalsOne is your all-in-one toolbox for optimizing your application evaluation process. Imagine a Swiss Army knife for AI, equipped to tackle any evaluation scenario you throw its way. It is suitable for crafting LLM prompts, fine-tuning RAG processes, and evaluating AI agents. Choose from rule-based or LLM-based approaches to automate the evaluation process, or integrate human evaluation seamlessly, leveraging the power of expert judgment. Applicable to all LLMOps stages, from development to production environments, EvalsOne provides an intuitive process and interface that empower teams across the AI lifecycle, from developers to researchers and domain experts. Easily create evaluation runs and organize them in levels. Quickly iterate and perform in-depth analysis through forked runs.
  • 43
    Mirascope

    Mirascope

    Mirascope is an open-source library built on Pydantic 2.0 for a clean and extensible prompt management and LLM application building experience. Mirascope is a powerful, flexible, and user-friendly library that simplifies the process of working with LLMs through a unified interface that works across various supported providers, including OpenAI, Anthropic, Mistral, Gemini, Groq, Cohere, LiteLLM, Azure AI, Vertex AI, and Bedrock. Whether you're generating text, extracting structured information, or developing complex AI-driven agent systems, Mirascope provides the tools you need to streamline your development process and create powerful, robust applications. Response models in Mirascope allow you to structure and validate the output from LLMs (a hedged sketch follows this entry). This feature is particularly useful when you need to ensure that the LLM's response adheres to a specific format or contains certain fields.
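    The sketch below illustrates the response-model idea from the entry above: a Pydantic model constrains and validates the LLM's output. The decorator-style call shown is an assumption based on Mirascope's documented interface and may differ between versions, so treat it as a sketch rather than exact usage.

    ```python
    # Extract structured data with a Pydantic response model (API surface is an assumption).
    from pydantic import BaseModel
    from mirascope.core import openai

    class Book(BaseModel):
        title: str
        author: str

    @openai.call("gpt-4o-mini", response_model=Book)
    def extract_book(text: str) -> str:
        return f"Extract the book's title and author from: {text}"

    book = extract_book("The Name of the Wind was written by Patrick Rothfuss.")
    print(book.title, "-", book.author)
    ```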
  • 44
    Literal AI

    Literal AI

    Literal AI is a collaborative platform designed to assist engineering and product teams in developing production-grade Large Language Model (LLM) applications. It offers a suite of tools for observability, evaluation, and analytics, enabling efficient tracking, optimization, and integration of prompt versions. Key features include multimodal logging, encompassing vision, audio, and video, prompt management with versioning and AB testing capabilities, and a prompt playground for testing multiple LLM providers and configurations. Literal AI integrates seamlessly with various LLM providers and AI frameworks, such as OpenAI, LangChain, and LlamaIndex, and provides SDKs in Python and TypeScript for easy instrumentation of code. The platform also supports the creation of experiments against datasets, facilitating continuous improvement and preventing regressions in LLM applications.
  • 45
    Langflow

    Langflow

    Langflow is a low-code AI builder designed to create agentic and retrieval-augmented generation applications. It offers a visual interface that allows developers to construct complex AI workflows through drag-and-drop components, facilitating rapid experimentation and prototyping. The platform is Python-based and agnostic to any model, API, or database, enabling seamless integration with various tools and stacks. Langflow supports the development of intelligent chatbots, document analysis systems, and multi-agent applications. It provides features such as dynamic input variables, fine-tuning capabilities, and the ability to create custom components. Additionally, Langflow integrates with numerous services, including Cohere, Bing, Anthropic, HuggingFace, OpenAI, and Pinecone, among others. Developers can utilize pre-built components or code their own, enhancing flexibility in AI application development. The platform also offers a free cloud service for quick deployment and testing.
  • 46
    FactSnap

    FactSnap

    FactSnap is a Chrome extension that serves as a reliable fact-checking companion, allowing users to verify information while browsing the web. By highlighting text and clicking the FactSnap icon, users receive immediate feedback on the accuracy of statements, categorized as accurate, suspicious, or incorrect. The tool leverages AI to compare claims against relevant online sources, promoting critical thinking and helping combat the spread of misinformation. FactSnap does not require users to create an account and is designed with user privacy in mind, processing and collecting data related to usage without storing personal data unless explicitly provided. It utilizes third-party tools such as Groq, Llama 3, exa.sh, and GPT-4o for fact-checking, each adhering to its own data privacy standards. Developed by Studio NAND, FactSnap is part of the AI4Democracy initiative, aiming to foster a healthier, more informed society.
  • 47
    Witsy

    Witsy

    Witsy is a desktop application that provides access to all generative AI models from top AI providers. It is a one-stop solution for all your generative AI needs. Witsy is a BYOK (Bring Your Own Keys) AI application, meaning you need to have API keys for the LLM providers you want to use. Alternatively, you can use Ollama to run models locally on your machine for free and use them in Witsy. Witsy itself does not collect or process any personal data. All your data is kept on your computer and never leaves it. Witsy does not use cookies or any other tracking mechanisms. All features of Witsy are accessible through keyboard shortcuts. You can trigger the chat or scratchpad, execute commands, and more. You can also customize the shortcuts to fit your needs. You can also talk to the AI model in the scratchpad for the ultimate document creation experience. Work with Witsy as if you were collaborating with a peer.
  • 48
    Orate

    Orate

    Orate is an AI toolkit for speech that enables developers to create realistic, human-like speech and transcribe audio through a unified API compatible with leading AI providers such as OpenAI, ElevenLabs, and AssemblyAI. The platform offers text-to-speech functionality, allowing users to convert text into lifelike speech using a simple API that integrates seamlessly with various providers. For instance, by importing the 'speak' function from Orate and the desired provider, developers can generate speech from text prompts. Additionally, Orate provides speech-to-text capabilities, transforming spoken words into meaningful text with unparalleled accuracy, speed, and reliability. By importing the 'transcribe' function and the chosen provider, users can transcribe audio files into text. The toolkit also supports speech-to-speech transformations, enabling users to change the voice of their audio using a straightforward voice-to-voice API compatible with leading AI providers.
  • 49
    Hunch

    Hunch

    Supercharge your work with Hunch, all the best AI models in one no-code app. Chain together AI tasks in a visual, no-code workspace, share your work as a tool for your whole team to use. Far more than just a workflow automation tool, Hunch is a visual canvas for thinking, experimenting, exploring and working with AI on complex tasks.
  • 50
    WordRaptor

    Curtis Duggan Software

    WordRaptor is your all-in-one SEO powerhouse that you buy once and use forever. From keyword generation to content strategy, batch drafting to image management, and manual or automated publishing, we've got you covered. We're putting privacy and power back in your hands, right on your Mac. Say goodbye to expensive monthly AI writer subscriptions. Generate content from titles, keywords, or descriptions; provide custom instructions for tailored content; automatically generate meta titles, descriptions, and Open Graph metadata; manage high-volume content with the article queue system; choose your preferred AI model with your own API key; store all sensitive data locally and securely on your Mac; and publish articles to Wix, WordPress, Ghost, Shopify, and Webflow.
    Starting Price: $39.99