Business Software for StackAI - Page 3

Top Software that integrates with StackAI as of October 2025 - Page 3

  • 1
    HubSpot Customer Platform
    Put your customers first and grow better with HubSpot’s AI-powered customer platform. Connect your front-office teams through a complete view of the customer journey, use AI-powered tools to deliver a seamless customer experience, and easily adapt to emerging industry trends and technologies. Traditional CRMs alone aren't enough to drive growth. Most aren’t designed for customer connection, which is critical in an AI-driven world where customers can explore, evaluate, and buy with efficiency. HubSpot’s customer platform is much more: it’s powered by a Smart CRM that combines customer data with AI to help you adapt, products for engaging customers across the entire journey, and an ecosystem of integrations, education, and community. It’s built for businesses to connect with customers and grow better.
    Starting Price: Free
  • 2
    Braze

    Braze is a customer engagement platform that delivers messaging experiences across push, email, apps, and more. Braze is built specifically for today’s mobile-first world and tomorrow’s ambient computing future. Braze is set apart as the platform that allows for real-time and continuous data streaming, replacing decades-old databases that aren’t built for today’s on-demand, always-connected customer. With data, technology, and teams working together in unison, the Braze platform makes marketing more authentic, brands more human, and customers more satisfied with every experience. Braze is a venture-backed company with hundreds of employees and offices in New York City, San Francisco, London, and Singapore. It has been recognized at #85 on the Forbes Cloud 100, ranked #225 on Inc.'s 500 Fastest-Growing Private Companies, listed at #21 on the Deloitte Technology Fast 500, and named by The New York Times among “the next wave of ‘unicorn’ start-ups.”
  • 3
    Veeva Vault

    Veeva Systems

    Bridging content gaps across the enterprise for global harmonization, while supporting local autonomy. Veeva Vault is a true cloud enterprise content management platform and suite of applications specifically built for life sciences. Traditionally, companies have had to deploy applications for content and separate applications to manage associated data. Veeva Vault is the only content management platform with the unique capability to manage both content and data. Companies can now eliminate system, site, and country silos and streamline end-to-end processes across commercial, medical, clinical, regulatory, quality, and safety. Because all Vault applications are built on the same core platform, companies gain additional efficiency and compliance through the streamlined flow of documents across regions and departments. Content stays accessible, current, and in context across the entire development and commercial lifecycle.
  • 4
    PostgreSQL

    PostgreSQL Global Development Group

    PostgreSQL is a powerful, open-source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. The official documentation provides a wealth of information on installing and using PostgreSQL. The open-source community offers many helpful places to become familiar with PostgreSQL, discover how it works, and find career opportunities. Learn more on how to engage with the community. The PostgreSQL Global Development Group has released an update to all supported versions of PostgreSQL, including 15.1, 14.6, 13.9, 12.13, 11.18, and 10.23. This release fixes 25 bugs reported over the last several months. This is the final release of PostgreSQL 10, which will no longer receive security and bug fixes. If you are running PostgreSQL 10 in a production environment, we suggest that you make plans to upgrade.
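The upgrade guidance in that release note can be expressed programmatically; a minimal sketch using the version numbers from the entry above (the advice logic is illustrative, not an official PostgreSQL tool):

```python
# Latest minor releases per the update announcement above.
LATEST_MINOR = {15: "15.1", 14: "14.6", 13: "13.9",
                12: "12.13", 11: "11.18", 10: "10.23"}
EOL_MAJORS = {10}  # PostgreSQL 10 no longer receives security or bug fixes

def upgrade_advice(version: str) -> str:
    """Return a short recommendation for a running server version."""
    major = int(version.split(".")[0])
    if major in EOL_MAJORS:
        return f"{version}: end of life; plan an upgrade to a supported major version"
    latest = LATEST_MINOR.get(major)
    if latest is None:
        return f"{version}: not covered by this release"
    if version == latest:
        return f"{version}: up to date"
    return f"{version}: apply the minor update to {latest}"
```

For example, `upgrade_advice("14.2")` recommends moving to 14.6, while any 10.x version triggers the end-of-life warning.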
  • 5
    Together AI

    Whether prompt engineering, fine-tuning, or training, we are ready to meet your business demands. Easily integrate your new model into your production application using the Together Inference API. With the fastest performance available and elastic scaling, Together AI is built to scale with your needs as you grow. Inspect how models are trained and what data is used to increase accuracy and minimize risks. You own the model you fine-tune, not your cloud provider. Change providers for whatever reason, including price changes. Maintain complete data privacy by storing data locally or in our secure cloud.
    Starting Price: $0.0001 per 1k tokens
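At the listed starting price, token costs are easy to estimate; a minimal sketch (the per-token price comes from the listing above; the request body assumes an OpenAI-style chat payload, and "example-model" is a placeholder, not a real Together model identifier):

```python
PRICE_PER_1K_TOKENS = 0.0001  # starting price from the listing above, in USD

def estimated_cost(total_tokens: int) -> float:
    """Estimated charge in USD for a given token count at the starting price."""
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

def build_chat_request(prompt: str, model: str = "example-model") -> dict:
    """Assemble an OpenAI-style chat request body (placeholder model name)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
```

At this rate, a million tokens would cost roughly $0.10, which is why per-1k pricing is the conventional unit for inference APIs.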
  • 6
    Groq

    Groq is on a mission to set the standard for GenAI inference speed, helping real-time AI applications come to life today. The LPU inference engine, with LPU standing for Language Processing Unit, is a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as LLM-based AI language applications. The LPU is designed to overcome the two LLM bottlenecks: compute density and memory bandwidth. An LPU has greater compute capacity than a GPU or CPU for LLM workloads, which reduces the time spent per generated word and allows sequences of text to be produced much faster. Additionally, eliminating external memory bottlenecks enables the LPU inference engine to deliver orders-of-magnitude better performance on LLMs compared to GPUs. Groq supports standard machine learning frameworks such as PyTorch, TensorFlow, and ONNX for inference.
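The memory-bandwidth bottleneck mentioned above can be made concrete with back-of-the-envelope arithmetic: autoregressive decoding reads every model weight once per generated token, so memory bandwidth caps tokens per second. A minimal sketch with illustrative numbers (these are generic figures, not Groq hardware specifications):

```python
def max_tokens_per_second(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Upper bound on decode speed when each token must read all weights once."""
    return bandwidth_bytes_per_s / model_bytes

# Illustrative: a 7B-parameter model in 16-bit weights (~14 GB) on hardware
# with 1 TB/s of memory bandwidth is capped near 71 tokens/s per stream,
# regardless of how much raw compute is available.
rate = max_tokens_per_second(14e9, 1e12)
```

This is why raising effective memory bandwidth, rather than adding compute alone, is the lever that speeds up sequential LLM generation.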
  • 7
    Claude Haiku 3
    Claude Haiku 3 is the fastest and most affordable model in its intelligence class. With state-of-the-art vision capabilities and strong performance on industry benchmarks, Haiku is a versatile solution for a wide range of enterprise applications. The model is now available alongside Sonnet and Opus in the Claude API and on claude.ai for our Claude Pro subscribers.
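Calling Haiku through the Claude API goes via the Messages endpoint; a minimal request-building sketch (no network call is made here, and the model identifier and API version string are assumptions based on Anthropic's published Messages API, not taken from this listing):

```python
def build_messages_request(prompt: str, api_key: str) -> dict:
    """Assemble the URL, headers, and body for a Claude Messages API call."""
    return {
        "url": "https://api.anthropic.com/v1/messages",
        "headers": {
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",  # assumed API version string
            "content-type": "application/json",
        },
        "body": {
            "model": "claude-3-haiku-20240307",  # assumed Haiku 3 identifier
            "max_tokens": 256,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

The returned dict can be handed to any HTTP client; the same request shape works for the Sonnet and Opus models by swapping the model identifier.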
  • 8
    Claude Sonnet 4.5
    Claude Sonnet 4.5 is Anthropic’s latest frontier model, designed to excel in long-horizon coding, agentic workflows, and intensive computer use while maintaining safety and alignment. It achieves state-of-the-art performance on the SWE-bench Verified benchmark (for software engineering) and leads on OSWorld (a computer use benchmark), with the ability to sustain focus over 30 hours on complex, multi-step tasks. The model introduces improvements in tool handling, memory management, and context processing, enabling more sophisticated reasoning, better domain understanding (from finance and law to STEM), and deeper code comprehension. It supports context editing and memory tools to sustain long conversations or multi-agent tasks, and allows code execution and file creation within Claude apps. Sonnet 4.5 is deployed at AI Safety Level 3 (ASL-3), with classifiers protecting against inputs or outputs tied to risky domains, and includes mitigations against prompt injection.
  • 9
    Oracle Database
    Oracle database products offer customers cost-optimized and high-performance versions of Oracle Database, the world's leading converged, multi-model database management system, as well as in-memory, NoSQL, and MySQL databases. Oracle Autonomous Database, available on-premises via Oracle Cloud@Customer or in the Oracle Cloud Infrastructure, enables customers to simplify relational database environments and reduce management workloads. Oracle Autonomous Database eliminates the complexity of operating and securing Oracle Database while giving customers the highest levels of performance, scalability, and availability. Oracle Database can be deployed on-premises when customers have data residency and network latency concerns. Customers with applications that are dependent on specific Oracle database versions have complete control over the versions they run and when those versions change.
  • 10
    Cerebras

    We’ve built the fastest AI accelerator, based on the largest processor in the industry, and made it easy to use. With Cerebras, blazing fast training, ultra low latency inference, and record-breaking time-to-solution enable you to achieve your most ambitious AI goals. How ambitious? We make it not just possible, but easy to continuously train language models with billions or even trillions of parameters – with near-perfect scaling from a single CS-2 system to massive Cerebras Wafer-Scale Clusters such as Andromeda, one of the largest AI supercomputers ever built.