Best Artificial Intelligence Software for Databricks Data Intelligence Platform - Page 3

Compare the Top Artificial Intelligence Software that integrates with Databricks Data Intelligence Platform as of October 2025 - Page 3

This is a list of Artificial Intelligence software that integrates with Databricks Data Intelligence Platform. Use the filters on the left to narrow the results to products that have integrations with Databricks Data Intelligence Platform. View the products that work with Databricks Data Intelligence Platform in the table below.

  • 1
    TIMi

    With TIMi, companies can capitalize on their corporate data to develop new ideas and make critical business decisions faster and easier than ever before. At the heart of TIMi’s integrated platform are its real-time auto-ML engine, 3D VR segmentation and visualization, and unlimited self-service business intelligence. TIMi is several orders of magnitude faster than any other solution at the two most important analytical tasks: handling datasets (data cleaning, feature engineering, creation of KPIs) and predictive modeling. TIMi is an “ethical solution”: no lock-in, just excellence. We guarantee you can work with complete peace of mind and without unexpected extra costs. Thanks to an original and unique software infrastructure, TIMi is optimized to offer you the greatest flexibility during the exploration phase and the highest reliability during the production phase. TIMi is the ultimate “playground” that lets your analysts test their craziest ideas.
  • 2
    Privacera

    At the intersection of data governance, privacy, and security, Privacera’s unified data access governance platform maximizes the value of data by providing secure data access control and governance across hybrid- and multi-cloud environments. The hybrid platform centralizes access and natively enforces policies across multiple cloud services—AWS, Azure, Google Cloud, Databricks, Snowflake, Starburst and more—to democratize trusted data enterprise-wide without compromising compliance with regulations such as GDPR, CCPA, LGPD, or HIPAA. Trusted by Fortune 500 customers across finance, insurance, retail, healthcare, media, and the public and federal sectors, Privacera is the industry’s leading data access governance platform, delivering unmatched scalability, elasticity, and performance. Headquartered in Fremont, California, Privacera was founded in 2016 by the creators of Apache Ranger™ and Apache Atlas™ to manage cloud data privacy and security.
  • 3
    MLflow

    MLflow is an open source platform for managing the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. MLflow currently offers four components: recording and querying experiments (code, data, config, and results); packaging data science code in a format that reproduces runs on any platform; deploying machine learning models in diverse serving environments; and storing, annotating, discovering, and managing models in a central repository. The MLflow Tracking component is an API and UI for logging parameters, code versions, metrics, and output files when running your machine learning code, and for later visualizing the results. MLflow Tracking lets you log and query experiments using the Python, REST, R, and Java APIs. An MLflow Project is a format for packaging data science code in a reusable and reproducible way, based primarily on conventions. In addition, the Projects component includes an API and command-line tools for running projects.
  • 4
    Tonic

    Tonic automatically creates mock data that preserves key characteristics of secure datasets so that developers, data scientists, and salespeople can work conveniently without breaching privacy. Tonic mimics your production data to create de-identified, realistic, and safe data for your test environments. With Tonic, your data is modeled from your production data to help you tell an identical story in your testing environments. Safe, useful data created to mimic your real-world data, at scale. Generate data that looks, acts, and feels just like your production data and safely share it across teams, businesses, and international borders. PII/PHI identification, obfuscation, and transformation. Proactively protect your sensitive data with automatic scanning, alerts, de-identification, and mathematical guarantees of data privacy. Advanced subsetting across diverse database types. Collaboration, compliance, and data workflows — perfectly automated.
  • 5
    NVIDIA RAPIDS
    The RAPIDS suite of software libraries, built on CUDA-X AI, gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces. RAPIDS also focuses on common data preparation tasks for analytics and data science. This includes a familiar DataFrame API that integrates with a variety of machine learning algorithms for end-to-end pipeline accelerations without paying typical serialization costs. RAPIDS also includes support for multi-node, multi-GPU deployments, enabling vastly accelerated processing and training on much larger dataset sizes. Accelerate your Python data science toolchain with minimal code changes and no new tools to learn. Increase machine learning model accuracy by iterating on models faster and deploying them more frequently.
  • 6
    Secuvy AI
    Secuvy is a next-generation cloud platform that automates data security, privacy compliance, and governance via AI-driven workflows, with best-in-class data intelligence, especially for unstructured data. It provides automated data discovery, customizable subject access requests, user validations, and data maps and workflows for privacy regulations such as CCPA, GDPR, LGPD, PIPEDA, and other global privacy laws. Data intelligence finds sensitive and private information across multiple data stores, at rest and in motion. In a world where data is growing exponentially, our mission is to help organizations protect their brand, automate processes, and improve trust with customers. With ever-expanding data sprawl, we aim to reduce the human effort, cost, and error involved in handling sensitive data.
  • 7
    OPAQUE (OPAQUE Systems)

    OPAQUE Systems offers a leading confidential AI platform that enables organizations to securely run AI, machine learning, and analytics workflows on sensitive data without compromising privacy or compliance. Their technology allows enterprises to unleash AI innovation risk-free by leveraging confidential computing and cryptographic verification, ensuring data sovereignty and regulatory adherence. OPAQUE integrates seamlessly into existing AI stacks via APIs, notebooks, and no-code solutions, eliminating the need for costly infrastructure changes. The platform provides verifiable audit trails and attestation for complete transparency and governance. Customers like Ant Financial have benefited by using previously inaccessible data to improve credit risk models. With OPAQUE, companies accelerate AI adoption while maintaining uncompromising security and control.
  • 8
    DataSentics

    Making data science and machine learning have a real impact on organizations. We are an AI product studio, a group of 100 experienced data scientists and data engineers with experience from both the agile world of digital start-ups and major international corporations. We don’t stop at nice slides and dashboards; the result that counts is an automated data solution in production, integrated inside a real process. Our people are not report clickers but data scientists and data engineers, with a strong focus on productionalizing data science solutions in the cloud with high standards of CI and automation. We are building the greatest concentration of the smartest and most creative data scientists and engineers by being the most exciting and fulfilling place for them to work in Central Europe, giving them the freedom to use our critical mass of expertise to find and iterate on the most promising data-driven opportunities, both for our clients and for our own products.
  • 9
    Wallaroo.AI

    Wallaroo facilitates the last mile of your machine learning journey, getting ML into your production environment to impact the bottom line with incredible speed and efficiency. Unlike Apache Spark or heavyweight containers, Wallaroo is purpose-built from the ground up to be the easy way to deploy and manage ML in production. Run ML at up to 80% lower cost and easily scale to more data, more models, and more complex models. Wallaroo is designed to let data scientists quickly and easily deploy their ML models against live data, whether to testing, staging, or production environments. Wallaroo supports the largest possible set of machine learning training frameworks. You’re free to focus on developing and iterating on your models while the platform takes care of deployment and inference at speed and scale.
  • 10
    Feast (Tecton)

    Make your offline data available for real-time predictions without having to build custom pipelines. Ensure data consistency between offline training and online inference, eliminating train-serve skew. Standardize data engineering workflows under one consistent framework. Teams use Feast as the foundation of their internal ML platforms. Feast doesn’t require the deployment and management of dedicated infrastructure; instead, it reuses existing infrastructure and spins up new resources when needed. Feast is a good fit if you are not looking for a managed solution and are willing to manage and maintain your own implementation, you have engineers able to support the implementation and management of Feast, you want to run pipelines that transform raw data into features in a separate system and integrate with it, or you have unique requirements and want to build on top of an open source solution.
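The train-serve consistency idea above can be illustrated with a conceptual sketch in plain Python (this is not the Feast API; the feature names and data are made up): a single feature definition is shared by the offline training path and the online serving path, so there is no second, drifting implementation to skew.

```python
def compute_features(raw: dict) -> dict:
    """One feature definition shared by offline training and online serving.

    Reusing the same transformation on both paths is what eliminates
    train-serve skew in a feature store.
    """
    return {
        "avg_order_value": raw["total_spent"] / max(raw["order_count"], 1),
        "is_frequent_buyer": raw["order_count"] >= 10,
    }

# Offline: build a training set from historical records.
history = [
    {"total_spent": 500.0, "order_count": 10},
    {"total_spent": 90.0, "order_count": 3},
]
training_rows = [compute_features(r) for r in history]

# Online: the same function serves a live request.
online_row = compute_features({"total_spent": 500.0, "order_count": 10})

# Identical inputs produce identical feature values on both paths.
assert online_row == training_rows[0]
```

A feature store generalizes this: the definition is registered once, materialized to an offline store for training and an online store for low-latency lookup.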
  • 11
    Amazon SageMaker Feature Store
    Amazon SageMaker Feature Store is a fully managed, purpose-built repository to store, share, and manage features for machine learning (ML) models. Features are inputs to ML models used during training and inference. For example, in an application that recommends a music playlist, features could include song ratings, listening duration, and listener demographics. Features are used repeatedly by multiple teams and feature quality is critical to ensure a highly accurate model. Also, when features used to train models offline in batch are made available for real-time inference, it’s hard to keep the two feature stores synchronized. SageMaker Feature Store provides a secured and unified store for feature use across the ML lifecycle. Store, share, and manage ML model features for training and inference to promote feature reuse across ML applications. Ingest features from any data source including streaming and batch such as application logs, service logs, clickstreams, sensors, etc.
  • 12
    Amazon SageMaker Data Wrangler
    Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for machine learning (ML) from weeks to minutes. With SageMaker Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow (including data selection, cleansing, exploration, visualization, and processing at scale) from a single visual interface. You can use SQL to select the data you want from a wide variety of data sources and import it quickly. Next, you can use the Data Quality and Insights report to automatically verify data quality and detect anomalies, such as duplicate rows and target leakage. SageMaker Data Wrangler contains over 300 built-in data transformations so you can quickly transform data without writing any code. Once you have completed your data preparation workflow, you can scale it to your full datasets using SageMaker data processing jobs, then train, tune, and deploy your models.
  • 13
    Sana (Sana Labs)

    One home for all your learning and knowledge. Sana is an AI-powered learning platform that empowers teams to find, share, and harness the knowledge they need to achieve their missions. Give everyone a more immersive learning experience by blending live collaborative sessions with personalized self-paced courses, all from one platform. Lower the barrier to sharing knowledge by letting Sana Assistant generate questions, explanations, images, and even entire courses from scratch. Empower anyone to keep up the energy and engagement with interactive quizzes, Q&A, polls, sticky notes, reflection cards, recordings, and more. Integrate Sana with all your team apps and make your entire company’s knowledge searchable in under 100ms. GitHub, Google Workspace, Notion, Slack, Salesforce. You name it, Sana can query it.
  • 14
    Robust Intelligence

    The Robust Intelligence Platform integrates seamlessly into your ML lifecycle to eliminate model failures. The platform detects your model’s vulnerabilities, prevents aberrant data from entering your AI system, and detects statistical data issues like drift. At the core of our test-based approach is a single test. Each test measures your model’s robustness to a specific type of production model failure. Stress Testing runs hundreds of these tests to measure model production readiness. The results of these tests are used to auto-configure a custom AI Firewall that protects the model against the specific forms of failure to which a given model is susceptible. Finally, Continuous Testing runs these tests during production, providing automated root cause analysis informed by the underlying cause of any single test failure. Using all three elements of the Robust Intelligence platform together helps ensure ML Integrity.
  • 15
    TextQL

    The platform indexes BI tools and semantic layers, documents data in dbt, and uses OpenAI and language models to provide self-serve power analytics. With TextQL, non-technical users can easily and quickly work with data by asking questions in their work context (Slack/Teams/email) and getting automated answers quickly and safely. The platform also leverages NLP and semantic layers, including the dbt Labs semantic layer, to ensure reasonable solutions. TextQL's elegant handoffs to human analysts, when required, dramatically simplify the whole question-to-answer process with AI. At TextQL, our mission is to empower business teams to access the data that they're looking for in less than a minute. To accomplish this, we help data teams surface and create documentation for their data so that business teams can trust that their reports are up to date.
  • 16
    Qualytics

    Helping enterprises proactively manage their full data quality lifecycle through contextual data quality checks, anomaly detection, and remediation. Expose anomalies and metadata to help teams take corrective actions. Automatically trigger remediation workflows to resolve errors quickly and efficiently. Maintain high data quality and prevent errors from affecting business decisions. The SLA chart provides an overview of SLAs, including the total number of SLA monitoring checks performed and any violations that have occurred. This chart can help you identify areas of your data that may require further investigation or improvement.
  • 17
    LlamaIndex

    LlamaIndex is a “data framework” to help you build LLM apps. It is a simple, flexible framework for connecting custom data sources to large language models, providing the key tools to augment your LLM applications with data. Connect semi-structured data from APIs like Slack, Salesforce, Notion, etc. Connect your existing data sources and data formats (APIs, PDFs, documents, SQL, etc.) for use with a large language model application. Store and index your data for different use cases, and integrate with downstream vector store and database providers. LlamaIndex provides a query interface that accepts any input prompt over your data and returns a knowledge-augmented response. Connect unstructured sources such as documents, raw text files, PDFs, videos, and images, and easily integrate structured data sources such as Excel and SQL. LlamaIndex also provides ways to structure your data (indices, graphs) so that it can be easily used with LLMs.
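The query interface described above follows the retrieval-augmented pattern: retrieve the most relevant indexed documents for a prompt, then hand them to the LLM as context. A toy sketch of the retrieval step in plain Python (not LlamaIndex's actual API; word overlap stands in for vector similarity, and the document names are made up):

```python
def build_index(documents: dict) -> dict:
    """Index each document as a bag of lowercase words."""
    return {name: set(text.lower().split()) for name, text in documents.items()}

def retrieve(index: dict, query: str, top_k: int = 1) -> list:
    """Rank documents by word overlap with the query (a stand-in for
    embedding similarity in a real vector index)."""
    words = set(query.lower().split())
    ranked = sorted(index, key=lambda name: len(index[name] & words), reverse=True)
    return ranked[:top_k]

docs = {
    "billing.txt": "invoices are emailed on the first business day of each month",
    "onboarding.txt": "new employees complete security training in week one",
}
index = build_index(docs)

# The retrieved documents' text would be prepended to the LLM prompt
# to produce the knowledge-augmented response.
context = retrieve(index, "when are invoices emailed")
print(context)  # ['billing.txt']
```

In LlamaIndex itself, the indexing, embedding, retrieval, and LLM call are handled by the framework's index and query-engine abstractions.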
  • 18
    Wherobots

    Wherobots enables users to easily develop, test, and deploy geospatial data analytics and AI pipelines within their existing data stack, deployed in the cloud. Users do not have to worry about resource administration, workload scalability, or geospatial processing support and optimization. Connect your Wherobots account to the cloud database where your data is stored using our SaaS web interface. Develop your geospatial data science, machine learning, or analytics application using the Sedona Developer Tool. Schedule automatic deployment of your geospatial pipeline to the cloud data platform and monitor its performance in Wherobots. Consume the outcome of your geospatial analytics task, either through a single geospatial map visualization or through API calls.
  • 19
    Acryl Data

    No more data catalog ghost towns. Acryl Cloud drives fast time-to-value via Shift Left practices for data producers and an intuitive UI for data consumers. Continuously detect data quality incidents in real-time, automate anomaly detection to prevent breakages, and drive fast resolution when they do occur. Acryl Cloud supports both push-based and pull-based metadata ingestion for easy maintenance, ensuring information is trustworthy, up-to-date, and definitive. Data should be operational. Go beyond simple visibility and use automated Metadata Tests to continuously expose data insights and surface new areas for improvement. Reduce confusion and accelerate resolution with clear asset ownership, automatic detection, streamlined alerts, and time-based lineage for tracing root causes.
  • 20
    Modelbit

    Don't change your day-to-day; Modelbit works with Jupyter Notebooks and any other Python environment. Simply call modelbit.deploy to deploy your model, and let Modelbit carry it — and all its dependencies — to production. ML models deployed with Modelbit can be called directly from your warehouse as easily as calling a SQL function. They can also be called as a REST endpoint directly from your product. Modelbit is backed by your git repo, whether GitHub, GitLab, or home-grown. Code review, CI/CD pipelines, PRs and merge requests: bring your whole git workflow to your Python ML models. Modelbit integrates seamlessly with Hex, DeepNote, Noteable, and more; take your model straight from your favorite cloud notebook into production. Sick of VPC configurations and IAM roles? Seamlessly redeploy your SageMaker models to Modelbit and immediately reap the benefits of Modelbit's platform with the models you've already built.
  • 21
    Virtualitics

    Embedded AI and rich 3D visualizations empower analysts to deliver transformational business strategies, so you never miss the critical insights in your data again. Virtualitics’ Intelligent Exploration empowers analysts with embedded AI-guided exploration, automatically surfacing insights that drive transformative action. Understand what you’re seeing with AI-guided exploration, explained in plain language so nothing gets missed. Drill into all the relevant data, no matter its type or complexity, to discover key relationships in seconds. Increase engagement and understanding with rich 3D visualizations that bring data stories to life. Analyze data from new angles with 3D and VR data visualizations that make deciphering complex findings easier. Share strategic insight with annotated discoveries and clear explanations for all stakeholders.
  • 22
    Cranium

    The AI revolution is here. Innovation is moving at light speed, and the regulation landscape is constantly evolving. How can you make sure that your AI systems — and those of your vendors — remain secure, trustworthy, and compliant? Cranium helps cybersecurity and data science teams understand everywhere that AI is impacting their systems, data or services. Secure your organization’s AI and machine learning systems to ensure they are compliant and trustworthy, without interrupting your workflow. Protect against adversarial threats without impacting how your team trains, tests and deploys AI models. Increase AI regulatory awareness and alignment within your organization. Showcase the security and trustworthiness of your AI systems.
  • 23
    Qlik Staige (QlikTech)

    Harness the power of Qlik® Staige™ to make AI real by delivering a trusted data foundation, automation, actionable predictions, and company-wide impact. AI isn’t just experiments and initiatives — it’s an entire ecosystem of files, scripts, and results. Wherever your investments, we’ve partnered with top sources to bring you integrations that save time, enable management, and validate quality. Automate the delivery of real-time data into AWS data warehouses or data lakes, and make it easily accessible through a governed catalog. Through our new integration with Amazon Bedrock, you can easily connect to foundation large language models (LLMs) including AI21 Labs, Amazon Titan, Anthropic, Cohere, and Meta. Seamless integration with Amazon Bedrock makes it easier for AWS customers to leverage large language models with analytics for AI-driven insights.
  • 24
    Validio

    See how your data assets are used, and get important insights about them such as popularity, utilization, quality, and schema coverage. Find and filter the data you need based on metadata tags and descriptions. Drive data governance and ownership across your organization. Stream-lake-warehouse lineage facilitates data ownership and collaboration, and an automatically generated field-level lineage map helps you understand the entire data ecosystem. Anomaly detection learns from your data and seasonality patterns, with automatic backfill from historical data. Machine learning-based thresholds are trained per data segment on actual data, not metadata alone.
  • 25
    BREVIAN

    AI agent that understands data from your ticketing systems, CRMs, knowledge bases and release notes, and assists your support teams to resolve tickets faster. Automate routing tickets to the right team based on ticket content. Go from reactive support to proactive support by detecting emerging issues earlier. Categorize tickets into common topics and trends, and improve knowledge bases to reduce ticket volume. BREVIAN AI agents have security and safety built-in, so you don’t need to integrate different products to be enterprise-ready. Implement consistent controls independent of your data and models. Business teams can even combine different agents to build an intelligent network across their enterprise data. Knowledge extraction for structured and unstructured data including images.
  • 26
    Fluent

    Self-serve your company's data insights with AI. Fluent is your AI data analyst; it helps you explore your data and uncover the questions you should be asking. No complicated UI and no learning curve: just type in your question and Fluent will do the rest. Fluent works with you to clarify your question, so you can get the insight you need. Fluent lets you build a shared understanding of your data, with real-time collaboration and a shared data dictionary. No more silos, no more confusion, and no more disagreements on how to define revenue. Fluent integrates with Slack and Teams, so you can get data insights without leaving your chat. Fluent gives verifiable output with a visual display of the exact SQL journey and AI logic for each query. Curate tailored datasets with clear usage guidelines and quality control, enhancing data integrity and access management.
  • 27
    Quest

    Empower growth & marketing teams to boost engagement and conversion 10x faster without data or engineering teams. Enhance your app with AI-driven personalized UI, seamlessly integrated with your data stack for a tailored user experience. Tailoring the onboarding process can significantly improve user activation. By considering factors like user roles, company type, and intent, we can create custom quick-start guides and tours. By allowing users to customize menu items or shop items based on their interests and behavior, we can create a personalized browsing experience. By leveraging insights such as purchase history, browsing behavior, and demographics, we can deliver targeted promotions and discounts that resonate with each user. Our analytical models find common traits for users who are not active and create thousands of segments to activate them. Easily find winning variants among millions of variants to power users across the customer journey.
  • 28
    Velotix

    Velotix empowers organizations to maximize the value of their data while ensuring security and compliance in a rapidly evolving regulatory landscape. The Velotix Data Security Platform offers automated policy management, dynamic access controls, and comprehensive data discovery, all driven by advanced AI. With seamless integration across multi-cloud environments, Velotix enables secure, self-service data access, optimizing data utilization without compromising on governance. Trusted by leading enterprises across financial services, healthcare, telecommunications, and more, Velotix is reshaping data governance for the ‘need to share’ era.
  • 29
    BeagleGPT

    Proactive data and insights nudges for each user according to their usage pattern, automated heuristic rules, data updates, and user-cohort learnings. The semantic layer is finetuned for organizations with their nomenclatures and terminologies. User roles and preferences are considered while building responses for them. Advanced modules to answer how, why and so what scenarios. A single subscription covers the entire organization, truly propelling data democratization. Beagle is built to nudge you and your team toward data-driven decision-making. It is your personal data assistant that delivers all data-related updates and alerts in your message box. With in-built self-service functionalities, Beagle reduces the total cost of ownership by huge margins. Beagle connects with other dashboards to enhance their power and increase their reach in the organization.
  • 30
    OneTrust Data & AI Governance
    OneTrust's Data & AI Governance solution is an integrated platform designed to establish data and AI policies by consolidating insights from data, metadata, models, and risk assessments, providing comprehensive visibility into data products and AI development. It accelerates data-driven innovation by increasing the speed of approval for data products and AI systems. The solution enhances business continuity through continuous monitoring of data and AI systems, ensuring regulatory compliance, effective risk management, and reduced application downtime. It simplifies compliance by centrally defining, orchestrating, and natively enforcing data policies. Key features include consistent scanning, classification, and tagging of sensitive data to ensure the reliable application of data governance policies across structured and unstructured sources. It promotes responsible data usage by enforcing role-based access within a robust data governance framework.