Compare the Top Data Catalog Software that integrates with Apache Kafka as of October 2025

This is a list of Data Catalog software that integrates with Apache Kafka. Use the filters on the left to narrow the results further, and view the products that work with Apache Kafka in the table below.

What is Data Catalog Software for Apache Kafka?

Data catalog software is a tool used to organize, manage, and provide easy access to an organization's data assets. It helps businesses create a centralized inventory of all available data, such as databases, datasets, reports, and documents, allowing users to search, classify, and understand their data assets more efficiently. Features often include metadata management, data lineage tracking, data governance, collaboration tools, and integration with data management systems. By providing a clear overview of data sources and their relationships, data catalog software facilitates data discovery, improves data quality, ensures compliance, and enhances collaboration across teams. Compare and read user reviews of the best Data Catalog software for Apache Kafka currently available using the table below. This list is updated regularly.
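
To make the Kafka angle concrete, the short Python sketch below shows one way a catalog could inventory Kafka topics as data assets: it lists the topics in a cluster with the confluent-kafka AdminClient and records the kind of basic metadata (asset name, partition count, owner, tags) a catalog would track. The broker address, the entry fields, and the naming are illustrative assumptions, not any particular product's API.

```python
# Illustrative sketch: inventory Kafka topics as simple data catalog entries.
# Requires the confluent-kafka package. The broker address and the entry
# fields (owner, tags) are assumptions for illustration, not a product's API.
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # assumed broker
cluster = admin.list_topics(timeout=10)  # fetch cluster metadata

catalog_entries = []
for name, topic in cluster.topics.items():
    if name.startswith("_"):          # skip internal topics like __consumer_offsets
        continue
    catalog_entries.append({
        "asset_type": "kafka_topic",
        "name": name,
        "partitions": len(topic.partitions),
        "owner": "unassigned",        # governance fields a catalog would track
        "tags": [],                   # e.g. ["pii", "finance"] once classified
    })

for entry in catalog_entries:
    print(entry)
```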

  • 1
    Y42

    Datos-Intelligence GmbH

    Y42 is the first fully managed Modern DataOps Cloud. It is purpose-built to help companies easily design production-ready data pipelines on top of their Google BigQuery or Snowflake cloud data warehouse. Y42 provides native integration of best-of-breed open-source data tools, comprehensive data governance, and better collaboration for data teams. With Y42, organizations enjoy increased accessibility to data and can make data-driven decisions quickly and efficiently.
  • 2
    Databricks Data Intelligence Platform
    The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It's built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the unique semantics of your data. The winners in every industry will be data and AI companies; from ETL to data warehousing to generative AI, Databricks helps you simplify and accelerate your data and AI goals. By combining generative AI with the unification benefits of a lakehouse, the Data Intelligence Engine lets the platform automatically optimize performance and manage infrastructure in ways unique to your business. It also understands your organization's language, so searching for and discovering new data is as easy as asking a question the way you would ask a coworker.
  • 3
    Secuvy AI
    Secuvy is a next-generation cloud platform that automates data security, privacy compliance, and governance via AI-driven workflows, with best-in-class data intelligence, especially for unstructured data. It provides automated data discovery, customizable subject access requests, user validations, and data maps and workflows for privacy regulations such as CCPA, GDPR, LGPD, PIPEDA, and other global privacy laws. Its data intelligence finds sensitive and private information across multiple data stores, at rest and in motion. In a world where data is growing exponentially, our mission is to help organizations protect their brand, automate processes, and improve trust with customers. With ever-expanding data sprawl, we aim to reduce the human effort, cost, and error involved in handling sensitive data.
  • 4
    Kyrah

    Kyrah facilitates enterprise data management across your cloud data estate: data exploration, storage asset grouping, security policy enforcement, and permissions management. It makes all changes transparent, secure, and GDPR-compliant with its easily configurable and fully automated change request mechanism, and it offers full auditability of all events through an activity log. Kyrah delivers seamless self-service data provisioning with a shopping-cart-style checkout, and a single pane of glass for your enterprise data estate via a storage map covering data usage heatmaps, data sensitivity, and more. It enables faster time to market by unifying people, process, and data provisioning in a single platform and user interface, and it enforces data compliance so organizations can meet data sovereignty requirements and avoid potential fines.
  • 5
    rudol

    Unify your data catalog, reduce communication overhead, and enable quality control for any member of your company, all without deploying or installing anything. rudol is a data quality platform that helps companies understand all their data sources, no matter where they come from; reduces excessive communication in reporting processes or urgent requests; and enables data quality diagnosis and issue prevention across the whole company through easy steps. With rudol, each organization can add data sources from a growing list of providers and BI tools with a standardized structure, including MySQL, PostgreSQL, Airflow, Redshift, Snowflake, Kafka, S3*, BigQuery*, MongoDB*, Tableau*, PowerBI*, Looker* (* in development). So, regardless of where the data comes from, people can understand where and how it is stored, read and collaborate on its documentation, or easily contact data owners using our integrations.
    Starting Price: $0
  • 6
    Acryl Data

    No more data catalog ghost towns. Acryl Cloud drives fast time-to-value via Shift Left practices for data producers and an intuitive UI for data consumers. Continuously detect data quality incidents in real time, automate anomaly detection to prevent breakages, and drive fast resolution when incidents do occur. Acryl Cloud supports both push-based and pull-based metadata ingestion for easy maintenance, ensuring information is trustworthy, up-to-date, and definitive. Data should be operational: go beyond simple visibility and use automated Metadata Tests to continuously expose data insights and surface new areas for improvement. Reduce confusion and accelerate resolution with clear asset ownership, automatic detection, streamlined alerts, and time-based lineage for tracing root causes.
  • 7
    Validio

    See how your data assets are used and get important insights such as popularity, utilization, quality, and schema coverage. Find and filter the data you need based on metadata tags and descriptions. Drive data governance and ownership across your organization, with stream-lake-warehouse lineage to facilitate data ownership and collaboration and an automatically generated field-level lineage map to understand the entire data ecosystem. Anomaly detection learns from your data and its seasonality patterns, with automatic backfill from historical data. Machine-learning-based thresholds are trained per data segment on actual data rather than metadata only.
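
As a rough illustration of the per-segment threshold idea described above (not Validio's actual implementation), the sketch below learns a rolling mean and standard deviation from each segment's own history with pandas and flags values outside mean ± 3 standard deviations; the example data, column names, and window size are assumptions.

```python
# Illustrative sketch of per-segment anomaly thresholds (not Validio's actual
# implementation): compare each point against a rolling mean/std learned from
# the segment's own history and flag values outside mean +/- k standard deviations.
import pandas as pd

# Assumed example data: one metric observed over time for two segments.
df = pd.DataFrame({
    "segment": ["eu", "eu", "eu", "eu", "us", "us", "us", "us"],
    "value":   [100, 102,  98, 250,  10,  11,   9,  10],
})

def flag_anomalies(group: pd.DataFrame, window: int = 3, k: float = 3.0) -> pd.DataFrame:
    # Use only prior observations (shift) so an outlier cannot inflate its own threshold.
    history = group["value"].shift(1).rolling(window, min_periods=2)
    mean, std = history.mean(), history.std()
    group = group.copy()
    group["anomaly"] = (group["value"] - mean).abs() > k * std
    return group

# Thresholds are learned separately for each segment.
flagged = df.groupby("segment", group_keys=False).apply(flag_anomalies)
print(flagged)
```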