Best Data Management Software for DataHub - Page 2

Compare the Top Data Management Software that integrates with DataHub as of October 2025 - Page 2

This is a list of Data Management software that integrates with DataHub. Use the filters on the left to narrow the results to products that have integrations with DataHub, and view the products that work with DataHub in the list below.

  • 1
    Apache Superset
    Superset is fast, lightweight, intuitive, and loaded with options that make it easy for users of all skill sets to explore and visualize their data, from simple line charts to highly detailed geospatial charts. Superset can connect to any SQL-based data source through SQLAlchemy, including modern cloud-native databases and engines at petabyte scale. It is highly scalable, leveraging the power of your existing data infrastructure without requiring yet another ingestion layer.
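Superset registers each database through a SQLAlchemy connection URI. A minimal sketch of the kind of URI it consumes, assuming hypothetical hosts and credentials (any URI SQLAlchemy can parse is a candidate):

```python
from sqlalchemy import create_engine

# Hosts and credentials below are hypothetical placeholders.
example_uris = [
    "postgresql://analyst:secret@warehouse.example.com:5432/analytics",
    "mysql://analyst:secret@db.example.com:3306/sales",
    "sqlite:////tmp/superset_demo.db",
]

# A quick way to sanity-check that a URI is well-formed.
engine = create_engine(example_uris[-1])
print(engine.url.get_backend_name())  # "sqlite"
```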
  • 2
    Apache NiFi

    Apache Software Foundation

    An easy to use, powerful, and reliable system to process and distribute data. Apache NiFi supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic. Its high-level capabilities include a web-based user interface offering a seamless experience between design, control, feedback, and monitoring; highly configurable, loss-tolerant, low-latency, high-throughput operation with dynamic prioritization; flows that can be modified at runtime, with back pressure; data provenance that tracks dataflow from beginning to end; and a design built for extension, so you can build your own processors and more, enabling rapid development and effective testing. NiFi is secure, with SSL, SSH, HTTPS, encrypted content, multi-tenant authorization, internal authorization/policy management, and much more. NiFi comprises a number of web applications (web UI, web API, documentation, custom UIs, etc.), so you'll need to set up your mapping to the root path.
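Because NiFi exposes its functionality over a web API alongside the web UI, flows can be monitored programmatically. A minimal sketch, assuming a hypothetical host; /nifi-api/flow/status is the controller status endpoint in recent NiFi releases:

```python
import requests

# Hypothetical NiFi host; all web applications hang off the root path,
# with the REST API served under /nifi-api.
BASE = "https://nifi.example.com:8443/nifi-api"

# Controller-level status: active threads, queued flowfiles, bulletins.
resp = requests.get(f"{BASE}/flow/status", timeout=10)
resp.raise_for_status()
print(resp.json())
```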
  • 3
    Apache Hudi

    Apache Software Foundation

    Hudi is a rich platform to build streaming data lakes with incremental data pipelines on a self-managing database layer, while being optimized for lake engines and regular batch processing. Hudi maintains a timeline of all actions performed on the table at different instants of time, which helps provide instantaneous views of the table while also efficiently supporting retrieval of data in the order of arrival. A Hudi instant consists of an action type, an instant time, and a state. Hudi provides efficient upserts by mapping a given hoodie key consistently to a file id via an indexing mechanism. This mapping between record key and file group/file id never changes once the first version of a record has been written to a file. In short, the mapped file group contains all versions of a group of records.
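Because upserts are driven by the record key, writing with Hudi is mostly a matter of declaring which fields identify and order records. A minimal sketch of an upsert from PySpark, assuming a Spark session with the Hudi bundle on the classpath and hypothetical table path and field names:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-upsert-sketch").getOrCreate()

df = spark.createDataFrame(
    [("trip-001", "2025-01-01 00:00:00", 9.50)],
    ["trip_id", "ts", "fare"],
)

hudi_options = {
    "hoodie.table.name": "trips",
    # The record key is the hoodie key that the index maps to a file id.
    "hoodie.datasource.write.recordkey.field": "trip_id",
    # On duplicate keys, the row with the larger precombine value wins.
    "hoodie.datasource.write.precombine.field": "ts",
    "hoodie.datasource.write.operation": "upsert",
}

df.write.format("hudi").options(**hudi_options).mode("append").save("/tmp/hudi/trips")
```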
  • 4
    SQLAlchemy

    SQLAlchemy is the Python SQL toolkit and object-relational mapper that gives application developers the full power and flexibility of SQL. SQL databases behave less like object collections the more size and performance start to matter; object collections behave less like tables and rows the more abstraction starts to matter. SQLAlchemy aims to accommodate both of these principles. SQLAlchemy considers the database to be a relational algebra engine, not just a collection of tables. Rows can be selected from not only tables but also joins and other select statements; any of these units can be composed into a larger structure. SQLAlchemy's expression language builds on this concept from its core. SQLAlchemy is most famous for its object-relational mapper (ORM), an optional component that provides the data mapper pattern.
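A minimal sketch of that composability in SQLAlchemy Core, using hypothetical tables: a select built over a join becomes a subquery that a larger select consumes:

```python
from sqlalchemy import (
    Column, Integer, MetaData, String, Table, create_engine, select,
)

metadata = MetaData()
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String))
orders = Table("orders", metadata,
               Column("id", Integer, primary_key=True),
               Column("user_id", Integer),
               Column("total", Integer))

# Select rows not from a table but from a join...
j = users.join(orders, users.c.id == orders.c.user_id)
stmt = select(users.c.name, orders.c.total).select_from(j)

# ...then compose that select into a larger structure.
subq = stmt.subquery()
big_spenders = select(subq.c.name).where(subq.c.total > 100)

engine = create_engine("sqlite:///:memory:")
metadata.create_all(engine)
with engine.connect() as conn:
    print(conn.execute(big_spenders).fetchall())  # [] until rows exist
```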
  • 5
    Great Expectations

    Great Expectations is a shared, open standard for data quality. It helps data teams eliminate pipeline debt through data testing, documentation, and profiling. We recommend deploying within a virtual environment. If you're not familiar with pip, virtual environments, notebooks, or git, you may want to check out the supporting resources. There are many amazing companies using Great Expectations these days. Check out some of our case studies with companies we've worked closely with to understand how they are using Great Expectations in their data stack. Great Expectations Cloud is a fully managed SaaS offering, and we're taking on new private alpha members. Alpha members get first access to new features and input into the roadmap.
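A minimal sketch of the data-testing idea, using the classic pandas-style API (method names and return shapes vary across Great Expectations versions):

```python
import great_expectations as ge
import pandas as pd

# Wrap an ordinary DataFrame so expectation methods become available.
df = ge.from_pandas(
    pd.DataFrame({"user_id": [1, 2, 3], "email": ["a@x.io", None, "c@x.io"]})
)

# Each expectation is a declarative data test reporting pass/fail.
print(df.expect_column_values_to_not_be_null("user_id").success)  # True
print(df.expect_column_values_to_not_be_null("email").success)    # False
```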
  • 6
    Feast

    Tecton

    Make your offline data available for real-time predictions without having to build custom pipelines. Ensure data consistency between offline training and online inference, eliminating train-serve skew. Standardize data engineering workflows under one consistent framework. Teams use Feast as the foundation of their internal ML platforms. Feast doesn't require the deployment and management of dedicated infrastructure; instead, it reuses existing infrastructure and spins up new resources when needed. Feast is a good fit if you are not looking for a managed solution and are willing to manage and maintain your own implementation, have engineers who can support the implementation and management of Feast, want to run pipelines that transform raw data into features in a separate system and integrate with it, or have unique requirements and want to build on top of an open source solution.
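A minimal sketch of reading features for a real-time prediction, assuming an existing feature repo (feature_store.yaml) in the working directory and hypothetical feature view and entity names:

```python
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Fetch the latest values from the online store for one entity.
online = store.get_online_features(
    features=[
        "driver_stats:conv_rate",
        "driver_stats:avg_daily_trips",
    ],
    entity_rows=[{"driver_id": 1001}],
).to_dict()

print(online)
```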
  • 7
    MariaDB

    MariaDB Platform is a complete enterprise open source database solution. It has the versatility to support transactional, analytical, and hybrid workloads as well as relational, JSON, and hybrid data models. And it has the scalability to grow from standalone databases and data warehouses to fully distributed SQL, executing millions of transactions per second and performing interactive, ad hoc analytics on billions of rows. MariaDB can be deployed on-premises on commodity hardware, is available on all major public clouds, and is offered through MariaDB SkySQL as a fully managed cloud database. To learn more, visit mariadb.com.
  • 8
    Apache Airflow

    The Apache Software Foundation

    Airflow is a platform created by the community to programmatically author, schedule, and monitor workflows. Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers; it is ready to scale to infinity. Airflow pipelines are defined in Python, allowing you to write code that instantiates pipelines dynamically. Easily define your own operators and extend libraries to fit the level of abstraction that suits your environment. Airflow pipelines are lean and explicit, and parametrization is built into its core using the powerful Jinja templating engine. No more command-line or XML black magic! Use standard Python features to create your workflows, including datetime formats for scheduling and loops to dynamically generate tasks, as in the sketch below. This allows you to maintain full flexibility when building your workflows.
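A minimal sketch of that style, assuming recent Airflow and hypothetical DAG and table names: a plain Python loop generates one task per table, and {{ ds }} is a Jinja template variable Airflow renders to the logical date at runtime:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_table_loads",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # named schedule_interval in older releases
    catchup=False,
):
    # Dynamic task generation with an ordinary Python loop.
    for table in ["users", "orders", "events"]:
        BashOperator(
            task_id=f"load_{table}",
            bash_command=f"echo loading {table} for {{{{ ds }}}}",
        )
```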