Business Software for DataHub - Page 2

Top Software that integrates with DataHub as of July 2025 - Page 2

  • 1
    Active Directory
    Active Directory stores information about objects on the network and makes this information easy for administrators and users to find and use. Active Directory uses a structured data store as the basis for a logical, hierarchical organization of directory information. This data store, also known as the directory, contains information about Active Directory objects. These objects typically include shared resources such as servers, volumes, printers, and the network user and computer accounts. Security is integrated with Active Directory through logon authentication and access control to objects in the directory. With a single network logon, administrators can manage directory data and organization throughout their network, and authorized network users can access resources anywhere on the network. Policy-based administration eases the management of even the most complex network. A minimal programmatic query against such a directory is sketched below.
    Starting Price: $1 per user per month
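    The sketch below uses the open source Python ldap3 library to run such a query; the server address, credentials, and base DN are placeholders, not part of the listing.

    ```python
    # Minimal Active Directory search over LDAP using the ldap3 library
    # (pip install ldap3). Server, credentials, and base DN are placeholders.
    from ldap3 import ALL, SUBTREE, Connection, Server

    server = Server("ldaps://dc01.example.com", get_info=ALL)
    conn = Connection(server, user="EXAMPLE\\svc_reader",
                      password="change-me", auto_bind=True)

    # Find user accounts whose display name starts with "Ada".
    conn.search(
        search_base="DC=example,DC=com",
        search_filter="(&(objectClass=user)(displayName=Ada*))",
        search_scope=SUBTREE,
        attributes=["sAMAccountName", "mail"],
    )
    for entry in conn.entries:
        print(entry.sAMAccountName, entry.mail)
    ```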
  • 2
    Apache Druid
    Apache Druid is an open source distributed data store. Druid’s core design combines ideas from data warehouses, timeseries databases, and search systems to create a high-performance real-time analytics database for a broad range of use cases. Druid merges key characteristics of each of these three systems into its ingestion layer, storage format, querying layer, and core architecture. Druid stores and compresses each column individually, and only needs to read the ones needed for a particular query, which supports fast scans, rankings, and groupBys. Druid creates inverted indexes for string values for fast search and filter, and ships with out-of-the-box connectors for Apache Kafka, HDFS, AWS S3, stream processors, and more. Druid intelligently partitions data based on time, so time-based queries are significantly faster than in traditional databases. Scale up or down by just adding or removing servers, and Druid automatically rebalances. A fault-tolerant architecture routes around server failures. A query sketch against Druid's SQL API follows below.
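    As an illustration of the time-based querying described above, here is a hedged sketch of a query against Druid's SQL HTTP endpoint (served by the router, commonly on port 8888); the host and the web_events datasource are hypothetical.

    ```python
    # Query Apache Druid over its SQL HTTP endpoint with the requests library.
    # The host/port and the "web_events" datasource are hypothetical.
    import requests

    DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

    query = """
    SELECT channel, COUNT(*) AS events
    FROM web_events
    WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
    GROUP BY channel
    ORDER BY events DESC
    LIMIT 10
    """

    resp = requests.post(DRUID_SQL_URL, json={"query": query}, timeout=30)
    resp.raise_for_status()
    for row in resp.json():  # one JSON object per result row by default
        print(row["channel"], row["events"])
    ```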
  • 3
    Prefect
    Prefect Cloud is a command center for your workflows. Deploy from Prefect Core and instantly gain complete oversight and control. Cloud's beautiful UI lets you keep an eye on the health of your infrastructure. Stream real-time state updates and logs, kick off new runs, and receive critical information exactly when you need it. With Prefect's hybrid model, your code and data remain on-prem while Prefect Cloud's managed orchestration keeps everything running smoothly. The Cloud scheduler service runs asynchronously to ensure your runs start on time, every time. Advanced scheduling options allow you to schedule changes to parameter values, as well as to the execution environment, for each run. Configure custom notifications and actions for when your workflows change state. Monitor the health of all agents connected to your Cloud instance and receive custom alerts when an agent goes offline. A minimal flow definition is sketched below.
    Starting Price: $0.0025 per successful task
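    A minimal sketch of a flow in the Prefect Core (1.x-era) Python API this description refers to; the flow name, tasks, and project name are illustrative.

    ```python
    # A tiny Prefect Core (1.x-style) flow: two tasks wired into a DAG.
    # Registering it with Prefect Cloud adds the managed orchestration
    # described above; the project name is hypothetical.
    from prefect import task, Flow

    @task
    def extract():
        return [1, 2, 3]

    @task
    def load(numbers):
        print(f"loaded {len(numbers)} records")

    with Flow("etl-demo") as flow:
        numbers = extract()
        load(numbers)

    if __name__ == "__main__":
        flow.run()                            # local run
        # flow.register(project_name="demo")  # hand off to Prefect Cloud
    ```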
  • 4
    LDAP Admin Tool
    The Professional Edition of LDAP Admin Tool contains more features, such as predefined customizable searches for both LDAP (one-click searches for common LDAP objects) and Active Directory (over 200 common one-click searches). This is the edition of LDAP Admin Tool you’ll want if you use your machine mainly in a professional setting; most business users and administrators need it to quickly search the directory tree with one-click searches and to schedule export tasks. While assigning members to groups it is often necessary to know nested assignments, and the software lets you view the updated nested members of groups as you assign them. SQLLDAP is an easy, SQL-like syntax for querying and updating LDAP, and queries can be built and edited visually by dragging and dropping keywords and attributes.
    Starting Price: $95 per year
  • 5
    JSON
    JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language. JSON is built on two structures: 1. A collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array. 2. An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence. These are universal data structures. Virtually all modern programming languages support them in one form or another.
    Starting Price: Free
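    The two structures map directly onto native types in most languages; a short Python round trip, using only the standard library:

    ```python
    # JSON's two building blocks round-tripped with Python's json module:
    # name/value pairs become a dict, an ordered list of values becomes a list.
    import json

    doc = json.loads('{"name": "Ada", "scores": [99, 87, 92]}')
    print(doc["name"])                 # name/value pairs -> dict
    print(sum(doc["scores"]))          # ordered list     -> list

    print(json.dumps(doc, indent=2))   # serialize back to JSON text
    ```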
  • 6
    Alibaba Cloud Data Integration
    Alibaba Cloud Data Integration is a comprehensive data synchronization platform that facilitates both real-time and offline data exchange across various data sources, networks, and locations. It supports data synchronization between more than 400 pairs of disparate data sources, including RDS databases, semi-structured storage, unstructured storage (such as audio, video, and images), NoSQL databases, and big data storage. The platform also enables real-time data reading and writing between data sources such as Oracle, MySQL, and DataHub. Data Integration allows users to schedule offline tasks by setting specific trigger times, including year, month, day, hour, and minute, simplifying the configuration of periodic incremental data extraction. It integrates seamlessly with DataWorks data modeling, providing an integrated operations and maintenance workflow. The platform leverages the computing capability of Hadoop clusters to synchronize HDFS data to MaxCompute.
  • 7
    Mode
    Mode Analytics
    Understand how users are interacting with your product and identify opportunity areas to inform your product decisions. Mode empowers a single analyst to do the work of a full data team through speed, flexibility, and collaboration. Build dashboards for annual revenue, then use chart visualizations to identify anomalies quickly. Create polished, investor-ready reports or share analysis with teams for collaboration. Connect your entire tech stack to Mode and identify upstream issues to improve performance. Speed up workflows across teams with APIs and webhooks. Leverage marketing and product data to fix weak spots in your funnel, improve landing-page performance, and understand churn before it happens.
  • 8
    Databricks Data Intelligence Platform
    The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data. The winners in every industry will be data and AI companies. From ETL to data warehousing to generative AI, Databricks helps you simplify and accelerate your data and AI goals. Databricks combines generative AI with the unification benefits of a lakehouse to power a Data Intelligence Engine that understands the unique semantics of your data. This allows the Databricks Platform to automatically optimize performance and manage infrastructure in ways unique to your business. The Data Intelligence Engine understands your organization’s language, so search and discovery of new data is as easy as asking a question like you would to a coworker.
  • 9
    SAP HANA
    SAP HANA in-memory database is for transactional and analytical workloads with any data type — on a single data copy. It breaks down the transactional and analytical silos in organizations, for quick decision-making, on premise and in the cloud. Innovate without boundaries on a database management system, where you can develop intelligent and live solutions for quick decision-making on a single data copy. And with advanced analytics, you can support next-generation transactional processing. Build data solutions with cloud-native scalability, speed, and performance. With the SAP HANA Cloud database, you can gain trusted, business-ready information from a single solution, while enabling security, privacy, and anonymization with proven enterprise reliability. An intelligent enterprise runs on insight from data – and more than ever, this insight must be delivered in real time.
  • 10
    Metabase
    Meet the easy, open source way for everyone in your company to ask questions and learn from data. Connect to your data and get it in front of your team. Dashboards are easy to build, share, and explore. Anyone on your team can get answers to questions about your data with just a few clicks, whether it's the CEO or Customer Support. When the questions get more complicated, SQL and our notebook editor are there for the data savvy. Visual joins, multiple aggregations, and filtering steps give you the tools to dig deeper into your data. Add variables to your queries to create interactive visualizations that users can tweak and explore. Set up alerts and scheduled reports to get the right data in front of the right people at the right time. Start in a couple of clicks with the hosted version, or use Docker to get up and running on your own for free. Connect to your existing data, invite your team, and you have a BI solution that would usually take a sales call.
  • 11
    Oracle Cloud Infrastructure
    Oracle Cloud Infrastructure supports traditional workloads and delivers modern cloud development tools. It is architected to detect and defend against modern threats, so you can innovate more. Combine low cost with high performance to lower your TCO. Oracle Cloud is a Generation 2 enterprise cloud that delivers powerful compute and networking performance and includes a comprehensive portfolio of infrastructure and platform cloud services. Built from the ground up to meet the needs of mission-critical applications, Oracle Cloud supports all legacy workloads while delivering modern cloud development tools, enabling enterprises to bring their past forward as they build their future. Our Generation 2 Cloud is the only one built to run Oracle Autonomous Database, the industry's first and only self-driving database. Oracle Cloud offers a comprehensive cloud computing portfolio, from application development and business analytics to data management, integration, security, AI & blockchain.
  • 12
    PostgreSQL
    PostgreSQL Global Development Group
    PostgreSQL is a powerful, open-source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. There is a wealth of information describing how to install and use PostgreSQL in the official documentation. The open-source community provides many helpful places to become familiar with PostgreSQL, discover how it works, and find career opportunities. Learn more about how to engage with the community. The PostgreSQL Global Development Group has released an update to all supported versions of PostgreSQL, including 15.1, 14.6, 13.9, 12.13, 11.18, and 10.23. This release fixes 25 bugs reported over the last several months. This is the final release of PostgreSQL 10, which will no longer receive security and bug fixes. If you are running PostgreSQL 10 in a production environment, we suggest that you make plans to upgrade.
  • 13
    Apache Spark
    Apache Software Foundation
    Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps, and you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming, and you can combine these libraries seamlessly in the same application. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes, and access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources. A minimal PySpark example follows below.
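    A minimal PySpark sketch of the DataFrame API mentioned above; the input file and column names are placeholders.

    ```python
    # Minimal PySpark job: read a CSV, aggregate, and show the top groups.
    # The input path and column names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("spark-demo").getOrCreate()

    df = spark.read.csv("events.csv", header=True, inferSchema=True)
    (df.groupBy("country")
       .agg(F.count("*").alias("events"))
       .orderBy(F.desc("events"))
       .show(10))

    spark.stop()
    ```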
  • 14
    Presto
    #1 contactless dining solution with a $0 monthly fee. With over 100 million monthly active users and 300,000+ shipped systems, we are the largest provider of contactless dining technology in the U.S. and Europe. Learn more about our best-selling product. Our Contactless Dining Solution enables your restaurant to provide an end-to-end contactless dining experience to your guests. Guests can view the complete menu, place orders, pay at the table, and much more, without the need for any human contact. Sign up today and be completely contactless in just 3 days. There are no recurring fees (regular payment processing charges apply) and there is no need to change your existing POS system. The solution is available globally, but supplies are limited and demand is high, so reserve your spot now.
  • 15
    Delta Lake
    Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Data lakes typically have multiple data pipelines reading and writing data concurrently, and, because transactions are lacking, data engineers have to go through a tedious process to ensure data integrity. Delta Lake brings ACID transactions to your data lakes and provides serializability, the strongest isolation level. Learn more at Diving into Delta Lake: Unpacking the Transaction Log. In big data, even the metadata itself can be "big data"; Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle it. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease. Delta Lake provides snapshots of data, enabling developers to access and revert to earlier versions for audits, rollbacks, or to reproduce experiments, as sketched below.
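    A hedged sketch of the ACID writes and snapshot versioning described above, assuming a Spark session configured with the delta-spark package; the table path is a placeholder.

    ```python
    # Write a Delta table twice, then time-travel back to the first version.
    # Assumes Spark is configured with the delta-spark package; the /tmp
    # path is a placeholder.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-demo").getOrCreate()
    path = "/tmp/delta/events"

    spark.range(0, 100).write.format("delta").mode("overwrite").save(path)
    spark.range(0, 50).write.format("delta").mode("overwrite").save(path)

    v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
    print(v0.count())  # 100: the snapshot before the second write
    ```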
  • 16
    Apache Superset
    Superset is fast, lightweight, intuitive, and loaded with options that make it easy for users of all skill sets to explore and visualize their data, from simple line charts to highly detailed geospatial charts. Superset can connect to any SQL-based data source through SQLAlchemy, including modern cloud-native databases and engines at petabyte scale. Superset is lightweight and highly scalable, leveraging the power of your existing data infrastructure without requiring yet another ingestion layer.
  • 17
    Apache NiFi
    Apache Software Foundation
    An easy to use, powerful, and reliable system to process and distribute data. Apache NiFi supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic. High-level capabilities and objectives of Apache NiFi include a web-based user interface offering a seamless experience between design, control, feedback, and monitoring; high configurability, with loss tolerance, low latency, high throughput, and dynamic prioritization; flows that can be modified at runtime, with back pressure; data provenance that tracks dataflow from beginning to end; and a design built for extension, so you can build your own processors and more. It enables rapid development and effective testing, and it is secure, with SSL, SSH, HTTPS, encrypted content, multi-tenant authorization, internal authorization/policy management, and much more. NiFi is composed of a number of web applications (web UI, web API, documentation, custom UIs, etc.), so you'll need to set up your mapping to the root path.
  • 18
    Apache Hudi
    Apache Software Foundation
    Hudi is a rich platform to build streaming data lakes with incremental data pipelines on a self-managing database layer, optimized for lake engines and regular batch processing. Hudi maintains a timeline of all actions performed on the table at different instants of time, which helps provide instantaneous views of the table while also efficiently supporting retrieval of data in the order of arrival. A Hudi instant consists of an action type, an instant time, and a state. Hudi provides efficient upserts by mapping a given hoodie key consistently to a file id via an indexing mechanism. This mapping between record key and file group/file id never changes once the first version of a record has been written to a file; in short, the mapped file group contains all versions of a group of records. A minimal upsert sketch follows below.
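    To make the upsert-by-record-key mechanism concrete, here is a hedged PySpark sketch using Hudi's Spark datasource; the table name, columns, and path are hypothetical, and the Hudi Spark bundle is assumed to be on the classpath.

    ```python
    # Upsert into a Hudi table via the Spark datasource. The record key
    # option drives the key -> file group mapping described above.
    # Table name, columns, and path are hypothetical.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder.appName("hudi-demo")
             .config("spark.serializer",
                     "org.apache.spark.serializer.KryoSerializer")
             .getOrCreate())

    df = spark.createDataFrame(
        [("u1", "2024-01-01", 10), ("u2", "2024-01-01", 7)],
        ["user_id", "ds", "clicks"],
    )

    hudi_options = {
        "hoodie.table.name": "user_clicks",
        "hoodie.datasource.write.recordkey.field": "user_id",
        "hoodie.datasource.write.precombine.field": "ds",
        "hoodie.datasource.write.operation": "upsert",
    }

    (df.write.format("hudi")
       .options(**hudi_options)
       .mode("append")
       .save("/tmp/hudi/user_clicks"))
    ```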
  • 19
    SQLAlchemy
    SQLAlchemy is the Python SQL toolkit and object-relational mapper that gives application developers the full power and flexibility of SQL. SQL databases behave less like object collections the more size and performance start to matter; object collections behave less like tables and rows the more abstraction starts to matter. SQLAlchemy aims to accommodate both of these principles. SQLAlchemy considers the database to be a relational algebra engine, not just a collection of tables. Rows can be selected from not only tables but also joins and other select statements; any of these units can be composed into a larger structure. SQLAlchemy's expression language builds on this concept from its core. SQLAlchemy is most famous for its object-relational mapper (ORM), an optional component that provides the data mapper pattern.
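    A brief sketch of the ORM and expression language described above, using SQLAlchemy 2.x-style declarative mapping against an in-memory SQLite database; the model is illustrative.

    ```python
    # SQLAlchemy in miniature: a declarative model, an ORM insert, and a
    # composable select. Uses an in-memory SQLite database.
    from sqlalchemy import String, create_engine, select
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str] = mapped_column(String(50))

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(User(name="ada"))
        session.commit()
        for user in session.scalars(select(User).where(User.name == "ada")):
            print(user.id, user.name)
    ```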
  • 20
    Great Expectations
    Great Expectations is a shared, open standard for data quality. It helps data teams eliminate pipeline debt through data testing, documentation, and profiling. We recommend deploying within a virtual environment; if you’re not familiar with pip, virtual environments, notebooks, or git, you may want to check out the supporting resources. There are many amazing companies using Great Expectations these days. Check out some of our case studies with companies we've worked closely with to understand how they are using Great Expectations in their data stack. Great Expectations Cloud is a fully managed SaaS offering, and we're taking on new private alpha members; alpha members get first access to new features and input into the roadmap. A small expectation sketch follows below.
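    A hedged sketch of the data-testing idea using the classic pandas-backed Great Expectations API (the entrypoint has changed across versions); the column names are hypothetical.

    ```python
    # Validate a pandas DataFrame with the classic Great Expectations
    # dataset API. Column names are hypothetical; newer GX releases use a
    # different entrypoint.
    import great_expectations as ge
    import pandas as pd

    df = ge.from_pandas(pd.DataFrame({
        "order_id": [1, 2, 3],
        "amount": [9.99, 24.50, 3.75],
    }))

    print(df.expect_column_values_to_not_be_null("order_id"))
    print(df.expect_column_values_to_be_between("amount", min_value=0))
    ```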
  • 21
    Feast
    Tecton
    Make your offline data available for real-time predictions without having to build custom pipelines. Ensure data consistency between offline training and online inference, eliminating train-serve skew. Standardize data engineering workflows under one consistent framework. Teams use Feast as the foundation of their internal ML platforms. Feast doesn’t require the deployment and management of dedicated infrastructure; instead, it reuses existing infrastructure and spins up new resources when needed. Feast is a good fit if you are not looking for a managed solution and are willing to manage and maintain your own implementation, you have engineers able to support the implementation and management of Feast, you want to run pipelines that transform raw data into features in a separate system and integrate with it, or you have unique requirements and want to build on top of an open source solution. A minimal feature definition is sketched below.
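    A hedged sketch of how features are declared in Feast's Python SDK (roughly version 0.20+; exact signatures vary by release); the entity, parquet path, and field names are hypothetical.

    ```python
    # Declare an entity and a feature view in Feast's Python SDK.
    # The parquet path, entity, and field names are hypothetical.
    from datetime import timedelta

    from feast import Entity, FeatureView, Field, FileSource
    from feast.types import Float32, Int64

    driver = Entity(name="driver", join_keys=["driver_id"])

    stats_source = FileSource(
        path="data/driver_stats.parquet",
        timestamp_field="event_timestamp",
    )

    driver_stats = FeatureView(
        name="driver_hourly_stats",
        entities=[driver],
        ttl=timedelta(days=1),
        schema=[
            Field(name="conv_rate", dtype=Float32),
            Field(name="trips_today", dtype=Int64),
        ],
        source=stats_source,
    )
    ```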
  • 22
    MariaDB
    MariaDB Platform is a complete enterprise open source database solution. It has the versatility to support transactional, analytical and hybrid workloads as well as relational, JSON and hybrid data models. And it has the scalability to grow from standalone databases and data warehouses to fully distributed SQL for executing millions of transactions per second and performing interactive, ad hoc analytics on billions of rows. MariaDB can be deployed on prem on commodity hardware, is available on all major public clouds and through MariaDB SkySQL as a fully managed cloud database. To learn more, visit mariadb.com.
  • 23
    Iceberg
    Elevent
    Don't guess how much brands pay for sponsorship. Get the facts with market comparatives, just like in the real estate market. Our tool works in conjunction with our valuation service to give you a range of real sponsorship deals in your market for a similar property, whether sports, music, or venue-naming rights, so you can compare apples to apples. Whether you're a service provider, a major sponsor, or a title partner, we've got the data you need to drive sponsorship negotiations. Find existing comparables to drive sponsorship negotiation. Elevent has anonymized database files with thousands of real sponsorship deals to create an accurate pricing window on sponsorship agreements in North America.
  • 24
    Apache Airflow
    The Apache Software Foundation
    Airflow is a platform created by the community to programmatically author, schedule, and monitor workflows. Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers; it is ready to scale to infinity. Airflow pipelines are defined in Python, allowing for dynamic pipeline generation, so you can write code that instantiates pipelines dynamically. Easily define your own operators and extend libraries to fit the level of abstraction that suits your environment. Airflow pipelines are lean and explicit, and parametrization is built into its core using the powerful Jinja templating engine. No more command-line or XML black magic! Use standard Python features to create your workflows, including datetime formats for scheduling and loops to dynamically generate tasks, which lets you maintain full flexibility when building workflows. A minimal DAG sketch follows below.
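    A minimal sketch of the Python-defined, Jinja-templated pipelines described above, in the Airflow 2.x style; the DAG id and task commands are illustrative.

    ```python
    # A small Airflow DAG: daily schedule, a Jinja-templated task, and a
    # loop that generates tasks dynamically. DAG id and commands are
    # illustrative.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="etl_demo",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # "{{ ds }}" is rendered by the Jinja templating engine at run time.
        extract = BashOperator(task_id="extract",
                               bash_command="echo extracting for {{ ds }}")

        for table in ["users", "orders", "events"]:  # dynamic task generation
            load = BashOperator(task_id=f"load_{table}",
                                bash_command=f"echo loading {table}")
            extract >> load
    ```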