Business Software for Apache Airflow

Top Software that integrates with Apache Airflow as of June 2025

Compare business software, products, and services to find the best solution for your business or organization. Use the filters on the left to drill down by category, pricing, features, organization size, organization type, region, user reviews, integrations, and more. View and sort the products and solutions that match your needs in the results below.

  • 1
    Stonebranch

    Universal Automation Center (UAC) is a real-time IT automation platform designed to centrally manage and orchestrate tasks and processes across hybrid IT environments, from on-prem to the cloud. UAC automates and orchestrates your IT and business processes, securely manages file transfers, and centralizes the management of disparate IT job scheduling and workload automation solutions. With its event-driven automation technology, you can achieve real-time automation across your entire hybrid IT environment: real-time hybrid IT automation and managed file transfer (MFT) for any cloud, mainframe, distributed, or hybrid environment. Start automating, managing, and orchestrating file transfers between mainframes or disparate systems and the AWS or Azure cloud, with no ramp-up time or cost-intensive hardware investments.
  • 2
    DataBuck
    FirstEigen

    DataBuck is an AI-powered data validation platform that automates risk detection across dynamic, high-volume, and evolving data environments. DataBuck empowers your teams to:
    ✅ Enhance trust in analytics and reports by ensuring they are built on accurate and reliable data.
    ✅ Reduce maintenance costs by minimizing manual intervention.
    ✅ Scale operations 10x faster than traditional tools, enabling seamless adaptability in ever-changing data ecosystems.
    By proactively addressing system risks and improving data accuracy, DataBuck ensures your decision-making is driven by dependable insights. Proudly recognized in Gartner’s 2024 Market Guide for Data Observability, DataBuck goes beyond traditional observability practices with AI/ML innovations that deliver autonomous data trustability, empowering you to lead with confidence in today’s data-driven world.
  • 3
    Coursebox AI
    Coursebox

    Transform your content into engaging eLearning experiences with Coursebox, the #1 AI-powered eLearning authoring tool. Our platform automates the course creation process, allowing you to design a structured course in seconds. Simply make edits, add any missing elements, and your course is ready to go. Whether you want to publish privately, share publicly, sell your course, or export it to your LMS, Coursebox has you covered. With a focus on mobile-first learning, Coursebox ensures your learners are more engaged and motivated through interactive and visual content, including videos, quizzes, and more. Take advantage of our branded learning management system with native mobile apps, offering you the flexibility to use custom hosting and a custom domain. Coursebox is the ultimate solution for organizations and individuals looking to rapidly scale their training and assessment programs.
    Starting Price: $99 per month
  • 4
    Netdata
    Netdata, Inc.

    The open source observability platform everyone needs. Netdata collects per-second metrics and presents them in beautiful low-latency dashboards. It is designed to run on all of your physical and virtual servers, cloud deployments, Kubernetes clusters, and edge/IoT devices to monitor your systems, containers, and applications. It scales from a single server to thousands of servers, even in complex multi/mixed/hybrid cloud environments, and given enough disk space it can keep your metrics for years. Key features:
    💥 Collects metrics from 800+ integrations
    💪 Real-time, low-latency, high-resolution
    😶‍🌫️ Unsupervised anomaly detection
    🔥 Powerful visualization
    🔔 Out-of-the-box alerts
    📖 systemd journal logs explorer
    😎 Low maintenance
    ⭐ Open and extensible
    Try Netdata today and feel the pulse of your infrastructure, with high-resolution metrics, journal logs, and real-time visualizations.
    Starting Price: Free
  • 5
    Sifflet

    Automatically cover thousands of tables with ML-based anomaly detection and 50+ custom metrics. Comprehensive data and metadata monitoring. Exhaustive mapping of all dependencies between assets, from ingestion to BI. Enhanced productivity and collaboration between data engineers and data consumers. Sifflet seamlessly integrates into your data sources and preferred tools and can run on AWS, Google Cloud Platform, and Microsoft Azure. Keep an eye on the health of your data and alert the team when quality criteria aren’t met. Set up the fundamental coverage of all your tables in a few clicks, configuring the frequency of runs, their criticality, and customized notifications at the same time. Leverage ML-based rules to detect any anomaly in your data with no initial configuration; a unique model for each rule learns from historical data and from user feedback. Complement the automated rules with a library of 50+ templates that can be applied to any asset.
  • 6
    Microsoft Purview
    Microsoft Purview is a unified data governance service that helps you manage and govern your on-premises, multicloud, and software-as-a-service (SaaS) data. Easily create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. Empower data consumers to find valuable, trustworthy data. Automated data discovery, lineage identification, and data classification across on-premises, multicloud, and SaaS sources. Unified map of your data assets and their relationships for more effective governance. Semantic search enables data discovery using business or technical terms. Insight into the location and movement of sensitive data across your hybrid data landscape. Establish the foundation for effective data usage and governance with Purview Data Map. Automate and manage metadata from hybrid sources. Classify data using built-in and custom classifiers and Microsoft Information Protection sensitivity labels.
    Starting Price: $0.342
  • 7
    Ray
    Anyscale

    Develop on your laptop and then scale the same Python code elastically across hundreds of nodes or GPUs on any cloud, with no changes. Ray translates existing Python concepts to the distributed setting, allowing any serial application to be easily parallelized with minimal code changes. Easily scale compute-heavy machine learning workloads like deep learning, model serving, and hyperparameter tuning with a strong ecosystem of distributed libraries. Scale existing workloads (e.g., PyTorch) on Ray with minimal effort by tapping into integrations. Native Ray libraries, such as Ray Tune and Ray Serve, lower the effort to scale the most compute-intensive machine learning workloads, such as hyperparameter tuning, training deep learning models, and reinforcement learning. For example, get started with distributed hyperparameter tuning in just 10 lines of code. Creating distributed apps is hard; Ray handles all aspects of distributed execution.
    Starting Price: Free
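    The "serial application parallelized with minimal code changes" claim above can be sketched with Python's standard library alone (using concurrent.futures rather than Ray itself, so the snippet runs without a cluster); Ray's @ray.remote decorator applies the same pattern, but schedules the calls across the nodes of a cluster instead of local threads or processes.

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x: int) -> int:
    # Stand-in for a compute-heavy task (a model fit, a tuning trial, ...).
    return x * x

# Serial version: results = [slow_square(i) for i in range(8)]
# Parallel version: same function, minimal changes -- the idea Ray
# extends from one machine to a whole cluster of nodes or GPUs.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(slow_square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

    The function name and workload here are illustrative; pool.map preserves input order, which is why the output matches the serial list comprehension exactly.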
  • 8
    Dagster
    Dagster Labs

    Dagster is a next-generation orchestration platform for the development, production, and observation of data assets. Unlike other data orchestration solutions, Dagster provides you with an end-to-end development lifecycle. Dagster gives you control over your disparate data tools and empowers you to build, test, deploy, run, and iterate on your data pipelines. It makes you and your data teams more productive, your operations more robust, and puts you in complete control of your data processes as you scale. Dagster brings a declarative approach to the engineering of data pipelines. Your team defines the data assets required, quickly assessing their status and resolving any discrepancies. An asset-based model is clearer than a task-based one and becomes a unifying abstraction across the whole workflow.
    Starting Price: $0
  • 9
    Oxla

    Purpose-built for compute, memory, and storage efficiency, Oxla is a self-hosted data warehouse optimized for large-scale, low-latency analytics with robust time-series support. Cloud data warehouses aren’t for everyone. At scale, long-term cloud compute costs outweigh short-term infrastructure savings, and regulated industries require full control over data beyond VPC and BYOC deployments. Oxla outperforms both legacy and cloud warehouses through efficiency, enabling scale for growing datasets with predictable costs, on-prem or in any cloud. Easily deploy, run, and maintain Oxla with Docker and YAML to power diverse workloads in a single, self-hosted data warehouse.
    Starting Price: $50 per CPU core / monthly
  • 10
    intermix.io

    Capture metadata from your data warehouse and the tools that connect to it. Track the workloads you care about, and retroactively understand user engagement, cost, and performance of data products. Get complete visibility into your data platform: who is touching your data and how it’s being used. intermix.io gives you end-to-end visibility with an easy-to-use SaaS dashboard. Collaborate with your entire team, create custom reports, and get everything you need to understand what’s going on in your data platform, across your cloud data warehouse and the tools that connect to it. intermix.io is a SaaS product that collects metadata from your data warehouse with zero coding required, and it never needs access to the data you’ve copied into your warehouse.
    Starting Price: $295 per month
  • 11
    IRI FieldShield
    IRI, The CoSort Company

    IRI FieldShield® is powerful and affordable data discovery and masking software for PII in structured and semi-structured sources, big and small. Use FieldShield utilities in Eclipse to profile, search and mask data at rest (static data masking), and the FieldShield SDK to mask (or unmask) data in motion (dynamic data masking). Classify PII centrally, find it globally, and mask it consistently. Preserve realism and referential integrity via encryption, pseudonymization, redaction, and other rules for production and test environments. Delete, deliver, or anonymize data subject to DPA, FERPA, GDPR, GLBA, HIPAA, PCI, POPI, SOX, etc. Verify compliance via human- and machine-readable search reports, job audit logs, and re-identification risk scores. Optionally mask data as you map it. Apply FieldShield functions in IRI Voracity ETL, federation, migration, replication, subsetting, or analytic jobs. Or, run FieldShield from Actifio, Commvault or Windocks to mask DB clones.
  • 12
    Prophecy

    Prophecy enables many more users, including visual ETL developers and data analysts; all you need to do is point and click and write a few SQL expressions to create your pipelines. As you use the low-code designer to build your workflows, you are developing high-quality, readable code for Spark and Airflow that is committed to your Git. Prophecy gives you a gem builder to quickly develop and roll out your own frameworks, for example data quality, encryption, and new sources and targets that extend the built-in ones. Prophecy provides best practices and infrastructure as managed services, making your life and operations simple. With Prophecy, your workflows are high performance and take advantage of the scale-out performance and scalability of the cloud.
    Starting Price: $299 per month
  • 13
    BentoML

    Serve your ML model in any cloud in minutes. A unified model packaging format enables both online and offline serving on any platform, with up to 100x the throughput of a regular Flask-based model server thanks to an advanced micro-batching mechanism. Deliver high-quality prediction services that speak the DevOps language and integrate perfectly with common infrastructure tools. Unified format for deployment, high-performance model serving, and DevOps best practices baked in. For example, a service can use a BERT model trained with the TensorFlow framework to predict the sentiment of movie reviews. The DevOps-free BentoML workflow, from prediction service registry and deployment automation to endpoint monitoring, is configured automatically for your team, providing a solid foundation for running serious ML workloads in production. Keep all your team’s models, deployments, and changes highly visible and control access via SSO, RBAC, client authentication, and audit logs.
    Starting Price: Free
  • 14
    Ascend

    Ascend gives data teams a unified and automated platform to ingest, transform, and orchestrate their entire data engineering and analytics engineering workloads, 10x faster than ever before. Ascend helps gridlocked teams break through constraints to build, manage, and optimize the increasing number of data workloads required. Backed by DataAware intelligence, Ascend works continuously in the background to guarantee data integrity and optimize data workloads, reducing time spent on maintenance by up to 90%. Build, iterate on, and run data transformations easily with Ascend’s multi-language flex-code interface, enabling the use of SQL, Python, Java, and Scala interchangeably. Quickly view data lineage, data profiles, job and user logs, system health, and other critical workload metrics at a glance. Ascend delivers native connections to a growing library of common data sources with its Flex-Code data connectors.
    Starting Price: $0.98 per DFC
  • 15
    DQOps

    DQOps is an open-source data quality platform designed for data quality and data engineering teams that makes data quality visible to business sponsors. The platform provides an efficient user interface to quickly add data sources, configure data quality checks, and manage issues. DQOps comes with over 150 built-in data quality checks, but you can also design custom checks to detect any business-relevant data quality issues. The platform supports incremental data quality monitoring for analyzing the quality of very large tables. Track data quality KPI scores using built-in or custom dashboards to show business sponsors the progress in improving data quality. DQOps is DevOps-friendly, allowing you to define data quality definitions in YAML files stored in Git, run data quality checks directly from your data pipelines, or automate any action with a Python client. DQOps works locally or as a SaaS platform.
    Starting Price: $499 per month
  • 16
    Decube

    Decube is a data management platform that helps organizations manage their data observability, data catalog, and data governance needs. It provides end-to-end visibility into data and ensures its accuracy, consistency, and trustworthiness. Decube's platform includes data observability, a data catalog, and data governance components that work together to provide a comprehensive solution. The data observability tools enable real-time monitoring and detection of data incidents, while the data catalog provides a centralized repository for data assets, making it easier to manage and govern data usage and access. The data governance tools provide robust access controls, audit reports, and data lineage tracking to demonstrate compliance with regulatory requirements. Decube's platform is customizable and scalable, making it easy for organizations to tailor it to meet their specific data management needs and manage data across different systems, data sources, and departments.
  • 17
    ZenML

    Simplify your MLOps pipelines. Manage, deploy, and scale on any infrastructure with ZenML. ZenML is completely free and open-source. See the magic with just two simple commands. Set up ZenML in a matter of minutes, and start with all the tools you already use. ZenML standard interfaces ensure that your tools work together seamlessly. Gradually scale up your MLOps stack by switching out components whenever your training or deployment requirements change. Keep up with the latest changes in the MLOps world and easily integrate any new developments. Define simple and clear ML workflows without wasting time on boilerplate tooling or infrastructure code. Write portable ML code and switch from experimentation to production in seconds. Manage all your favorite MLOps tools in one place with ZenML's plug-and-play integrations. Prevent vendor lock-in by writing extensible, tooling-agnostic, and infrastructure-agnostic code.
    Starting Price: Free
  • 18
    Kedro

    Kedro is the foundation for clean data science code. It borrows concepts from software engineering and applies them to machine-learning projects. A Kedro project provides scaffolding for complex data and machine-learning pipelines. You spend less time on tedious "plumbing" and focus instead on solving new problems. Kedro standardizes how data science code is created and ensures teams collaborate to solve problems easily. Make a seamless transition from development to production with exploratory code that you can transition to reproducible, maintainable, and modular experiments. A series of lightweight data connectors is used to save and load data across many different file formats and file systems.
    Starting Price: Free
  • 19
    Secoda

    With Secoda AI on top of your metadata, you can get contextual search results from across your tables, columns, dashboards, metrics, and queries. Secoda AI can also generate documentation and queries from your metadata, saving your team hundreds of hours of mundane work and redundant data requests. Easily search across all columns, tables, dashboards, events, and metrics; AI-powered search lets you ask any question of your data and get a contextual answer, fast. Integrate data discovery into your workflow without disrupting it, using the API. Perform bulk updates, tag PII data, manage tech debt, build custom integrations, identify the least-used resources, and more. Eliminate manual error and have total trust in your knowledge repository.
    Starting Price: $50 per user per month
  • 20
    Yandex Data Proc
    You select the size of the cluster, node capacity, and a set of services, and Yandex Data Proc automatically creates and configures Spark and Hadoop clusters and other components. Collaborate by using Zeppelin notebooks and other web apps via a UI proxy. You get full control of your cluster with root permissions for each VM. Install your own applications and libraries on running clusters without having to restart them. Yandex Data Proc uses instance groups to automatically increase or decrease computing resources of compute subclusters based on CPU usage indicators. Data Proc allows you to create managed Hive clusters, which can reduce the probability of failures and losses caused by metadata unavailability. Save time on building ETL pipelines and pipelines for training and developing models, as well as describing other iterative tasks. The Data Proc operator is already built into Apache Airflow.
    Starting Price: $0.19 per hour
  • 21
    DoubleCloud

    Save time & costs by streamlining data pipelines with zero-maintenance open source solutions. From ingestion to visualization, all are integrated, fully managed, and highly reliable, so your engineers will love working with data. You choose whether to use any of DoubleCloud’s managed open source services or leverage the full power of the platform, including data storage, orchestration, ELT, and real-time visualization. We provide leading open source services like ClickHouse, Kafka, and Airflow, with deployment on Amazon Web Services or Google Cloud. Our no-code ELT tool allows real-time data syncing between systems, fast, serverless, and seamlessly integrated with your existing infrastructure. With our managed open-source data visualization you can simply visualize your data in real time by building charts and dashboards. We’ve designed our platform to make the day-to-day life of engineers more convenient.
    Starting Price: $0.024 per 1 GB per month
  • 22
    Tobiko

    Tobiko is a data transformation platform that ships data faster, more efficiently, and with fewer mistakes, while staying backward compatible with your databases. Create a dev environment without rebuilding the entire DAG; Tobiko only changes what's necessary, so you don't rebuild everything when you add a column. Once you've built a change, Tobiko promotes it to prod instantly without redoing your work. Avoid debugging clunky Jinja and define your models in SQL. Tobiko works at startup and enterprise scale. It understands the SQL you write and improves developer productivity by finding issues at compile time. Audits and data diffs provide validation and make it easy to trust the datasets you produce. Every change is analyzed and automatically categorized as either breaking or non-breaking, and when mistakes happen you can seamlessly roll back to the prior version, allowing teams to reduce downtime in production.
    Starting Price: Free
  • 23
    Stackable

    The Stackable Data Platform was designed with openness and flexibility in mind. It provides a curated selection of the best open source data apps, like Apache Kafka, Apache Druid, Trino, and Apache Spark. While other current offerings either push proprietary solutions or deepen vendor lock-in, Stackable takes a different approach. All data apps work together seamlessly and can be added or removed in no time. Based on Kubernetes, it runs everywhere, on-prem or in the cloud. stackablectl and a Kubernetes cluster are all you need to run your first Stackable Data Platform; within minutes, you will be ready to start working with your data. Similar to kubectl, stackablectl is designed to easily interface with the Stackable Data Platform. Use the command-line utility to deploy and manage Stackable data apps on Kubernetes; with stackablectl, you can create, delete, and update components.
    Starting Price: Free
  • 24
    emma

    emma empowers you with the freedom to choose the best cloud, providers, and environments, to adapt to changing demands, without adding complexity or compromising on control. Simplifies cloud management by unifying services and automating key tasks, reducing complexity. Optimizes cloud resources automatically, ensuring full utilization and reducing overhead. Enables flexibility by supporting open standards, freeing businesses from vendor lock-in. Monitors and optimizes data traffic in real time, preventing cost spikes by reallocating resources efficiently. Create your cloud infrastructure across providers and environments, on-prem, private, hybrid, or public. Manage your unified cloud environment from a single, intuitive interface. Gain the visibility you need to improve infrastructure performance and reduce spend. Take back control over your entire cloud environment and ensure regulatory compliance.
    Starting Price: $99 per month
  • 25
    DataHub

    DataHub is an open source metadata platform designed to streamline data discovery, observability, and governance across diverse data ecosystems. It enables organizations to effortlessly discover trustworthy data, with experiences tailored for each person, and eliminates breaking changes with detailed cross-platform and column-level lineage. DataHub builds confidence in your data by providing a comprehensive view of business, operational, and technical context, all in one place. The platform offers automated data quality checks and AI-driven anomaly detection, notifying teams when issues arise and centralizing incident tracking. With detailed lineage, documentation, and ownership information, DataHub facilitates swift issue resolution. It also automates governance programs by classifying assets as they evolve, minimizing manual work through GenAI documentation, AI-driven classification, and smart propagation. DataHub’s extensible architecture supports over 70 native integrations.
    Starting Price: Free
  • 26
    Apache Druid
    Apache Druid is an open source distributed data store. Druid’s core design combines ideas from data warehouses, timeseries databases, and search systems to create a high performance real-time analytics database for a broad range of use cases. Druid merges key characteristics of each of the 3 systems into its ingestion layer, storage format, querying layer, and core architecture. Druid stores and compresses each column individually, and only needs to read the ones needed for a particular query, which supports fast scans, rankings, and groupBys. Druid creates inverted indexes for string values for fast search and filter. Out-of-the-box connectors for Apache Kafka, HDFS, AWS S3, stream processors, and more. Druid intelligently partitions data based on time and time-based queries are significantly faster than traditional databases. Scale up or down by just adding or removing servers, and Druid automatically rebalances. Fault-tolerant architecture routes around server failures.
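    Because Druid partitions data by time, queries that filter on the time column can skip irrelevant partitions entirely. Druid also speaks SQL, so a query against a hypothetical events datasource (the datasource and column names here are illustrative; __time is Druid's built-in timestamp column) might look like:

```sql
-- Top channels by event count over the last hour; the __time filter
-- lets Druid scan only the matching time partitions.
SELECT channel, COUNT(*) AS event_count
FROM events
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
GROUP BY channel
ORDER BY event_count DESC
LIMIT 10;
```

    The GROUP BY and ranking here are exactly the fast scans, rankings, and groupBys the description refers to, accelerated by Druid's columnar storage and inverted string indexes.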
  • 27
    AT&T Alien Labs Open Threat Exchange
    The world's largest open threat intelligence community that enables collaborative defense with actionable, community-powered threat data. Threat sharing in the security industry remains mainly ad-hoc and informal, filled with blind spots, frustration, and pitfalls. Our vision is for companies and government agencies to gather and share relevant, timely, and accurate information about new or ongoing cyberattacks and threats as quickly as possible to avoid major breaches (or minimize the damage from an attack). The Alien Labs Open Threat Exchange (OTX™) delivers the first truly open threat intelligence community that makes this vision a reality. OTX provides open access to a global community of threat researchers and security professionals. It now has more than 100,000 participants in 140 countries, who contribute over 19 million threat indicators daily. It delivers community-generated threat data, enables collaborative research, and automates the update of your security infrastructure.
  • 28
    CrateDB

    The enterprise database for time series, documents, and vectors. Store any type of data and combine the simplicity of SQL with the scalability of NoSQL. CrateDB is an open source distributed database running queries in milliseconds, whatever the complexity, volume and velocity of data.
  • 29
    Beats
    Elastic

    Beats is a free and open platform for single-purpose data shippers. They send data from hundreds or thousands of machines and systems to Logstash or Elasticsearch. Beats are open source data shippers that you install as agents on your servers to send operational data to Elasticsearch. Elastic provides Beats for capturing data and event logs. Beats can send data directly to Elasticsearch or via Logstash, where you can further process and enhance the data, before visualizing it in Kibana. Want to get up and running quickly with infrastructure metrics monitoring and centralized log analytics? Try out the Metrics app and the Logs app in Kibana. For more details, see Analyze metrics and Monitor logs. Whether you’re collecting from security devices, cloud, containers, hosts, or OT, Filebeat helps you keep the simple things simple by offering a lightweight way to forward and centralize logs and files.
    Starting Price: $16 per month
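    As an illustration of the lightweight log forwarding described above, a minimal filebeat.yml (the paths and hosts are placeholders for your own environment) tails log files and ships them either directly to Elasticsearch or through Logstash:

```yaml
filebeat.inputs:
  - type: filestream            # tail log files as they grow
    paths:
      - /var/log/myapp/*.log    # placeholder path

# Ship directly to Elasticsearch...
output.elasticsearch:
  hosts: ["localhost:9200"]

# ...or comment out the block above and route via Logstash instead,
# to further process and enrich the data before it reaches Kibana:
# output.logstash:
#   hosts: ["localhost:5044"]
```

    Filebeat allows only one output at a time, which is why the Logstash alternative is shown commented out.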
  • 30
    IRI Voracity
    IRI, The CoSort Company

    Voracity is the only high-performance, all-in-one data management platform accelerating AND consolidating the key activities of data discovery, integration, migration, governance, and analytics. Voracity helps you control your data in every stage of the lifecycle, and extract maximum value from it. Only in Voracity can you:
    1) CLASSIFY, profile, and diagram enterprise data sources
    2) Speed or LEAVE legacy sort and ETL tools
    3) MIGRATE data to modernize and WRANGLE data to analyze
    4) FIND PII everywhere and consistently MASK it for referential integrity
    5) Score re-ID risk and ANONYMIZE quasi-identifiers
    6) Create and manage DB subsets or intelligently synthesize TEST data
    7) Package, protect, and provision BIG data
    8) Validate, scrub, enrich, and unify data to improve its QUALITY
    9) Manage metadata and MASTER data
    Use Voracity to comply with data privacy laws, de-muck and govern the data lake, improve the reliability of your analytics, and create safe, smart test data.