Alternatives to Azkaban

Compare Azkaban alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Azkaban in 2024. Compare features, ratings, user reviews, pricing, and more from Azkaban competitors and alternatives in order to make an informed decision for your business.

  • 1
    Rivery

    Rivery’s SaaS ETL platform provides a fully managed solution for data ingestion, transformation, orchestration, reverse ETL and more, with built-in support for your development and deployment lifecycles. Key Features: Data Workflow Templates: Extensive library of pre-built templates that enables teams to instantly create powerful data pipelines with the click of a button. Fully managed: No-code, auto-scalable, and hassle-free platform. Rivery takes care of the back end, allowing teams to spend time on priorities rather than maintenance. Multiple Environments: Construct and clone custom environments for specific teams or projects. Reverse ETL: Automatically send data from cloud warehouses to business applications, marketing clouds, CDPs, and more.
    Starting Price: $0.75 per credit
  • 2
    Yandex Data Proc
    You select the size of the cluster, node capacity, and a set of services, and Yandex Data Proc automatically creates and configures Spark and Hadoop clusters and other components. Collaborate by using Zeppelin notebooks and other web apps via a UI proxy. You get full control of your cluster with root permissions for each VM. Install your own applications and libraries on running clusters without having to restart them. Yandex Data Proc uses instance groups to automatically increase or decrease computing resources of compute subclusters based on CPU usage indicators. Data Proc allows you to create managed Hive clusters, which can reduce the probability of failures and losses caused by metadata unavailability. Save time on building ETL pipelines and pipelines for training and developing models, as well as describing other iterative tasks. The Data Proc operator is already built into Apache Airflow.
    Starting Price: $0.19 per hour
  • 3
    Gravity Data

    Gravity

    Gravity's mission is to make streaming data easy from over 100 sources while only paying for what you use. Gravity removes the reliance on engineering teams to deliver streaming pipelines, with a simple interface to get streaming up and running in minutes from databases, event data, and APIs. Everyone in the data team can now build with simple point-and-click so that you can focus on building apps, services and customer experiences. Full execution trace and detailed error messaging for quick diagnosis and resolution. We have implemented new, feature-rich ways for you to quickly get started, from bulk set-up, default schemas and data selection to different job modes and statuses. Spend less time wrangling with infrastructure and more time analyzing data while allowing our intelligent engine to keep your pipelines running. Gravity integrates with your systems for notifications and orchestration.
  • 4
    StreamScape

    Make use of Reactive Programming on the back-end without the need for specialized languages or cumbersome frameworks. Triggers, Actors and Event Collections make it easy to build data pipelines and work with data streams using simple SQL-like syntax, shielding users from the complexities of distributed system development. Extensible Data Modeling is a key feature that supports rich semantics and schema definition for representing real-world things. On-the-fly validation and data shaping rules support a variety of formats like XML and JSON, allowing you to easily describe and evolve your schema, keeping pace with changing business requirements. If you can describe it, we can query it. Know SQL and JavaScript? Then you already know how to use the data engine. Whatever the format, a powerful query language lets you instantly test logic expressions and functions, speeding up development and simplifying deployment for unmatched data agility.
  • 5
    Spring Cloud Data Flow
    Microservice-based streaming and batch data processing for Cloud Foundry and Kubernetes. Spring Cloud Data Flow provides tools to create complex topologies for streaming and batch data pipelines. The data pipelines consist of Spring Boot apps, built using the Spring Cloud Stream or Spring Cloud Task microservice frameworks. Spring Cloud Data Flow supports a range of data processing use cases, from ETL to import/export, event streaming, and predictive analytics. The Spring Cloud Data Flow server uses Spring Cloud Deployer to deploy data pipelines made of Spring Cloud Stream or Spring Cloud Task applications onto modern platforms such as Cloud Foundry and Kubernetes. A selection of pre-built stream and task/batch starter apps for various data integration and processing scenarios facilitates learning and experimentation. Custom stream and task applications, targeting different middleware or data services, can be built using the familiar Spring Boot style programming model.
  • 6
    Data Virtuality

    Connect and centralize data. Transform your existing data landscape into a flexible data powerhouse. Data Virtuality is a data integration platform for instant data access, easy data centralization and data governance. Our Logical Data Warehouse solution combines data virtualization and materialization for the highest possible performance. Build your single source of data truth with a virtual layer on top of your existing data environment for high data quality, data governance, and fast time-to-market. Hosted in the cloud or on-premises. Data Virtuality has 3 modules: Pipes, Pipes Professional, and Logical Data Warehouse. Cut down your development time by up to 80%. Access any data in minutes and automate data workflows using SQL. Use Rapid BI Prototyping for significantly faster time-to-market. Ensure data quality for accurate, complete, and consistent data. Use metadata repositories to improve master data management.
  • 7
    Skyvia

    Devart

    Data integration, backup, management, and connectivity. A 100% cloud-based platform that offers contemporary cloud agility and scalability, eliminating the need for deployment or manual upgrades. A no-code, wizard-based solution that meets the needs of both IT professionals and business users with no technical skills. With flexible pricing plans for each product, Skyvia suits businesses of any size, from a small startup to an enterprise company. Connect your cloud, on-premise, and flat data to automate workflows. Automate data collection from disparate cloud sources to a database or data warehouse. Transfer your business data between cloud apps automatically in just a few clicks. Protect all your cloud data and keep it secure in one place. Share data in real time via REST API to connect with multiple OData consumers (see the sketch after this entry). Query and manage any data from the browser via SQL or the intuitive visual Query Builder.
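To show what consuming a shared OData endpoint can look like, here is a hedged sketch using Python's requests library; the endpoint URL, entity set, and field names are hypothetical, and a real Skyvia endpoint may require authentication configured in the Skyvia console.

```python
import requests

ENDPOINT = "https://example-odata-endpoint/v4"  # hypothetical endpoint URL

resp = requests.get(
    f"{ENDPOINT}/Orders",                        # hypothetical entity set
    params={"$top": 5, "$filter": "Amount gt 100"},
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

# OData JSON responses wrap result rows in a "value" array.
for row in resp.json().get("value", []):
    print(row)
```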
  • 8
    Hazelcast

    In-Memory Computing Platform. The digital world is different. Microseconds matter. That's why the world's largest organizations rely on us to power their most time-sensitive applications at scale. New data-enabled applications can deliver transformative business power – if they meet today’s requirement of immediacy. Hazelcast solutions complement virtually any database to deliver results that are significantly faster than a traditional system of record. Hazelcast’s distributed architecture provides redundancy for continuous cluster uptime and always-available data to serve the most demanding applications. Capacity grows elastically with demand, without compromising performance or availability. The fastest in-memory data grid, combined with third-generation high-speed event processing, delivered through the cloud.
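As a small illustration of using the in-memory data grid, the sketch below uses the hazelcast-python-client package (an assumption; clients exist for several languages) against a cluster member assumed to be reachable at the default localhost address.

```python
import hazelcast

# Connects to a Hazelcast cluster member on localhost:5701 by default.
client = hazelcast.HazelcastClient()

# get_map returns an async proxy; blocking() gives a synchronous view of it.
capitals = client.get_map("capitals").blocking()
capitals.put("France", "Paris")
print(capitals.get("France"))

client.shutdown()
```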
  • 9
    Pantomath

    Organizations continuously strive to be more data-driven, building dashboards, analytics, and data pipelines across the modern data stack. Unfortunately, most organizations struggle with data reliability issues, leading to poor business decisions and a lack of trust in data as an organization, directly impacting their bottom line. Resolving complex data issues is a manual and time-consuming process involving multiple teams, all relying on tribal knowledge to manually reverse engineer complex data pipelines across different platforms to identify the root cause and understand the impact. Pantomath is a data pipeline observability and traceability platform for automating data operations. It continuously monitors datasets and jobs across the enterprise data ecosystem, providing context to complex data pipelines by creating automated cross-platform technical pipeline lineage.
  • 10
    BigBI

    BigBI enables data specialists to build their own powerful big data pipelines interactively and efficiently, without any coding! BigBI unleashes the power of Apache Spark, enabling scalable processing of real big data (up to 100X faster), integration of traditional data (SQL, batch files) with modern data sources, including semi-structured (JSON, NoSQL DBs, Elastic, Hadoop) and unstructured (text, audio, video), as well as integration of streaming data, cloud data, AI/ML, and graphs.
  • 11
    QuerySurge
    QuerySurge leverages AI to automate the data validation and ETL testing of Big Data, Data Warehouses, Business Intelligence Reports and Enterprise Apps/ERPs with full DevOps functionality for continuous testing. Use Cases - Data Warehouse & ETL Testing - Hadoop & NoSQL Testing - DevOps for Data / Continuous Testing - Data Migration Testing - BI Report Testing - Enterprise App/ERP Testing QuerySurge Features - Projects: Multi-project support - AI: automatically creates data validation tests based on data mappings - Smart Query Wizards: create tests visually, without writing SQL - Data Quality at Speed: automate the launch, execution, and comparison of tests and see results quickly - Test across 200+ platforms: Data Warehouses, Hadoop & NoSQL lakes, databases, flat files, XML, JSON, BI Reports - DevOps for Data & Continuous Testing: RESTful API with 60+ calls & integration with all mainstream solutions - Data Analytics & Data Intelligence: analytics dashboard & reports
  • 12
    Google Cloud Dataflow
    Unified stream and batch data processing that's serverless, fast, and cost-effective. Fully managed data processing service. Automated provisioning and management of processing resources. Horizontal autoscaling of worker resources to maximize resource utilization. OSS community-driven innovation with the Apache Beam SDK. Reliable and consistent exactly-once processing. Streaming data analytics with speed. Dataflow enables fast, simplified streaming data pipeline development with lower data latency. Allow teams to focus on programming instead of managing server clusters, as Dataflow’s serverless approach removes operational overhead from data engineering workloads. Dataflow automates provisioning and management of processing resources to minimize latency and maximize utilization.
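Dataflow pipelines are authored with the Apache Beam SDK mentioned above. The minimal Beam sketch below runs locally on the DirectRunner; the Dataflow runner flags in the comment are placeholders you would fill in to submit the same code to the managed service.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Runs locally with the DirectRunner by default. To submit to Dataflow instead, pass e.g.
#   --runner=DataflowRunner --project=<project> --region=<region> --temp_location=gs://<bucket>/tmp
with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["alpha", "beta", "gamma"])
        | "Uppercase" >> beam.Map(str.upper)
        | "Print" >> beam.Map(print)
    )
```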
  • 13
    dbt

    dbt Labs

    Version control, quality assurance, documentation and modularity allow data teams to collaborate like software engineering teams. Analytics errors should be treated with the same level of urgency as bugs in a production product. Much of an analytic workflow is manual. We believe workflows should be built to execute with a single command. Data teams use dbt to codify business logic and make it accessible to the entire organization—for use in reporting, ML modeling, and operational workflows. Built-in CI/CD ensures that changes to data models move appropriately through development, staging, and production environments. dbt Cloud also provides guaranteed uptime and custom SLAs.
    Starting Price: $50 per user per month
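The single-command workflow dbt describes is typically driven from the dbt CLI; the snippet below is a minimal sketch that invokes it from Python. It assumes the dbt CLI is installed, the working directory contains a dbt project (dbt_project.yml) with a configured profile, and "staging" is a hypothetical selector.

```python
import subprocess

# Build, then test, a hypothetical "staging" selection of models.
for args in (["dbt", "run", "--select", "staging"],
             ["dbt", "test", "--select", "staging"]):
    subprocess.run(args, check=True)
```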
  • 14
    CData Sync

    CData Software

    CData Sync is a universal data pipeline that delivers automated continuous replication between hundreds of SaaS applications & cloud data sources and any major database or data warehouse, on-premise or in the cloud. Replicate data from hundreds of cloud data sources to popular database destinations, such as SQL Server, Redshift, S3, Snowflake, BigQuery, and more. Configuring replication is easy: log in, select the data tables to replicate, and select a replication interval. Done. CData Sync extracts data iteratively, causing minimal impact on operational systems by only querying and updating data that has been added or changed since the last update. CData Sync offers the utmost flexibility across full and partial replication scenarios and ensures that critical data is stored safely in your database of choice. Download a 30-day free trial of the Sync application or request more information at www.cdata.com/sync.
  • 15
    K2View

    At K2View, we believe that every enterprise should be able to leverage its data to become as disruptive and agile as the best companies in its industry. We make this possible through our patented Data Product Platform, which creates and manages a complete and compliant dataset for every business entity – on demand, and in real time. The dataset is always in sync with its underlying sources, adapts to changes in the source structures, and is instantly accessible to any authorized data consumer. Data Product Platform fuels many operational use cases, including customer 360, data masking and tokenization, test data management, data migration, legacy application modernization, data pipelining and more – to deliver business outcomes in less than half the time, and at half the cost, of any other alternative. The platform inherently supports modern data architectures – data mesh, data fabric, and data hub – and deploys in cloud, on-premise, or hybrid environments.
  • 16
    Lightbend

    Lightbend provides technology that enables developers to easily build data-centric applications that bring the most demanding, globally distributed applications and streaming data pipelines to life. Companies worldwide turn to Lightbend to solve the challenges of real-time, distributed data in support of their most business-critical initiatives. Akka Platform provides the building blocks that make it easy for businesses to build, deploy, and run large-scale applications that support digitally transformative initiatives. Accelerate time-to-value and reduce infrastructure and cloud costs with reactive microservices that take full advantage of the distributed nature of the cloud and are resilient to failure, highly efficient, and able to operate at any scale. Native support for encryption, data shredding, TLS enforcement, and continued compliance with GDPR. A framework for quick construction, deployment, and management of streaming data pipelines.
  • 17
    TIBCO Data Fabric
    More data sources, more silos, more complexity, and constant change. Data architectures are challenged to keep pace—a big problem for today's data-driven organizations, and one that puts your business at risk. A data fabric is a modern distributed data architecture that includes shared data assets and optimized data fabric pipelines that you can use to address today's data challenges in a unified way. Optimized data management and integration capabilities so you can intelligently simplify, automate, and accelerate your data pipelines. Easy-to-deploy and adapt distributed data architecture that fits your complex, ever-changing technology landscape. Accelerate time to value by unlocking your distributed on-premises, cloud, and hybrid cloud data, no matter where it resides, and delivering it wherever it's needed at the pace of business.
  • 18
    Prefect

    Prefect Cloud is a command center for your workflows. Deploy from Prefect core and instantly gain complete oversight and control. Cloud's beautiful UI lets you keep an eye on the health of your infrastructure. Stream real-time state updates and logs, kick off new runs, and receive critical information exactly when you need it. With Prefect's Hybrid Model, your code and data remain on-prem while Prefect Cloud's managed orchestration keeps everything running smoothly. The Cloud scheduler service runs asynchronously to ensure your runs start on time, every time. Advanced scheduling options let you schedule changes to parameter values as well as the execution environment for each run. Configure custom notifications and actions when your workflows change state. Monitor the health of all agents connected to your cloud instance and receive custom alerts when an agent goes offline. A minimal flow sketch follows this entry.
    Starting Price: $0.0025 per successful task
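For context, a Prefect workflow is ordinary Python. The sketch below is a minimal flow with two tasks using the Prefect 2.x API (an assumption about the version); the same code can later be deployed so that Prefect Cloud schedules and monitors its runs.

```python
from prefect import flow, task

@task
def extract() -> list[int]:
    return [1, 2, 3]

@task
def load(rows: list[int]) -> None:
    print(f"loaded {len(rows)} rows")

@flow
def etl():
    load(extract())

if __name__ == "__main__":
    etl()  # runs locally; a deployment lets Prefect Cloud orchestrate future runs
```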
  • 19
    Arcion

    Arcion Labs

    Deploy production-ready change data capture pipelines for high-volume, real-time data replication - without a single line of code. Supercharged Change Data Capture. Enjoy automatic schema conversion, end-to-end replication, flexible deployment, and more with Arcion’s distributed Change Data Capture (CDC). Leverage Arcion’s zero data loss architecture for guaranteed end-to-end data consistency, built-in checkpointing, and more, without any custom code. Leave scalability and performance concerns behind with a highly distributed, highly parallel architecture supporting 10x faster data replication. Reduce DevOps overhead with Arcion Cloud, the only fully managed CDC offering. Enjoy autoscaling, built-in high availability, a monitoring console, and more. Simplify and standardize your data pipeline architecture, and migrate workloads from on-prem to the cloud with zero downtime.
    Starting Price: $2,894.76 per month
  • 20
    Quix

    Building real-time apps and services requires lots of components running in concert: Kafka, VPC hosting, infrastructure as code, container orchestration, observability, CI/CD, persistent volumes, databases, and much more. The Quix platform takes care of all the moving parts. You just connect your data and start building. That’s it. No provisioning clusters or configuring resources. Use Quix connectors to ingest transaction messages streamed from your financial processing systems in a virtual private cloud or on-premise data center. All data in transit is encrypted end-to-end and compressed with Gzip and Protobuf for security and efficiency. Detect fraudulent patterns with machine learning models or rule-based algorithms. Create fraud warning messages as troubleshooting tickets or display them in support dashboards.
    Starting Price: $50 per month
  • 21
    Apache Kafka

    The Apache Software Foundation

    Apache Kafka® is an open-source, distributed streaming platform. Scale production clusters up to a thousand brokers, trillions of messages per day, petabytes of data, hundreds of thousands of partitions. Elastically expand and contract storage and processing. Stretch clusters efficiently over availability zones or connect separate clusters across geographic regions. Process streams of events with joins, aggregations, filters, transformations, and more, using event-time and exactly-once processing. Kafka’s out-of-the-box Connect interface integrates with hundreds of event sources and event sinks including Postgres, JMS, Elasticsearch, AWS S3, and more. Read, write, and process streams of events in a vast array of programming languages.
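Kafka clients exist for many languages, as noted above; the sketch below uses the third-party kafka-python package (an assumption, not part of Kafka itself) against a broker assumed to be running at localhost:9092 with an "events" topic.

```python
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"  # assumed local broker

# Produce a single event to the "events" topic.
producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send("events", b"order-created")
producer.flush()

# Read events back from the beginning of the topic, giving up after 5 seconds of silence.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)
for message in consumer:
    print(message.topic, message.offset, message.value)
```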
  • 22
    Datazoom

    Improving the experience, efficiency, and profitability of streaming video requires data. Datazoom enables video publishers to better operate distributed architectures through centralizing, standardizing, and integrating data in real-time to create a more powerful data pipeline and improve observability, adaptability, and optimization solutions. Datazoom is a video data platform that continually gathers data from endpoints, like a CDN or a video player, through an ecosystem of collectors. Once the data is gathered, it is normalized using standardized data definitions. This data is then sent through available connectors to analytics platforms like Google BigQuery, Google Analytics, and Splunk and can be visualized in tools such as Looker and Superset. Datazoom is your key to a more effective and efficient data pipeline. Get the data you need in real-time. Don’t wait for your data when you need to resolve an issue immediately.
  • 23
    Data Taps

    Build your data pipelines like Lego blocks with Data Taps. Add new metrics layers, zoom in, and investigate with real-time streaming SQL. Build with others, share and consume data, globally. Refine and update without hassle. Use multiple models/schemas during schema evolution. Built to scale with AWS Lambda and S3.
  • 24
    definity

    Monitor and control everything your data pipelines do with zero code changes. Monitor data and pipelines in motion to proactively prevent downtime and quickly root-cause issues. Optimize pipeline runs and job performance to save costs and keep SLAs. Accelerate code deployments and platform upgrades while maintaining reliability and performance. Data & performance checks in line with pipeline runs. Checks on input data, before pipelines even run. Automatic preemption of runs. definity takes away the effort to build deep end-to-end coverage, so you are protected at every step, across every dimension. definity shifts observability to post-production to achieve ubiquity, increase coverage, and reduce manual effort. definity agents automatically run with every pipeline, with zero footprint. Unified view of data, pipelines, infra, lineage, and code for every data asset. Detect at runtime and avoid async checks. Auto-preempt runs, even on inputs.
  • 25
    Osmos

    With Osmos, your customers can easily clean their messy data files and import them directly into your operational system without writing a line of code. At the core, we have an AI-powered data transformation engine that enables users to map, validate, and clean data with only a few clicks. An eCommerce company automates ingestion of product catalog data from multiple distributors and vendors into its database. A manufacturing company automates the data ingestion of purchase orders from email attachments into NetSuite. Automatically clean up and reformat incoming data to match your destination schema. Never deal with custom scripts and spreadsheets again.
    Starting Price: $299 per month
  • 26
    CloverDX

    Design, debug, run and troubleshoot data transformations and jobflows in a developer-friendly visual designer. Orchestrate data workloads that require tasks to be carried out in the right sequence, and orchestrate multiple systems with the transparency of visual workflows. Deploy data workloads easily into a robust enterprise runtime environment, in the cloud or on-premise. Make data available to people, applications and storage under a single unified platform. Manage your data workloads and related processes together in a single platform. No task is too complex. We’ve built CloverDX on years of experience with large enterprise projects. A developer-friendly open architecture and flexibility let you package and hide the complexity for non-technical users. Manage the entire lifecycle of a data pipeline from design and deployment to evolution and testing. Get things done fast with the help of our in-house customer success teams.
    Starting Price: $5000.00/one-time
  • 27
    Google Cloud Composer
    Cloud Composer's managed nature and Apache Airflow compatibility allows you to focus on authoring, scheduling, and monitoring your workflows as opposed to provisioning resources. End-to-end integration with Google Cloud products including BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub, and AI Platform gives users the freedom to fully orchestrate their pipeline. Author, schedule, and monitor your workflows through a single orchestration tool—whether your pipeline lives on-premises, in multiple clouds, or fully within Google Cloud. Ease your transition to the cloud or maintain a hybrid data environment by orchestrating workflows that cross between on-premises and the public cloud. Create workflows that connect data, processing, and services across clouds to give you a unified data environment.
    Starting Price: $0.074 per vCPU hour
  • 28
    Lyftrondata

    Whether you want to build a governed delta lake or a data warehouse, or simply want to migrate from your traditional database to a modern cloud data warehouse, do it all with Lyftrondata. Simply create and manage all of your data workloads on one platform by automatically building your pipeline and warehouse. Analyze it instantly with ANSI SQL and BI/ML tools, and share it without worrying about writing any custom code. Boost the productivity of your data professionals and shorten your time to value. Define, categorize, and find all data sets in one place. Share these data sets with other experts with zero coding and drive data-driven insights. This data sharing ability is perfect for companies that want to store their data once, share it with other experts, and use it multiple times, now and in the future. Define datasets, apply SQL transformations, or simply migrate your SQL data processing logic to any cloud data warehouse.
  • 29
    AWS Data Pipeline
    AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR. AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. You don’t have to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos.
    Starting Price: $1 per month
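A minimal, illustrative sketch of defining and activating a pipeline with the boto3 "datapipeline" client follows. The role names and worker group are placeholders, and a real pipeline also needs the referenced IAM roles plus a Task Runner polling that worker group.

```python
import boto3

client = boto3.client("datapipeline")

# Create an empty pipeline shell.
pipeline_id = client.create_pipeline(name="demo-pipeline", uniqueId="demo-pipeline-001")["pipelineId"]

# Supply a minimal definition: default settings plus one shell-command activity.
client.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        {
            "id": "Default",
            "name": "Default",
            "fields": [
                {"key": "scheduleType", "stringValue": "ondemand"},
                {"key": "role", "stringValue": "DataPipelineDefaultRole"},              # placeholder IAM role
                {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},  # placeholder IAM role
            ],
        },
        {
            "id": "EchoActivity",
            "name": "EchoActivity",
            "fields": [
                {"key": "type", "stringValue": "ShellCommandActivity"},
                {"key": "command", "stringValue": "echo hello"},
                {"key": "workerGroup", "stringValue": "demo-worker-group"},             # placeholder worker group
            ],
        },
    ],
)

client.activate_pipeline(pipelineId=pipeline_id)
```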
  • 30
    Nextflow

    Seqera Labs

    Data-driven computational pipelines. Nextflow enables scalable and reproducible scientific workflows using software containers. It allows the adaptation of pipelines written in the most common scripting languages. Its fluent DSL simplifies the implementation and deployment of complex parallel and reactive workflows on clouds and clusters. Nextflow is built around the idea that Linux is the lingua franca of data science. Nextflow allows you to write a computational pipeline by making it simpler to put together many different tasks. You may reuse your existing scripts and tools and you don't need to learn a new language or API to start using it. Nextflow supports Docker and Singularity containers technology. This, along with the integration of the GitHub code-sharing platform, allows you to write self-contained pipelines, manage versions, and rapidly reproduce any former configuration. Nextflow provides an abstraction layer between your pipeline's logic and the execution layer.
    Starting Price: Free
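Nextflow pipelines themselves are written in its Groovy-based DSL, but launching one is a single CLI call. The sketch below (Python, to stay consistent with the other examples on this page) runs the public nextflow-io/hello pipeline from GitHub, assuming the nextflow CLI is installed; adding -with-docker <image> or -with-singularity <image> runs tasks inside containers.

```python
import subprocess

# Pulls and runs the public hello-world pipeline from GitHub; "-r master" pins the
# revision so the run is reproducible. Assumes the nextflow CLI is on PATH.
subprocess.run(["nextflow", "run", "nextflow-io/hello", "-r", "master"], check=True)
```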
  • 31
    Dropbase

    Centralize offline data, import files, process and clean up data. Export to a live database with 1 click. Streamline data workflows. Centralize offline data and make it accessible to your team. Bring offline files to Dropbase. Multiple formats. Any way you like. Process and format data. Add, edit, re-order, and delete processing steps. 1-click exports. Export to database, endpoints, or download code with 1 click. Instant REST API access. Query Dropbase data securely with REST API access keys. Onboard data where you need it. Combine and process datasets to fit the desired format or data model. No code. Process your data pipelines using a spreadsheet interface. Track every step. Flexible. Use a library of pre-built processing functions. Or write your own. 1-click exports. Export to database or generate endpoints with 1 click. Manage databases and credentials.
    Starting Price: $19.97 per user per month
  • 32
    Datastreamer

    Integrate unstructured external data into your organization in minutes. Datastreamer is a turnkey data platform to source, unify, and enrich unstructured external data with 95% less work than building pipelines in-house. Customers use Datastreamer to feed specialized AI models and accelerate insights in Threat Intelligence, KYC/AML, Financial Analysis and more. Feed your analytics products or specialized AI models with billions of data pieces from social media, blogs, news, forums, dark web data, and more. Our platform unifies source data into a common schema so you can use content from multiple sources simultaneously. Leverage our pre-integrated data partners or connect data from any data supplier. Tap into our powerful AI models to enhance data with components like sentiment analysis and PII redaction. Scale data pipelines at lower cost by plugging into our managed infrastructure that is optimized to handle massive volumes of text data.
  • 33
    SmokePing

    SmokePing is a deluxe latency measurement tool. It can measure, store and display latency, latency distribution, and packet loss. SmokePing uses RRDtool to maintain a long-term data store and to draw pretty graphs, giving up-to-the-minute information on the state of each network connection. Click on any graph in detail mode and use the mouse to mark your area of interest in the navigator graph. Show information from multiple targets in a graph. With one central Smokeping Master node, you can run a series of Slave nodes, taking their configuration from the master. This allows you to ping a single target from multiple locations. The standard deviation is now used in several places to give a number for the variation in round trip times as depicted by the smoke. Wide variety of probes, ranging from simple ping to web requests and custom protocols. Master/slave deployment model to run measurements from multiple sources in parallel.
    Starting Price: Free
  • 34
    Kraken CI

    Michal Nowikowski

    Modern CI/CD, open-source, on-premise system that is highly scalable and focused on testing. Features: - flexible workflow planning using Starlark/Python - distributed building and testing - various executors: bare metal, Docker, LXD - highly scalable to thousands of executors - sophisticated test results analysis - integration with AWS EC2 and ECS and Azure VM, with autoscaling - support for webhooks from GitHub, GitLab and Gitea - email and Slack notifications
    Starting Price: Free
  • 35
    PragmaDev Process
    PragmaDev Process is a simple and powerful tool that aims to help business process modelers verify their models. Complex organizations or systems operations are based on processes described in graphical models. The most popular notation is BPMN (Business Process Model and Notation). It describes what the different participants in a process do and how they interact with each other. These processes must be thoroughly discussed before they are applied in a real situation. Any misunderstanding of the process might lead to a catastrophic situation in operation. Based on standard BPMN semantics, the modeler can execute the process step by step. The tool will outline the possible choices at each step of execution, leaving no room for human interpretation that could lead to misunderstanding. PragmaDev Process is a BPMN editor, executor, and verifier. It is the outcome of a two-year research project with the French Army, Eurocontrol, and Airbus DS.
    Starting Price: €90 per month
  • 36
    Apache Gobblin

    Apache Software Foundation

    A distributed data integration framework that simplifies common aspects of Big Data integration such as data ingestion, replication, organization, and lifecycle management for both streaming and batch data ecosystems. Runs as a standalone application on a single box; also supports an embedded mode. Runs as a MapReduce application on multiple Hadoop versions, and also supports Azkaban for launching MapReduce jobs. Runs as a standalone cluster with primary and worker nodes; this mode supports high availability and can run on bare metal as well. Runs as an elastic cluster on the public cloud; this mode supports high availability. Gobblin as it exists today is a framework that can be used to build different data integration applications like ingest, replication, etc. Each of these applications is typically configured as a separate job and executed through a scheduler like Azkaban.
  • 37
    H2

    Welcome to H2, the Java SQL database. In embedded mode, an application opens a database from within the same JVM using JDBC. This is the fastest and easiest connection mode. The disadvantage is that a database may only be open in one virtual machine (and class loader) at any time. As in all modes, both persistent and in-memory databases are supported. There is no limit on the number of databases open concurrently, or on the number of open connections. The mixed mode is a combination of the embedded and the server mode. The first application that connects to a database does so in embedded mode, but also starts a server so that other applications (running in different processes or virtual machines) can concurrently access the same data. The local connections are as fast as if the database were used in just the embedded mode, while the remote connections are a bit slower.
  • 38
    Gueststream VRPc

    Gueststream

    The Vacation Rental Platform Central (VRPc) is a system that receives your inventory data and builds webpages and a booking system with that data. The data is usually uploaded from a Property Management System or Distribution Channel, or the platform can be used in a stand-alone mode. Because there is one instance of the main code, the VRPc has many advantages over the “one-off” model that is used by our competitors. The VRPc software runs on a state-of-the-art server that is designed and optimized for this purpose only. One instance of the code means that fixes or enhancements apply automatically to all customer systems. This keeps product costs low. Automatically deliver the correct design layout for the device used, add rates, taxes, and commissions to bookings, bundle items like tickets, tours, and meal plans into a booking, control online inventory by complexes, location, or type, and include inventory from multiple property managers.
    Starting Price: $199 per month
  • 39
    Airbyte

    Get all your ELT data pipelines running in minutes, even your custom ones. Let your team focus on insights and innovation. Unify your data integration pipelines in one open-source ELT platform. Airbyte addresses all your data team's connector needs, however custom they are and whatever your scale. The data integration platform that can scale with your custom or high-volume needs. From high-volume databases to the long tail of API sources. Leverage Airbyte’s long tail of high-quality connectors that adapt to schema and API changes. Extensible to unify all native & custom ELT. Edit pre-built open-source connectors, or build new ones with our connector development kit in a few hours. Transparent and scalable pricing. Finally, a transparent and predictable cost-based pricing that scales with your data needs. You don’t need to worry about volume anymore. No more need for custom systems for your in-house scripts or database replication.
    Starting Price: $2.50 per credit
  • 40
    Astera Centerprise
    Astera Centerprise is a complete on-premise data integration solution that helps extract, transform, profile, cleanse, and integrate data from disparate sources in a code-free, drag-and-drop environment. The software is designed to cater to enterprise-level data integration needs and is used by Fortune 500 companies, like Wells Fargo, Xerox, HP, and more. Through process orchestration, workflow automation, job scheduling, instant data preview, and more, enterprises can easily get accurate, consolidated data for their day-to-day decision making at the speed of business.
  • 41
    Astro

    Astronomer

    For data teams looking to increase the availability of trusted data, Astronomer provides Astro, a modern data orchestration platform, powered by Apache Airflow, that enables the entire data team to build, run, and observe data pipelines-as-code. Astronomer is the commercial developer of Airflow, the de facto standard for expressing data flows as code, used by hundreds of thousands of teams across the world.
  • 42
    BettrData

    Our automated data operations platform will allow businesses to reduce or reallocate the number of full-time employees needed to support their data operations. This is traditionally a very manual and expensive process, and our product packages it all together to simplify the process and significantly reduce costs. With so much problematic data in business, most companies cannot give appropriate attention to the quality of their data because they are too busy processing it. By using our product, you automatically become a proactive business when it comes to data quality. With clear visibility of all incoming data and a built-in alerting system, our platform ensures that your data quality standards are met. We are a first-of-its-kind solution that has taken many costly manual processes and put them into a single platform. The BettrData.io platform is ready to use after a simple installation and several straightforward configurations.
  • 43
    Apache Airflow

    The Apache Software Foundation

    Airflow is a platform created by the community to programmatically author, schedule and monitor workflows. Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers. Airflow is ready to scale to infinity. Airflow pipelines are defined in Python, allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically. Easily define your own operators and extend libraries to fit the level of abstraction that suits your environment. Airflow pipelines are lean and explicit. Parametrization is built into its core using the powerful Jinja templating engine. No more command-line or XML black-magic! Use standard Python features to create your workflows, including date time formats for scheduling and loops to dynamically generate tasks. This allows you to maintain full flexibility when building your workflows.
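As a minimal illustration of "pipelines defined in Python", the sketch below declares a two-task DAG. It targets the Airflow 2.x API (an assumption; older releases use the schedule_interval argument instead of schedule).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Two tasks run daily; extract must finish before transform starts.
with DAG(
    dag_id="example_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=lambda: print("extracting"))
    transform = PythonOperator(task_id="transform", python_callable=lambda: print("transforming"))
    extract >> transform
```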
  • 44
    Alooma

    Google

    Alooma enables data teams to have visibility and control. It brings data from your various data silos together into BigQuery, all in real time. Set up and flow data in minutes, or customize, enrich, and transform data on the stream before it even hits the data warehouse. Never lose an event. Alooma's built-in safety nets ensure easy error handling without pausing your pipeline. Whatever the number of data sources, from low to high volume, Alooma’s infrastructure scales to your needs.
  • 45
    Amazon MWAA

    Amazon

    Amazon Managed Workflows for Apache Airflow (MWAA) is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud at scale. Apache Airflow is an open-source tool used to programmatically author, schedule, and monitor sequences of processes and tasks referred to as “workflows.” With Managed Workflows, you can use Airflow and Python to create workflows without having to manage the underlying infrastructure for scalability, availability, and security. Managed Workflows automatically scales its workflow execution capacity to meet your needs, and is integrated with AWS security services to help provide you with fast and secure access to data.
    Starting Price: $0.49 per hour
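Since MWAA runs standard Airflow, DAG files are plain Python uploaded to the S3 bucket configured for the environment, which reads them from the dags/ prefix. A minimal boto3 sketch, with placeholder file and bucket names:

```python
import boto3

# Placeholders: etl_dag.py is a local Airflow DAG file, my-mwaa-bucket is the S3
# bucket configured for the MWAA environment.
s3 = boto3.client("s3")
s3.upload_file("etl_dag.py", "my-mwaa-bucket", "dags/etl_dag.py")
```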
  • 46
    Azure Event Hubs
    Event Hubs is a fully managed, real-time data ingestion service that’s simple, trusted, and scalable. Stream millions of events per second from any source to build dynamic data pipelines and immediately respond to business challenges. Keep processing data during emergencies using the geo-disaster recovery and geo-replication features. Integrate seamlessly with other Azure services to unlock valuable insights. Allow existing Apache Kafka clients and applications to talk to Event Hubs without any code changes—you get a managed Kafka experience without having to manage your own clusters. Experience real-time data ingestion and microbatching on the same stream. Focus on drawing insights from your data instead of managing infrastructure. Build real-time big data pipelines and respond to business challenges right away.
    Starting Price: $0.03 per hour
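As a small illustration of streaming events into a hub, the sketch below uses the azure-eventhub Python SDK; the connection string and hub name are placeholders. (As noted above, existing Kafka clients can also talk to Event Hubs without code changes.)

```python
from azure.eventhub import EventData, EventHubProducerClient

# Placeholders: supply your own namespace connection string and event hub name.
CONN_STR = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>"
producer = EventHubProducerClient.from_connection_string(conn_str=CONN_STR, eventhub_name="telemetry")

batch = producer.create_batch()
batch.add(EventData("sensor reading 1"))
batch.add(EventData("sensor reading 2"))
producer.send_batch(batch)
producer.close()
```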
  • 47
    Actifio

    Google

    Automate self-service provisioning and refresh of enterprise workloads, and integrate with your existing toolchain. High-performance data delivery and re-use for data scientists through a rich set of APIs and automation. Recover any data across any cloud from any point in time, at the same time, at scale, beyond legacy solutions. Minimize the business impact of ransomware and cyber attacks by recovering quickly with immutable backups. A unified platform to better protect, secure, retain, govern, or recover your data on-premises or in the cloud. Actifio’s patented software platform turns data silos into data pipelines. Virtual Data Pipeline (VDP) delivers full-stack data management, on-premises, hybrid or multi-cloud, through rich application integration, SLA-based orchestration, flexible data movement, and data immutability and security.
  • 48
    Conduktor

    We created Conduktor, the all-in-one friendly interface for working with the Apache Kafka ecosystem. Develop and manage Apache Kafka with confidence, and save time for your entire team, with Conduktor DevTools, the all-in-one Apache Kafka desktop client. Apache Kafka is hard to learn and to use. Made by Kafka lovers, Conduktor's best-in-class user experience is loved by developers. Conduktor offers more than just an interface over Apache Kafka. It gives you and your teams control of your whole data pipeline, thanks to our integration with most technologies around Apache Kafka, providing the most complete tool on top of Apache Kafka.
  • 49
    DoubleCloud

    Save time & costs by streamlining data pipelines with zero-maintenance open source solutions. From ingestion to visualization, all are integrated, fully managed, and highly reliable, so your engineers will love working with data. You choose whether to use any of DoubleCloud’s managed open source services or leverage the full power of the platform, including data storage, orchestration, ELT, and real-time visualization. We provide leading open source services like ClickHouse, Kafka, and Airflow, with deployment on Amazon Web Services or Google Cloud. Our no-code ELT tool allows real-time data syncing between systems, fast, serverless, and seamlessly integrated with your existing infrastructure. With our managed open-source data visualization you can simply visualize your data in real time by building charts and dashboards. We’ve designed our platform to make the day-to-day life of engineers more convenient.
    Starting Price: $0.024 per 1 GB per month
  • 50
    Dagster Cloud

    Dagster Labs

    Dagster is a next-generation orchestration platform for the development, production, and observation of data assets. Unlike other data orchestration solutions, Dagster provides you with an end-to-end development lifecycle. Dagster gives you control over your disparate data tools and empowers you to build, test, deploy, run, and iterate on your data pipelines. It makes you and your data teams more productive, your operations more robust, and puts you in complete control of your data processes as you scale. Dagster brings a declarative approach to the engineering of data pipelines. Your team defines the data assets required, quickly assessing their status and resolving any discrepancies. An assets-based model is clearer than a tasks-based one and becomes a unifying abstraction across the whole workflow.
    Starting Price: $0
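To make the assets-based model described above concrete, here is a minimal sketch using the open-source dagster package: two software-defined assets, where the downstream asset depends on the upstream one by parameter name, materialized in-process.

```python
from dagster import asset, materialize

@asset
def raw_orders() -> list[dict]:
    # Upstream asset: pretend this pulls rows from a source system.
    return [{"id": 1}, {"id": 2}]

@asset
def order_count(raw_orders: list[dict]) -> int:
    # Downstream asset: depends on raw_orders via the matching parameter name.
    return len(raw_orders)

if __name__ == "__main__":
    result = materialize([raw_orders, order_count])
    assert result.success
```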