Alternatives to Amazon MWAA

Compare Amazon MWAA alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Amazon MWAA in 2024. Compare features, ratings, user reviews, pricing, and more from Amazon MWAA competitors and alternatives in order to make an informed decision for your business.

  • 1
    Rivery

    Rivery’s SaaS ETL platform provides a fully managed solution for data ingestion, transformation, orchestration, reverse ETL, and more, with built-in support for your development and deployment lifecycles. Key features:
    - Data workflow templates: an extensive library of pre-built templates that lets teams create powerful data pipelines with the click of a button.
    - Fully managed: a no-code, auto-scalable, hassle-free platform. Rivery takes care of the back end, so teams can spend time on priorities rather than maintenance.
    - Multiple environments: construct and clone custom environments for specific teams or projects.
    - Reverse ETL: automatically send data from cloud warehouses to business applications, marketing clouds, CDPs, and more.
    Starting Price: $0.75 Per Credit
  • 2
    Apache Airflow

    The Apache Software Foundation

    Airflow is a platform created by the community to programmatically author, schedule, and monitor workflows. Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers, so it is ready to scale to infinity. Airflow pipelines are defined in Python, which lets you write code that instantiates pipelines dynamically. Easily define your own operators and extend libraries to fit the level of abstraction that suits your environment. Airflow pipelines are lean and explicit, and parametrization is built into its core using the powerful Jinja templating engine. No more command-line or XML black magic! Use standard Python features to create your workflows, including date-time formats for scheduling and loops to dynamically generate tasks, so you maintain full flexibility when building your workflows.
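
    For a concrete sense of pipelines-as-code, here is a minimal, hypothetical Airflow DAG; the DAG id, table names, and schedule are illustrative, and the "schedule" argument assumes Airflow 2.4+ (older releases call it "schedule_interval"):

        from datetime import datetime

        from airflow import DAG
        from airflow.operators.bash import BashOperator

        # The pipeline is plain Python: tasks are generated dynamically in a loop.
        with DAG(
            dag_id="example_dynamic_pipeline",  # illustrative name
            start_date=datetime(2024, 1, 1),
            schedule="@daily",
            catchup=False,
        ) as dag:
            previous = None
            for table in ["orders", "customers", "payments"]:  # hypothetical tables
                # Jinja templating: {{ ds }} expands to the run's logical date.
                task = BashOperator(
                    task_id=f"load_{table}",
                    bash_command=f"echo loading {table} for {{{{ ds }}}}",
                )
                if previous:
                    previous >> task  # run the loads in sequence
                previous = task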
  • 3
    Google Cloud Composer
    Cloud Composer's managed nature and Apache Airflow compatibility allow you to focus on authoring, scheduling, and monitoring your workflows rather than provisioning resources. End-to-end integration with Google Cloud products including BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub, and AI Platform gives users the freedom to fully orchestrate their pipelines. Author, schedule, and monitor your workflows through a single orchestration tool, whether your pipeline lives on-premises, in multiple clouds, or fully within Google Cloud. Ease your transition to the cloud or maintain a hybrid data environment by orchestrating workflows that cross between on-premises and the public cloud. Create workflows that connect data, processing, and services across clouds to give you a unified data environment.
    Starting Price: $0.074 per vCPU hour
  • 4
    DoubleCloud

    Save time and costs by streamlining data pipelines with zero-maintenance open source solutions. From ingestion to visualization, everything is integrated, fully managed, and highly reliable, so your engineers will love working with data. You choose whether to use any of DoubleCloud’s managed open source services or leverage the full power of the platform, including data storage, orchestration, ELT, and real-time visualization. We provide leading open source services like ClickHouse, Kafka, and Airflow, with deployment on Amazon Web Services or Google Cloud. Our no-code ELT tool enables real-time data syncing between systems; it is fast, serverless, and integrates seamlessly with your existing infrastructure. With our managed open source data visualization service, you can visualize your data in real time by building charts and dashboards. We’ve designed our platform to make the day-to-day life of engineers more convenient.
    Starting Price: $0.024 per 1 GB per month
  • 5
    Yandex Data Proc
    You select the size of the cluster, node capacity, and a set of services, and Yandex Data Proc automatically creates and configures Spark and Hadoop clusters and other components. Collaborate by using Zeppelin notebooks and other web apps via a UI proxy. You get full control of your cluster with root permissions for each VM. Install your own applications and libraries on running clusters without having to restart them. Yandex Data Proc uses instance groups to automatically increase or decrease the computing resources of compute subclusters based on CPU usage indicators. Data Proc allows you to create managed Hive clusters, which can reduce the probability of failures and losses caused by metadata unavailability. Save time on building ETL pipelines, pipelines for training and developing models, and other iterative tasks. The Data Proc operator is already built into Apache Airflow.
    Starting Price: $0.19 per hour
  • 6
    Astro

    Astronomer

    For data teams looking to increase the availability of trusted data, Astronomer provides Astro, a modern data orchestration platform powered by Apache Airflow that enables the entire data team to build, run, and observe data pipelines as code. Astronomer is the commercial developer of Airflow, the de facto standard for expressing data flows as code, used by hundreds of thousands of teams across the world.
  • 7
    Nextflow Tower

    Seqera Labs

    Nextflow Tower is an intuitive centralized command post that enables large-scale collaborative data analysis. With Tower, users can easily launch, manage, and monitor scalable Nextflow data analysis pipelines and compute environments on-premises or on most clouds. Researchers can focus on the science that matters rather than worrying about infrastructure engineering. Compliance is simplified with predictable, auditable pipeline execution and the ability to reliably reproduce results obtained with specific data sets and pipeline versions on demand. Nextflow Tower is developed and supported by Seqera Labs, the creators and maintainers of the open-source Nextflow project. This means that users get high-quality support directly from the source. Unlike third-party frameworks that incorporate Nextflow, Tower is deeply integrated and can help users benefit from Nextflow's complete set of capabilities.
  • 8
    Kestra

    Kestra is an open-source, event-driven orchestrator that simplifies data operations and improves collaboration between engineers and business users. By bringing Infrastructure as Code best practices to data pipelines, Kestra allows you to build reliable workflows and manage them with confidence. Thanks to the declarative YAML interface for defining orchestration logic, everyone who benefits from analytics can participate in the data pipeline creation process. The UI automatically adjusts the YAML definition any time you make changes to a workflow from the UI or via an API call. Therefore, the orchestration logic is defined declaratively in code, even if some workflow components are modified in other ways.
  • 9
    Azure Event Hubs
    Event Hubs is a fully managed, real-time data ingestion service that’s simple, trusted, and scalable. Stream millions of events per second from any source to build dynamic data pipelines and immediately respond to business challenges. Keep processing data during emergencies using the geo-disaster recovery and geo-replication features. Integrate seamlessly with other Azure services to unlock valuable insights. Allow existing Apache Kafka clients and applications to talk to Event Hubs without any code changes; you get a managed Kafka experience without having to manage your own clusters. Experience real-time data ingestion and microbatching on the same stream. Focus on drawing insights from your data instead of managing infrastructure, and build real-time big data pipelines that respond to business challenges right away (see the producer sketch after this entry).
    Starting Price: $0.03 per hour
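
    To ground the Event Hubs blurb, a minimal send sketch using the azure-eventhub Python SDK (v5-style API); the connection string, hub name, and payload are placeholders:

        from azure.eventhub import EventData, EventHubProducerClient

        # Placeholders: supply your own namespace connection string and hub name.
        producer = EventHubProducerClient.from_connection_string(
            conn_str="Endpoint=sb://<namespace>.servicebus.windows.net/;...",
            eventhub_name="telemetry",
        )

        with producer:
            # Batches respect the hub's size limits; add() raises when a batch is full.
            batch = producer.create_batch()
            batch.add(EventData('{"device": "sensor-1", "reading": 21.5}'))
            producer.send_batch(batch)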
  • 10
    Prefect

    Prefect Cloud is a command center for your workflows. Deploy from Prefect core and instantly gain complete oversight and control. Cloud's beautiful UI lets you keep an eye on the health of your infrastructure. Stream real-time state updates and logs, kick off new runs, and receive critical information exactly when you need it. With Prefect's Hybrid Model, your code and data remain on-prem while Prefect Cloud's managed orchestration keeps everything running smoothly. The Cloud scheduler service runs asynchronously to ensure your runs start on time, every time. Advanced scheduling options let you schedule changes to parameter values as well as the execution environment for each run. Configure custom notifications and actions when your workflows change state. Monitor the health of all agents connected to your cloud instance and receive custom alerts when an agent goes offline (a minimal flow sketch follows this entry).
    Starting Price: $0.0025 per successful task
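
    As a hedged sketch of Prefect's workflows-as-code idea (this uses the Prefect 2.x Python API; task names, retry counts, and data are illustrative):

        from prefect import flow, task

        @task(retries=2)  # retries are handled by the orchestrator, not your code
        def extract() -> list[int]:
            return [1, 2, 3]  # stand-in for a real data source

        @task
        def load(rows: list[int]) -> None:
            print(f"loaded {len(rows)} rows")

        @flow(log_prints=True)  # state and logs stream to the Prefect UI/Cloud
        def etl():
            load(extract())

        if __name__ == "__main__":
            etl()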
  • 11
    Actifio

    Google

    Automate self-service provisioning and refresh of enterprise workloads, and integrate with your existing toolchain. High-performance data delivery and reuse for data scientists through a rich set of APIs and automation. Recover any data across any cloud from any point in time, at the same time, at scale, beyond legacy solutions. Minimize the business impact of ransomware and cyber attacks by recovering quickly with immutable backups. A unified platform to better protect, secure, retain, govern, and recover your data on-premises or in the cloud. Actifio’s patented software platform turns data silos into data pipelines. Virtual Data Pipeline (VDP) delivers full-stack data management, on-premises, hybrid, or multi-cloud, with rich application integration, SLA-based orchestration, flexible data movement, and data immutability and security.
  • 12
    Dataplane

    The concept behind Dataplane is to make it quicker and easier to construct a data mesh with robust data pipelines and automated workflows for businesses and teams of all sizes. In addition to being user friendly, Dataplane places an emphasis on scaling, resilience, performance, and security.
    Starting Price: Free
  • 13
    CloverDX

    Design, debug, run, and troubleshoot data transformations and jobflows in a developer-friendly visual designer. Orchestrate data workloads that require tasks to be carried out in the right sequence, and coordinate multiple systems with the transparency of visual workflows. Deploy data workloads easily into a robust enterprise runtime environment, in the cloud or on-premises. Make data available to people, applications, and storage under a single unified platform. Manage your data workloads and related processes together in a single platform. No task is too complex. We’ve built CloverDX on years of experience with large enterprise projects. A developer-friendly open architecture and flexibility let you package and hide the complexity for non-technical users. Manage the entire lifecycle of a data pipeline, from design and deployment to evolution and testing. Get things done fast with the help of our in-house customer success teams.
    Starting Price: $5000.00/one-time
  • 14
    Dagster Cloud

    Dagster Labs

    Dagster is a next-generation orchestration platform for the development, production, and observation of data assets. Unlike other data orchestration solutions, Dagster provides you with an end-to-end development lifecycle. Dagster gives you control over your disparate data tools and empowers you to build, test, deploy, run, and iterate on your data pipelines. It makes you and your data teams more productive, your operations more robust, and puts you in complete control of your data processes as you scale. Dagster brings a declarative approach to the engineering of data pipelines. Your team defines the data assets required, quickly assessing their status and resolving any discrepancies. An assets-based model is clearer than a tasks-based one and becomes a unifying abstraction across the whole workflow.
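
    As a hedged illustration of the assets-based model, a minimal Dagster sketch (asset names and values are hypothetical):

        from dagster import Definitions, asset

        @asset
        def raw_orders() -> list[dict]:
            # Stand-in for an extract step; a real asset would pull from a source system.
            return [{"id": 1, "amount": 42.0}]

        @asset
        def order_totals(raw_orders: list[dict]) -> float:
            # Dagster infers the dependency from the parameter name: the graph of
            # assets is declared, not scripted, which is the point of the model.
            return sum(o["amount"] for o in raw_orders)

        defs = Definitions(assets=[raw_orders, order_totals])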
  • 15
    Nextflow

    Seqera Labs

    Data-driven computational pipelines. Nextflow enables scalable and reproducible scientific workflows using software containers. It allows the adaptation of pipelines written in the most common scripting languages. Its fluent DSL simplifies the implementation and deployment of complex parallel and reactive workflows on clouds and clusters. Nextflow is built around the idea that Linux is the lingua franca of data science. Nextflow allows you to write a computational pipeline by making it simpler to put together many different tasks. You may reuse your existing scripts and tools and you don't need to learn a new language or API to start using it. Nextflow supports Docker and Singularity containers technology. This, along with the integration of the GitHub code-sharing platform, allows you to write self-contained pipelines, manage versions, and rapidly reproduce any former configuration. Nextflow provides an abstraction layer between your pipeline's logic and the execution layer.
    Starting Price: Free
  • 16
    AWS Data Pipeline
    AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR. AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. You don’t have to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos (see the API sketch after this entry).
    Starting Price: $1 per month
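
    For a sense of the service's API, a minimal boto3 sketch that creates, defines, and activates a pipeline (names, IDs, and the bare-bones definition are placeholders; a real definition would add activities such as an S3-to-Redshift copy):

        import boto3

        client = boto3.client("datapipeline", region_name="us-east-1")

        pipeline = client.create_pipeline(name="daily-copy", uniqueId="daily-copy-001")
        pipeline_id = pipeline["pipelineId"]

        # "Default" is the root object every pipeline definition needs.
        client.put_pipeline_definition(
            pipelineId=pipeline_id,
            pipelineObjects=[
                {
                    "id": "Default",
                    "name": "Default",
                    "fields": [{"key": "scheduleType", "stringValue": "ondemand"}],
                },
            ],
        )

        client.activate_pipeline(pipelineId=pipeline_id)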
  • 17
    Upsolver

    Upsolver makes it incredibly simple to build a governed data lake and to manage, integrate and prepare streaming data for analysis. Define pipelines using only SQL on auto-generated schema-on-read. Easy visual IDE to accelerate building pipelines. Add Upserts and Deletes to data lake tables. Blend streaming and large-scale batch data. Automated schema evolution and reprocessing from previous state. Automatic orchestration of pipelines (no DAGs). Fully-managed execution at scale. Strong consistency guarantee over object storage. Near-zero maintenance overhead for analytics-ready data. Built-in hygiene for data lake tables including columnar formats, partitioning, compaction and vacuuming. 100,000 events per second (billions daily) at low cost. Continuous lock-free compaction to avoid “small files” problem. Parquet-based tables for fast queries.
  • 18
    Gravity Data

    Gravity

    Gravity's mission is to make streaming data from over 100 sources easy while you only pay for what you use. Gravity removes the reliance on engineering teams to deliver streaming pipelines, with a simple interface to get streaming up and running in minutes from databases, event data, and APIs. Everyone in the data team can now build with simple point and click, so that you can focus on building apps, services, and customer experiences. Full execution traces and detailed error messages allow quick diagnosis and resolution. We have implemented new, feature-rich ways for you to quickly get started, from bulk setup, default schemas, and data selection to different job modes and statuses. Spend less time wrangling with infrastructure and more time analyzing data, while allowing our intelligent engine to keep your pipelines running. Gravity integrates with your systems for notifications and orchestration.
  • 19
    Fosfor Spectra
    Spectra is a comprehensive DataOps (data ingestion, transformation, and preparation) platform for building and managing complex, varied data pipelines using a low-code user interface with domain-specific features to deliver data solutions at speed and at scale. Maximize your ROI with faster time-to-market, faster time-to-value, and a reduced cost of ownership. Get access to over 50 pre-built native connectors with readily available data processing functions such as sort, look-up, join, transform, grouping, and many more. Process structured, semi-structured, and unstructured data in batch or in real-time streams. Optimize and control infrastructure spending for your organization by efficiently managing data processing and data pipelines. Spectra’s pushdown transformation capabilities with Snowflake Data Cloud enable enterprises to leverage Snowflake’s high-performing processing power and scalable architecture.
  • 20
    Quix

    Building real-time apps and services requires lots of components running in concert: Kafka, VPC hosting, infrastructure as code, container orchestration, observability, CI/CD, persistent volumes, databases, and much more. The Quix platform takes care of all the moving parts. You just connect your data and start building. That’s it, no provisioning clusters or configuring resources. Use Quix connectors to ingest transaction messages streamed from your financial processing systems in a virtual private cloud or on-premises data center. All data in transit is encrypted end-to-end and compressed with gzip and Protobuf for security and efficiency. Detect fraudulent patterns with machine learning models or rule-based algorithms. Create fraud warning messages as troubleshooting tickets or display them in support dashboards.
    Starting Price: $50 per month
  • 21
    StreamNative

    StreamNative redefines streaming infrastructure by seamlessly integrating Kafka, MQ, and other protocols into a single, unified platform, providing unparalleled flexibility and efficiency for modern data processing needs. StreamNative offers a unified solution that adapts to the diverse requirements of streaming and messaging in a microservices-driven environment. By providing a comprehensive and intelligent approach to messaging and streaming, StreamNative empowers organizations to navigate the complexities and scalability of the modern data ecosystem with efficiency and agility. Apache Pulsar’s unique architecture decouples the message serving layer from the message storage layer to deliver a mature cloud-native data-streaming platform. Scalable and elastic to adapt to rapidly changing event traffic and business needs, it scales up to millions of topics with an architecture that decouples computing and storage.
    Starting Price: $1,000 per month
  • 22
    Google Cloud Dataflow
    Unified stream and batch data processing that's serverless, fast, and cost-effective. A fully managed data processing service with automated provisioning and management of processing resources and horizontal autoscaling of worker resources to maximize resource utilization. OSS community-driven innovation with the Apache Beam SDK. Reliable and consistent exactly-once processing. Streaming data analytics with speed: Dataflow enables fast, simplified streaming data pipeline development with lower data latency. Dataflow’s serverless approach removes operational overhead from data engineering workloads, allowing teams to focus on programming instead of managing server clusters, and Dataflow automates provisioning and management of processing resources to minimize latency and maximize utilization.
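
    Dataflow executes pipelines written with the Apache Beam SDK; here is a minimal word-count-style sketch in Beam's Python SDK (it runs locally on the DirectRunner by default, and the same code targets Dataflow when launched with --runner=DataflowRunner plus project, region, and temp_location options):

        import apache_beam as beam
        from apache_beam.options.pipeline_options import PipelineOptions

        with beam.Pipeline(options=PipelineOptions()) as p:
            (
                p
                | "Create" >> beam.Create(["alpha", "beta", "alpha"])
                | "Pair" >> beam.Map(lambda word: (word, 1))
                | "Count" >> beam.CombinePerKey(sum)  # per-key aggregation
                | "Print" >> beam.Map(print)
            )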
  • 23
    Chalk

    Powerful data engineering workflows, without the infrastructure headaches. Complex streaming, scheduling, and data backfill pipelines are all defined in simple, composable Python. Make ETL a thing of the past, fetch all of your data in real time, no matter how complex. Incorporate deep learning and LLMs into decisions alongside structured business data. Make better predictions with fresher data, don’t pay vendors to pre-fetch data you don’t use, and query data just in time for online predictions. Experiment in Jupyter, then deploy to production. Prevent train-serve skew and create new data workflows in milliseconds. Instantly monitor all of your data workflows in real time, and track usage and data quality effortlessly. Know everything you computed, and replay any data. Integrate with the tools you already use and deploy to your own infrastructure. Decide and enforce withdrawal limits with custom hold times.
    Starting Price: Free
  • 24
    Airbyte

    Get all your ELT data pipelines running in minutes, even your custom ones. Let your team focus on insights and innovation. Unify your data integration pipelines in one open-source ELT platform. Airbyte addresses all your data team's connector needs, however custom they are and whatever your scale. The data integration platform that can scale with your custom or high-volume needs. From high-volume databases to the long tail of API sources. Leverage Airbyte’s long tail of high-quality connectors that adapt to schema and API changes. Extensible to unify all native & custom ELT. Edit pre-built open-source connectors, or build new ones with our connector development kit in a few hours. Transparent and scalable pricing. Finally, a transparent and predictable cost-based pricing that scales with your data needs. You don’t need to worry about volume anymore. No more need for custom systems for your in-house scripts or database replication.
    Starting Price: $2.50 per credit
  • 25
    K2View

    At K2View, we believe that every enterprise should be able to leverage its data to become as disruptive and agile as the best companies in its industry. We make this possible through our patented Data Product Platform, which creates and manages a complete and compliant dataset for every business entity – on demand, and in real time. The dataset is always in sync with its underlying sources, adapts to changes in the source structures, and is instantly accessible to any authorized data consumer. Data Product Platform fuels many operational use cases, including customer 360, data masking and tokenization, test data management, data migration, legacy application modernization, data pipelining and more – to deliver business outcomes in less than half the time, and at half the cost, of any other alternative. The platform inherently supports modern data architectures – data mesh, data fabric, and data hub – and deploys in cloud, on-premise, or hybrid environments.
  • 26
    Pandio

    Connecting systems to scale AI initiatives is complex, expensive, and prone to failure. Pandio’s cloud-native managed solution simplifies your data pipelines to harness the power of AI. Access your data from anywhere at any time in order to query, analyze, and drive to insight. Big data analytics without the big cost. Enable data movement seamlessly. Streaming, queuing, and pub-sub with unmatched throughput, latency, and durability. Design, train, and deploy machine learning models locally in less than 30 minutes. Accelerate your path to ML and democratize the process across your organization, without months (or years) of disappointment. Pandio’s AI-driven architecture automatically orchestrates your models, data, and ML tools. Pandio works with your existing stack to accelerate your ML initiatives. Orchestrate your models and messages across your organization.
    Starting Price: $1.40 per hour
  • 27
    Stripe Data Pipeline
    Stripe Data Pipeline sends all your up-to-date Stripe data and reports to Snowflake or Amazon Redshift in a few clicks. Centralize your Stripe data with other business data to close your books faster and unlock richer business insights. Set up Stripe Data Pipeline in minutes and automatically receive your Stripe data and reports in your data warehouse on an ongoing basis–no code required. Create a single source of truth to speed up your financial close and access better insights. Identify your best-performing payment methods, analyze fraud by location, and more. Send your Stripe data directly to your data warehouse without involving a third-party extract, transform, and load (ETL) pipeline. Offload ongoing maintenance with a pipeline that’s built into Stripe. No matter how much data you have, your data is always complete and accurate. Automate data delivery at scale, minimize security risks, and avoid data outages and delays.
    Starting Price: 3¢ per transaction
  • 28
    Informatica Data Engineering
    Ingest, prepare, and process data pipelines at scale for AI and analytics in the cloud. Informatica’s comprehensive data engineering portfolio provides everything you need to process and prepare big data engineering workloads to fuel AI and analytics: robust data integration, data quality, streaming, masking, and data preparation capabilities. Rapidly build intelligent data pipelines with CLAIRE®-powered automation, including automatic change data capture (CDC). Ingest thousands of databases, millions of files, and streaming events. Accelerate time-to-value and ROI with self-service access to trusted, high-quality data. Get unbiased, real-world insights on Informatica data engineering solutions from peers you trust. Reference architectures for sustainable data engineering solutions. AI-powered data engineering in the cloud delivers the trusted, high-quality data your analysts and data scientists need to transform business.
  • 29
    Lumada IIoT

    Hitachi

    Embed sensors for IoT use cases and enrich sensor data with control system and environment data. Integrate this in real time with enterprise data and deploy predictive algorithms to discover new insights and harvest your data for meaningful use. Use analytics to predict maintenance problems, understand asset utilization, reduce defects and optimize processes. Harness the power of connected devices to deliver remote monitoring and diagnostics services. Employ IoT Analytics to predict safety hazards and comply with regulations to reduce worksite accidents. Lumada Data Integration: Rapidly build and deploy data pipelines at scale. Integrate data from lakes, warehouses and devices, and orchestrate data flows across all environments. By building ecosystems with customers and business partners in various business areas, we can accelerate digital innovation to create new value for a new society.
  • 30
    Lightbend

    Lightbend provides technology that enables developers to easily build data-centric applications that bring the most demanding, globally distributed applications and streaming data pipelines to life. Companies worldwide turn to Lightbend to solve the challenges of real-time, distributed data in support of their most business-critical initiatives. Akka Platform provides the building blocks that make it easy for businesses to build, deploy, and run large-scale applications that support digitally transformative initiatives. Accelerate time-to-value and reduce infrastructure and cloud costs with reactive microservices that take full advantage of the distributed nature of the cloud and are resilient to failure, highly efficient, and able to operate at any scale. Native support for encryption, data shredding, TLS enforcement, and continued compliance with GDPR. A framework for quick construction, deployment, and management of streaming data pipelines.
  • 31
    Meltano

    Meltano provides the ultimate flexibility in deployment options. Own your data stack, end to end. An ever-growing library of 300+ connectors that have been running in production for years. Run workflows in isolated environments, execute end-to-end tests, and version control everything. Open source gives you the power to build your ideal data stack. Define your entire project as code and collaborate confidently with your team. The Meltano CLI enables you to rapidly create your project, making it easy to start replicating data. Meltano is designed to be the best way to run dbt to manage your transformations. Your entire data stack is defined in your project, making it simple to deploy to production. Validate your changes in development before moving to CI, and in staging before moving to production.
  • 32
    Conduktor

    We created Conduktor, the all-in-one friendly interface for working with the Apache Kafka ecosystem. Develop and manage Apache Kafka with confidence using Conduktor DevTools, the all-in-one Apache Kafka desktop client, and save time for your entire team. Apache Kafka is hard to learn and to use. Made by Kafka lovers, Conduktor's best-in-class user experience is loved by developers. Conduktor offers more than just an interface over Apache Kafka: thanks to our integration with most technologies around Apache Kafka, it gives you and your teams control of your whole data pipeline, the most complete tool on top of Apache Kafka.
  • 33
    Arcion

    Arcion Labs

    Deploy production-ready change data capture pipelines for high-volume, real-time data replication, without a single line of code. Supercharged change data capture: enjoy automatic schema conversion, end-to-end replication, flexible deployment, and more with Arcion’s distributed Change Data Capture (CDC). Leverage Arcion’s zero data loss architecture for guaranteed end-to-end data consistency, built-in checkpointing, and more, without any custom code. Leave scalability and performance concerns behind with a highly distributed, highly parallel architecture supporting 10x faster data replication. Reduce DevOps overhead with Arcion Cloud, the only fully managed CDC offering, with autoscaling, built-in high availability, a monitoring console, and more. Simplify and standardize your data pipeline architecture, and migrate workloads from on-prem to cloud with zero downtime.
    Starting Price: $2,894.76 per month
  • 34
    Trifacta

    The fastest way to prep data and build data pipelines in the cloud. Trifacta provides visual and intelligent guidance to accelerate data preparation so you can get to insights faster. Poor data quality can sink any analytics project; Trifacta helps you understand your data so you can quickly and accurately clean it up. All the power with none of the code. Manual, repetitive data preparation processes don’t scale. Trifacta helps you build, deploy, and manage self-service data pipelines in minutes, not months.
  • 35
    Skyvia

    Devart

    Data integration, backup, management, and connectivity. A 100% cloud-based platform that offers contemporary cloud agility and scalability, eliminating the need for deployment or manual upgrades. A no-code, wizard-based solution that meets the needs of both IT professionals and business users with no technical skills. With flexible pricing plans for each product, Skyvia suits businesses of any size, from a small startup to an enterprise company. Connect your cloud, on-premises, and flat data to automate workflows. Automate data collection from disparate cloud sources to a database or data warehouse. Transfer your business data between cloud apps automatically in just a few clicks. Protect all your cloud data and keep it secure in one place. Share data in real time via REST API to connect with multiple OData consumers. Query and manage any data from the browser via SQL or the intuitive visual Query Builder.
  • 36
    Dropbase

    Centralize offline data, import files, and process and clean up data. Export to a live database with 1 click. Streamline data workflows. Centralize offline data and make it accessible to your team. Bring offline files to Dropbase in multiple formats, any way you like. Process and format data: add, edit, re-order, and delete processing steps. Instant REST API access: query Dropbase data securely with REST API access keys. Onboard data where you need it, combining and processing datasets to fit the desired format or data model. No code: process your data pipelines using a spreadsheet interface and track every step. Flexible: use a library of pre-built processing functions, or write your own. 1-click exports: export to a database or generate endpoints with 1 click. Manage databases and credentials.
    Starting Price: $19.97 per user per month
  • 37
    Azkaban

    Azkaban is a distributed workflow manager, implemented at LinkedIn to solve the problem of Hadoop job dependencies. We had jobs that needed to run in order, from ETL jobs to data analytics products. Since version 3.0, we provide two modes: the standalone “solo-server” mode and the distributed multiple-executor mode. The following describes the differences between the two modes. In solo server mode, the DB is embedded H2, and both the web server and the executor server run in the same process. This is useful if one just wants to try things out, and it can also be used for small-scale use cases. The multiple-executor mode is for serious production environments. Its DB should be backed by MySQL instances in a master-slave setup. The web server and executor servers should ideally run on different hosts so that upgrades and maintenance don’t affect users. This multiple-host setup makes Azkaban robust and scalable.
  • 38
    Datastreamer

    Integrate unstructured external data into your organization in minutes. Datastreamer is a turnkey data platform to source, unify, and enrich unstructured external data with 95% less work than building pipelines in-house. Customers use Datastreamer to feed specialized AI models and accelerate insights in threat intelligence, KYC/AML, financial analysis, and more. Feed your analytics products or specialized AI models with billions of data pieces from social media, blogs, news, forums, dark web data, and more. Our platform unifies source data into a common schema so you can use content from multiple sources simultaneously. Leverage our pre-integrated data partners or connect data from any data supplier. Tap into our powerful AI models to enhance data with components like sentiment analysis and PII redaction. Scale data pipelines at lower cost by plugging into our managed infrastructure, which is optimized to handle massive volumes of text data.
  • 39
    BigBI

    BigBI enables data specialists to build their own powerful big data pipelines interactively and efficiently, without any coding. BigBI unleashes the power of Apache Spark, enabling: scalable processing of real big data (up to 100X faster); integration of traditional data (SQL, batch files) with modern data sources, including semi-structured (JSON, NoSQL DBs, Elastic, Hadoop) and unstructured (text, audio, video) data; and integration of streaming data, cloud data, AI/ML, and graphs.
  • 40
    Apache Kafka

    The Apache Software Foundation

    Apache Kafka® is an open-source, distributed streaming platform. Scale production clusters up to a thousand brokers, trillions of messages per day, petabytes of data, hundreds of thousands of partitions. Elastically expand and contract storage and processing. Stretch clusters efficiently over availability zones or connect separate clusters across geographic regions. Process streams of events with joins, aggregations, filters, transformations, and more, using event-time and exactly-once processing. Kafka’s out-of-the-box Connect interface integrates with hundreds of event sources and event sinks including Postgres, JMS, Elasticsearch, AWS S3, and more. Read, write, and process streams of events in a vast array of programming languages.
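
    A minimal produce-and-consume sketch against a local broker, using the confluent-kafka Python client (broker address, topic, and group id are placeholders):

        from confluent_kafka import Consumer, Producer

        conf = {"bootstrap.servers": "localhost:9092"}  # placeholder broker

        producer = Producer(conf)
        producer.produce("events", key="user-1", value='{"action": "signup"}')
        producer.flush()  # block until the broker acknowledges delivery

        consumer = Consumer({**conf, "group.id": "demo", "auto.offset.reset": "earliest"})
        consumer.subscribe(["events"])
        msg = consumer.poll(timeout=5.0)
        if msg is not None and msg.error() is None:
            print(msg.key(), msg.value())
        consumer.close()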
  • 41
    DataKitchen

    Reclaim control of your data pipelines and deliver value instantly, without errors. The DataKitchen™ DataOps platform automates and coordinates all the people, tools, and environments in your entire data analytics organization – everything from orchestration, testing, and monitoring to development and deployment. You’ve already got the tools you need. Our platform automatically orchestrates your end-to-end multi-tool, multi-environment pipelines – from data access to value delivery. Catch embarrassing and costly errors before they reach the end user by adding any number of automated tests at every node in your development and production pipelines. Spin up repeatable work environments in minutes to enable teams to make changes and experiment, without breaking production. Fearlessly deploy new features into production with the push of a button. Free your teams from tedious, manual work that impedes innovation.
  • 42
    Openbridge

    Uncover insights to supercharge sales growth using code-free, fully-automated data pipelines to data lakes or cloud warehouses. A flexible, standards-based platform to unify sales and marketing data for automating insights and smarter growth. Say goodbye to messy, expensive manual data downloads. Always know what you’ll pay and only pay for what you use. Fuel your tools with quick access to analytics-ready data. As certified developers, we only work with secure, official APIs. Get started quickly with data pipelines from popular sources. Pre-built, pre-transformed, and ready-to-go data pipelines. Unlock data from Amazon Vendor Central, Amazon Seller Central, Instagram Stories, Facebook, Amazon Advertising, Google Ads, and many others. Code-free data ingestion and transformation processes allow teams to realize value from their data quickly and cost-effectively. Data is always securely stored directly in a trusted, customer-owned data destination like Databricks, Amazon Redshift, etc.
    Starting Price: $149 per month
  • 43
    Y42

    Datos-Intelligence GmbH

    Y42 is the first fully managed Modern DataOps Cloud. It is purpose-built to help companies easily design production-ready data pipelines on top of their Google BigQuery or Snowflake cloud data warehouse. Y42 provides native integration of best-of-breed open-source data tools, comprehensive data governance, and better collaboration for data teams. With Y42, organizations enjoy increased accessibility to data and can make data-driven decisions quickly and efficiently.
  • 44
    Alooma

    Google

    Alooma enables data teams to have visibility and control. It brings data from your various data silos together into BigQuery, all in real time. Set up and flow data in minutes, or customize, enrich, and transform data on the stream before it even hits the data warehouse. Never lose an event: Alooma's built-in safety nets ensure easy error handling without pausing your pipeline. From any number of data sources, low to high volume, Alooma’s infrastructure scales to your needs.
  • 45
    Oarkflow

    Automate your business pipeline with our flow builder, using the operations that matter to you. Bring your own service providers for email, SMS, and HTTP services. Use our advanced query builder to query and analyze CSV files with any number of fields and rows. We store the CSV files you upload to our platform in a secured vault, and we keep account activity logs. We don't store any data records you request for processing.
    Starting Price: $0.0005 per task
  • 46
    Castor

    Castor is a data catalog designed for mass adoption across the whole company. Get an overview of your entire data environment and search for data instantly with our powerful search engine. Onboard to a new data infrastructure and access data in a breeze. Go beyond the traditional data catalog: modern data teams have numerous data sources, so build one source of truth. With its delightful and automated documentation experience, Castor makes it dead simple to trust data. Column-level, cross-system data lineage in minutes. Get a bird’s-eye view of your data pipelines to build trust in your data. Troubleshoot data issues, perform impact analyses, and comply with GDPR in one tool. Optimize performance, cost, compliance, and security for your data. Keep your data stack healthy with our automated infrastructure monitoring system.
    Starting Price: $699 per month
  • 47
    Tarsal

    Tarsal's infinite scalability means that as your organization grows, Tarsal grows with you. Tarsal makes it easy to switch where you're sending data; today's SIEM data is tomorrow's data lake data, all with one click. Keep your SIEM and gradually migrate analytics over to a data lake; you don't have to rip anything out to use Tarsal. Some analytics just won't run on your SIEM, so use Tarsal to have query-ready data on a data lake. Your SIEM is one of the biggest line items in your budget; use Tarsal to send some of that data to your data lake. Tarsal is the first highly scalable ETL data pipeline built for security teams. Easily exfil terabytes of data in just a few clicks, with instant normalization, and route that data to your desired destination.
  • 48
    Integrate.io

    Unify your data stack: experience the first no-code data pipeline platform and power enlightened decision making. Integrate.io is the only complete set of data solutions and connectors for easy building and managing of clean, secure data pipelines. Increase your data team's output with all of the simple, powerful tools and connectors you’ll ever need in one no-code data integration platform. Empower any size team to consistently deliver projects on time and under budget. We ensure your success by partnering with you to truly understand your needs and desired outcomes. Our only goal is to help you overachieve yours. Integrate.io's platform includes:
    - No-Code ETL & Reverse ETL: drag-and-drop no-code data pipelines with 220+ out-of-the-box data transformations
    - Easy ELT & CDC: the fastest data replication on the market
    - Automated API Generation: build automated, secure APIs in minutes
    - Data Warehouse Monitoring: finally understand your warehouse spend
    - FREE Data Observability: Custom
  • 49
    Crux

    Find out why the heavy hitters are using the Crux external data automation platform to scale external data integration, transformation, and observability without increasing headcount. Our cloud-native data integration technology accelerates the ingestion, preparation, observability and ongoing delivery of any external dataset. The result is that we can ensure you get quality data in the right place, in the right format when you need it. Leverage automatic schema detection, delivery schedule inference, and lifecycle management to build pipelines from any external data source quickly. Enhance discoverability throughout your organization through a private catalog of linked and matched data products. Enrich, validate, and transform any dataset to quickly combine it with other data sources and accelerate analytics.
  • 50
    Hevo

    Hevo Data

    Hevo Data is a no-code, bi-directional data pipeline platform specially built for modern ETL, ELT, and Reverse ETL needs. It helps data teams streamline and automate org-wide data flows, resulting in a saving of ~10 engineering hours per week and 10x faster reporting, analytics, and decision making. The platform supports 100+ ready-to-use integrations across databases, SaaS applications, cloud storage, SDKs, and streaming services. Over 500 data-driven companies spread across 35+ countries trust Hevo for their data integration needs. Try Hevo today and get your fully managed data pipelines up and running in just a few minutes.
    Starting Price: $249/month