Alternatives to StreamScape

Compare StreamScape alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to StreamScape in 2024. Compare features, ratings, user reviews, pricing, and more from StreamScape competitors and alternatives to make an informed decision for your business.

  • 1
    Fivetran
    Fivetran is the smartest way to replicate data into your warehouse. We've built the only zero-maintenance pipeline, turning months of ongoing development into a 5-minute setup. Our connectors bring data from applications and databases into one central location so that analysts can unlock profound insights about their business. Schema designs and ERDs make synced data immediately usable. Transform data into analytics-ready tables as soon as it’s loaded into your warehouse. Spend less time writing transformation code with our out-of-the-box data modeling. Connect to any git repository and manage dbt models directly from Fivetran. Develop and deliver your product with the utmost confidence in ours. Uptime and data delivery guarantees ensure your customers’ data never goes stale. Troubleshoot fast with a global team of Support Specialists.
  • 2
    Databricks Data Intelligence Platform
    The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data. The winners in every industry will be data and AI companies. From ETL to data warehousing to generative AI, Databricks helps you simplify and accelerate your data and AI goals. Databricks combines generative AI with the unification benefits of a lakehouse to power a Data Intelligence Engine that understands the unique semantics of your data. This allows the Databricks Platform to automatically optimize performance and manage infrastructure in ways unique to your business. The Data Intelligence Engine understands your organization’s language, so search and discovery of new data is as easy as asking a question like you would to a coworker.
  • 3
    TIBCO Data Fabric
    More data sources, more silos, more complexity, and constant change. Data architectures are challenged to keep pace—a big problem for today's data-driven organizations, and one that puts your business at risk. A data fabric is a modern distributed data architecture that includes shared data assets and optimized data fabric pipelines that you can use to address today's data challenges in a unified way. Optimized data management and integration capabilities so you can intelligently simplify, automate, and accelerate your data pipelines. Easy-to-deploy and adapt distributed data architecture that fits your complex, ever-changing technology landscape. Accelerate time to value by unlocking your distributed on-premises, cloud, and hybrid cloud data, no matter where it resides, and delivering it wherever it's needed at the pace of business.
  • 4
    Denodo (Denodo Technologies)
    The core technology to enable modern data integration and data management solutions. Quickly connect disparate structured and unstructured sources. Catalog your entire data ecosystem. Data stays in the sources and it is accessed on demand, with no need to create another copy. Build data models that suit the needs of the consumer, even across multiple sources. Hide the complexity of your back-end technologies from the end users. The virtual model can be secured and consumed using standard SQL and other formats like REST, SOAP and OData. Easy access to all types of data. Full data integration and data modeling capabilities. Active Data Catalog and self-service capabilities for data & metadata discovery and data preparation. Full data security and data governance capabilities. Fast intelligent execution of data queries. Real-time data delivery in any format. Ability to create data marketplaces. Decoupling of business applications from data systems to facilitate data-driven strategies.
  • 5
    K2View
    At K2View, we believe that every enterprise should be able to leverage its data to become as disruptive and agile as the best companies in its industry. We make this possible through our patented Data Product Platform, which creates and manages a complete and compliant dataset for every business entity – on demand, and in real time. The dataset is always in sync with its underlying sources, adapts to changes in the source structures, and is instantly accessible to any authorized data consumer. Data Product Platform fuels many operational use cases, including customer 360, data masking and tokenization, test data management, data migration, legacy application modernization, data pipelining and more – to deliver business outcomes in less than half the time, and at half the cost, of any other alternative. The platform inherently supports modern data architectures – data mesh, data fabric, and data hub – and deploys in cloud, on-premise, or hybrid environments.
  • 6
    Dagster Cloud (Dagster Labs)
    Dagster is a next-generation orchestration platform for the development, production, and observation of data assets. Unlike other data orchestration solutions, Dagster provides you with an end-to-end development lifecycle. Dagster gives you control over your disparate data tools and empowers you to build, test, deploy, run, and iterate on your data pipelines. It makes you and your data teams more productive, your operations more robust, and puts you in complete control of your data processes as you scale. Dagster brings a declarative approach to the engineering of data pipelines. Your team defines the data assets required, quickly assessing their status and resolving any discrepancies. An asset-based model is clearer than a task-based one and becomes a unifying abstraction across the whole workflow.
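    The declarative, asset-based model described above can be sketched in a few lines of plain Python. This is an illustrative toy, not Dagster's actual API (Dagster's real `@asset` decorator and materialization engine are far richer); the asset names and helper functions here are hypothetical.

```python
# Toy asset-based orchestration: each asset declares the assets it depends on,
# and the framework derives the execution order (illustrative, not Dagster).
from graphlib import TopologicalSorter

ASSETS = {}  # asset name -> (dependency names, compute function)

def asset(deps=()):
    """Register a function as a data asset with declared dependencies."""
    def register(fn):
        ASSETS[fn.__name__] = (tuple(deps), fn)
        return fn
    return register

@asset()
def raw_orders():
    return [{"id": 1, "amount": 120}, {"id": 2, "amount": 80}]

@asset(deps=["raw_orders"])
def order_totals(raw_orders):
    return sum(row["amount"] for row in raw_orders)

def materialize():
    """Resolve the dependency graph and compute every asset once."""
    graph = {name: deps for name, (deps, _) in ASSETS.items()}
    results = {}
    for name in TopologicalSorter(graph).static_order():
        deps, fn = ASSETS[name]
        results[name] = fn(*(results[d] for d in deps))
    return results

print(materialize()["order_totals"])  # 200
```

    Because dependencies are declared rather than hand-wired as tasks, the orchestrator can topologically sort the graph and always run upstream assets first.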
  • 7
    DataKitchen
    Reclaim control of your data pipelines and deliver value instantly, without errors. The DataKitchen™ DataOps platform automates and coordinates all the people, tools, and environments in your entire data analytics organization – everything from orchestration, testing, and monitoring to development and deployment. You’ve already got the tools you need. Our platform automatically orchestrates your end-to-end multi-tool, multi-environment pipelines – from data access to value delivery. Catch embarrassing and costly errors before they reach the end-user by adding any number of automated tests at every node in your development and production pipelines. Spin up repeatable work environments in minutes to enable teams to make changes and experiment – without breaking production. Fearlessly deploy new features into production with the push of a button. Free your teams from tedious, manual work that impedes innovation.
  • 8
    Teradata QueryGrid
    Deploying multiple analytic engines means best-fit engineering, so QueryGrid lets users leverage the right tool for the job. SQL is the language of business, and QueryGrid delivers unparalleled SQL access across commercial and open source analytical engines. Built for a hybrid multi-cloud reality, Vantage solves the world’s most complex data challenges at scale. Software that delivers autonomy, visibility, and insights to keep pace with changing customer demand.
  • 9
    Upsolver
    Upsolver makes it incredibly simple to build a governed data lake and to manage, integrate and prepare streaming data for analysis. Define pipelines using only SQL on auto-generated schema-on-read. Easy visual IDE to accelerate building pipelines. Add Upserts and Deletes to data lake tables. Blend streaming and large-scale batch data. Automated schema evolution and reprocessing from previous state. Automatic orchestration of pipelines (no DAGs). Fully-managed execution at scale. Strong consistency guarantee over object storage. Near-zero maintenance overhead for analytics-ready data. Built-in hygiene for data lake tables including columnar formats, partitioning, compaction and vacuuming. 100,000 events per second (billions daily) at low cost. Continuous lock-free compaction to avoid “small files” problem. Parquet-based tables for fast queries.
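    The "auto-generated schema-on-read" idea can be illustrated with a minimal Python sketch. This is hypothetical and not Upsolver's implementation: the point is that the schema is inferred from the raw JSON events themselves and widens as the stream evolves, instead of being declared up front.

```python
# Schema-on-read sketch: infer a merged schema from raw JSON events so that
# downstream SQL can treat the stream as a table (illustrative only).
import json

def infer_schema(events):
    """Union the fields seen across events, recording each field's types."""
    schema = {}
    for raw in events:
        for key, value in json.loads(raw).items():
            schema.setdefault(key, set()).add(type(value).__name__)
    return schema

stream = [
    '{"user": "a", "clicks": 3}',
    '{"user": "b", "clicks": 5, "country": "DE"}',  # schema evolves here
]
print(infer_schema(stream))
# {'user': {'str'}, 'clicks': {'int'}, 'country': {'str'}}
```

    A real system would also version the inferred schema so that earlier data can be reprocessed from a previous state when the structure changes.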
  • 10
    Anzo (Cambridge Semantics)
    Anzo is a modern data discovery and integration platform that lets anyone find, connect and blend any enterprise data into analytics-ready datasets. Anzo’s unique use of semantics and graph data models makes it practical for the first time for virtually anyone in your organization – from skilled data scientists to novice business users – to drive the data discovery and integration process and build their own analytics-ready datasets. Anzo’s graph data models provide business users with a visual map of enterprise data that is easy to understand and navigate, even when your data is vast, siloed and complex. Semantics add business context to data, allowing users to harmonize data based on shared definitions and build blended, business-ready data on demand.
  • 11
    Datazoom
    Improving the experience, efficiency, and profitability of streaming video requires data. Datazoom enables video publishers to better operate distributed architectures through centralizing, standardizing, and integrating data in real-time to create a more powerful data pipeline and improve observability, adaptability, and optimization solutions. Datazoom is a video data platform that continually gathers data from endpoints, like a CDN or a video player, through an ecosystem of collectors. Once the data is gathered, it is normalized using standardized data definitions. This data is then sent through available connectors to analytics platforms like Google BigQuery, Google Analytics, and Splunk and can be visualized in tools such as Looker and Superset. Datazoom is your key to a more effective and efficient data pipeline. Get the data you need in real-time. Don’t wait for your data when you need to resolve an issue immediately.
  • 12
    Atlan
    The modern data workspace. Make all your data assets, from data tables to BI reports, instantly discoverable. Our powerful search algorithms, combined with an easy browsing experience, make finding the right asset a breeze. Atlan auto-generates data quality profiles which make detecting bad data dead easy. From automatic variable type detection & frequency distribution to missing values and outlier detection, we’ve got you covered. Atlan takes the pain away from governing and managing your data ecosystem! Atlan’s bots parse through SQL query history to auto-construct data lineage and auto-detect PII data, allowing you to create dynamic access policies & best-in-class governance. Even non-technical users can directly query across multiple data lakes, warehouses & DBs using our Excel-like query builder. Native integrations with tools like Tableau and Jupyter make data collaboration come alive.
  • 13
    Dropbase
    Centralize offline data, import files, process and clean up data, and export to a live database with 1 click. Streamline data workflows. Centralize offline data and make it accessible to your team. Bring offline files to Dropbase in multiple formats, any way you like. Process and format data; add, edit, re-order, and delete processing steps. Export to a database, endpoints, or downloadable code with 1 click. Query Dropbase data securely with instant REST API access keys. Onboard data where you need it: combine and process datasets to fit the desired format or data model. No code needed, process your data pipelines using a spreadsheet interface and track every step. Flexible: use a library of pre-built processing functions, or write your own. Manage databases and credentials.
    Starting Price: $19.97 per user per month
  • 14
    AtScale
    AtScale helps accelerate and simplify business intelligence resulting in faster time-to-insight, better business decisions, and more ROI on your Cloud analytics investment. Eliminate repetitive data engineering tasks like curating, maintaining and delivering data for analysis. Define business definitions in one location to ensure consistent KPI reporting across BI tools. Accelerate time to insight from data while efficiently managing cloud compute costs. Leverage existing data security policies for data analytics no matter where data resides. AtScale’s Insights workbooks and models let you perform Cloud OLAP multidimensional analysis on data sets from multiple providers – with no data prep or data engineering required. We provide built-in easy to use dimensions and measures to help you quickly derive insights that you can use for business decisions.
  • 15
    Data Taps
    Build your data pipelines like Lego blocks with Data Taps. Add new metrics layers, zoom in, and investigate with real-time streaming SQL. Build with others, share and consume data, globally. Refine and update without hassle. Use multiple models/schemas during schema evolution. Built to scale with AWS Lambda and S3.
  • 16
    Crux
    Find out why the heavy hitters are using the Crux external data automation platform to scale external data integration, transformation, and observability without increasing headcount. Our cloud-native data integration technology accelerates the ingestion, preparation, observability and ongoing delivery of any external dataset. The result is that we can ensure you get quality data in the right place, in the right format when you need it. Leverage automatic schema detection, delivery schedule inference, and lifecycle management to build pipelines from any external data source quickly. Enhance discoverability throughout your organization through a private catalog of linked and matched data products. Enrich, validate, and transform any dataset to quickly combine it with other data sources and accelerate analytics.
  • 17
    QuerySurge
    QuerySurge leverages AI to automate the data validation and ETL testing of Big Data, data warehouses, business intelligence reports, and enterprise apps/ERPs, with full DevOps functionality for continuous testing.
    Use cases: data warehouse & ETL testing, Hadoop & NoSQL testing, DevOps for data / continuous testing, data migration testing, BI report testing, and enterprise app/ERP testing.
    Features: multi-project support; AI that automatically creates data validation tests based on data mappings; Smart Query Wizards that create tests visually, without writing SQL; automated launch, execution, and comparison of tests for data quality at speed; testing across 200+ platforms (data warehouses, Hadoop & NoSQL lakes, databases, flat files, XML, JSON, BI reports); DevOps for data and continuous testing via a RESTful API with 60+ calls and integration with all mainstream solutions; and an analytics dashboard & reports.
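    The core of this style of automated data validation, running one query against the source and one against the target and diffing the result sets, can be sketched with stdlib sqlite3 stand-ins. The table names and data here are hypothetical, and this is not QuerySurge's API:

```python
# Minimal source-vs-target data validation: fetch both result sets in a
# canonical order, then compare row by row (illustrative sketch).
import sqlite3

def fetch(conn, sql):
    """Run a query and return its rows in a stable, comparable order."""
    return sorted(conn.execute(sql).fetchall())

source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
source.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE dw_orders (id INTEGER, amount REAL)")
target.executemany("INSERT INTO dw_orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

src_rows = fetch(source, "SELECT id, amount FROM orders")
tgt_rows = fetch(target, "SELECT id, amount FROM dw_orders")
mismatches = [(s, t) for s, t in zip(src_rows, tgt_rows) if s != t]
assert src_rows == tgt_rows and not mismatches
print("validation passed:", len(src_rows), "rows compared")
```

    A production tool adds the pieces this sketch omits: scheduling, cross-platform connectivity, row-count and checksum comparisons for very large tables, and reporting.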
  • 18
    Datastreamer
    Integrate unstructured external data into your organization in minutes. Datastreamer is a turnkey data platform to source, unify, and enrich unstructured external data with 95% less work than building pipelines in-house. Customers use Datastreamer to feed specialized AI models and accelerate insights in Threat Intelligence, KYC/AML, Financial Analysis and more. Feed your analytics products or specialized AI models with billions of data pieces from social media, blogs, news, forums, dark web data, and more. Our platform unifies source data into a common schema so you can use content from multiple sources simultaneously. Leverage our pre-integrated data partners or connect data from any data supplier. Tap into our powerful AI models to enhance data with components like sentiment analysis and PII redaction. Scale data pipelines with less costs by plugging into our managed infrastructure that is optimized to handle massive volumes of text data.
  • 19
    Nextflow (Seqera Labs)
    Data-driven computational pipelines. Nextflow enables scalable and reproducible scientific workflows using software containers. It allows the adaptation of pipelines written in the most common scripting languages. Its fluent DSL simplifies the implementation and deployment of complex parallel and reactive workflows on clouds and clusters. Nextflow is built around the idea that Linux is the lingua franca of data science. Nextflow allows you to write a computational pipeline by making it simpler to put together many different tasks. You may reuse your existing scripts and tools and you don't need to learn a new language or API to start using it. Nextflow supports Docker and Singularity container technologies. This, along with the integration of the GitHub code-sharing platform, allows you to write self-contained pipelines, manage versions, and rapidly reproduce any former configuration. Nextflow provides an abstraction layer between your pipeline's logic and the execution layer.
    Starting Price: Free
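    Nextflow's dataflow model connects processes through channels. As a loose Python analogy (real Nextflow pipelines are written in its Groovy-based DSL, not Python, and the step names here are hypothetical), generators can play the role of channels, with items flowing through each step as they arrive:

```python
# Dataflow-style pipeline analogy: each "process" is a generator consuming an
# upstream channel and emitting a downstream one (illustrative only).
def channel(items):
    """A source channel feeding items into the pipeline."""
    yield from items

def fastqc(reads):
    """Hypothetical QC step: tag each sample with a QC report name."""
    for sample in reads:
        yield f"{sample}.qc"

def align(qc_reports):
    """Hypothetical alignment step: turn QC'd samples into alignments."""
    for report in qc_reports:
        yield report.replace(".qc", ".bam")

samples = channel(["sampleA", "sampleB"])
results = list(align(fastqc(samples)))
print(results)  # ['sampleA.bam', 'sampleB.bam']
```

    In Nextflow proper, each process additionally runs inside its own container and can be dispatched to a cluster or cloud executor; the pipeline logic stays the same because of that abstraction layer.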
  • 20
    Pandio
    Connecting systems to scale AI initiatives is complex, expensive, and prone to fail. Pandio’s cloud-native managed solution simplifies your data pipelines to harness the power of AI. Access your data from anywhere at any time in order to query, analyze, and drive to insight. Big data analytics without the big cost. Enable data movement seamlessly. Streaming, queuing and pub-sub with unmatched throughput, latency, and durability. Design, train, and deploy machine learning models locally in less than 30 minutes. Accelerate your path to ML and democratize the process across your organization. And it doesn’t require months (or years) of disappointment. Pandio’s AI-driven architecture automatically orchestrates your models, data, and ML tools. Pandio works with your existing stack to accelerate your ML initiatives. Orchestrate your models and messages across your organization.
    Starting Price: $1.40 per hour
  • 21
    Chalk
    Powerful data engineering workflows, without the infrastructure headaches. Complex streaming, scheduling, and data backfill pipelines are all defined in simple, composable Python. Make ETL a thing of the past, fetch all of your data in real-time, no matter how complex. Incorporate deep learning and LLMs into decisions alongside structured business data. Make better predictions with fresher data, don’t pay vendors to pre-fetch data you don’t use, and query data just in time for online predictions. Experiment in Jupyter, then deploy to production. Prevent train-serve skew and create new data workflows in milliseconds. Instantly monitor all of your data workflows in real-time; track usage and data quality effortlessly. Know everything you computed, and replay any data. Integrate with the tools you already use and deploy to your own infrastructure. Decide and enforce withdrawal limits with custom hold times.
    Starting Price: Free
  • 22
    Google Cloud Data Fusion
    Open core, delivering hybrid and multi-cloud integration. Data Fusion is built using open source project CDAP, and this open core ensures data pipeline portability for users. CDAP’s broad integration with on-premises and public cloud platforms gives Cloud Data Fusion users the ability to break down silos and deliver insights that were previously inaccessible. Integrated with Google’s industry-leading big data tools. Data Fusion’s integration with Google Cloud simplifies data security and ensures data is immediately available for analysis. Whether you’re curating a data lake with Cloud Storage and Dataproc, moving data into BigQuery for data warehousing, or transforming data to land it in a relational store like Cloud Spanner, Cloud Data Fusion’s integration makes development and iteration fast and easy.
  • 23
    Openbridge
    Uncover insights to supercharge sales growth using code-free, fully-automated data pipelines to data lakes or cloud warehouses. A flexible, standards-based platform to unify sales and marketing data for automating insights and smarter growth. Say goodbye to messy, expensive manual data downloads. Always know what you’ll pay and only pay for what you use. Fuel your tools with quick access to analytics-ready data. As certified developers, we only work with secure, official APIs. Get started quickly with data pipelines from popular sources. Pre-built, pre-transformed, and ready-to-go data pipelines. Unlock data from Amazon Vendor Central, Amazon Seller Central, Instagram Stories, Facebook, Amazon Advertising, Google Ads, and many others. Code-free data ingestion and transformation processes allow teams to realize value from their data quickly and cost-effectively. Data is always securely stored directly in a trusted, customer-owned data destination like Databricks, Amazon Redshift, etc.
    Starting Price: $149 per month
  • 24
    EraSearch (Era Software)
    Purpose-built for cloud-native deployments, EraSearch brings you a dynamic data fabric that leverages decoupled storage & compute, a true zero-schema design, and adaptive indexing to deliver an infinitely-scalable log management experience with an unparalleled reduction in cost and complexity. Lots of log management products are built on top of Elasticsearch; we built EraSearch from scratch to solve its key problems. By embracing a stateless design for all of the core components, managing EraSearch with k8s is a dream. By using a modern, coordination-free ingest design, EraSearch can handle data at a greatly-reduced cost. Running EraSearch is completely hands-off, so you never have to worry about cluster health again.
    Starting Price: 65¢ per GB
  • 25
    Querona (YouNeedIT)
    We make BI & Big Data analytics work easier and faster. Our goal is to empower business users and make always-busy business and heavily loaded BI specialists less dependent on each other when solving data-driven business problems. If you have ever experienced a lack of the data you needed, time-consuming report generation, or a long queue to your BI expert, consider Querona. Querona uses a built-in Big Data engine to handle growing data volumes. Repeatable queries can be cached or calculated in advance. Optimization needs less effort as Querona automatically suggests query improvements. Querona empowers business analysts and data scientists by putting self-service in their hands. They can easily discover and prototype data models, add new data sources, experiment with query optimization, and dig into raw data. Less IT is needed. Now users can get live data no matter where it is stored. If databases are too busy to be queried live, Querona will cache the data.
  • 26
    Zetaris
    Rather than uploading the data to a central place to analyze it, Zetaris enables instant analytics across all your data, now. This means you can connect multiple databases and analyze them together in real-time, without the time and cost (and fail rate) associated with moving the data to a central location. Our unique analytical query optimizer ensures speed and scalability for whatever query is being run, across any combination of data sources. Ensure data governance and security by not moving data, and analyzing it at its source. Don't move the data. No data extraction, no data transformation, no copying of data to another repository. Remove unnecessary storage, processing and support costs, thanks to zero data duplication.
  • 27
    Adaptigent
    Fabric enables you to connect your modern IT ecosystem with your core mission-critical data and transaction systems in a seamless, rapid fashion. We live in a complex world, and our IT systems reflect that complexity. After years or even decades of evolution, market changes, technology shifts, and mergers & acquisitions, CIOs have been left with a level of systems complexity that is often untenable. This complexity not only ties up a huge portion of IT budgets, but it leaves IT organizations struggling to support the real-time needs of the business. No one can eliminate this complexity overnight, but Adaptigent’s Adaptive Integration Fabric can shield your business from the complexity of your mission critical data sources, allowing you to unlock the full potential of the most secure, stable and data rich legacy systems forming the backbone of your organization.
  • 28
    Nexla
    Nexla, with its automated approach to data engineering, has for the first time made it possible for data users to get ready-to-use data from any system without any need for connectors or code. Nexla uniquely combines no-code, low-code, and a developer SDK to bring together users across skill levels on to a single platform. With its data-as-a-product core, Nexla combines integration, preparation, monitoring, and delivery of data into a single system regardless of data velocity and format. Today Nexla powers mission critical data for JPMorgan, Doordash, LinkedIn, LiveRamp, J&J, and other leading enterprises across industries.
    Starting Price: $1000/month
  • 29
    Avalor
    Avalor’s data fabric helps security teams make faster, more accurate decisions. Our data fabric architecture integrates disparate data sources from legacy systems, data lakes, data warehouses, SQL databases, and apps, providing a holistic view of business performance. Automation, 2-way sync, alerts, and analytics live on top of the platform, powered by the data fabric. All security functions benefit from fast, reliable, and precise analysis of enterprise data including asset coverage, compliance reporting, ROSI analysis, vulnerability management, and more. The average security team uses dozens of specialized tools and products, each with its own purpose, taxonomy, and output. With so much disparate data, it’s hard to prioritize your efforts and know exactly where issues lie. Quickly and accurately respond to questions from the business using data from across your organization.
  • 30
    MarkLogic (Progress Software)
    Unlock data value, accelerate insightful decisions, and securely achieve data agility with the MarkLogic data platform. Combine your data with everything known about it (metadata) in a single service and reveal smarter decisions—faster. Get a faster, trusted way to securely connect data and metadata, create and interpret meaning, and consume high-quality contextualized data across the enterprise with the MarkLogic data platform. Know your customers in-the-moment and provide relevant and seamless experiences, reveal new insights to accelerate innovation, and easily enable governed access and compliance with a single data platform. MarkLogic provides a proven foundation to help you achieve your key business and technical outcomes—now and in the future.
  • 31
    Cinchy
    Cinchy is the world’s first data collaboration platform built for the enterprise. Highly-regulated organizations like banks, credit unions, and insurers use Cinchy to deliver hundreds of new technologies like customer experiences, workflows, automations, and advanced analytics in half the time. This is possible because our network-based architecture gives our customers full control of their data for the first time, resulting in the biggest shift in enterprise technology delivery since 1979. Cinchy is the world’s first autonomous data fabric. Inspired by the human brain, Cinchy’s data-centric architecture makes data silos and data integration obsolete by managing data as a network. Some of the most complex organizations in the world (including highly-regulated financial institutions) are already using Cinchy to accelerate and de-risk their transformation from app-centric to data-centric organizations, unlocking real agility.
  • 32
    Qlik Compose
    Qlik Compose for Data Warehouses (formerly Attunity Compose for Data Warehouses) provides a modern approach by automating and optimizing data warehouse creation and operation. Qlik Compose automates designing the warehouse, generating ETL code, and quickly applying updates, all whilst leveraging best practices and proven design patterns. Qlik Compose for Data Warehouses dramatically reduces the time, cost and risk of BI projects, whether on-premises or in the cloud. Qlik Compose for Data Lakes (formerly Attunity Compose for Data Lakes) automates your data pipelines to create analytics-ready data sets. By automating data ingestion, schema creation, and continual updates, organizations realize faster time-to-value from their existing data lake investments.
  • 33
    Arcion (Arcion Labs)
    Deploy production-ready change data capture pipelines for high-volume, real-time data replication - without a single line of code. Supercharged Change Data Capture. Enjoy automatic schema conversion, end-to-end replication, flexible deployment, and more with Arcion’s distributed Change Data Capture (CDC). Leverage Arcion’s zero data loss architecture for guaranteed end-to-end data consistency, built-in checkpointing, and more without any custom code. Leave scalability and performance concerns behind with a highly-distributed, highly parallel architecture supporting 10x faster data replication. Reduce DevOps overhead with Arcion Cloud, the only fully-managed CDC offering. Enjoy autoscaling, built-in high availability, monitoring console, and more. Simplify & standardize data pipelines architecture, and zero downtime workload migration from on-prem to cloud.
    Starting Price: $2,894.76 per month
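    Checkpoint-based replication, the idea behind the "built-in checkpointing" mentioned above, can be illustrated with a small Python sketch. Note that this toy polls a version column for simplicity, whereas log-based CDC tools like Arcion read the database's transaction log; all table and column names here are hypothetical.

```python
# Checkpointed change capture sketch: pull only rows changed since the last
# checkpoint, apply them downstream, then advance the checkpoint.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT, version INTEGER)")
db.executemany("INSERT INTO users VALUES (?, ?, ?)",
               [(1, "ada", 1), (2, "bob", 2)])

checkpoint = 0
replica = {}  # downstream copy, keyed by primary key

def replicate_once():
    """Ship changes past the checkpoint to the replica; return change count."""
    global checkpoint
    rows = db.execute(
        "SELECT id, name, version FROM users WHERE version > ? ORDER BY version",
        (checkpoint,)).fetchall()
    for rid, name, version in rows:
        replica[rid] = name
        checkpoint = version  # persisted durably in a real system
    return len(rows)

assert replicate_once() == 2                 # initial load ships both rows
db.execute("UPDATE users SET name = 'bea', version = 3 WHERE id = 2")
assert replicate_once() == 1                 # only the new change is shipped
print(replica)  # {1: 'ada', 2: 'bea'}
```

    The checkpoint is what makes replication resumable: after a crash, the pipeline restarts from the last committed position instead of re-copying everything, which is how zero-data-loss guarantees are typically framed.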
  • 34
    Osmos
    With Osmos, your customers can easily clean their messy data files and import them directly into your operational system without writing a line of code. At the core, we have an AI-powered data transformation engine that enables users to map, validate, and clean data with only a few clicks. An eCommerce company automates ingestion of product catalog data from multiple distributors and vendors into their database. A manufacturing company automates the data ingestion of purchase orders from email attachments into NetSuite. Automatically clean up and reformat incoming data to match your destination schema. Never deal with custom scripts and spreadsheets again.
    Starting Price: $299 per month
  • 35
    Gravity Data (Gravity)
    Gravity's mission is to make streaming data easy from over 100 sources while only paying for what you use. Gravity removes the reliance on engineering teams to deliver streaming pipelines with a simple interface to get streaming up and running in minutes from databases, event data and APIs. Everyone in the data team can now build with simple point and click so that you can focus on building apps, services and customer experiences. Full execution traces and detailed error messaging for quick diagnosis and resolution. We have implemented new, feature-rich ways for you to quickly get started. From bulk set-up, default schemas and data selection to different job modes and statuses. Spend less time wrangling with infrastructure and more time analysing data while allowing our intelligent engine to keep your pipelines running. Gravity integrates with your systems for notifications and orchestration.
  • 36
    Airbyte

    Airbyte

    Airbyte

    Get all your ELT data pipelines running in minutes, even your custom ones, and let your team focus on insights and innovation. Unify your data integration pipelines in one open-source ELT platform. Airbyte addresses all your data team's connector needs, however custom they are and whatever your scale, from high-volume databases to the long tail of API sources. Leverage Airbyte’s long tail of high-quality connectors that adapt to schema and API changes, and extend the platform to unify all native and custom ELT. Edit pre-built open-source connectors, or build new ones with our connector development kit in a few hours. Finally, transparent and predictable cost-based pricing that scales with your data needs, so you don’t need to worry about volume anymore. No more custom systems for your in-house scripts or database replication.
    Starting Price: $2.50 per credit
  • 37
    Yandex Data Proc
    You select the size of the cluster, node capacity, and a set of services, and Yandex Data Proc automatically creates and configures Spark and Hadoop clusters and other components. Collaborate by using Zeppelin notebooks and other web apps via a UI proxy. You get full control of your cluster with root permissions for each VM. Install your own applications and libraries on running clusters without having to restart them. Yandex Data Proc uses instance groups to automatically increase or decrease computing resources of compute subclusters based on CPU usage indicators. Data Proc allows you to create managed Hive clusters, which can reduce the probability of failures and losses caused by metadata unavailability. Save time on building ETL pipelines and pipelines for training and developing models, as well as describing other iterative tasks. The Data Proc operator is already built into Apache Airflow.
    Starting Price: $0.19 per hour
  • 38
    Oarkflow

    Oarkflow

    Oarkflow

    Automate your business pipeline with our flow builder. Use the operations that matter to you, and bring your own service providers for email, SMS, and HTTP services. Use our advanced query builder to query and analyze CSV files with any number of fields and rows. The CSV files you upload are stored on our platform in a secure vault, and account activity is logged. We don't store any data records you submit for processing.
    Starting Price: $0.0005 per task
  • 39
    Y42

    Y42

    Datos-Intelligence GmbH

    Y42 is the first fully managed Modern DataOps Cloud. It is purpose-built to help companies easily design production-ready data pipelines on top of their Google BigQuery or Snowflake cloud data warehouse. Y42 provides native integration of best-of-breed open-source data tools, comprehensive data governance, and better collaboration for data teams. With Y42, organizations enjoy increased accessibility to data and can make data-driven decisions quickly and efficiently.
  • 40
    BigBI

    BigBI

    BigBI

    BigBI enables data specialists to build their own powerful big data pipelines interactively and efficiently, without any coding! BigBI unleashes the power of Apache Spark, enabling scalable processing of real big data (up to 100X faster); integration of traditional data (SQL, batch files) with modern data sources, including semi-structured (JSON, NoSQL DBs, Elastic, Hadoop) and unstructured (text, audio, video); and integration of streaming data, cloud data, AI/ML, and graphs.
  • 41
    Alooma

    Alooma

    Google

    Alooma enables data teams to have visibility and control. It brings data from your various data silos together into BigQuery, all in real time. Set up and flow data in minutes, or customize, enrich, and transform data on the stream before it even hits the data warehouse. Never lose an event: Alooma's built-in safety nets ensure easy error handling without pausing your pipeline. Whatever the number of data sources, from low to high volume, Alooma’s infrastructure scales to your needs.
  • 42
    CData Sync

    CData Sync

    CData Software

    CData Sync is a universal data pipeline that delivers automated continuous replication between hundreds of SaaS applications and cloud data sources and any major database or data warehouse, on-premises or in the cloud. Replicate data from hundreds of cloud data sources to popular database destinations, such as SQL Server, Redshift, S3, Snowflake, BigQuery, and more. Configuring replication is easy: log in, select the data tables to replicate, and select a replication interval. Done. CData Sync extracts data iteratively, causing minimal impact on operational systems by only querying and updating data that has been added or changed since the last update. CData Sync offers the utmost flexibility across full and partial replication scenarios and ensures that critical data is stored safely in your database of choice. Download a 30-day free trial of the Sync application or request more information at www.cdata.com/sync.
  • 43
    Skyvia

    Skyvia

    Devart

    Data integration, backup, management, and connectivity. A 100% cloud-based platform that offers contemporary cloud agility and scalability, eliminating the need for deployment or manual upgrades. A no-code, wizard-based solution that meets the needs of both IT professionals and business users with no technical skills. With flexible pricing plans for each product, Skyvia suits businesses of any size, from a small startup to an enterprise company. Connect your cloud, on-premises, and flat data to automate workflows. Automate data collection from disparate cloud sources to a database or data warehouse. Transfer your business data between cloud apps automatically in just a few clicks. Protect all your cloud data and keep it secure in one place. Share data in real time via REST API to connect with multiple OData consumers. Query and manage any data from the browser via SQL or the intuitive visual Query Builder.
  • 44
    Lightbend

    Lightbend

    Lightbend

    Lightbend provides technology that enables developers to easily build data-centric applications that bring the most demanding, globally distributed applications and streaming data pipelines to life. Companies worldwide turn to Lightbend to solve the challenges of real-time, distributed data in support of their most business-critical initiatives. Akka Platform provides the building blocks that make it easy for businesses to build, deploy, and run large-scale applications that support digitally transformative initiatives. Accelerate time to value and reduce infrastructure and cloud costs with reactive microservices that take full advantage of the distributed nature of the cloud and are resilient to failure, highly efficient, and operational at any scale. Native support for encryption, data shredding, TLS enforcement, and continued compliance with GDPR. A framework for quick construction, deployment, and management of streaming data pipelines.
  • 45
    Azkaban

    Azkaban

    Azkaban

    Azkaban is a distributed workflow manager, implemented at LinkedIn to solve the problem of Hadoop job dependencies. We had jobs that needed to run in order, from ETL jobs to data analytics products. Since version 3.0, we provide two modes: the standalone “solo-server” mode and the distributed multiple-executor mode. The following describes the differences between the two. In solo-server mode, the DB is an embedded H2 instance, and both the web server and the executor server run in the same process. This is useful if you just want to try things out, and it can also serve small-scale use cases. The multiple-executor mode is intended for serious production environments. Its DB should be backed by MySQL instances in a master-slave setup. The web server and executor servers should ideally run on different hosts so that upgrades and maintenance don’t affect users. This multiple-host setup makes Azkaban robust and scalable.
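    The split between the two modes shows up in the server configuration. The following is a rough sketch of the MySQL-backed settings for multiple-executor mode; the property names are taken from typical Azkaban 3.x `azkaban.properties` files, and the host value is hypothetical, so treat this as an illustration and check your distribution's documentation:

    ```properties
    # Multiple-executor mode: point Azkaban at an external MySQL instance
    # instead of the embedded H2 database used by solo-server mode.
    database.type=mysql
    mysql.host=db.internal.example.com
    mysql.port=3306
    mysql.database=azkaban
    mysql.user=azkaban
    mysql.password=********
    mysql.numconnections=100
    ```

    In solo-server mode none of these keys are needed; the embedded H2 database is used, and a single process hosts both the web server and the executor server.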
  • 46
    Tarsal

    Tarsal

    Tarsal

    Tarsal's infinite scalability means that as your organization grows, Tarsal grows with you. Tarsal makes it easy for you to switch where you're sending data - today's SIEM data is tomorrow's data lake data, all with one click. Keep your SIEM and gradually migrate analytics over to a data lake; you don't have to rip anything out to use Tarsal. Some analytics just won't run on your SIEM. Use Tarsal to have query-ready data on a data lake. Your SIEM is one of the biggest line items in your budget; use Tarsal to send some of that data to your data lake instead. Tarsal is the first highly scalable ETL data pipeline built for security teams. Easily export terabytes of data in just a few clicks, with instant normalization, and route that data to your desired destination.
  • 47
    Informatica Data Engineering
    Ingest, prepare, and process data pipelines at scale for AI and analytics in the cloud. Informatica’s comprehensive data engineering portfolio provides everything you need to process and prepare big data engineering workloads to fuel AI and analytics: robust data integration, data quality, streaming, masking, and data preparation capabilities. Rapidly build intelligent data pipelines with CLAIRE®-powered automation, including automatic change data capture (CDC). Ingest thousands of databases, millions of files, and streaming events. Accelerate time to value and ROI with self-service access to trusted, high-quality data. Get unbiased, real-world insights on Informatica data engineering solutions from peers you trust, along with reference architectures for sustainable data engineering solutions. AI-powered data engineering in the cloud delivers the trusted, high-quality data your analysts and data scientists need to transform the business.
  • 48
    Panoply

    Panoply

    SQream

    Panoply brings together a managed data warehouse with included, pre-built ELT data connectors, making it the easiest way to store, sync, and access all your business data. Our cloud data warehouse (built on Redshift or BigQuery), along with built-in data integrations to all major CRMs, databases, file systems, ad networks, web analytics tools, and more, will have you accessing usable data in less time, with a lower total cost of ownership. One platform with one easy price is all you need to get your business data up and running today. Panoply gives you unlimited access to data sources with prebuilt Snap Connectors and a Flex Connector that can bring in data from nearly any REST API. Panoply can be set up in minutes, requires zero ongoing maintenance, and provides online support including access to experienced data architects.
    Starting Price: $299 per month
  • 49
    Kestra

    Kestra

    Kestra

    Kestra is an open-source, event-driven orchestrator that simplifies data operations and improves collaboration between engineers and business users. By bringing Infrastructure as Code best practices to data pipelines, Kestra allows you to build reliable workflows and manage them with confidence. Thanks to the declarative YAML interface for defining orchestration logic, everyone who benefits from analytics can participate in the data pipeline creation process. The UI automatically adjusts the YAML definition any time you make changes to a workflow from the UI or via an API call. Therefore, the orchestration logic is defined declaratively in code, even if some workflow components are modified in other ways.
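    As a rough illustration of that declarative interface, a minimal flow might look like the following. The `Log` task type and its fully qualified name are assumed from common Kestra examples and may differ between versions, so check the plugin reference for your installation:

    ```yaml
    # A minimal Kestra flow: a uniquely identified workflow in a namespace,
    # with an ordered list of tasks defined entirely in YAML.
    id: hello_pipeline
    namespace: company.analytics

    tasks:
      - id: say_hello
        type: io.kestra.plugin.core.log.Log   # task type name assumed
        message: Hello from a declarative pipeline!
    ```

    Because the UI round-trips edits back into this YAML, the same definition stays authoritative whether the flow is changed in code, in the UI, or via the API.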
  • 50
    Google Cloud Composer
    Cloud Composer's managed nature and Apache Airflow compatibility allow you to focus on authoring, scheduling, and monitoring your workflows rather than provisioning resources. End-to-end integration with Google Cloud products including BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub, and AI Platform gives users the freedom to fully orchestrate their pipelines. Author, schedule, and monitor your workflows through a single orchestration tool—whether your pipeline lives on-premises, in multiple clouds, or fully within Google Cloud. Ease your transition to the cloud or maintain a hybrid data environment by orchestrating workflows that cross between on-premises and the public cloud. Create workflows that connect data, processing, and services across clouds to give you a unified data environment.
    Starting Price: $0.074 per vCPU hour