Alternatives to VeloDB

Compare VeloDB alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to VeloDB in 2024. Compare features, ratings, user reviews, pricing, and more from VeloDB competitors and alternatives in order to make an informed decision for your business.

  • 1
    Google Cloud BigQuery
    BigQuery is a serverless, multicloud data warehouse that simplifies the process of working with all types of data so you can focus on getting valuable business insights quickly. At the core of Google’s data cloud, BigQuery allows you to simplify data integration, cost effectively and securely scale analytics, share rich data experiences with built-in business intelligence, and train and deploy ML models with a simple SQL interface, helping to make your organization’s operations more data-driven.
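    BigQuery's SQL-first surface means a few lines of client code are enough to run analytics. Below is a minimal sketch using the google-cloud-bigquery Python client against a public dataset; project and credentials are assumed to come from the environment.
```python
# Run a Standard SQL query against a BigQuery public dataset.
from google.cloud import bigquery

client = bigquery.Client()   # project/credentials picked up from the environment
sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(sql).result():
    print(row.name, row.total)
```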
  • 2
    StarTree

    StarTree Cloud is a fully managed real-time analytics platform designed for OLAP at massive speed and scale for user-facing applications. Powered by Apache Pinot, StarTree Cloud provides enterprise-grade reliability and advanced capabilities such as tiered storage, scalable upserts, plus additional indexes and connectors. It integrates seamlessly with transactional databases and event streaming platforms, ingesting data at millions of events per second and indexing it for lightning-fast query responses. StarTree Cloud is available on your favorite public cloud or for private SaaS deployment.
    • Gain critical real-time insights to run your business
    • Seamlessly integrate streaming and batch data
    • High throughput and low latency at petabyte scale
    • Fully managed cloud service
    • Tiered storage to optimize cloud performance and spend
    • Fully secure and enterprise-ready
  • 3
    Amazon Redshift
    More customers pick Amazon Redshift than any other cloud data warehouse. Redshift powers analytical workloads for Fortune 500 companies, startups, and everything in between. Companies like Lyft have grown with Redshift from startups to multi-billion dollar enterprises. No other data warehouse makes it as easy to gain new insights from all your data. With Redshift you can query petabytes of structured and semi-structured data across your data warehouse, operational database, and your data lake using standard SQL. Redshift lets you easily save the results of your queries back to your S3 data lake using open formats like Apache Parquet for further analysis with other analytics services like Amazon EMR, Amazon Athena, and Amazon SageMaker. Redshift is the world’s fastest cloud data warehouse and gets faster every year. For performance-intensive workloads you can use the new RA3 instances to get up to 3x the performance of any cloud data warehouse.
    Starting Price: $0.25 per hour
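    One way to run Redshift SQL without managing connections is the Data API via boto3; a hedged sketch follows, in which the cluster identifier, database, user, and table are hypothetical.
```python
# Submit SQL through the asynchronous Redshift Data API and poll for the result.
import time
import boto3

rsd = boto3.client("redshift-data")
stmt = rsd.execute_statement(
    ClusterIdentifier="my-cluster",   # placeholder identifiers
    Database="dev",
    DbUser="awsuser",
    Sql="SELECT COUNT(*) FROM sales;",
)
while True:
    desc = rsd.describe_statement(Id=stmt["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)
if desc["Status"] == "FINISHED":
    print(rsd.get_statement_result(Id=stmt["Id"])["Records"])
```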
  • 4
    Striim

    Data integration for your hybrid cloud. Modern, reliable data integration across your private and public cloud, all in real time with change data capture and data streams. Built by the executive and technical team from GoldenGate Software, Striim brings decades of experience in mission-critical enterprise workloads. Striim scales out as a distributed platform in your environment or in the cloud, and scalability is fully configurable by your team. Striim is fully secure with HIPAA and GDPR compliance. Built from the ground up for modern enterprise workloads in the cloud or on-premises. Drag and drop to create data flows between your sources and targets. Process, enrich, and analyze your streaming data with real-time SQL queries.
  • 5
    Apache Doris (The Apache Software Foundation)

    Apache Doris is a modern data warehouse for real-time analytics. It delivers lightning-fast analytics on real-time data at scale. Push-based micro-batch and pull-based streaming data ingestion within a second. Storage engine with real-time upsert, append, and pre-aggregation. Optimized for high-concurrency and high-throughput queries with a columnar storage engine, MPP architecture, cost-based query optimizer, and vectorized execution engine. Federated querying of data lakes such as Hive, Iceberg, and Hudi, and of databases such as MySQL and PostgreSQL. Compound data types such as Array, Map, and JSON, plus a Variant type supporting automatic type inference for JSON data. NGram bloom filter and inverted index for text search. Distributed design for linear scalability. Workload isolation and tiered storage for efficient resource management. Supports shared-nothing clusters as well as separation of storage and compute.
    Starting Price: Free
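    Because Doris speaks the MySQL wire protocol, any MySQL client can drive it. A sketch with pymysql against a local frontend node (port 9030 by default); the demo database, table, and single-replica property are illustrative assumptions.
```python
# Create an aggregate-model table in Doris and query it over the MySQL protocol.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=9030, user="root", password="")
cur = conn.cursor()
cur.execute("CREATE DATABASE IF NOT EXISTS demo")
cur.execute("""
    CREATE TABLE IF NOT EXISTS demo.site_visits (
        site_id INT,
        dt DATE,
        pv BIGINT SUM DEFAULT "0"          -- pre-aggregated on load
    )
    AGGREGATE KEY(site_id, dt)
    DISTRIBUTED BY HASH(site_id) BUCKETS 8
    PROPERTIES ("replication_num" = "1")
""")
cur.execute("SELECT site_id, SUM(pv) FROM demo.site_visits GROUP BY site_id")
print(cur.fetchall())
```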
  • 6
    Timeplus

    Timeplus is a simple, powerful, and cost-efficient stream processing platform. All in a single binary, easily deployed anywhere. We help data teams process streaming and historical data quickly and intuitively, in organizations of all sizes and industries. Lightweight, single binary, without dependencies. End-to-end analytics over both streaming and historical data. 1/10 the cost of similar open source frameworks. Turn real-time market and transaction data into real-time insights. Leverage append-only streams and key-value streams to monitor financial data. Implement real-time feature pipelines using Timeplus. One platform for all infrastructure logs, metrics, and traces, the three pillars supporting observability. Timeplus supports a wide range of data sources in its web console UI. You can also push data via REST API, or create external streams without copying data into Timeplus.
    Starting Price: $199 per month
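    As a hedged illustration of the REST push path mentioned above, the sketch below posts rows into a stream; the exact ingest URL, workspace name, stream name, and API-key header are assumptions to verify against the Timeplus documentation for your deployment.
```python
# Push two rows into an assumed "clicks" stream over the REST ingest API.
import requests

url = "https://us.timeplus.cloud/my-workspace/api/v1beta2/streams/clicks/ingest"
payload = {"columns": ["user_id", "url"], "data": [[42, "/home"], [7, "/pricing"]]}
resp = requests.post(url, json=payload, headers={"X-Api-Key": "YOUR_KEY"})
resp.raise_for_status()
```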
  • 7
    Arroyo

    Scale from zero to millions of events per second. Arroyo ships as a single, compact binary. Run locally on macOS or Linux for development, and deploy to production with Docker or Kubernetes. Arroyo is a new kind of stream processing engine, built from the ground up to make real-time easier than batch. Arroyo was designed from the start so that anyone with SQL experience can build reliable, efficient, and correct streaming pipelines. Data scientists and engineers can build end-to-end real-time applications, models, and dashboards without a separate team of streaming experts. Transform, filter, aggregate, and join data streams by writing SQL, with sub-second results. Your streaming pipelines shouldn't page someone just because Kubernetes decided to reschedule your pods. Arroyo is built to run in modern, elastic cloud environments, from simple container runtimes like Fargate to large, distributed deployments on Kubernetes.
  • 8
    Materialize

    Materialize is a reactive database that delivers incremental view updates. We help developers easily build with streaming data using standard SQL. Materialize can connect to many different external sources of data without pre-processing. Connect directly to streaming sources like Kafka, Postgres databases, CDC, or historical sources of data like files or S3. Materialize allows you to query, join, and transform data sources in standard SQL, and presents the results as incrementally updated materialized views. Queries are maintained and continually updated as new data streams in. With incrementally updated views, developers can easily build data visualizations or real-time applications. Building with streaming data can be as simple as writing a few lines of SQL.
    Starting Price: $0.98 per hour
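    Because Materialize is wire-compatible with PostgreSQL, a stock psycopg2 connection suffices. A sketch assuming a local instance on the default port 6875 and a pre-existing `orders` source.
```python
# Define an incrementally maintained view and read from it over pgwire.
import psycopg2

conn = psycopg2.connect(host="localhost", port=6875, user="materialize",
                        dbname="materialize")
conn.autocommit = True
cur = conn.cursor()
cur.execute("""
    CREATE MATERIALIZED VIEW order_totals AS
    SELECT customer_id, SUM(amount) AS total
    FROM orders                 -- assumed pre-existing source
    GROUP BY customer_id
""")
cur.execute("SELECT * FROM order_totals")
print(cur.fetchall())
```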
  • 9
    Databricks Data Intelligence Platform
    The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data. The winners in every industry will be data and AI companies. From ETL to data warehousing to generative AI, Databricks helps you simplify and accelerate your data and AI goals. Databricks combines generative AI with the unification benefits of a lakehouse to power a Data Intelligence Engine that understands the unique semantics of your data. This allows the Databricks Platform to automatically optimize performance and manage infrastructure in ways unique to your business. The Data Intelligence Engine understands your organization’s language, so search and discovery of new data is as easy as asking a question like you would to a coworker.
  • 10
    SelectDB

    SelectDB is a modern data warehouse based on Apache Doris that supports rapid query analysis on large-scale real-time data. One flagship deployment, Kuaishou's OLAP system, migrated from ClickHouse to Apache Doris and carries nearly 1 billion query requests every day, providing data services for multiple scenarios. Because the original architecture, which separated the data lake from the warehouse, suffered from storage redundancy, resource contention, complicated governance, and difficult query tuning, the team introduced an Apache Doris lakehouse, combining Doris's materialized-view rewriting and automated services to achieve high-performance queries and flexible data governance. SelectDB writes real-time data within seconds and synchronizes stream data from databases and data streams. Its storage engine supports real-time updates, appends, and pre-aggregation.
    Starting Price: $0.22 per hour
  • 11
    Apache Druid
    Apache Druid is an open source distributed data store. Druid’s core design combines ideas from data warehouses, timeseries databases, and search systems to create a high performance real-time analytics database for a broad range of use cases. Druid merges key characteristics of each of the 3 systems into its ingestion layer, storage format, querying layer, and core architecture. Druid stores and compresses each column individually, and only needs to read the ones needed for a particular query, which supports fast scans, rankings, and groupBys. Druid creates inverted indexes for string values for fast search and filter. Out-of-the-box connectors for Apache Kafka, HDFS, AWS S3, stream processors, and more. Druid intelligently partitions data based on time and time-based queries are significantly faster than traditional databases. Scale up or down by just adding or removing servers, and Druid automatically rebalances. Fault-tolerant architecture routes around server failures.
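    A sketch of Druid's SQL-over-HTTP endpoint, assuming a local router on port 8888 and the `wikipedia` quickstart datasource; both are assumptions from a default tutorial setup.
```python
# POST SQL to Druid's /druid/v2/sql endpoint and print the JSON result rows.
import requests

resp = requests.post(
    "http://localhost:8888/druid/v2/sql",
    json={"query": """
        SELECT channel, COUNT(*) AS edits
        FROM wikipedia
        GROUP BY channel
        ORDER BY edits DESC
        LIMIT 5
    """},
)
print(resp.json())
```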
  • 12
    Kinetica

    A scalable cloud database for real-time analysis on large and streaming datasets. Kinetica is designed to harness modern vectorized processors to be orders of magnitude faster and more efficient for real-time spatial and temporal workloads. Track and gain intelligence from billions of moving objects in real time. Vectorization unlocks new levels of performance for analytics on spatial and time series data at scale. Ingest and query at the same time to act on real-time events. Kinetica's lockless architecture and distributed ingestion ensure data is available to query as soon as it lands. Vectorized processing enables you to do more with less. More power allows for simpler data structures, which lead to lower storage costs, more flexibility, and less time engineering your data. Vectorized processing opens the door to amazingly fast analytics and detailed visualization of moving objects at scale.
  • 13
    StarRocks

    Whether you're working with a single table or multiple, you'll experience at least 300% better performance on StarRocks compared to other popular solutions. From streaming data to change data capture, a rich set of connectors lets you ingest data into StarRocks in real time for the freshest insights. A query engine that adapts to your use cases: without moving your data or rewriting SQL, StarRocks provides the flexibility to scale your analytics on demand with ease. StarRocks enables a rapid journey from data to insight, providing a unified OLAP solution covering the most popular data analytics scenarios. StarRocks' built-in memory-and-disk-based caching framework is specifically designed to minimize the I/O overhead of fetching data from external storage to accelerate query performance.
    Starting Price: Free
  • 14
    Rockset

    Real-Time Analytics on Raw Data. Live ingest from S3, Kafka, DynamoDB & more. Explore raw data as SQL tables. Build amazing data-driven applications & live dashboards in minutes. Rockset is a serverless search and analytics engine that powers real-time apps and live dashboards. Operate directly on raw data, including JSON, XML, CSV, Parquet, XLSX or PDF. Plug data from real-time streams, data lakes, databases, and data warehouses into Rockset. Ingest real-time data without building pipelines. Rockset continuously syncs new data as it lands in your data sources without the need for a fixed schema. Use familiar SQL, including joins, filters, and aggregations. It’s blazing fast, as Rockset automatically indexes all fields in your data. Serve fast queries that power the apps, microservices, live dashboards, and data science notebooks you build. Scale without worrying about servers, shards, or pagers.
    Starting Price: Free
  • 15
    Dremio

    Dremio delivers lightning-fast queries and a self-service semantic layer directly on your data lake storage. No moving data to proprietary data warehouses, no cubes, no aggregation tables or extracts. Just flexibility and control for data architects, and self-service for data consumers. Dremio technologies like Data Reflections, Columnar Cloud Cache (C3) and Predictive Pipelining work alongside Apache Arrow to make queries on your data lake storage very, very fast. An abstraction layer enables IT to apply security and business meaning, while enabling analysts and data scientists to explore data and derive new virtual datasets. Dremio’s semantic layer is an integrated, searchable catalog that indexes all of your metadata, so business users can easily make sense of your data. Virtual datasets and spaces make up the semantic layer, and are all indexed and searchable.
  • 16
    Baidu Palo (Baidu AI Cloud)

    Palo helps enterprises create a PB-level MPP-architecture data warehouse service within minutes and import massive data from RDS, BOS, and BMR, so that Palo can perform multi-dimensional analytics on big data. Palo is compatible with mainstream BI tools. Data analysts can analyze and display data visually and gain insights quickly to assist decision-making. It has an industry-leading MPP query engine with column storage, intelligent indexing, and vectorized execution. It also provides in-database analytics, window functions, and other advanced analytics functions. You can create a materialized view and change the table structure without suspending service. It supports flexible and efficient data recovery.
  • 17
    ksqlDB (Confluent)

    Now that your data is in motion, it’s time to make sense of it. Stream processing enables you to derive instant insights from your data streams, but setting up the infrastructure to support it can be complex. That’s why Confluent developed ksqlDB, the database purpose-built for stream processing applications. Make your data immediately actionable by continuously processing streams of data generated throughout your business. ksqlDB’s intuitive syntax lets you quickly access and augment data in Kafka, enabling development teams to seamlessly create real-time innovative customer experiences and fulfill data-driven operational needs. ksqlDB offers a single solution for collecting streams of data, enriching them, and serving queries on new derived streams and tables. That means less infrastructure to deploy, maintain, scale, and secure. With fewer moving parts in your data architecture, you can focus on what really matters: innovation.
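    A minimal sketch of driving ksqlDB from Python over its REST API (default port 8088); the server URL and the `pageviews` topic are assumptions.
```python
# POST a ksqlDB statement to the server's /ksql endpoint.
import requests

stmt = """
    CREATE STREAM pageviews (user_id VARCHAR, page VARCHAR)
      WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');
"""
resp = requests.post(
    "http://localhost:8088/ksql",                 # assumed local server
    json={"ksql": stmt, "streamsProperties": {}},
)
resp.raise_for_status()
print(resp.json())
```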
  • 18
    Aerospike

    Aerospike is the global leader in next-generation, real-time NoSQL data solutions for any scale. Aerospike helps enterprises overcome seemingly impossible data bottlenecks to compete and win with a fraction of the infrastructure complexity and cost of legacy NoSQL databases. Aerospike’s patented Hybrid Memory Architecture™ delivers an unbreakable competitive advantage by unlocking the full potential of modern hardware, delivering previously unimaginable value from vast amounts of data at the edge, to the core and in the cloud. Aerospike empowers customers to instantly fight fraud; dramatically increase shopping cart size; deploy global digital payment networks; and deliver instant, one-to-one personalization for millions of customers. Aerospike customers include Airtel, Banca d’Italia, Nielsen, PayPal, Snap, Verizon Media and Wayfair. The company is headquartered in Mountain View, Calif., with additional locations in London; Bengaluru, India; and Tel Aviv, Israel.
  • 19
    Spark Streaming (Apache Software Foundation)

    Spark Streaming brings Apache Spark's language-integrated API to stream processing, letting you write streaming jobs the same way you write batch jobs. It supports Java, Scala and Python. Spark Streaming recovers both lost work and operator state (e.g. sliding windows) out of the box, without any extra code on your part. By running on Spark, Spark Streaming lets you reuse the same code for batch processing, join streams against historical data, or run ad-hoc queries on stream state. Build powerful interactive applications, not just analytics. Spark Streaming is developed as part of Apache Spark. It thus gets tested and updated with each Spark release. You can run Spark Streaming on Spark's standalone cluster mode or other supported cluster resource managers. It also includes a local run mode for development. In production, Spark Streaming uses ZooKeeper and HDFS for high availability.
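    A classic DStream sketch in PySpark, counting words from a local socket in one-second batches; feed it with, for example, `nc -lk 9999`.
```python
# Word count over a socket stream in 1-second batches (DStream API).
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, batchDuration=1)

lines = ssc.socketTextStream("localhost", 9999)   # feed with: nc -lk 9999
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()
```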
  • 20
    Imply

    Imply is a real-time analytics platform built on Apache Druid, designed to handle large-scale, high-performance OLAP (Online Analytical Processing) workloads. It offers real-time data ingestion, fast query performance, and the ability to perform complex analytical queries on massive datasets with low latency. Imply is tailored for organizations that need interactive analytics, real-time dashboards, and data-driven decision-making at scale. It provides a user-friendly interface for data exploration, along with advanced features such as multi-tenancy, fine-grained access controls, and operational insights. With its distributed architecture and scalability, Imply is well-suited for use cases in streaming data analytics, business intelligence, and real-time monitoring across industries.
  • 21
    SingleStore

    SingleStore (formerly MemSQL) is a distributed, highly scalable SQL database that can run anywhere. We deliver maximum performance for transactional and analytical workloads with familiar relational models. SingleStore is a scalable SQL database that ingests data continuously to perform operational analytics for the front lines of your business. Ingest millions of events per second with ACID transactions while simultaneously analyzing billions of rows of data in relational SQL, JSON, geospatial, and full-text search formats. SingleStore delivers ultimate data ingestion performance at scale and supports built-in batch loading and real-time data pipelines. SingleStore lets you achieve ultra-fast query response across both live and historical data using familiar ANSI SQL. Perform ad hoc analysis with business intelligence tools, run machine learning algorithms for real-time scoring, and perform geoanalytic queries in real time.
    Starting Price: $0.69 per hour
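    A hedged sketch of the ingest-pipeline feature using the `singlestoredb` driver (SingleStore is also MySQL wire-compatible, so pymysql works too); the DSN, broker, topic, and target table are placeholders.
```python
# Create and start a Kafka ingest pipeline over SQL.
import singlestoredb as s2

conn = s2.connect("admin:password@svc-example.singlestore.com:3306/demo")
cur = conn.cursor()
cur.execute("""
    CREATE PIPELINE events_pipe AS
    LOAD DATA KAFKA 'broker.example.com:9092/events'
    INTO TABLE events
""")
cur.execute("START PIPELINE events_pipe")
```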
  • 22
    IBM Db2 Big SQL
    A hybrid SQL-on-Hadoop engine delivering advanced, security-rich data query across enterprise big data sources, including Hadoop, object storage and data warehouses. IBM Db2 Big SQL is an enterprise-grade, hybrid ANSI-compliant SQL-on-Hadoop engine, delivering massively parallel processing (MPP) and advanced data query. Db2 Big SQL offers a single database connection or query for disparate sources such as Hadoop HDFS and WebHDFS, RDBMS, NoSQL databases, and object stores. Benefit from low latency, high performance, data security, SQL compatibility, and federation capabilities to do ad hoc and complex queries. Db2 Big SQL is now available in 2 variations. It can be integrated with Cloudera Data Platform, or accessed as a cloud-native service on the IBM Cloud Pak® for Data platform. Access and analyze data and perform queries on batch and real-time data across sources, like Hadoop, object stores and data warehouses.
  • 23
    Apache Hive (Apache Software Foundation)

    The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive. Apache Hive is an open source project run by volunteers at the Apache Software Foundation. Previously it was a subproject of Apache® Hadoop®, but has now graduated to become a top-level project of its own. We encourage you to learn about the project and contribute your expertise. Traditional SQL queries must be implemented in the MapReduce Java API to execute SQL applications and queries over distributed data. Hive provides the necessary SQL abstraction to integrate SQL-like queries (HiveQL) into the underlying Java without the need to implement queries in the low-level Java API.
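    A sketch issuing HiveQL through the PyHive client, assuming a HiveServer2 instance on the default port 10000; the `web_logs` table is hypothetical.
```python
# Run HiveQL through HiveServer2 via PyHive.
from pyhive import hive

conn = hive.Connection(host="localhost", port=10000, username="hadoop")
cur = conn.cursor()
cur.execute("SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page")
print(cur.fetchall())
```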
  • 24
    Apache Pinot (Apache Software Foundation)

    Pinot is designed to answer OLAP queries with low latency on immutable data. Pluggable indexing technologies: sorted index, bitmap index, inverted index. Joins are currently not supported, but this can be overcome by using Trino or PrestoDB for querying. A SQL-like language supports selection, aggregation, filtering, group by, order by, and distinct queries on data. Pinot tables consist of both offline and real-time tables; use the real-time table only to cover segments for which offline data may not be available yet. Detect the right anomalies by customizing anomaly detection flows and notification flows.
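    A sketch using the `pinotdb` DB-API driver against a broker on the default port 8099; the `pageviews` table is hypothetical.
```python
# Query a Pinot broker through the pinotdb DB-API driver.
from pinotdb import connect

conn = connect(host="localhost", port=8099, path="/query/sql", scheme="http")
cur = conn.cursor()
cur.execute("SELECT country, COUNT(*) FROM pageviews GROUP BY country LIMIT 10")
print(cur.fetchall())
```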
  • 25
    Snowflake

    Your cloud data platform. Secure and easy access to any data with infinite scalability. Get all the insights from all your data by all your users, with the instant and near-infinite performance, concurrency and scale your organization requires. Seamlessly share and consume shared data to collaborate across your organization, and beyond, to solve your toughest business problems in real time. Boost the productivity of your data professionals and shorten your time to value in order to deliver modern and integrated data solutions swiftly from anywhere in your organization. Whether you’re moving data into Snowflake or extracting insight out of Snowflake, our technology partners and system integrators will help you deploy Snowflake for your success.
    Starting Price: $40.00 per month
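    A minimal sketch with the official snowflake-connector-python; account, user, password, and warehouse values are placeholders.
```python
# Connect and run a query with the official Snowflake connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # placeholder account locator
    user="ALICE",
    password="***",
    warehouse="COMPUTE_WH",
)
cur = conn.cursor()
cur.execute("SELECT CURRENT_VERSION()")
print(cur.fetchone())
```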
  • 26
    PySpark

    PySpark is an interface for Apache Spark in Python. It not only allows you to write Spark applications using Python APIs, but also provides the PySpark shell for interactively analyzing your data in a distributed environment. PySpark supports most of Spark’s features such as Spark SQL, DataFrame, Streaming, MLlib (Machine Learning) and Spark Core. Spark SQL is a Spark module for structured data processing. It provides a programming abstraction called DataFrame and can also act as distributed SQL query engine. Running on top of Spark, the streaming feature in Apache Spark enables powerful interactive and analytical applications across both streaming and historical data, while inheriting Spark’s ease of use and fault tolerance characteristics.
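    A minimal sketch of the DataFrame and Spark SQL surface described above; it runs locally with a stock PySpark install.
```python
# DataFrame API and Spark SQL over the same data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").getOrCreate()
df = spark.createDataFrame([("alice", 34), ("bob", 29)], ["name", "age"])
df.createOrReplaceTempView("people")           # expose to Spark SQL
spark.sql("SELECT name FROM people WHERE age > 30").show()
spark.stop()
```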
  • 27
    LlamaIndex

    LlamaIndex is a “data framework” to help you build LLM apps. Connect semi-structured data from APIs like Slack, Salesforce, Notion, etc. LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models. LlamaIndex provides the key tools to augment your LLM applications with data. Connect your existing data sources and data formats (APIs, PDFs, documents, SQL, etc.) to use with a large language model application. Store and index your data for different use cases. Integrate with downstream vector store and database providers. LlamaIndex provides a query interface that accepts any input prompt over your data and returns a knowledge-augmented response. Connect unstructured sources such as documents, raw text files, PDFs, videos, and images. Easily integrate structured data sources from Excel, SQL, etc. Provides ways to structure your data (indices, graphs) so that this data can be easily used with LLMs.
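    A hedged sketch of the core ingest-index-query flow using the `llama_index.core` namespace (0.10+); a local `./data` folder of documents and an LLM API key in the environment are assumptions.
```python
# Load local documents, build a vector index, and ask a question.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()   # assumed folder
index = VectorStoreIndex.from_documents(documents)
response = index.as_query_engine().query("What does the report conclude?")
print(response)
```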
  • 28
    Apache Flume (Apache Software Foundation)

    Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log and streaming event data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault-tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic applications.
  • 29
    Apache Impala
    Impala provides low latency and high concurrency for BI/analytic queries on the Hadoop ecosystem, including Iceberg, open data formats, and most cloud storage options. Impala also scales linearly, even in multitenant environments. Impala is integrated with native Hadoop security and Kerberos for authentication, and via the Ranger module, you can ensure that the right users and applications are authorized for the right data. Utilize the same file and data formats and metadata, security, and resource management frameworks as your Hadoop deployment, with no redundant infrastructure or data conversion/duplication. For Apache Hive users, Impala utilizes the same metadata and ODBC driver. Like Hive, Impala supports SQL, so you don't have to worry about reinventing the implementation wheel. With Impala, more users, whether using SQL queries or BI applications, can interact with more data through a single repository and metadata stored from source through analysis.
    Starting Price: Free
  • 30
    DoubleCloud

    Save time & costs by streamlining data pipelines with zero-maintenance open source solutions. From ingestion to visualization, all are integrated, fully managed, and highly reliable, so your engineers will love working with data. You choose whether to use any of DoubleCloud’s managed open source services or leverage the full power of the platform, including data storage, orchestration, ELT, and real-time visualization. We provide leading open source services like ClickHouse, Kafka, and Airflow, with deployment on Amazon Web Services or Google Cloud. Our no-code ELT tool allows real-time data syncing between systems, fast, serverless, and seamlessly integrated with your existing infrastructure. With our managed open-source data visualization you can simply visualize your data in real time by building charts and dashboards. We’ve designed our platform to make the day-to-day life of engineers more convenient.
    Starting Price: $0.024 per 1 GB per month
  • 31
    Amazon Timestream
    Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day up to 1,000 times faster and at as little as 1/10th the cost of relational databases. Amazon Timestream saves you time and cost in managing the lifecycle of time series data by keeping recent data in memory and moving historical data to a cost optimized storage tier based upon user defined policies. Amazon Timestream’s purpose-built query engine lets you access and analyze recent and historical data together, without needing to specify explicitly in the query whether the data resides in the in-memory or cost-optimized tier. Amazon Timestream has built-in time series analytics functions, helping you identify trends and patterns in your data in near real-time.
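    A sketch of querying recent measurements with boto3's timestream-query client; the database, table, and measure names are hypothetical, and `ago(1h)` is Timestream's relative-time helper.
```python
# Query the last hour of measurements from a Timestream table.
import boto3

tsq = boto3.client("timestream-query")
result = tsq.query(QueryString="""
    SELECT region, AVG(measure_value::double) AS avg_cpu
    FROM "iot_db"."cpu_metrics"
    WHERE time > ago(1h) AND measure_name = 'cpu_utilization'
    GROUP BY region
""")
print(result["Rows"])
```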
  • 32
    QuasarDB

    Quasar's brain is QuasarDB, a high-performance, distributed, column-oriented time series database management system designed from the ground up to deliver real-time performance on petascale use cases. Up to 20x less disk usage: QuasarDB's ingestion and compression capabilities are unmatched. Up to 10,000x faster feature extraction: QuasarDB can extract features in real time from raw data, thanks to the combination of a built-in map/reduce query engine, an aggregation engine that leverages SIMD on modern CPUs, and stochastic indexes that use virtually no disk space. The most cost-effective time series solution, thanks to its ultra-efficient resource usage, the capability to leverage object storage (S3), unique compression technology, and a fair pricing model. Quasar runs everywhere, from 32-bit ARM devices to high-end Intel servers, from edge computing to the cloud or on-premises.
  • 33
    Samza (Apache Software Foundation)

    Samza allows you to build stateful applications that process data in real-time from multiple sources including Apache Kafka. Battle-tested at scale, it supports flexible deployment options to run on YARN or as a standalone library. Samza provides extremely low latencies and high throughput to analyze your data instantly. Scales to several terabytes of state with features like incremental checkpoints and host-affinity. Samza is easy to operate with flexible deployment options - YARN, Kubernetes or standalone. Ability to run the same code to process both batch and streaming data. Integrates with several sources including Kafka, HDFS, AWS Kinesis, Azure Eventhubs, K-V stores and ElasticSearch.
  • 34
    DeltaStream

    DeltaStream is a unified serverless stream processing platform that integrates with streaming storage services. Think of it as the compute layer on top of your streaming storage. It provides the functionality of streaming analytics (stream processing) and streaming databases, along with additional features to provide a complete platform to manage, process, secure, and share streaming data. DeltaStream provides a SQL-based interface where you can easily create stream processing applications such as streaming pipelines, materialized views, and microservices. It has a pluggable processing engine and currently uses Apache Flink as its primary stream processing engine. DeltaStream is more than just a query processing layer on top of Kafka or Kinesis. It brings relational database concepts to the data streaming world, including namespacing and role-based access control, enabling you to securely access, process, and share your streaming data regardless of where it is stored.
  • 35
    Google Cloud Datastream
    Serverless and easy-to-use change data capture and replication service. Access to streaming data from MySQL, PostgreSQL, AlloyDB, SQL Server, and Oracle databases. Near real-time analytics in BigQuery. Easy-to-use setup with built-in secure connectivity for faster time-to-value. A serverless platform that automatically scales, with no resources to provision or manage. Log-based mechanism to reduce the load and potential disruption on source databases. Synchronize data across heterogeneous databases, storage systems, and applications reliably, with low latency, while minimizing impact on source performance. Get up and running fast with a serverless and easy-to-use service that seamlessly scales up or down, and has no infrastructure to manage. Connect and integrate data across your organization with the best of Google Cloud services like BigQuery, Spanner, Dataflow, and Data Fusion.
  • 36
    Oracle Cloud Infrastructure Streaming
    Streaming service is a real-time, serverless, Apache Kafka-compatible event streaming platform for developers and data scientists. Streaming is tightly integrated with Oracle Cloud Infrastructure (OCI), Database, GoldenGate, and Integration Cloud. The service also provides out-of-the-box integrations for hundreds of third-party products across categories such as DevOps, databases, big data, and SaaS applications. Data engineers can easily set up and operate big data pipelines. Oracle handles all infrastructure and platform management for event streaming, including provisioning, scaling, and security patching. With the help of consumer groups, Streaming can provide state management for thousands of consumers. This helps developers easily build applications at scale.
  • 37
    Decodable

    No more low level code and stitching together complex systems. Build and deploy pipelines in minutes with SQL. A data engineering service that makes it easy for developers and data engineers to build and deploy real-time data pipelines for data-driven applications. Pre-built connectors for messaging systems, storage systems, and database engines make it easy to connect and discover available data. For each connection you make, you get a stream to or from the system. With Decodable you can build your pipelines with SQL. Pipelines use streams to send data to, or receive data from, your connections. You can also use streams to connect pipelines together to handle the most complex processing tasks. Observe your pipelines to ensure data keeps flowing. Create curated streams for other teams. Define retention policies on streams to avoid data loss during external system failures. Real-time health and performance metrics let you know everything’s working.
    Starting Price: $0.20 per task per hour
  • 38
    Apache Spark (Apache Software Foundation)

    Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
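    A minimal sketch of Spark's core API through the Python bindings, run locally: it parallelizes a collection, transforms it, and reduces it.
```python
# Parallelize a collection, square each element, and sum the results.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-demo")
total = sc.parallelize(range(10)).map(lambda x: x * x).sum()
print(total)  # 285
sc.stop()
```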
  • 39
    Tabular

    Tabular is an open table store from the creators of Apache Iceberg. Connect multiple computing engines and frameworks. Decrease query time and storage costs by up to 50%. Centralize enforcement of data access (RBAC) policies. Connect any query engine or framework, including Athena, BigQuery, Redshift, Snowflake, Databricks, Trino, Spark, and Python. Smart compaction, clustering, and other automated data services reduce storage costs and query times by up to 50%. Unify data access at the database or table level. RBAC controls are simple to manage, consistently enforced, and easy to audit. Centralize your security down to the table. Tabular is easy to use, plus it features high-powered ingestion, performance, and RBAC under the hood. Tabular gives you the flexibility to work with multiple “best of breed” compute engines based on their strengths. Assign privileges at the data warehouse, database, table, or column level.
    Starting Price: $100 per month
  • 40
    Trino

    Trino is a query engine that runs at ludicrous speed. It is a fast, distributed SQL query engine for big data analytics that helps you explore your data universe. Trino is a highly parallel and distributed query engine built from the ground up for efficient, low-latency analytics. The largest organizations in the world use Trino to query exabyte-scale data lakes and massive data warehouses alike. It supports diverse use cases: ad hoc analytics at interactive speeds, massive multi-hour batch queries, and high-volume apps that perform sub-second queries. Trino is an ANSI SQL-compliant query engine that works with BI tools such as R, Tableau, Power BI, Superset, and many others. You can natively query data in Hadoop, S3, Cassandra, MySQL, and many others, without the need for complex, slow, and error-prone processes for copying the data. Access data from multiple systems within a single query.
    Starting Price: Free
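    A sketch with the official `trino` Python client, assuming a coordinator on localhost:8080 with the TPC-H connector catalog configured.
```python
# Query Trino through its official Python DB-API client.
import trino

conn = trino.dbapi.connect(
    host="localhost", port=8080, user="demo",
    catalog="tpch", schema="tiny",              # assumed demo catalog
)
cur = conn.cursor()
cur.execute("SELECT name FROM nation LIMIT 5")
print(cur.fetchall())
```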
  • 41
    Apache Storm (Apache Software Foundation)

    Apache Storm is a free and open source distributed realtime computation system. Apache Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Apache Storm is simple, can be used with any programming language, and is a lot of fun to use! Apache Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Apache Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate. Apache Storm integrates with the queueing and database technologies you already use. An Apache Storm topology consumes streams of data and processes those streams in arbitrarily complex ways, repartitioning the streams between each stage of the computation however needed. Read more in the tutorial.
  • 42
    GeoSpock

    GeoSpock enables data fusion for the connected world with GeoSpock DB – the space-time analytics database. GeoSpock DB is a unique, cloud-native database optimised for querying for real-world use cases, able to fuse multiple sources of Internet of Things (IoT) data together to unlock its full value, whilst simultaneously reducing complexity and cost. GeoSpock DB enables efficient storage, data fusion, and rapid programmatic access to data, and allows you to run ANSI SQL queries and connect to analytics tools via JDBC/ODBC connectors. Users are able to perform analysis and share insights using familiar toolsets, with support for common BI tools (such as Tableau™, Amazon QuickSight™, and Microsoft Power BI™), and Data Science and Machine Learning environments (including Python Notebooks and Apache Spark). The database can also be integrated with internal applications and web services – with compatibility for open-source and visualisation libraries such as Kepler and Cesium.js.
  • 43
    Presto (Presto Foundation)

    Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes. For data engineers who struggle with managing multiple query languages and interfaces to siloed databases and storage, Presto is the fast and reliable engine that provides one simple ANSI SQL interface for all your data analytics and your open lakehouse. Different engines for different workloads means you will have to re-platform down the road; with Presto, you get one familiar ANSI SQL language and one engine for your data analytics, so you don't need to graduate to another lakehouse engine. Presto can be used for interactive and batch workloads, small and large amounts of data, and scales from a few users to thousands. Presto gives you one simple ANSI SQL interface for all of your data in various siloed data systems, helping you join your data ecosystem together.
  • 44
    ClickHouse

    ClickHouse is a fast open-source OLAP database management system. It is column-oriented and lets you generate analytical reports using SQL queries in real time. ClickHouse's performance exceeds that of comparable column-oriented database management systems currently available on the market. It processes hundreds of millions to more than a billion rows and tens of gigabytes of data per single server per second. ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In a distributed setup, reads are automatically balanced among healthy replicas to avoid increasing latency. ClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which avoids single points of failure.
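    A minimal sketch using the `clickhouse-connect` HTTP client against a local server (port 8123 by default); the `hits` table is illustrative.
```python
# Create a MergeTree table, insert a row, and aggregate over HTTP.
from datetime import datetime
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost")
client.command("""
    CREATE TABLE IF NOT EXISTS hits (ts DateTime, url String)
    ENGINE = MergeTree ORDER BY ts
""")
client.insert("hits", [[datetime.now(), "/home"]], column_names=["ts", "url"])
print(client.query("SELECT url, count() FROM hits GROUP BY url").result_rows)
```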
  • 45
    labPortal (Analytical Information Systems)

    Perhaps you want to give your clients access to their LIMS data and reports via the web. AIS labPortal allows you to do just that. Paper copies of sample analyses needn’t be sent out in the post to customers. Using their unique login and security password, clients can access data from their computer, which is not only safer and less time-consuming but also more environmentally friendly. labPortal is a web-based portal that securely stores your clients’ sample information and data in the cloud, allowing them to access it instantly from their own desktop, tablet or phone. The labPortal interface is 'inbox' style, simple and easy to use, with an enhanced query engine, conditional highlighting, and Microsoft Excel export. The software also features a simple sample registration form that allows users to pre-register samples online, sparing them the time-consuming and tedious task of transcribing data.
    Starting Price: $200 per month
  • 46
    Astra Streaming
    Responsive applications keep users engaged and developers inspired. Rise to meet these ever-increasing expectations with the DataStax Astra Streaming service platform. DataStax Astra Streaming is a cloud-native messaging and event streaming platform powered by Apache Pulsar. Astra Streaming allows you to build streaming applications on top of an elastically scalable, multi-cloud messaging and event streaming platform. Astra Streaming is powered by Apache Pulsar, the next-generation event streaming platform which provides a unified solution for streaming, queuing, pub/sub, and stream processing. Astra Streaming is a natural complement to Astra DB. Using Astra Streaming, existing Astra DB users can easily build real-time data pipelines into and out of their Astra DB instances. With Astra Streaming, avoid vendor lock-in and deploy on any of the major public clouds (AWS, GCP, Azure) compatible with open-source Apache Pulsar.
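    Since Astra Streaming is powered by Apache Pulsar, the standard pulsar-client library works against it; the service URL, token, and topic below are placeholders of the kind shown in an Astra dashboard.
```python
# Produce a message to Astra Streaming with the standard Pulsar client.
import pulsar

client = pulsar.Client(
    "pulsar+ssl://pulsar-aws-useast1.streaming.datastax.com:6651",  # placeholder
    authentication=pulsar.AuthenticationToken("YOUR_ASTRA_TOKEN"),
)
producer = client.create_producer("persistent://my-tenant/default/clicks")
producer.send(b'{"user": 42, "page": "/home"}')
client.close()
```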
  • 47
    WarpStream

    WarpStream is an Apache Kafka-compatible data streaming platform built directly on top of object storage: no inter-AZ networking costs, no disks to manage, and infinite scalability, all within your VPC. WarpStream is deployed as a stateless and auto-scaling agent binary in your VPC with no local disks to manage. Agents stream data directly to and from object storage with no buffering on local disks and no data tiering. Create new “virtual clusters” in our control plane instantly. Support different environments, teams, or projects without managing any dedicated infrastructure. WarpStream is protocol compatible with Apache Kafka, so you can keep using all your favorite tools and software. No need to rewrite your application or use a proprietary SDK. Just change the URL in your favorite Kafka client library and start streaming. Never again have to choose between reliability and your budget.
    Starting Price: $2,987 per month
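    Because WarpStream is Kafka-protocol compatible, a stock client needs only a new bootstrap URL; this sketch uses kafka-python with a placeholder agent address and topic.
```python
# A stock Kafka producer pointed at a WarpStream agent.
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="my-agent.example.com:9092")
producer.send("clicks", b'{"user": 42, "page": "/home"}')
producer.flush()
```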
  • 48
    Confluent

    Infinite retention for Apache Kafka® with Confluent. Be infrastructure-enabled, not infrastructure-restricted. Legacy technologies require you to choose between being real-time or highly scalable. Event streaming enables you to innovate and win by being both real-time and highly scalable. Ever wonder how your rideshare app analyzes massive amounts of data from multiple sources to calculate real-time ETA? Ever wonder how your credit card company analyzes millions of credit card transactions across the globe and sends fraud notifications in real time? The answer is event streaming. Move to microservices. Enable your hybrid strategy through a persistent bridge to cloud. Break down silos to demonstrate compliance. Gain real-time, persistent event transport. The list is endless.
  • 49
    Tinybird

    Query and shape your data using Pipes, a new way to chain SQL queries inspired by Python Notebooks. Designed to reduce complexity without sacrificing performance. By splitting your query in different nodes you simplify development and maintenance. Activate your production-ready API endpoints with one click. Transformations occur on-the-fly so you will always work with the latest data. Share access securely to your data in one click and get fast and consistent results. Apart from providing monitoring tools, Tinybird scales linearly: don't worry about traffic spikes. Imagine if you could turn, in a matter of minutes, any Data Stream or CSV file into a fully secured real-time analytics API endpoint. We believe in high-frequency decision-making for all organizations in all industries including retail, manufacturing, telecommunications, government, advertising, entertainment, healthcare, and financial services.
    Starting Price: $0.07 per processed GB
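    A hedged sketch of high-frequency ingestion through Tinybird's Events API; the data source name and token are placeholders.
```python
# Send NDJSON rows to the Events API for high-frequency ingestion.
import json
import requests

rows = [{"user": "alice", "action": "click"}, {"user": "bob", "action": "view"}]
resp = requests.post(
    "https://api.tinybird.co/v0/events?name=app_events",   # assumed data source
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    data="\n".join(json.dumps(r) for r in rows),
)
print(resp.status_code)
```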
  • 50
    Yandex Data Streams
    Simplifies data exchange between components in microservice architectures. When used as a transport for microservices, it simplifies integration, increases reliability, and improves scaling. Read and write data in near real-time. Set data throughput and storage times to meet your needs. Enjoy granular configuration of the resources for processing data streams, from small streams of 100 KB/s to streams of 100 MB/s. Deliver a single stream to multiple targets with different retention policies using Yandex Data Transfer. Data is automatically replicated across multiple geographically distributed availability zones. Once created, you can manage data streams centrally in the management console or using the API. Yandex Data Streams can continuously collect data from sources like website browsing histories, application and system logs, and social media feeds.
    Starting Price: $0.086400 per GB