Alternatives to ClickHouse
Compare ClickHouse alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to ClickHouse in 2024. Compare features, ratings, user reviews, pricing, and more from ClickHouse competitors and alternatives in order to make an informed decision for your business.
1
Google Cloud BigQuery
Google
BigQuery is a serverless, multicloud data warehouse that simplifies the process of working with all types of data so you can focus on getting valuable business insights quickly. At the core of Google’s data cloud, BigQuery allows you to simplify data integration, cost-effectively and securely scale analytics, share rich data experiences with built-in business intelligence, and train and deploy ML models with a simple SQL interface, helping to make your organization’s operations more data-driven.
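Since BigQuery is driven entirely through SQL, a minimal sketch of running a query from Python with the google-cloud-bigquery client might look like this; the project, dataset, and table names are hypothetical, and authentication is assumed to come from Application Default Credentials:

```python
# Minimal sketch: run a SQL query against BigQuery from Python.
# Assumes `pip install google-cloud-bigquery` and Application Default
# Credentials; the dataset/table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # picks up the default GCP project

query = """
    SELECT country, COUNT(*) AS order_count
    FROM `my_project.sales.orders`      -- hypothetical table
    WHERE order_date >= '2024-01-01'
    GROUP BY country
    ORDER BY order_count DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row["country"], row["order_count"])
```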
2
StarTree
StarTree
StarTree Cloud is a fully-managed real-time analytics platform designed for OLAP at massive speed and scale for user-facing applications. Powered by Apache Pinot, StarTree Cloud provides enterprise-grade reliability and advanced capabilities such as tiered storage, scalable upserts, plus additional indexes and connectors. It integrates seamlessly with transactional databases and event streaming platforms, ingesting data at millions of events per second and indexing it for lightning-fast query responses. StarTree Cloud is available on your favorite public cloud or for private SaaS deployment.
• Gain critical real-time insights to run your business
• Seamlessly integrate data streaming and batch data
• High throughput and low latency at petabyte scale
• Fully-managed cloud service
• Tiered storage to optimize cloud performance & spend
• Fully secure & enterprise-ready
3
Amazon Redshift
Amazon
More customers pick Amazon Redshift than any other cloud data warehouse. Redshift powers analytical workloads for Fortune 500 companies, startups, and everything in between. Companies like Lyft have grown with Redshift from startups to multi-billion dollar enterprises. No other data warehouse makes it as easy to gain new insights from all your data. With Redshift you can query petabytes of structured and semi-structured data across your data warehouse, operational database, and your data lake using standard SQL. Redshift lets you easily save the results of your queries back to your S3 data lake using open formats like Apache Parquet to further analyze from other analytics services like Amazon EMR, Amazon Athena, and Amazon SageMaker. Redshift is the world’s fastest cloud data warehouse and gets faster every year. For performance-intensive workloads you can use the new RA3 instances to get up to 3x the performance of any cloud data warehouse. Starting Price: $0.25 per hour
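As an illustration of querying Redshift and unloading results to S3 as Parquet, here is a hedged sketch using the boto3 Redshift Data API; the cluster name, database, IAM role, and bucket are placeholders:

```python
# Minimal sketch: run SQL on Redshift via the Data API (boto3) and
# unload results to S3 as Parquet. Cluster name, database, IAM role,
# and bucket are placeholders.
import time
import boto3

rsd = boto3.client("redshift-data")

resp = rsd.execute_statement(
    ClusterIdentifier="my-cluster",
    Database="analytics",
    DbUser="awsuser",
    Sql="""
        UNLOAD ('SELECT * FROM events WHERE event_date >= ''2024-01-01''')
        TO 's3://my-data-lake/events/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
        FORMAT AS PARQUET;
    """,
)

# Poll until the statement finishes, then inspect its status.
while True:
    desc = rsd.describe_statement(Id=resp["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        print(desc["Status"])
        break
    time.sleep(2)
```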
4
SAP HANA Cloud
SAP
SAP HANA Cloud is a fully managed in-memory cloud database as a service (DBaaS). As the cloud-based data foundation for SAP Business Technology Platform, it integrates data from across the enterprise, enabling faster decisions based on live data. Build data solutions with modern architectures and gain business-ready insights in real-time. As the data foundation for SAP Business Technology Platform, the SAP HANA Cloud database offers the power of SAP HANA in the cloud. Scale to your needs, process business data of all types, and perform advanced analytics on live transactions without tuning for fast, improved decision-making. Connect to distributed data with native integration, develop applications and tools across clouds and on-premise, and store volatile data. Tap business-ready information by creating one source of truth and enable security, privacy, and anonymization with enterprise reliability. -
5
QuestDB
QuestDB
QuestDB is a relational column-oriented database designed for time series and event data. It uses SQL with extensions for time series to assist with real-time analytics. A designated timestamp is a core feature that enables time-oriented language capabilities and partitioning. The symbol type makes storing and retrieving repetitive strings efficient. The storage model defines how QuestDB stores records and partitions within tables. Indexes can be used for faster read access on specific columns. Partitions can be used for significant performance benefits on calculations and queries. SQL extensions allow performant time series analysis with a concise syntax.
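Because QuestDB speaks the PostgreSQL wire protocol, a minimal sketch of the designated timestamp, partitioning, and a SAMPLE BY query might look like the following; the trades table is hypothetical and the connection settings are QuestDB's documented defaults:

```python
# Minimal sketch: designated timestamp, partitioning, and SAMPLE BY in
# QuestDB over its PostgreSQL wire protocol (default port 8812).
# The trades table is hypothetical; credentials are the documented defaults.
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=8812, user="admin", password="quest", dbname="qdb"
)
conn.autocommit = True
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS trades (
        ts TIMESTAMP,
        symbol SYMBOL,          -- efficient storage for repetitive strings
        price DOUBLE
    ) TIMESTAMP(ts) PARTITION BY DAY
""")

# Time-series extension: hourly averages via SAMPLE BY.
cur.execute("SELECT ts, symbol, avg(price) FROM trades SAMPLE BY 1h")
print(cur.fetchall())
```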
6
Rockset
Rockset
Real-Time Analytics on Raw Data. Live ingest from S3, Kafka, DynamoDB & more. Explore raw data as SQL tables. Build amazing data-driven applications & live dashboards in minutes. Rockset is a serverless search and analytics engine that powers real-time apps and live dashboards. Operate directly on raw data, including JSON, XML, CSV, Parquet, XLSX or PDF. Plug data from real-time streams, data lakes, databases, and data warehouses into Rockset. Ingest real-time data without building pipelines. Rockset continuously syncs new data as it lands in your data sources without the need for a fixed schema. Use familiar SQL, including joins, filters, and aggregations. It’s blazing fast, as Rockset automatically indexes all fields in your data. Serve fast queries that power the apps, microservices, live dashboards, and data science notebooks you build. Scale without worrying about servers, shards, or pagers. Starting Price: Free
7
TiDB
PingCAP
An open-source, cloud-native, distributed SQL database for elastic scale and real-time analytics. Supported by a wealth of open-source data migration tools in the ecosystem, TiDB gives you the freedom to choose your own vendor and avoid lock-in. Purpose-built to deliver SQL at scale, TiDB eliminates the scaling problems of traditional relational databases without intrusive changes to your application. It is an HTAP database platform that enables real-time situation awareness and decision making on live transactional data and eliminates friction between IT and business goals. TiDB is ACID-compliant and strongly consistent. You can use TiDB as a scale-out MySQL database with familiar SQL syntax and ecosystem tools. TiDB automatically shards your data so you don’t have to do it manually. You can simply add new nodes to scale horizontally and elastically to meet your business growth. TiDB simplifies the ETL process and automatically recovers from errors.
8
TimescaleDB
Timescale
TimescaleDB is the leading open-source relational database with support for time-series data. Fully managed or self‑hosted. Rely on the same PostgreSQL you know and love, with full SQL, rock-solid reliability, and a massive ecosystem. Write millions of data points per second per node. Horizontally scale to petabytes. Don’t worry about cardinality. Simplify your stack, ask more complex questions, and build more powerful applications. Spend less with 94-97% compression rates from best-in-class algorithms and other performance improvements. A modern, cloud-native relational database platform for time-series data based on TimescaleDB and PostgreSQL. The fast, easy, and reliable way to store all your time-series data. All observability data is time-series data. Efficiently finding and addressing infrastructure and application issues is a time-series problem. -
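As a rough sketch of the PostgreSQL-native workflow, the following creates a hypertable and runs a time_bucket aggregate via psycopg2; the connection string and the conditions table are hypothetical:

```python
# Minimal sketch: hypertable creation and a time_bucket aggregate in
# TimescaleDB via psycopg2. Connection settings and the conditions
# table are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=tsdb user=postgres host=localhost")
conn.autocommit = True
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS conditions (
        time        TIMESTAMPTZ NOT NULL,
        device_id   TEXT,
        temperature DOUBLE PRECISION
    )
""")
# Turn the plain table into a hypertable partitioned by time.
cur.execute("SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE)")

# Hourly averages per device using the time_bucket() function.
cur.execute("""
    SELECT time_bucket('1 hour', time) AS bucket, device_id, avg(temperature)
    FROM conditions
    GROUP BY bucket, device_id
    ORDER BY bucket
""")
print(cur.fetchall())
```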
9
MonetDB
MonetDB
Choose from a wide range of SQL features to realise your applications from pure analytics to hybrid transactional/analytical processing. When you're curious about what's in your data; when you want to work efficiently; when your deadline is closing: MonetDB returns query results in mere seconds or even less. When you want to (re)use your own code; when you need specialised functions: use the hooks to add your own user-defined functions in SQL, Python, R or C/C++. Join us and expand the MonetDB community spread over 130+ countries with students, teachers, researchers, start-ups, small businesses and multinational enterprises. Join the leading database in analytical jobs and surf the innovation! Don’t lose time with complex installation, use MonetDB’s easy setup to get your DBMS up and running quickly.
10
YDB
YDB
Entrust YDB with keeping your application state regardless of how large or frequently modified it is. Handling petabytes of data and millions of transactions per second is not an issue. Build analytical reports based on data you store in YDB with performance comparable to database management systems purpose-built for this use case. No compromises on consistency and availability are necessary. Use the YDB topics feature to reliably send data between your applications or consume a change data capture feed from regular tables. Exactly-once and at-least-once semantics are available to choose from. YDB is designed to work in three availability zones, ensuring availability even if a whole availability zone goes offline. It recovers automatically after a disk, server, or data center failure with minimal latency disruption for applications. Starting Price: Free
11
VMware Tanzu Greenplum
Broadcom
Free your apps. Simplify your ops. To win in business today, you have to be great at software. How do you improve feature velocity on the workloads that power your business? Or effectively run and manage modernized workloads on any cloud? VMware Tanzu—coupled with VMware Pivotal Labs—enables you to transform your teams and your applications, while simplifying operations across multi-cloud infrastructure: on-premises, public cloud, and edge. -
12
ksqlDB
Confluent
Now that your data is in motion, it’s time to make sense of it. Stream processing enables you to derive instant insights from your data streams, but setting up the infrastructure to support it can be complex. That’s why Confluent developed ksqlDB, the database purpose-built for stream processing applications. Make your data immediately actionable by continuously processing streams of data generated throughout your business. ksqlDB’s intuitive syntax lets you quickly access and augment data in Kafka, enabling development teams to seamlessly create real-time innovative customer experiences and fulfill data-driven operational needs. ksqlDB offers a single solution for collecting streams of data, enriching them, and serving queries on new derived streams and tables. That means less infrastructure to deploy, maintain, scale, and secure. With fewer moving parts in your data architecture, you can focus on what really matters: innovation.
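A minimal sketch of that workflow against ksqlDB's REST endpoint might look like this; the server URL, Kafka topic, and field names are hypothetical:

```python
# Minimal sketch: define a ksqlDB stream over a Kafka topic and derive a
# continuously updated table from it, via the server's /ksql REST endpoint.
# The server URL, topic, and fields are hypothetical.
import requests

KSQL_URL = "http://localhost:8088/ksql"

statements = """
    CREATE STREAM pageviews (user_id VARCHAR, page VARCHAR, ts BIGINT)
        WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');
    CREATE TABLE pageviews_per_user AS
        SELECT user_id, COUNT(*) AS views
        FROM pageviews
        GROUP BY user_id
        EMIT CHANGES;
"""

resp = requests.post(
    KSQL_URL,
    headers={"Accept": "application/vnd.ksql.v1+json"},
    json={"ksql": statements, "streamsProperties": {}},
)
print(resp.status_code, resp.json())
```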
13
Apache Druid
Druid
Apache Druid is an open source distributed data store. Druid’s core design combines ideas from data warehouses, timeseries databases, and search systems to create a high performance real-time analytics database for a broad range of use cases. Druid merges key characteristics of each of the 3 systems into its ingestion layer, storage format, querying layer, and core architecture. Druid stores and compresses each column individually, and only needs to read the ones needed for a particular query, which supports fast scans, rankings, and groupBys. Druid creates inverted indexes for string values for fast search and filter. Out-of-the-box connectors for Apache Kafka, HDFS, AWS S3, stream processors, and more. Druid intelligently partitions data based on time and time-based queries are significantly faster than traditional databases. Scale up or down by just adding or removing servers, and Druid automatically rebalances. Fault-tolerant architecture routes around server failures. -
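As an illustration, a minimal sketch of querying Druid through its SQL API might look like this; the broker address and the wikipedia datasource are hypothetical:

```python
# Minimal sketch: query Druid through its SQL API by POSTing to the
# broker's /druid/v2/sql endpoint. Host, port, and the wikipedia
# datasource are hypothetical.
import requests

DRUID_SQL_URL = "http://localhost:8082/druid/v2/sql"

sql = """
    SELECT channel, COUNT(*) AS edits
    FROM wikipedia
    WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
    GROUP BY channel
    ORDER BY edits DESC
    LIMIT 5
"""

resp = requests.post(DRUID_SQL_URL, json={"query": sql})
for row in resp.json():
    print(row)
```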
14
Apache Kudu
The Apache Software Foundation
A Kudu cluster stores tables that look just like tables you're used to from relational (SQL) databases. A table can be as simple as a binary key and value, or as complex as a few hundred different strongly-typed attributes. Just like SQL, every table has a primary key made up of one or more columns. This might be a single column like a unique user identifier, or a compound key such as a (host, metric, timestamp) tuple for a machine time-series database. Rows can be efficiently read, updated, or deleted by their primary key. Kudu's simple data model makes it a breeze to port legacy applications or build new ones; there is no need to worry about how to encode your data into binary blobs or make sense of a huge database full of hard-to-interpret JSON. Tables are self-describing, so you can use standard tools like SQL engines or Spark to analyze your data. Kudu's APIs are designed to be easy to use.
15
Apache Kylin
Apache Software Foundation
Apache Kylin™ is an open source, distributed Analytical Data Warehouse for Big Data; it was designed to provide OLAP (Online Analytical Processing) capability in the big data era. By renovating the multi-dimensional cube and precalculation technology on Hadoop and Spark, Kylin is able to achieve near constant query speed regardless of the ever-growing data volume. Reducing query latency from minutes to sub-second, Kylin brings online analytics back to big data. Kylin can analyze 10+ billion rows in less than a second. No more waiting on reports for critical decisions. Kylin connects data on Hadoop to BI tools like Tableau, PowerBI/Excel, MSTR, QlikSense, Hue and SuperSet, making BI on Hadoop faster than ever. As an Analytical Data Warehouse, Kylin offers ANSI SQL on Hadoop/Spark and supports most ANSI SQL query functions. Kylin can support thousands of interactive queries at the same time, thanks to the low resource consumption of each query.
16
DuckDB
DuckDB
Processing and storing tabular datasets, e.g. from CSV or Parquet files. Large result set transfer to client. Large client/server installations for centralized enterprise data warehousing. Writing to a single database from multiple concurrent processes. DuckDB is a relational database management system (RDBMS). That means it is a system for managing data stored in relations. A relation is essentially a mathematical term for a table. Each table is a named collection of rows. Each row of a given table has the same set of named columns, and each column is of a specific data type. Tables themselves are stored inside schemas, and a collection of schemas constitutes the entire database that you can access. -
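A minimal sketch of the embedded workflow with the duckdb Python package, querying a Parquet file in place, might look like this; the file and table names are hypothetical:

```python
# Minimal sketch: embed DuckDB in a Python process and query a Parquet
# file directly with SQL. The events.parquet file is hypothetical.
import duckdb

con = duckdb.connect("analytics.duckdb")  # or duckdb.connect() for in-memory

con.execute("""
    CREATE OR REPLACE TABLE daily_counts AS
    SELECT CAST(event_time AS DATE) AS day, COUNT(*) AS events
    FROM 'events.parquet'          -- DuckDB can scan Parquet/CSV in place
    GROUP BY day
""")

print(con.sql("SELECT * FROM daily_counts ORDER BY day").fetchall())
```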
17
CrateDB
CrateDB
The enterprise database for time series, documents, and vectors. Store any type of data and combine the simplicity of SQL with the scalability of NoSQL. CrateDB is an open source distributed database running queries in milliseconds, whatever the complexity, volume and velocity of data. -
18
Databend
Databend
Databend is a modern, cloud-native data warehouse built to deliver high-performance, cost-efficient analytics for large-scale data processing. It is designed with an elastic architecture that scales dynamically to meet the demands of different workloads, ensuring efficient resource utilization and lower operational costs. Written in Rust, Databend offers exceptional performance through features like vectorized query execution and columnar storage, which optimize data retrieval and processing speeds. Its cloud-first design enables seamless integration with cloud platforms, and it emphasizes reliability, data consistency, and fault tolerance. Databend is an open source solution, making it a flexible and accessible choice for data teams looking to handle big data analytics in the cloud. Starting Price: Free
19
Citus
Citus Data
Citus gives you the Postgres you love, plus the superpower of distributed tables. 100% open source. Now with schema-based and row-based sharding, plus Postgres 16 support. Scale Postgres by distributing data & queries. You can start with a single Citus node, then add nodes & rebalance shards when you need to grow. Speed up queries by 20x to 300x (or more) through parallelism, keeping more data in memory, higher I/O bandwidth, and columnar compression. Citus is an extension (not a fork) to the latest Postgres versions, so you can use your familiar SQL toolset & leverage your Postgres expertise. Reduce your infrastructure headaches by using a single database for both your transactional and analytical workloads. Download and use Citus open source for free. You can manage Citus yourself, embrace open source, and help us improve Citus via GitHub. Focus on your application & forget about your database. Run your app on Citus in the cloud with Azure Cosmos DB for PostgreSQL. Starting Price: $0.27 per hour
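A minimal sketch of distributing a table with Citus's create_distributed_table() function via psycopg2 might look like this; the connection string and the events table are hypothetical:

```python
# Minimal sketch: distribute a Postgres table across Citus worker nodes
# with create_distributed_table(). The connection string and events
# table are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=app user=postgres host=coordinator")
conn.autocommit = True
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        device_id  BIGINT,
        event_time TIMESTAMPTZ,
        payload    JSONB
    )
""")
# Shard the table on device_id; errors if it is already distributed.
cur.execute("SELECT create_distributed_table('events', 'device_id')")

# Queries stay plain SQL; Citus fans them out across shards.
cur.execute("SELECT device_id, COUNT(*) FROM events GROUP BY device_id LIMIT 5")
print(cur.fetchall())
```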
20
Greenplum
Greenplum Database
Greenplum Database® is an advanced, fully featured, open source data warehouse. It provides powerful and rapid analytics on petabyte scale data volumes. Uniquely geared toward big data analytics, Greenplum Database is powered by the world’s most advanced cost-based query optimizer delivering high analytical query performance on large data volumes. The Greenplum Database® project is released under the Apache 2 license. We want to thank all our current community contributors and are interested in all new potential contributions. For the Greenplum Database community, no contribution is too small; we encourage all types of contributions. An open-source massively parallel data platform for analytics, machine learning and AI. Rapidly create and deploy models for complex applications in cybersecurity, predictive maintenance, risk management, fraud detection, and many other areas. Experience the fully featured, integrated, open source analytics platform.
21
Presto
Presto Foundation
Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes ranging from gigabytes to petabytes. For data engineers who struggle with managing multiple query languages and interfaces to siloed databases and storage, Presto is the fast and reliable engine that provides one simple ANSI SQL interface for all your data analytics and your open lakehouse. Different engines for different workloads means you will have to re-platform down the road. With Presto, you get one familiar ANSI SQL language and one engine for your data analytics so you don't need to graduate to another lakehouse engine. Presto can be used for interactive and batch workloads, small and large amounts of data, and scales from a few to thousands of users. Presto gives you one simple ANSI SQL interface for all of your data in various siloed data systems, helping you join your data ecosystem together.
22
Trino
Trino
Trino is a query engine that runs at ludicrous speed, a fast, distributed SQL query engine for big data analytics that helps you explore your data universe. Trino is a highly parallel and distributed query engine that is built from the ground up for efficient, low-latency analytics. The largest organizations in the world use Trino to query exabyte-scale data lakes and massive data warehouses alike. It supports diverse use cases: ad-hoc analytics at interactive speeds, massive multi-hour batch queries, and high-volume apps that perform sub-second queries. Trino is an ANSI SQL-compliant query engine that works with BI tools such as R, Tableau, Power BI, Superset, and many others. You can natively query data in Hadoop, S3, Cassandra, MySQL, and many others, without the need for complex, slow, and error-prone processes for copying the data. Access data from multiple systems within a single query. Starting Price: Free
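As an illustration of federated querying, a hedged sketch with the trino Python client joining tables from two catalogs might look like this; the coordinator host, catalogs, and table names are hypothetical:

```python
# Minimal sketch: federated query through Trino's Python client, joining
# a Hive table with a MySQL table in one statement. Host, catalogs, and
# table names are hypothetical.
import trino

conn = trino.dbapi.connect(
    host="trino-coordinator", port=8080, user="analyst",
    catalog="hive", schema="default",
)
cur = conn.cursor()

cur.execute("""
    SELECT c.country, COUNT(*) AS orders
    FROM hive.sales.orders o
    JOIN mysql.crm.customers c ON o.customer_id = c.id
    GROUP BY c.country
    ORDER BY orders DESC
    LIMIT 10
""")
print(cur.fetchall())
```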
23
StarRocks
StarRocks
Whether you're working with a single table or multiple, you'll experience at least 300% better performance on StarRocks compared to other popular solutions. From streaming data to data capture, with a rich set of connectors, you can ingest data into StarRocks in real time for the freshest insights. A query engine that adapts to your use cases. Without moving your data or rewriting SQL, StarRocks provides the flexibility to scale your analytics on demand with ease. StarRocks enables a rapid journey from data to insight. StarRocks' performance is unmatched and provides a unified OLAP solution covering the most popular data analytics scenarios. StarRocks' built-in memory-and-disk-based caching framework is specifically designed to minimize the I/O overhead of fetching data from external storage to accelerate query performance. Starting Price: Free
24
SelectDB
SelectDB
SelectDB is a modern data warehouse based on Apache Doris, which supports rapid query analysis on large-scale real-time data. It targets migrations from ClickHouse to Apache Doris and upgrades from a separated data lake and warehouse to an integrated lakehouse. One large OLAP deployment carries nearly 1 billion query requests every day to provide data services for multiple scenarios; because the original lake-and-warehouse-separation architecture suffered from storage redundancy, resource contention, complicated governance, and difficult query tuning, it was rebuilt on an Apache Doris lakehouse, combining Doris's materialized-view rewriting and automated services to achieve high-performance data queries and flexible data governance. Write real-time data in seconds, and synchronize streaming data from databases and data streams. The storage engine supports real-time updates, real-time appends, and real-time pre-aggregation. Starting Price: $0.22 per hour
25
Snowflake
Snowflake
Your cloud data platform. Secure and easy access to any data with infinite scalability. Get all the insights from all your data by all your users, with the instant and near-infinite performance, concurrency and scale your organization requires. Seamlessly share and consume shared data to collaborate across your organization, and beyond, to solve your toughest business problems in real time. Boost the productivity of your data professionals and shorten your time to value in order to deliver modern and integrated data solutions swiftly from anywhere in your organization. Whether you’re moving data into Snowflake or extracting insight out of Snowflake, our technology partners and system integrators will help you deploy Snowflake for your success. Starting Price: $40.00 per month
26
Apache Pinot
Apache Software Foundation
Pinot is designed to answer OLAP queries with low latency on immutable data. Pluggable indexing technologies: sorted index, bitmap index, inverted index. Joins are currently not supported, but this problem can be overcome by using Trino or PrestoDB for querying. A SQL-like language supports selection, aggregation, filtering, group by, order by, and distinct queries on data. Consists of both offline and real-time tables. Use the real-time table only to cover segments for which offline data may not be available yet. Detect the right anomalies by customizing the anomaly detection flow and notification flow.
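A minimal sketch of issuing such a SQL-like query against a Pinot broker with the pinotdb DB-API client might look like this; the broker address and the pageviews table are hypothetical:

```python
# Minimal sketch: SQL query against a Pinot broker using the pinotdb
# DB-API client. Broker address and the pageviews table are hypothetical.
from pinotdb import connect

conn = connect(host="localhost", port=8099, path="/query/sql", scheme="http")
cur = conn.cursor()

cur.execute("""
    SELECT country, COUNT(*) AS views
    FROM pageviews
    WHERE ts > ago('PT1H')
    GROUP BY country
    ORDER BY views DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```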
27
QuasarDB
QuasarDB
Quasar's brain is QuasarDB, a high-performance, distributed, column-oriented timeseries database management system designed from the ground up to deliver real-time performance on petascale use cases. Up to 20X less disk usage. QuasarDB ingestion and compression capabilities are unmatched. Up to 10,000X faster feature extraction. QuasarDB can extract features in real-time from the raw data, thanks to the combination of a built-in map/reduce query engine, an aggregation engine that leverages SIMD from modern CPUs, and stochastic indexes that use virtually no disk space. The most cost-effective timeseries solution, thanks to its ultra-efficient resource usage, the capability to leverage object storage (S3), unique compression technology, and fair pricing model. Quasar runs everywhere, from 32-bit ARM devices to high-end Intel servers, from edge computing to the cloud or on-premises.
28
Google Cloud Bigtable
Google
Google Cloud Bigtable is a fully managed, scalable NoSQL database service for large analytical and operational workloads. Fast and performant: Use Cloud Bigtable as the storage engine that grows with you from your first gigabyte to petabyte-scale for low-latency applications as well as high-throughput data processing and analytics. Seamless scaling and replication: Start with a single node per cluster, and seamlessly scale to hundreds of nodes dynamically supporting peak demand. Replication also adds high availability and workload isolation for live serving apps. Simple and integrated: Fully managed service that integrates easily with big data tools like Hadoop, Dataflow, and Dataproc. Plus, support for the open source HBase API standard makes it easy for development teams to get started. -
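A minimal sketch of writing and reading a row with the Cloud Bigtable Python client might look like this; the project, instance, table, and column family are hypothetical and assumed to already exist:

```python
# Minimal sketch: write and read a single row with the Cloud Bigtable
# Python client. Project, instance, table, and the "stats" column
# family are hypothetical and assumed to already exist.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("device-metrics")

row_key = b"device#042#2024-01-01T00:00"
row = table.direct_row(row_key)
row.set_cell("stats", "temperature", b"21.5")
row.commit()

read = table.read_row(row_key)
cell = read.cells["stats"][b"temperature"][0]
print(cell.value)
```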
29
Hydra
Hydra
Hydra is an open source, column-oriented Postgres. Query billions of rows instantly, no code changes. Hydra parallelizes and vectorizes aggregates (COUNT, SUM, AVG) to deliver the speed you’ve always wanted on Postgres. Boost performance at every size! Set up Hydra in 5 minutes without changing your syntax, tools, data model, or extensions. Use Hydra Cloud for fully managed operations and smooth sailing. Different industries have different needs. Get better analytics with powerful Postgres extensions, custom functions, and take control. Built by you, for you. Hydra is the fastest Postgres in the market for analytics. Boost performance with columnar storage, vectorization, and query parallelization. -
30
Vertica
OpenText
The Unified Analytics Warehouse. Highest performing analytics and machine learning at extreme scale. As the criteria for data warehousing continues to evolve, tech research analysts are seeing new leaders in the drive for game-changing big data analytics. Vertica powers data-driven enterprises so they can get the most out of their analytics initiatives with advanced time-series and geospatial analytics, in-database machine learning, data lake integration, user-defined extensions, cloud-optimized architecture, and more. Our Under the Hood webcast series lets you dive deep into Vertica features, delivered by Vertica engineers and technical experts, to find out what makes it the fastest and most scalable advanced analytical database on the market. From ride sharing apps and smart agriculture to predictive maintenance and customer analytics, Vertica supports the world’s leading data-driven disruptors in their pursuit of industry and business transformation.
31
SingleStore
SingleStore
SingleStore (formerly MemSQL) is a distributed, highly-scalable SQL database that can run anywhere. We deliver maximum performance for transactional and analytical workloads with familiar relational models. SingleStore is a scalable SQL database that ingests data continuously to perform operational analytics for the front lines of your business. Ingest millions of events per second with ACID transactions while simultaneously analyzing billions of rows of data in relational SQL, JSON, geospatial, and full-text search formats. SingleStore delivers ultimate data ingestion performance at scale and supports built in batch loading and real time data pipelines. SingleStore lets you achieve ultra fast query response across both live and historical data using familiar ANSI SQL. Perform ad hoc analysis with business intelligence tools, run machine learning algorithms for real-time scoring, perform geoanalytic queries in real time. Starting Price: $0.69 per hour
32
VeloDB
VeloDB
Powered by Apache Doris, VeloDB is a modern data warehouse for lightning-fast analytics on real-time data at scale. Push-based micro-batch and pull-based streaming data ingestion within seconds. Storage engine with real-time upsert, append, and pre-aggregation. Unparalleled performance in both real-time data serving and interactive ad-hoc queries. Not just structured but also semi-structured data. Not just real-time analytics but also batch processing. Not just running queries against internal data but also working as a federated query engine to access external data lakes and databases. Distributed design to support linear scalability. Whether on-premises deployment or cloud service, separation or integration of storage and compute, resource usage can be flexibly and efficiently adjusted according to workload requirements. Built on and fully compatible with open source Apache Doris. Supports the MySQL protocol, functions, and SQL for easy integration with other data tools.
33
Exasol
Exasol
With an in-memory, columnar database and MPP architecture, you can query billions of rows in seconds. Queries are distributed across all nodes in a cluster, providing linear scalability for more users and advanced analytics. MPP, in-memory, and columnar storage add up to the fastest database built for data analytics. With SaaS, cloud, on-premises and hybrid deployment options you can analyze data wherever it lives. Automatic query tuning reduces maintenance and overhead. Seamless integrations and performance efficiency get you more power at a fraction of normal infrastructure costs. Smart, in-memory query processing allowed one social networking company to boost performance, processing 10B data sets a year. A single data repository and speed engine to accelerate critical analytics, delivering improved patient outcomes and bottom line.
34
Apache Doris
The Apache Software Foundation
Apache Doris is a modern data warehouse for real-time analytics. It delivers lightning-fast analytics on real-time data at scale. Push-based micro-batch and pull-based streaming data ingestion within a second. Storage engine with real-time upsert, append and pre-aggregation. Optimized for high-concurrency and high-throughput queries with a columnar storage engine, MPP architecture, cost-based query optimizer, and vectorized execution engine. Federated querying of data lakes such as Hive, Iceberg and Hudi, and databases such as MySQL and PostgreSQL. Compound data types such as Array, Map and JSON. Variant data type to support auto data type inference of JSON data. NGram bloomfilter and inverted index for text searches. Distributed design for linear scalability. Workload isolation and tiered storage for efficient resource management. Supports shared-nothing clusters as well as separation of storage and compute. Starting Price: Free
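Because Doris is reachable over the MySQL protocol, a minimal sketch of connecting to a frontend and running SQL with pymysql might look like this; the host, credentials, and demo table are hypothetical:

```python
# Minimal sketch: connect to an Apache Doris frontend over the MySQL
# protocol (query port 9030 by default) and run plain SQL. Host,
# credentials, and the demo table are hypothetical.
import pymysql

conn = pymysql.connect(host="doris-fe", port=9030, user="root", password="")
cur = conn.cursor()

cur.execute("SHOW DATABASES")
print(cur.fetchall())

cur.execute("""
    SELECT user_id, COUNT(*) AS sessions
    FROM demo.user_events
    GROUP BY user_id
    ORDER BY sessions DESC
    LIMIT 10
""")
print(cur.fetchall())
```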
35
Invest your time in your project, and we’ll take care of database maintenance: software backups, monitoring, fault tolerance, and updates. ClickHouse is great at handling queries to large amounts of data in real time, while column-based storage saves space due to strong data compression. All DBMS connections are encrypted using the TLS protocol. Data is secured in accordance with local regulatory requirements, GDPR, and ISO industry standards. Visualize the data structure in your ClickHouse cluster and send SQL queries to databases from the management console. The service also provides data replication between database hosts (both inside and between availability zones) and automatically switches the load over to a backup replica in the event of a failure. Starting Price: $42.51 per month
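As an illustration of a TLS-encrypted connection from an application, a hedged sketch with the clickhouse-connect Python client might look like this; the host and credentials are placeholders for values issued by the service:

```python
# Minimal sketch: connect to a managed ClickHouse host over TLS with the
# clickhouse-connect client and run a query. Host and credentials are
# placeholders for values issued by the service.
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="my-cluster.example.com",
    port=8443,            # HTTPS interface
    username="admin",
    password="***",
    secure=True,          # enforce TLS
)

result = client.query("SELECT database, name FROM system.tables LIMIT 5")
for row in result.result_rows:
    print(row)
```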
36
Tabular
Tabular
Tabular is an open table store from the creators of Apache Iceberg. Connect multiple computing engines and frameworks. Decrease query time and storage costs by up to 50%. Centralize enforcement of data access (RBAC) policies. Connect any query engine or framework, including Athena, BigQuery, Redshift, Snowflake, Databricks, Trino, Spark, and Python. Smart compaction, clustering, and other automated data services reduce storage costs and query times by up to 50%. Unify data access at the database or table level. RBAC controls are simple to manage, consistently enforced, and easy to audit. Centralize your security down to the table. Tabular is easy to use, and it features high-powered ingestion, performance, and RBAC under the hood. Tabular gives you the flexibility to work with multiple “best of breed” compute engines based on their strengths. Assign privileges at the data warehouse, database, table, or column level. Starting Price: $100 per month
37
IBM Db2 Big SQL
IBM
A hybrid SQL-on-Hadoop engine delivering advanced, security-rich data query across enterprise big data sources, including Hadoop, object storage and data warehouses. IBM Db2 Big SQL is an enterprise-grade, hybrid ANSI-compliant SQL-on-Hadoop engine, delivering massively parallel processing (MPP) and advanced data query. Db2 Big SQL offers a single database connection or query for disparate sources such as Hadoop HDFS and WebHDFS, RDBMS, NoSQL databases, and object stores. Benefit from low latency, high performance, data security, SQL compatibility, and federation capabilities to do ad hoc and complex queries. Db2 Big SQL is now available in two variations. It can be integrated with Cloudera Data Platform, or accessed as a cloud-native service on the IBM Cloud Pak® for Data platform. Access and analyze data and perform queries on batch and real-time data across sources, like Hadoop, object stores and data warehouses.
38
Amazon Timestream
Amazon
Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day up to 1,000 times faster and at as little as 1/10th the cost of relational databases. Amazon Timestream saves you time and cost in managing the lifecycle of time series data by keeping recent data in memory and moving historical data to a cost optimized storage tier based upon user defined policies. Amazon Timestream’s purpose-built query engine lets you access and analyze recent and historical data together, without needing to specify explicitly in the query whether the data resides in the in-memory or cost-optimized tier. Amazon Timestream has built-in time series analytics functions, helping you identify trends and patterns in your data in near real-time. -
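A minimal sketch of querying recent measurements with boto3's timestream-query client might look like this; the database, table, and measure names are hypothetical:

```python
# Minimal sketch: query recent time series data from Amazon Timestream
# with boto3. Database, table, and measure names are hypothetical.
import boto3

tsq = boto3.client("timestream-query")

query = """
    SELECT device_id, BIN(time, 1h) AS hour, AVG(measure_value::double) AS avg_temp
    FROM "iot_db"."sensor_readings"
    WHERE time > ago(1d) AND measure_name = 'temperature'
    GROUP BY device_id, BIN(time, 1h)
    ORDER BY hour
"""

resp = tsq.query(QueryString=query)
for row in resp["Rows"]:
    print([col.get("ScalarValue") for col in row["Data"]])
```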
39
SSuite MonoBase Database
SSuite Office Software
Create relational or flat file databases with unlimited tables, fields, and rows. Includes a custom report builder. Interface with ODBC-compatible databases and create custom reports for them. Create your own personal and custom databases. Some highlights:
- Filter tables instantly
- Ultra simple graphical user interface
- One-click table and data form creation
- Open up to 5 databases simultaneously
- Export your data to comma-separated files
- Create custom reports for all your databases
- Full help file to assist in creating database reports
- Print tables and queries directly from the data grid
- Supports any SQL standard that your ODBC-compatible database requires
Please install and run this database application with full administrator rights for best performance and user experience. Requires a 1024x768 display and Windows 98 / XP / 7 / 8 / 10 (32-bit and 64-bit). No Java or DotNet required. Green Energy Software, saving the planet one bit at a time. Starting Price: Free
40
Imply
Imply
Imply is a real-time analytics platform built on Apache Druid, designed to handle large-scale, high-performance OLAP (Online Analytical Processing) workloads. It offers real-time data ingestion, fast query performance, and the ability to perform complex analytical queries on massive datasets with low latency. Imply is tailored for organizations that need interactive analytics, real-time dashboards, and data-driven decision-making at scale. It provides a user-friendly interface for data exploration, along with advanced features such as multi-tenancy, fine-grained access controls, and operational insights. With its distributed architecture and scalability, Imply is well-suited for use cases in streaming data analytics, business intelligence, and real-time monitoring across industries. -
41
AlloyDB
Google
A fully managed PostgreSQL-compatible database service for your most demanding enterprise workloads. AlloyDB combines the best of Google with PostgreSQL, for superior performance, scale, and availability. Fully compatible with PostgreSQL, providing flexibility and true portability for your workloads. Superior performance, 4x faster than standard PostgreSQL for transactional workloads. Fast, real-time insights, up to 100x faster analytical queries than standard PostgreSQL. AlloyDB AI can help you build a wide range of generative AI applications. AlloyDB Omni is a downloadable edition of AlloyDB designed to run anywhere. Scale up and achieve predictable performance and a high availability SLA of 99.99%, inclusive of maintenance, for your most demanding enterprise workloads. Automated and machine learning-enabled autopilot systems simplify management by handling database patching, backups, scaling, and replication for you. -
42
Databricks Data Intelligence Platform
Databricks
The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data. The winners in every industry will be data and AI companies. From ETL to data warehousing to generative AI, Databricks helps you simplify and accelerate your data and AI goals. Databricks combines generative AI with the unification benefits of a lakehouse to power a Data Intelligence Engine that understands the unique semantics of your data. This allows the Databricks Platform to automatically optimize performance and manage infrastructure in ways unique to your business. The Data Intelligence Engine understands your organization’s language, so search and discovery of new data is as easy as asking a question like you would to a coworker. -
43
InfiniDB
Database of Databases
InfiniDB is a column-store DBMS optimized for OLAP workloads. It has a distributed architecture to support Massively Parallel Processing (MPP). It uses MySQL as its front-end, so users familiar with MySQL can quickly migrate to InfiniDB; users can connect to InfiniDB using any MySQL connector. InfiniDB applies MVCC to do concurrency control. It uses the term System Change Number (SCN) to indicate a version of the system. In its Block Resolution Manager (BRM), it utilizes three structures, the version buffer, the version substitution structure, and the version buffer block manager, to manage multiple versions. InfiniDB applies deadlock detection to resolve conflicts. InfiniDB uses MySQL as its front-end and supports all MySQL syntax, including foreign keys. InfiniDB is a columnar DBMS. For each column, InfiniDB applies range partitioning and stores the minimum and maximum value of each partition in a small structure called the extent map.
44
kdb+
Kx Systems
A high-performance cross-platform historical time-series columnar database featuring:
- An in-memory compute engine
- A real-time streaming processor
- An expressive query and programming language called q
45
Querona
YouNeedIT
We make BI & Big Data analytics work easier and faster. Our goal is to empower business users and make always-busy business users and heavily loaded BI specialists less dependent on each other when solving data-driven business problems. If you have ever experienced a lack of the data you needed, time-consuming report generation, or a long queue to your BI expert, consider Querona. Querona uses a built-in Big Data engine to handle growing data volumes. Repeatable queries can be cached or calculated in advance. Optimization needs less effort as Querona automatically suggests query improvements. Querona empowers business analysts and data scientists by putting self-service in their hands. They can easily discover and prototype data models, add new data sources, experiment with query optimization and dig into raw data. Less IT involvement is needed. Now users can get live data no matter where it is stored. If databases are too busy to be queried live, Querona will cache the data.
46
Apache Cassandra
Apache Software Foundation
The Apache Cassandra database is the right choice when you need scalability and high availability without compromising performance. Linear scalability and proven fault-tolerance on commodity hardware or cloud infrastructure make it the perfect platform for mission-critical data. Cassandra's support for replicating across multiple datacenters is best-in-class, providing lower latency for your users and the peace of mind of knowing that you can survive regional outages. -
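A minimal sketch with the DataStax Python driver, creating a small table and reading a partition back, might look like this; the contact point, keyspace, and table are hypothetical:

```python
# Minimal sketch: connect to a Cassandra cluster with the DataStax Python
# driver and read/write a partition. Contact points, keyspace, and table
# are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS metrics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS metrics.readings (
        sensor_id text, ts timestamp, value double,
        PRIMARY KEY (sensor_id, ts)
    )
""")

session.execute(
    "INSERT INTO metrics.readings (sensor_id, ts, value) VALUES (%s, toTimestamp(now()), %s)",
    ("sensor-1", 21.5),
)
rows = session.execute("SELECT * FROM metrics.readings WHERE sensor_id = %s", ("sensor-1",))
for row in rows:
    print(row.sensor_id, row.ts, row.value)
```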
47
MariaDB
MariaDB
MariaDB Platform is a complete enterprise open source database solution. It has the versatility to support transactional, analytical and hybrid workloads as well as relational, JSON and hybrid data models. And it has the scalability to grow from standalone databases and data warehouses to fully distributed SQL for executing millions of transactions per second and performing interactive, ad hoc analytics on billions of rows. MariaDB can be deployed on prem on commodity hardware, is available on all major public clouds and through MariaDB SkySQL as a fully managed cloud database. To learn more, visit mariadb.com. -
48
Baidu Palo
Baidu AI Cloud
Palo helps enterprises create a PB-level MPP-architecture data warehouse service within minutes and import massive data from RDS, BOS, and BMR. Thus, Palo can perform multi-dimensional analytics on big data. Palo is compatible with mainstream BI tools. Data analysts can analyze and display the data visually and gain insights quickly to assist decision-making. It has an industry-leading MPP query engine, with column storage, intelligent indexing, and vectorized execution. It can also provide in-database analytics, window functions, and other advanced analytics functions. You can create a materialized view and change the table structure without suspension of service. It supports flexible and efficient data recovery.
49
PuppyGraph
PuppyGraph
PuppyGraph empowers you to seamlessly query one or multiple data stores as a unified graph model. Graph databases are expensive, take months to set up, and need a dedicated team. Traditional graph databases can take hours to run multi-hop queries and struggle beyond 100GB of data. A separate graph database complicates your architecture with brittle ETLs and inflates your total cost of ownership (TCO). Connect to any data source anywhere. Cross-cloud and cross-region graph analytics. No complex ETLs or data replication is required. PuppyGraph enables you to query your data as a graph by directly connecting to your data warehouses and lakes. This eliminates the need to build and maintain time-consuming ETL pipelines needed with a traditional graph database setup. No more waiting for data and failed ETL processes. PuppyGraph eradicates graph scalability issues by separating computation and storage. Starting Price: Free
50
Amazon Athena
Amazon
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Athena is easy to use. Simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there’s no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets. Athena is out-of-the-box integrated with AWS Glue Data Catalog, allowing you to create a unified metadata repository across various services, crawl data sources to discover schemas and populate your Catalog with new and modified table and partition definitions, and maintain schema versioning.
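A minimal sketch of running an Athena query over data in S3 with boto3 might look like this; the database, table, and result output location are placeholders:

```python
# Minimal sketch: run an Athena query over data in S3 with boto3 and
# page through the results. Database, table, and the output S3 location
# are placeholders.
import time
import boto3

athena = boto3.client("athena")

start = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
qid = start["QueryExecutionId"]

# Poll until the query finishes, then read the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=qid)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```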