Best Key-Value Databases - Page 2

Compare the Top Key-Value Databases as of August 2025 - Page 2

  • 1
    Google Cloud Bigtable
    Google Cloud Bigtable is a fully managed, scalable NoSQL database service for large analytical and operational workloads. Fast and performant: Use Cloud Bigtable as the storage engine that grows with you from your first gigabyte to petabyte-scale for low-latency applications as well as high-throughput data processing and analytics. Seamless scaling and replication: Start with a single node per cluster, and seamlessly scale to hundreds of nodes to support peak demand. Replication also adds high availability and workload isolation for live serving apps. Simple and integrated: Fully managed service that integrates easily with big data tools like Hadoop, Dataflow, and Dataproc. Plus, support for the open source HBase API standard makes it easy for development teams to get started.
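
    As a quick illustration of the data model (rows keyed by arbitrary byte strings, with cells grouped into column families), here is a minimal sketch using the google-cloud-bigtable Python client; the project, instance, table, and column-family names are placeholders:

    ```python
    from google.cloud import bigtable

    # Placeholder project/instance/table names.
    client = bigtable.Client(project="my-project")
    table = client.instance("my-instance").table("metrics")

    # Write one cell: row key -> column family "stats", qualifier "temp".
    row = table.direct_row(b"device#1234")
    row.set_cell("stats", b"temp", b"21.5")
    row.commit()

    # Read the row back and print the cell value.
    print(table.read_row(b"device#1234").cells["stats"][b"temp"][0].value)
    ```
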
  • 2
    Symas LMDB

    Symas Corporation

    Symas LMDB is an extraordinarily fast, memory-efficient database we developed for the OpenLDAP Project. With memory-mapped files, it has the read performance of a pure in-memory database while retaining the persistence of standard disk-based databases. Bottom line: with only 32KB of object code, LMDB may seem tiny, but it's the right 32KB. Compact and efficient are two sides of the same coin; that's part of what makes LMDB so powerful. Symas offers fixed-price commercial support to those using LMDB in their applications. Development occurs in the OpenLDAP Project's git repo in the mdb.master branch. Symas LMDB has been written about, talked about, and used in a variety of impressive products and publications.
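
    A minimal sketch of the transactional, memory-mapped API via the third-party py-lmdb binding; the path and map size below are placeholders:

    ```python
    import lmdb

    # Open (or create) a memory-mapped environment; map_size caps the data file.
    env = lmdb.open("/tmp/example-lmdb", map_size=1 << 30)

    # All reads and writes happen inside transactions.
    with env.begin(write=True) as txn:
        txn.put(b"uid:1000", b"alice")

    with env.begin() as txn:
        print(txn.get(b"uid:1000"))  # b'alice'
    ```
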
  • 3
    TerarkDB

    Terark

    TerarkDB is a core product of Terark. It is a RocksDB distribution powered by Terark's algorithms; with these algorithms, TerarkDB can store more data (3x+) and access it much faster (10x+) than official RocksDB on the same hardware. TerarkDB is completely (binary) compatible with official RocksDB. We forked RocksDB and made a few changes to fit our algorithms, adding it as the rocksdb submodule. Our changes do not alter any RocksDB API and introduce no extra dependencies: without TerarkZipTable, Terark's modified RocksDB works exactly the same as official RocksDB.
  • 4
    Google Cloud Memorystore
    Reduce latency with a scalable, secure, and highly available in-memory service for Redis and Memcached. Memorystore automates complex tasks for open source Redis and Memcached, such as enabling high availability, failover, patching, and monitoring, so you can spend more time coding. Start with the lowest tier and smallest size, then grow your instance with minimal impact. Memorystore for Memcached can support clusters as large as 5 TB, serving millions of QPS at very low latency. Memorystore for Redis instances are replicated across two zones and provide a 99.9% availability SLA. Instances are monitored constantly, and with automatic failover, applications experience minimal disruption. Choose from the two most popular open source caching engines to build your applications: Memorystore supports both Redis and Memcached and is fully protocol compatible, so you can choose the engine that fits your cost and availability requirements.
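
    Because Memorystore is protocol-compatible with open source Redis, a stock client such as redis-py should connect unchanged; the private IP below is a placeholder for an instance's endpoint:

    ```python
    import redis

    # Placeholder: a Memorystore for Redis instance's private IP.
    r = redis.Redis(host="10.0.0.3", port=6379)

    r.set("session:42", "user=alice", ex=3600)  # cache entry expiring in an hour
    print(r.get("session:42"))                  # b'user=alice'
    ```
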
  • 5
    ApsaraDB

    Alibaba

    ApsaraDB for Redis is an automated and scalable tool for developers to manage data storage shared across multiple processes, applications, or servers. As a Redis-protocol-compatible tool, ApsaraDB for Redis delivers high-speed reads and writes by serving data from in-memory caches, and ensures data persistence by using both memory and hard disk storage. It supports advanced use cases such as leaderboards, counting, sessions, and tracking, which are not readily achievable with ordinary databases. ApsaraDB for Redis also has an enhanced edition called "Tair". Tair has officially handled the data caching scenarios of Alibaba Group since 2009 and has proven its outstanding performance in scenarios such as the Double 11 Shopping Festival.
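
    Since ApsaraDB for Redis speaks the Redis protocol, the leaderboard use case above maps directly onto a Redis sorted set; this sketch uses redis-py, with a placeholder endpoint and password:

    ```python
    import redis

    # Placeholder connection details for an ApsaraDB for Redis instance.
    r = redis.Redis(host="r-example.redis.rds.aliyuncs.com", port=6379,
                    password="***")

    # Sorted set as a leaderboard: member = player, score = points.
    r.zadd("leaderboard", {"alice": 1200, "bob": 950, "carol": 1420})
    print(r.zrevrange("leaderboard", 0, 2, withscores=True))
    # [(b'carol', 1420.0), (b'alice', 1200.0), (b'bob', 950.0)]
    ```
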
  • 6
    Oracle Coherence
    Oracle Coherence is the industry-leading in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data. As data volumes and customer expectations increase, driven by the "internet of things", social, mobile, cloud, and always-connected devices, so does the need to handle more data in real time, offload over-burdened shared data services, and provide availability guarantees. The latest release of Oracle Coherence, 14.1.1, adds a patented scalable messaging implementation, support for polyglot grid-side programming on GraalVM, distributed tracing in the grid, and certification on JDK 11. Coherence stores each piece of data within multiple members (one primary and one or more backup copies), and does not consider any mutating operation complete until the backup(s) are successfully created. This ensures that your data grid can tolerate failure at any level, from a single JVM to a whole data center.
  • 7
    Ehcache

    Terracotta

    Ehcache is an open source, standards-based cache that boosts performance, offloads your database, and simplifies scalability. It's the most widely-used Java-based cache because it's robust, proven, full-featured, and integrates with other popular libraries and frameworks. Ehcache scales from in-process caching, all the way to mixed in-process/out-of-process deployments with terabyte-sized caches. Terracotta actively develops, maintains, and supports Ehcache as a professional open source project available under an Apache 2.0 license. Contributors are welcome to join our community.
  • 8
    LevelDB

    Google

    LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values. Keys and values are arbitrary byte arrays, and data is stored sorted by key; callers can provide a custom comparison function to override the sort order. Multiple changes can be made in one atomic batch, and users can create a transient snapshot to get a consistent view of the data. Forward and backward iteration is supported. Data is automatically compressed using the Snappy compression library. External activity (file system operations, etc.) is relayed through a virtual interface so users can customize the operating system interactions. The project's published benchmarks use a database of a million entries, each with a 16-byte key and a 100-byte value (values compress to about half their original size), and measure sequential reads in both the forward and reverse directions as well as random lookups.
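
    The operations listed above (puts, atomic batches, ordered iteration) look like this through the third-party plyvel Python binding; the path is a placeholder:

    ```python
    import plyvel

    db = plyvel.DB("/tmp/example-leveldb", create_if_missing=True)
    db.put(b"2024-01-01", b"v1")

    # Multiple changes applied in one atomic batch.
    with db.write_batch() as wb:
        wb.put(b"2024-01-02", b"v2")
        wb.put(b"2024-01-03", b"v3")

    # Data is stored sorted by key; iterate forward or in reverse.
    for key, value in db.iterator(reverse=True):
        print(key, value)

    db.close()
    ```
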
  • 9
    upscaledb

    upscaledb

    upscaledb is a fast key-value database which optimizes storage and algorithms for your specific data types. Optional compression further reduces file size and I/O, and can keep more data in memory to increase performance and scalability when running full-table scans to query and analyze the data. upscaledb can be used to build all the functions of a typical SQL database, tailored to the specific needs of your application and linked directly into your program. Its blazingly fast analytical functions and database cursors make it a natural fit for processing data whenever a SQL database is not fast enough. Applications using upscaledb are deployed on tens of millions of desktops, as well as on cloud instances, cell phones, and other embedded devices. One published benchmark runs a full-table scan over 50 million records, configured as uint32 values, and retrieves the maximum.
  • 10
    FoundationDB

    FoundationDB

    FoundationDB is multi-model, meaning you can store many types of data in a single database. All data is safely stored, distributed, and replicated in the Key-Value Store component. FoundationDB is easy to install, grow, and manage. It has a distributed architecture that gracefully scales out and handles faults while acting like a single ACID database. FoundationDB provides amazing performance on commodity hardware, allowing you to support very heavy loads at low cost. FoundationDB has been running in production for years and has been hardened with lessons learned. Backing it up is an unmatched testing system based on a deterministic simulation engine. We encourage your participation in our open-source community! Join us in technical and user discussions on the community forums, and learn how to contribute.
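
    A minimal sketch with the official Python binding, showing the single-ACID-database feel: the decorated function runs as one transaction and is retried automatically on transient conflicts (the API version below is just an example):

    ```python
    import fdb

    fdb.api_version(710)      # select a client API version
    db = fdb.open()           # uses the default cluster file

    @fdb.transactional
    def set_and_get(tr):
        # Executes atomically; the binding retries on transient conflicts.
        tr[b"hello"] = b"world"
        return tr[b"hello"]

    print(set_and_get(db))    # b'world'
    ```
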
  • 11
    Azure Table Storage
    Use Azure Table storage to store petabytes of semi-structured data and keep costs down. Unlike many data stores—on-premises or cloud-based—Table storage lets you scale up without having to manually shard your dataset. Availability also isn’t a concern: using geo-redundant storage, stored data is replicated three times within a region—and an additional three times in another region, hundreds of miles away. Table storage is excellent for flexible datasets—web app user data, address books, device information, and other metadata—and lets you build cloud applications without locking down the data model to particular schemas. Because different rows in the same table can have a different structure—for example, order information in one row, and customer information in another—you can evolve your application and table schema without taking it offline. Table storage embraces a strong consistency model.
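
    The schema flexibility described above, with rows in one table carrying different properties, looks like this with the azure-data-tables Python SDK; the connection string and table name are placeholders:

    ```python
    from azure.data.tables import TableServiceClient

    service = TableServiceClient.from_connection_string(conn_str="<placeholder>")
    table = service.create_table_if_not_exists("appdata")

    # Two rows in the same table with different structures.
    table.create_entity({"PartitionKey": "orders", "RowKey": "1", "Total": 19.99})
    table.create_entity({"PartitionKey": "customers", "RowKey": "42", "Name": "Alice"})

    print(table.get_entity(partition_key="customers", row_key="42")["Name"])
    ```
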
  • 12
    VMware Tanzu GemFire
    VMware Tanzu GemFire is a distributed, in-memory, key-value store that performs read and write operations at blazingly fast speeds. It offers highly available parallel message queues, continuous availability, and an event-driven architecture you can scale dynamically, with no downtime. As your data size requirements increase to support high-performance, real-time apps, Tanzu GemFire can scale linearly with ease. Traditional databases are often too brittle or unreliable for use with microservices. That’s why every modern distributed architecture needs a cache! With Tanzu GemFire, applications get low-latency responses to data access requests, and always return fresh data. Your applications can subscribe to real-time events to react to changes immediately. Tanzu GemFire’s continuous queries notify your application when new data is available, which reduces the overhead on your SQL database.
  • 13
    Apache Accumulo

    Apache Software Foundation

    With Apache Accumulo, users can store and manage large data sets across a cluster. Accumulo uses Apache Hadoop's HDFS to store its data and Apache ZooKeeper for consensus. While many users interact directly with Accumulo, several open source projects use Accumulo as their underlying store. To learn more about Accumulo, take the Accumulo tour, read the user manual, and run the Accumulo example code. Feel free to contact us if you have any questions. Accumulo has a programming mechanism (called Iterators) that can modify key/value pairs at various points in the data management process. Every Accumulo key/value pair has its own security label, which limits query results based on user authorizations. Accumulo runs on a cluster using one or more HDFS instances, and nodes can be added or removed as the amount of data stored in Accumulo changes.
  • 14
    KeyDB

    KeyDB

    KeyDB maintains full compatibility with Redis modules, APIs, and protocol. Seamlessly drop in KeyDB and keep full compatibility with your existing clients, scripts, and configurations. Multi-Master mode uses a single replicated dataset across many nodes to serve both read and write operations; nodes can be replicated cross-region to offer sub-millisecond latencies to local clients. Cluster mode allows unlimited read and write scaling by splitting the dataset across shards, and it also supports high availability through replica nodes. KeyDB offers new community-driven commands that enable you to do more with your data. Add your own commands and functionality using JavaScript with the ModJS module: ModJS lets you write functions in JavaScript that can in turn be called directly by KeyDB from your client.
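
    Because KeyDB keeps full protocol compatibility, an existing redis-py client works unchanged as a drop-in; a local server is assumed here:

    ```python
    import redis

    # Point an ordinary Redis client at a KeyDB server; no code changes needed.
    r = redis.Redis(host="localhost", port=6379)
    r.set("greeting", "hello")
    print(r.get("greeting"))  # b'hello'
    ```
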
  • 15
    LedisDB

    LedisDB

    LedisDB is a high-performance NoSQL database library and server written in Go. It is similar to Redis but stores data on disk. It supports many data structures, including kv, list, hash, zset, and set. LedisDB supports multiple different databases as backends.
  • 16
    DataStax

    DataStax

    The Open, Multi-Cloud Stack for Modern Data Apps. Built on open-source Apache Cassandra™. Global scale and 100% uptime without vendor lock-in. Deploy on multi-cloud, on-prem, open-source, and Kubernetes. Elastic and pay-as-you-go for improved TCO. Start building faster with Stargate APIs for NoSQL, real-time, reactive, JSON, REST, and GraphQL. Skip the complexity of multiple OSS projects and APIs that don't scale. Ideal for commerce, mobile, AI/ML, IoT, microservices, social, gaming, and richly interactive applications that must scale up and scale down with demand. Start building modern data applications with Astra, a database-as-a-service powered by Apache Cassandra™. Use REST, GraphQL, and JSON with your favorite full-stack framework to build richly interactive apps that are elastic and viral-ready from day one, on a pay-as-you-go Apache Cassandra DBaaS that scales effortlessly and affordably.
  • 17
    Cloudera

    Cloudera

    Manage and secure the data lifecycle from the Edge to AI in any cloud or data center. Operates across all major public clouds and the private cloud with a public cloud experience everywhere. Integrates data management and analytic experiences across the data lifecycle for data anywhere. Delivers security, compliance, migration, and metadata management across all environments. Open source, open integrations, extensible, & open to multiple data stores and compute architectures. Deliver easier, faster, and safer self-service analytics experiences. Provide self-service access to integrated, multi-function analytics on centrally managed and secured business data while deploying a consistent experience anywhere—on premises or in hybrid and multi-cloud. Enjoy consistent data security, governance, lineage, and control, while deploying the powerful, easy-to-use cloud analytics experiences business users require and eliminating their need for shadow IT solutions.
  • 18
    Oracle Database
    Oracle database products offer customers cost-optimized and high-performance versions of Oracle Database, the world's leading converged, multi-model database management system, as well as in-memory, NoSQL, and MySQL databases. Oracle Autonomous Database, available on-premises via Oracle Cloud@Customer or in the Oracle Cloud Infrastructure, enables customers to simplify relational database environments and reduce management workloads. Oracle Autonomous Database eliminates the complexity of operating and securing Oracle Database while giving customers the highest levels of performance, scalability, and availability. Oracle Database can be deployed on-premises when customers have data residency and network latency concerns. Customers with applications that are dependent on specific Oracle database versions have complete control over the versions they run and when those versions change.
  • 19
    ArangoDB

    ArangoDB

    Natively store data for graph, document and search needs. Utilize feature-rich access with one query language. Map data natively to the database and access it with the best patterns for the job – traversals, joins, search, ranking, geospatial, aggregations – you name it. Polyglot persistence without the costs. Easily design, scale and adapt your architectures to changing needs and with much less effort. Combine the flexibility of JSON with semantic search and graph technology for next generation feature extraction even for large datasets.
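
    A small sketch of the one-query-language idea using the python-arango driver; the server address, credentials, and the users collection are assumptions for illustration:

    ```python
    from arango import ArangoClient

    client = ArangoClient(hosts="http://localhost:8529")
    db = client.db("_system", username="root", password="***")  # placeholders

    # One language (AQL) covers documents, joins, traversals, and search.
    cursor = db.aql.execute(
        "FOR u IN users FILTER u.age > @min RETURN u.name",  # assumes a users collection
        bind_vars={"min": 21},
    )
    print(list(cursor))
    ```
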
  • 20
    Hazelcast

    Hazelcast

    In-Memory Computing Platform. The digital world is different. Microseconds matter. That's why the world's largest organizations rely on us to power their most time-sensitive applications at scale. New data-enabled applications can deliver transformative business power – if they meet today’s requirement of immediacy. Hazelcast solutions complement virtually any database to deliver results that are significantly faster than a traditional system of record. Hazelcast’s distributed architecture provides redundancy for continuous cluster up-time and always available data to serve the most demanding applications. Capacity grows elastically with demand, without compromising performance or availability. The fastest in-memory data grid, combined with third-generation high-speed event processing, delivered through the cloud.
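
    A minimal data-grid sketch with the official Hazelcast Python client: a distributed map shared by every member of the cluster (a locally running cluster member is assumed):

    ```python
    import hazelcast

    client = hazelcast.HazelcastClient()          # connects to a local cluster by default
    cities = client.get_map("cities").blocking()  # distributed map, synchronous API

    cities.put("nl", "Amsterdam")
    print(cities.get("nl"))                       # Amsterdam

    client.shutdown()
    ```
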
  • 21
    OrientDB
    OrientDB is the world’s fastest graph database. Period. An independent benchmark study by IBM and the Tokyo Institute of Technology showed that OrientDB is 10x faster than Neo4j on graph operations across all tested workloads. Drive competitive advantage and accelerate innovation with new revenue streams.
  • 22
    memcached

    memcached

    You can think of it as short-term memory for your applications. memcached allows you to take memory from parts of your system where you have more than you need and make it accessible to areas where you have less than you need. In the classic deployment strategy, each server keeps its own isolated cache; you'll find this wasteful both because the total cache size is a fraction of the actual capacity of your web farm and because of the effort required to keep the cache consistent across all of those nodes. With memcached, all of the servers look into the same virtual pool of memory. Also, as the demand for your application grows to the point where you need more servers, it generally also grows in terms of the data that must be regularly accessed. A deployment strategy where these two aspects of your system scale together just makes sense.
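
    The shared-pool idea can be sketched with the pymemcache client, which hashes each key to one server in the pool so every application node sees the same logical cache; the hostnames are placeholders:

    ```python
    from pymemcache.client.hash import HashClient

    # Keys are hashed across the pool, so all app servers share one virtual cache.
    client = HashClient([("cache1.example.com", 11211),
                         ("cache2.example.com", 11211)])

    client.set("user:42:profile", "{...}", expire=300)  # cache for five minutes
    print(client.get("user:42:profile"))
    ```
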
  • 23
    Apache Ignite

    Apache Ignite

    Use Ignite as a traditional SQL database by leveraging JDBC drivers, ODBC drivers, or the native SQL APIs that are available for Java, C#, C++, Python, and other programming languages. Seamlessly join, group, aggregate, and order your distributed in-memory and on-disk data. Accelerate your existing applications by 100x using Ignite as an in-memory cache or in-memory data grid that is deployed over one or more external databases. Think of a cache that you can query with SQL, transact, and compute on. Build modern applications that support transactional and analytical workloads by using Ignite as a database that scales beyond the available memory capacity. Ignite allocates memory for your hot data and goes to disk whenever applications query cold records. Execute kilobyte-size custom code over petabytes of data. Turn your Ignite database into a distributed supercomputer for low-latency calculations, complex analytics, and machine learning.
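
    A minimal cache put/get through the pyignite thin client, assuming a node listening on the default thin-client port; the same cache-centric data could also be queried with SQL as described above:

    ```python
    from pyignite import Client

    client = Client()
    client.connect("127.0.0.1", 10800)  # default thin-client port

    cache = client.get_or_create_cache("quotes")
    cache.put("AAPL", 187.5)
    print(cache.get("AAPL"))  # 187.5

    client.close()
    ```
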
  • 24
    XAP

    GigaSpaces

    GigaSpaces XAP, an event-driven, distributed development platform, delivers extreme processing for mission-critical applications. XAP provides high availability, resilience, and boundless scale under any load. XAP Skyline, an in-memory distributed technology for mission-critical applications running in cloud-native environments, unites data and business logic within the Kubernetes cluster. With XAP Skyline, developers can ensure that data-driven applications achieve the highest levels of performance and serve hundreds of thousands of concurrent users while delivering sub-second response times. XAP Skyline delivers the low latency, scalability, and resilience these applications demand. This developer platform is used in financial services, retail, and other industries where speed and scalability are critical.
  • 25
    GridDB

    GridDB

    GridDB is a database that manages a group of data (known as a row) made up of a key and multiple values. Besides an in-memory configuration that keeps all data in memory, it can also adopt a hybrid configuration that combines memory with disk storage (including SSDs). GridDB uses multicast communication to form a cluster, so the network must be set up to enable multicast; the first step is to check the host name and IP address of each machine, for example with the "hostname -i" command.
  • 26
    JaguarDB

    JaguarDB

    JaguarDB enables fast ingestion of time series data coupled with location-based data, and it can index in both dimensions, space and time. Back-filling time series data (inserting large volumes of data at past timestamps) is also fast. Normally, a time series is a series of data points indexed in time order. In JaguarDB, a time series means something more: it is both a sequence of data points and a series of tick tables holding aggregated data values at specified time spans. For example, a time series table in JaguarDB can have a base table storing data points in time order, plus tick tables such as 5-minute, 15-minute, hourly, daily, weekly, and monthly tables that store data aggregated over these time spans. The format of RETENTION is the same as the TICK format, except that it can have any number of retention periods; RETENTION specifies how long the data points in the base table should be kept.
  • 27
    Kyoto Tycoon

    Altice Labs

    Kyoto Tycoon is a lightweight network server on top of the Kyoto Cabinet key-value database, built for high performance and concurrency. It has its own fully-featured protocol based on HTTP as well as a (limited) binary protocol for even better performance, and there are several client libraries implementing them for multiple languages, including a maintained Python client. It can also be configured with simultaneous support for the memcached protocol, with some limitations on the available data-update commands; this is useful if you wish to replace memcached in larger-than-memory or persistence scenarios. This distribution provides improved versions of the latest available upstream releases, intended to be used together and tested in real-world production environments; the changes include bug fixes, minor new features, and packaging for a few Linux distributions.
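
    Kyoto Tycoon's HTTP-based protocol includes a simple RESTful interface in which each key maps to a URL path; a sketch with the Python requests library, assuming a server on the default port:

    ```python
    import requests

    base = "http://localhost:1978"  # Kyoto Tycoon's default port

    requests.put(f"{base}/greeting", data=b"hello")   # store a value under a key
    print(requests.get(f"{base}/greeting").content)   # b'hello'
    requests.delete(f"{base}/greeting")               # remove the key
    ```
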
  • 28
    Lucid KV

    Lucid KV

    Lucid is currently in a development stage, but the goal is a fast, secure, and distributed key-value store accessible through an HTTP API, with persistence, encryption, WebSocket streaming, replication, and many other features. Intended use cases include private key storage, IoT (collecting and saving statistics data), distributed caching, service discovery, distributed configuration, and blob storage.
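
    Since the store is exposed over an HTTP API, usage can be sketched with requests; note that the port and the /api/kv/{key} endpoint path below are assumptions for illustration, not confirmed API details:

    ```python
    import requests

    base = "http://localhost:7020/api/kv"  # assumed default port and route

    requests.put(f"{base}/my_key", data=b"some value")  # store a value
    print(requests.get(f"{base}/my_key").content)       # retrieve it
    ```
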
  • 29
    BoltDB

    BoltDB

    Bolt is a pure Go key/value store inspired by Howard Chu's LMDB project. The goal of the project is to provide a simple, fast, and reliable database for projects that don't require a full database server such as Postgres or MySQL. Since Bolt is meant to be used as such a low-level piece of functionality, simplicity is key. The API will be small and only focus on getting values and setting values. That's it. The original goal of Bolt was to provide a simple pure Go key/value store and to not bloat the code with extraneous features. To that end, the project has been a success. However, this limited scope also means that the project is complete. Maintaining an open source database requires an immense amount of time and energy. Changes to the code can have unintended and sometimes catastrophic effects so even simple changes require hours and hours of careful testing and validation.
  • 30
    RocksDB

    RocksDB

    RocksDB uses a log-structured database engine, written entirely in C++, for maximum performance. Keys and values are just arbitrarily-sized byte streams. RocksDB is optimized for fast, low-latency storage such as flash drives and high-speed disk drives, and it exploits the full potential of the high read/write rates offered by flash or RAM. RocksDB provides operations ranging from basic ones, such as opening and closing a database and reading and writing keys, to more advanced ones such as merges and compaction filters. RocksDB is adaptable to different workloads: from database storage engines such as MyRocks to application data caching to embedded workloads, RocksDB can be used for a variety of data needs.
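
    The basic and batch operations described above look like this through the third-party python-rocksdb binding (the native API is C++); the database path is a placeholder:

    ```python
    import rocksdb  # third-party python-rocksdb binding

    db = rocksdb.DB("example.db", rocksdb.Options(create_if_missing=True))

    db.put(b"key", b"value")
    print(db.get(b"key"))  # b'value'

    # Several changes applied atomically with a WriteBatch.
    batch = rocksdb.WriteBatch()
    batch.put(b"a", b"1")
    batch.delete(b"key")
    db.write(batch)
    ```
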