Alternatives to upscaledb

Compare upscaledb alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to upscaledb in 2024. Compare features, ratings, user reviews, pricing, and more from upscaledb competitors and alternatives in order to make an informed decision for your business.

  • 1
    Amazon DynamoDB
    Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second. Many of the world's fastest-growing businesses such as Lyft, Airbnb, and Redfin, as well as enterprises such as Samsung, Toyota, and Capital One, depend on the scale and performance of DynamoDB to support their mission-critical workloads. Focus on driving innovation with no operational overhead. Build out your game platform with player data, session history, and leaderboards for millions of concurrent users. Use design patterns for deploying shopping carts, workflow engines, inventory tracking, and customer profiles. DynamoDB supports high-traffic, extreme-scale events.
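
    A minimal sketch of the key-value/document API using the boto3 SDK; the table name ("GameSessions"), key, and attributes are hypothetical, and AWS credentials plus an existing table are assumed.

    ```python
    # Hypothetical "GameSessions" table with partition key "player_id".
    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    table = dynamodb.Table("GameSessions")

    # Write a player-session item (key-value/document style).
    table.put_item(Item={"player_id": "p-123", "session": "s-1", "score": 4200})

    # Read it back by primary key.
    resp = table.get_item(Key={"player_id": "p-123"})
    print(resp.get("Item"))
    ```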
  • 2
    LevelDB

    Google

    LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values. Keys and values are arbitrary byte arrays. Data is stored sorted by key. Callers can provide a custom comparison function to override the sort order. Multiple changes can be made in one atomic batch. Users can create a transient snapshot to get a consistent view of data. Forward and backward iteration is supported over the data. Data is automatically compressed using the Snappy compression library. External activity (file system operations etc.) is relayed through a virtual interface so users can customize the operating system interactions. The project's benchmarks use a database with a million entries, each with a 16-byte key and a 100-byte value; values compress to about half their original size. The benchmarks report sequential read performance in both the forward and reverse directions, as well as random lookup performance.
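
    A minimal sketch of the API described above, via the third-party plyvel Python binding (an assumption; LevelDB itself is a C++ library). Keys and values are raw bytes, batches are atomic, and iteration follows key order.

    ```python
    import plyvel  # third-party Python binding for LevelDB

    db = plyvel.DB("/tmp/example-leveldb", create_if_missing=True)

    # Multiple changes applied as one atomic batch.
    with db.write_batch() as wb:
        wb.put(b"user:001", b"alice")
        wb.put(b"user:002", b"bob")

    print(db.get(b"user:001"))  # b'alice'

    # Ordered iteration, here backwards over a key prefix.
    for key, value in db.iterator(prefix=b"user:", reverse=True):
        print(key, value)

    db.close()
    ```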
  • 3
    VMware Tanzu GemFire
    VMware Tanzu GemFire is a distributed, in-memory, key-value store that performs read and write operations at blazingly fast speeds. It offers highly available parallel message queues, continuous availability, and an event-driven architecture you can scale dynamically, with no downtime. As your data size requirements increase to support high-performance, real-time apps, Tanzu GemFire can scale linearly with ease. Traditional databases are often too brittle or unreliable for use with microservices. That’s why every modern distributed architecture needs a cache! With Tanzu GemFire, applications get low-latency responses to data access requests, and always return fresh data. Your applications can subscribe to real-time events to react to changes immediately. Tanzu GemFire’s continuous queries notify your application when new data is available, which reduces the overhead on your SQL database.
  • 4
    Riak KV

    Riak

    At Riak, we are distributed systems experts who work with application teams to overcome distributed system challenges. Riak® is a distributed NoSQL database that delivers unmatched resiliency beyond typical “high availability” offerings, innovative technology to ensure data accuracy and never lose a write, massive scale on commodity hardware, and a common code foundation with true multi-model support. Riak® provides all this while still focusing on ease of operations. Choose Riak® KV's flexible key-value data model for web-scale profile and session management, real-time big data, catalog, content management, customer 360, digital messaging, and more use cases. Choose Riak® TS for IoT and time series use cases. When seconds of latency can cost thousands of dollars and an outage millions, the call for scalable, highly available databases that are easy to operationalize is resoundingly clear. Riak performs as promised and keeps the lights on.
  • 5
    BoltDB

    Bolt is a pure Go key/value store inspired by Howard Chu's LMDB project. The goal of the project is to provide a simple, fast, and reliable database for projects that don't require a full database server such as Postgres or MySQL. Since Bolt is meant to be used as such a low-level piece of functionality, simplicity is key. The API will be small and only focus on getting values and setting values. That's it. The original goal of Bolt was to provide a simple pure Go key/value store and to not bloat the code with extraneous features. To that end, the project has been a success. However, this limited scope also means that the project is complete. Maintaining an open source database requires an immense amount of time and energy. Changes to the code can have unintended and sometimes catastrophic effects so even simple changes require hours and hours of careful testing and validation.
  • 6
    LeanXcale

    LeanXcale is a fast and scalable database that combines the characteristics of SQL and NoSQL. It is built to ingest massive batch and real-time data pipelines and make the data available through SQL or GIS for any use, such as operational applications, analytics, dashboarding, or machine learning processing. No matter what stack you use, LeanXcale provides both SQL and NoSQL interfaces. The KiVi storage engine is a relational key-value data store. Users can access the data not only through the standard SQL API but also through a direct ACID key-value interface. This key-value interface allows users to perform data ingestion at very high rates and very efficiently by avoiding SQL processing overhead. The highly scalable, efficient, distributed storage engine distributes data across the cluster to improve performance and increase reliability.
    Starting Price: $0.127 per GB per month
  • 7
    BergDB

    BergDB is a Java/.NET database designed to be simple and efficient. It was created for developers who prefer to focus on their specific task rather than spend time on database issues. BergDB has simple key-value storage, ACID transactions, historic queries, efficient concurrency control, secondary indexes, fast append-only storage, replication, transparent object serialization, and more. BergDB is an embedded, open-source, document-oriented, schemaless, NoSQL database. BergDB is built from the ground up to execute transactions exceptionally fast, and there are no compromises: all writes to the database are made in ACID transactions with the highest possible level of consistency (in SQL-speak: serializable isolation level). Historic queries are important when previous data states are of interest, and also as a fast way to handle concurrency. A read operation never locks anything in BergDB.
  • 8
    Oracle Berkeley DB
    Berkeley DB is a family of embedded key-value database libraries providing scalable high-performance data management services to applications. The Berkeley DB products use simple function-call APIs for data access and management. Berkeley DB enables the development of custom data management solutions, without the overhead traditionally associated with such custom projects. Berkeley DB provides a collection of well-proven building-block technologies that can be configured to address any application need from the hand-held device to the data center, from a local storage solution to a world-wide distributed one, from kilobytes to petabytes.
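
    A hedged sketch of embedded key-value access from Python through the bsddb3 binding (an assumption; Berkeley DB's native interface is a C function-call API, and the file name here is made up).

    ```python
    import bsddb3  # Python binding for Berkeley DB (assumed installed)

    # Open (or create) a B-tree keyed store backed by a local file.
    store = bsddb3.btopen("catalog.db", "c")

    store[b"sku:1001"] = b"blue widget"   # simple put/get, no server process
    print(store[b"sku:1001"])             # b'blue widget'

    store.close()
    ```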
  • 9
    Apache Ignite

    Use Ignite as a traditional SQL database by leveraging JDBC drivers, ODBC drivers, or the native SQL APIs that are available for Java, C#, C++, Python, and other programming languages. Seamlessly join, group, aggregate, and order your distributed in-memory and on-disk data. Accelerate your existing applications by 100x using Ignite as an in-memory cache or in-memory data grid that is deployed over one or more external databases. Think of a cache that you can query with SQL, transact, and compute on. Build modern applications that support transactional and analytical workloads by using Ignite as a database that scales beyond the available memory capacity. Ignite allocates memory for your hot data and goes to disk whenever applications query cold records. Execute kilobyte-size custom code over petabytes of data. Turn your Ignite database into a distributed supercomputer for low-latency calculations, complex analytics, and machine learning.
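
    A small sketch using the pyignite thin client, one of the language clients mentioned above; the host, default thin-client port 10800, and cache name are illustrative.

    ```python
    from pyignite import Client  # Apache Ignite thin client for Python

    client = Client()
    client.connect("127.0.0.1", 10800)

    # Use Ignite as a key-value cache / data grid in front of other stores.
    cache = client.get_or_create_cache("hot_accounts")
    cache.put("acct-42", {"balance": 100})
    print(cache.get("acct-42"))

    client.close()
    ```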
  • 10
    eXtremeDB

    McObject

    How is platform-independent eXtremeDB different? Hybrid data storage: unlike other IMDSs, eXtremeDB can be all-in-memory, all-persistent, or have a mix of in-memory tables and persistent tables. Active Replication Fabric™, unique to eXtremeDB, offers bidirectional replication, multi-tier replication (e.g. edge-to-gateway-to-gateway-to-cloud), compression to maximize limited-bandwidth networks, and more. Row and columnar flexibility for time series data supports database designs that combine row-based and column-based layouts in order to best leverage CPU cache speed. Embedded and client/server: fast, flexible eXtremeDB is data management wherever you need it, and can be deployed as an embedded database system and/or as a client/server database system. A hard real-time deterministic option, eXtremeDB/rt, is designed for use in resource-constrained, mission-critical embedded systems, and is found in everything from routers to satellites to trains to stock markets worldwide.
  • 11
    Voldemort

    Voldemort is not a relational database, it does not attempt to satisfy arbitrary relations while satisfying ACID properties. Nor is it an object database that attempts to transparently map object reference graphs. Nor does it introduce a new abstraction such as document-orientation. It is basically just a big, distributed, persistent, fault-tolerant hash table. For applications that can use an O/R mapper like active-record or hibernate this will provide horizontal scalability and much higher availability but at great loss of convenience. For large applications under internet-type scalability pressure, a system may likely consist of a number of functionally partitioned services or APIs, which may manage storage resources across multiple data centers using storage systems which may themselves be horizontally partitioned. For applications in this space, arbitrary in-database joins are already impossible since all the data is not available in any single database.
  • 12
    ArcadeDB

    Manage complex models using ArcadeDB without any compromise. Forget about polyglot persistence; there is no need for multiple databases. You can store graphs, documents, key-value pairs, and time series all in one ArcadeDB multi-model database. Since each model is native to the database engine, you don't have to worry about translations slowing you down. ArcadeDB's engine was built with Alien Technology; it's able to crunch millions of records per second. With ArcadeDB, the traversing speed is not affected by the database size. It is always constant, whether your database has a few records or billions. ArcadeDB can work as an embedded database, on a single server, and can scale up using multiple servers with Kubernetes. It is flexible enough to run on any platform with a small footprint. Your data is secure; our unbreakable, fully transactional engine assures durability for mission-critical production databases. ArcadeDB uses a Raft consensus algorithm to maintain consistency across multiple servers.
    Starting Price: Free
  • 13
    Amazon ElastiCache
    Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high-throughput, low-latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like caching, session stores, gaming, geospatial services, real-time analytics, and queuing. Amazon ElastiCache offers fully managed Redis and Memcached and works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times. By utilizing an end-to-end optimized stack running on customer-dedicated nodes, Amazon ElastiCache provides secure, blazing-fast performance.
  • 14
    RocksDB

    RocksDB uses a log-structured database engine, written entirely in C++, for maximum performance. Keys and values are just arbitrarily-sized byte streams. RocksDB is optimized for fast, low-latency storage such as flash drives and high-speed disk drives, and exploits the full potential of the high read/write rates offered by flash or RAM. RocksDB provides basic operations such as opening and closing a database and reading and writing keys, as well as more advanced operations such as merging and compaction filters. RocksDB is adaptable to different workloads; from database storage engines such as MyRocks to application data caching to embedded workloads, RocksDB can be used for a variety of data needs.
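
    A hedged sketch via the third-party python-rocksdb binding (an assumption; RocksDB itself is the C++ library described above). Keys and values are arbitrary byte strings.

    ```python
    import rocksdb  # python-rocksdb, a third-party binding

    opts = rocksdb.Options(create_if_missing=True)
    db = rocksdb.DB("example.rdb", opts)

    db.put(b"event:1", b'{"type": "click"}')   # write a byte-stream value
    print(db.get(b"event:1"))                  # read it back
    ```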
  • 15
    Aerospike

    Aerospike is the global leader in next-generation, real-time NoSQL data solutions for any scale. Enterprises using Aerospike overcome seemingly impossible data bottlenecks to compete and win with a fraction of the infrastructure complexity and cost of legacy NoSQL databases. Aerospike’s patented Hybrid Memory Architecture™ delivers an unbreakable competitive advantage by unlocking the full potential of modern hardware, delivering previously unimaginable value from vast amounts of data at the edge, to the core and in the cloud. Aerospike empowers customers to instantly fight fraud; dramatically increase shopping cart size; deploy global digital payment networks; and deliver instant, one-to-one personalization for millions of customers. Aerospike customers include Airtel, Banca d’Italia, Nielsen, PayPal, Snap, Verizon Media and Wayfair. The company is headquartered in Mountain View, Calif., with additional locations in London; Bengaluru, India; and Tel Aviv, Israel.
  • 16
    InterSystems Caché

    InterSystems

    InterSystems Caché® is a high-performance database that powers transaction processing applications around the world. It is used for everything from mapping a billion stars in the Milky Way, to processing a billion equity trades in a day, to managing smart energy grids. Caché is a multi-model (object, relational, key-value) DBMS and application server developed by InterSystems. InterSystems Caché provides several APIs to operate with the same data simultaneously: key-value, relational, object, document, and multi-dimensional. Data can be managed via SQL, Java, Node.js, .NET, C++, and Python. Caché also provides an application server which hosts web apps (CSP), REST, SOAP, web sockets, and other types of TCP access to Caché data.
  • 17
    Kyoto Tycoon

    Altice Labs

    Kyoto Tycoon is a lightweight network server on top of the Kyoto Cabinet key-value database, built for high performance and concurrency. It has its own fully-featured protocol based on HTTP and a (limited) binary protocol for even better performance, and several client libraries implement them for multiple languages (including a maintained Python client). It can also be configured with simultaneous support for the memcached protocol, with some limitations on available data update commands; this is useful if you wish to replace memcached in larger-than-memory/persistence scenarios. Here you can find improved versions of the latest available upstream releases, intended to be used together and tested in real-world production environments. The changes include bug fixes, minor new features, and packaging for a few Linux distributions.
  • 18
    Infinispan

    Infinispan is an open-source in-memory data grid that offers flexible deployment options and robust capabilities for storing, managing, and processing data. Infinispan provides a key/value data store that can hold all types of data, from Java objects to plain text. Infinispan distributes your data across elastically scalable clusters to guarantee high availability and fault tolerance, whether you use Infinispan as a volatile cache or a persistent data store. Infinispan turbocharges applications by storing data closer to processing logic, which reduces latency and increases throughput. Available as a Java library, you simply add Infinispan to your application dependencies and then you’re ready to store data in the same memory space as the executing code.
  • 19
    SwayDB

    An embeddable persistent and in-memory key-value storage engine for high performance and resource efficiency. It is designed to be efficient at managing bytes on-disk and in-memory by recognising reoccurring patterns in serialised bytes, without restricting the core implementation to any specific data model (SQL, NoSQL, etc.) or storage type (disk or RAM). The core provides many configurations that can be manually tuned for custom use-cases, with the aim of implementing automatic runtime tuning once runtime machine statistics and read-write patterns can be collected and analysed. Manage data by creating familiar data structures like Map, Set, Queue, SetMap, and MultiMap that can easily be converted to native Java and Scala collections. Perform conditional updates and data modifications with Java, Scala, or any native JVM code, with no query language.
  • 20
    Oracle Database
    Oracle database products offer customers cost-optimized and high-performance versions of Oracle Database, the world's leading converged, multi-model database management system, as well as in-memory, NoSQL, and MySQL databases. Oracle Autonomous Database, available on-premises via Oracle Cloud@Customer or in the Oracle Cloud Infrastructure, enables customers to simplify relational database environments and reduce management workloads. Oracle Autonomous Database eliminates the complexity of operating and securing Oracle Database while giving customers the highest levels of performance, scalability, and availability. Oracle Database can be deployed on-premises when customers have data residency and network latency concerns. Customers with applications that are dependent on specific Oracle database versions have complete control over the versions they run and when those versions change.
  • 21
    Azure Cache for Redis
    As traffic and demands on your app increase, scale performance simply and cost-effectively. Add a quick caching layer to the application architecture to handle thousands of simultaneous users with near-instant speed—all with the benefits of a fully managed service. Superior throughput and performance to handle millions of requests per second with sub-millisecond latency. Fully managed service with automatic patching, updates, scaling, and provisioning so you can focus on development. RedisBloom, RediSearch, and RedisTimeSeries module integration, supporting data analysis, search, and streaming. Powerful capabilities including clustering, built-in replication, Redis on Flash, and availability of up to 99.99 percent. Complement database services like Azure SQL Database and Azure Cosmos DB by enabling your data tier to scale throughput at a lower cost than through expanded database instances.
    Starting Price: $1.11 per month
  • 22
    ApsaraDB

    Alibaba

    ApsaraDB for Redis is an automated and scalable tool for developers to manage data storage shared across multiple processes, applications, or servers. As a Redis protocol compatible tool, ApsaraDB for Redis offers exceptional read-write capabilities at high speed by retrieving data from in-memory caches, and ensures data persistence by using both memory and hard disk storage. ApsaraDB for Redis supports advanced data structures such as leaderboards, counting, sessions, and tracking, which are not readily achievable through ordinary databases. ApsaraDB for Redis also has an enhanced edition called "Tair". Tair has officially handled the data caching scenarios of Alibaba Group since 2009 and has proven its outstanding performance in scenarios such as the Double 11 Shopping Festival.
  • 23
    Hazelcast

    In-Memory Computing Platform. The digital world is different. Microseconds matter. That's why the world's largest organizations rely on us to power their most time-sensitive applications at scale. New data-enabled applications can deliver transformative business power – if they meet today’s requirement of immediacy. Hazelcast solutions complement virtually any database to deliver results that are significantly faster than a traditional system of record. Hazelcast’s distributed architecture provides redundancy for continuous cluster up-time and always available data to serve the most demanding applications. Capacity grows elastically with demand, without compromising performance or availability. The fastest in-memory data grid, combined with third-generation high-speed event processing, delivered through the cloud.
  • 24
    Apache Accumulo

    The Apache Software Foundation

    With Apache Accumulo, users can store and manage large data sets across a cluster. Accumulo uses Apache Hadoop's HDFS to store its data and Apache ZooKeeper for consensus. While many users interact directly with Accumulo, several open source projects use Accumulo as their underlying store. To learn more about Accumulo, take the Accumulo tour, read the user manual, and run the Accumulo example code. Accumulo has a programming mechanism (called iterators) that can modify key/value pairs at various points in the data management process. Every Accumulo key/value pair has its own security label which limits query results based on user authorizations. Accumulo runs on a cluster using one or more HDFS instances, and nodes can be added or removed as the amount of data stored in Accumulo changes.
  • 25
    Lucid KV

    Lucid is currently in a development stage, but the goal is a fast, secure, and distributed key-value store accessible through an HTTP API, with persistence, encryption, WebSocket streaming, replication, and many more features. Intended use cases include private key storage, IoT (to collect and save statistics data), distributed caching, service discovery, distributed configuration, blob storage, etc.
  • 26
    etcd

    etcd is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. It gracefully handles leader elections during network partitions and can tolerate machine failure, even in the leader node. Store data in hierarchically organized directories, as in a standard filesystem. Watch specific keys or directories for changes and react to changes in values.
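
    A minimal sketch with the python-etcd3 client (an assumption), showing the put/get and watch behaviour described above; the endpoint and key names are illustrative.

    ```python
    import etcd3  # python-etcd3 client

    etcd = etcd3.client(host="localhost", port=2379)

    etcd.put("/config/payments/enabled", "true")
    value, metadata = etcd.get("/config/payments/enabled")
    print(value)  # b'true'

    # Watch the key and react when its value changes.
    events, cancel = etcd.watch("/config/payments/enabled")
    for event in events:
        print("changed:", event.value)
        cancel()   # stop watching after the first event in this demo
        break
    ```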
  • 27
    FoundationDB

    FoundationDB is multi-model, meaning you can store many types of data in a single database. All data is safely stored, distributed, and replicated in the Key-Value Store component. FoundationDB is easy to install, grow, and manage. It has a distributed architecture that gracefully scales out and handles faults while acting like a single ACID database. FoundationDB provides amazing performance on commodity hardware, allowing you to support very heavy loads at low cost. FoundationDB has been running in production for years and has been hardened with lessons learned. Backing FoundationDB up is an unmatched testing system based on a deterministic simulation engine. The project encourages participation in its open-source community, with technical and user discussions on the community forums.
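
    A minimal sketch using the official FoundationDB Python binding; it assumes a running cluster, the fdb client library, and an API version matching the installed client (630 here is just an example).

    ```python
    import fdb

    fdb.api_version(630)        # must match the installed client version
    db = fdb.open()             # uses the default cluster file

    # Each statement below runs as its own ACID transaction.
    db[b"class:biology"] = b"10 seats"
    print(db[b"class:biology"])  # b'10 seats'
    ```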
  • 28
    GridDB

    GridDB uses multicast communication to constitute a cluster, so the network must be set up to enable multicast communication; the first step is to check the host name and IP address of each host (for example with the “hostname -i” command). GridDB is a database that manages a group of data (known as a row) that is made up of a key and multiple values. Besides having a composition of an in-memory database that arranges all the data in memory, it can also adopt a hybrid composition combining the use of a disk (including SSD) and memory.
  • 29
    InterSystems IRIS

    InterSystems

    InterSystems IRIS is a complete cloud-first data platform that includes a multi-model transactional data management engine, an application development platform, an interoperability engine, and an open analytics platform. It is the next generation of InterSystems' proven data management software. It includes the capabilities of InterSystems Caché and Ensemble, plus a wealth of new capabilities to make it easy to build and deploy cloud-based, analytics-intensive enterprise applications with even greater performance and scalability. InterSystems IRIS provides a set of APIs to operate with transactional persistent data simultaneously: key-value, relational, object, document, and multidimensional. Data can be managed by SQL, Java, Node.js, .NET, C++, Python, and the native server-side ObjectScript language.
  • 30
    Symas LMDB

    Symas Corporation

    Symas LMDB is an extraordinarily fast, memory-efficient database developed for the OpenLDAP Project. With memory-mapped files, it has the read performance of a pure in-memory database while retaining the persistence of standard disk-based databases. Bottom line: with only 32KB of object code, LMDB may seem tiny, but it's the right 32KB. Compact and efficient are two sides of a coin; that's part of what makes LMDB so powerful. Symas offers fixed-price commercial support to those using LMDB in their applications. Development occurs in the OpenLDAP Project's git repo in the mdb.master branch. Symas LMDB has been written about, talked about, and utilized in a variety of impressive products and publications.
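
    A small sketch with the py-lmdb binding ("pip install lmdb"); the database path and map size are illustrative.

    ```python
    import lmdb

    env = lmdb.open("/tmp/example-lmdb", map_size=64 * 1024 * 1024)

    # Writes happen inside a write transaction.
    with env.begin(write=True) as txn:
        txn.put(b"uid:1", b"ada")

    # Reads go through the memory map, so lookups behave like an in-memory store.
    with env.begin() as txn:
        print(txn.get(b"uid:1"))  # b'ada'

    env.close()
    ```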
  • 31
    JaguarDB

    JaguarDB enables fast ingestion of time series data coupled with location-based data, and it can index in both dimensions, space and time. Back-filling time series data (inserting large volumes of data with past timestamps) is also fast. Normally a time series is a series of data points indexed in time order. In JaguarDB, a time series has a different meaning: it is both a sequence of data points and a series of tick tables holding aggregated data values at specified time spans. For example, a time series table in JaguarDB can have a base table storing data points in time order, plus tick tables such as 5-minute, 15-minute, hourly, daily, weekly, and monthly tables to store aggregated data within these time spans. The format for the RETENTION is the same as the TICK format, except that it can have any number of retention periods. The RETENTION specifies how long the data points in the base table should be kept.
  • 32
    Google Cloud Memorystore
    Reduce latency with scalable, secure, and highly available in-memory service for Redis and Memcached. Memorystore automates complex tasks for open source Redis and Memcached like enabling high availability, failover, patching, and monitoring so you can spend more time coding. Start with the lowest tier and smallest size and then grow your instance with minimal impact. Memorystore for Memcached can support clusters as large as 5 TB supporting millions of QPS at very low latency. Memorystore for Redis instances are replicated across two zones and provide a 99.9% availability SLA. Instances are monitored constantly and with automatic failover—applications experience minimal disruption. Choose from the two most popular open source caching engines to build your applications. Memorystore supports both Redis and Memcached and is fully protocol compatible. Choose the right engine that fits your cost and availability requirements.
  • 33
    Terracotta

    Software AG

    Terracotta DB is a comprehensive, distributed in-memory data management solution which caters to caching and operational storage use cases, and enables transactional and analytical processing. Ultra-Fast Ram + Big Data = Business Power. With BigMemory, you get: Real-time access to terabytes of in-memory data. High throughput with low, predictable latency. Support for Java®, Microsoft® .NET/C#, C++ applications. 99.999 percent uptime. Linear scalability. Data consistency guarantees across multiple servers. Optimized data storage across RAM and SSD. SQL support for querying in-memory data. Reduced infrastructure costs through maximum hardware utilization. High-performance, persistent storage for durability and ultra-fast restart. Advanced monitoring, management and control. Ultra-fast in-memory data stores that automatically move data where it’s needed. Support for data replication across multiple data centers for disaster recovery. Manage fast-moving data in real time
  • 34
    Alibaba Cloud Tablestore
    Tablestore enables seamless expansion of data size and access concurrency through data sharding and server load balancing technologies, providing storage of, and real-time access to, massive structured data. Three copies of data with strong consistency, a fully hosted service, high service availability, and high data reliability. Provides full/incremental data tunnels, seamlessly interconnecting with various products for big data analysis and real-time stream computing. Distributed architecture, single-table auto scaling, and support for 10-PB-level data and 10-million-level access concurrency. Multi-dimensional and multi-level security protection and resource access management ensure data security. The low latency, high concurrency, elastic resources, and pay-as-you-go billing method of this service enable your risk control system to always operate in optimal conditions, allowing you to strictly control transaction risks.
    Starting Price: $0.00010 per GB
  • 35
    FairCom DB

    FairCom Corporation

    FairCom DB is ideal for large-scale, mission-critical, core-business applications that require performance, reliability and scalability that cannot be achieved by other databases. FairCom DB delivers predictable high-velocity transactions and massively parallel big data analytics. It empowers developers with NoSQL APIs for processing binary data at machine speed and ANSI SQL for easy queries and analytics over the same binary data. Among the companies that take advantage of the flexibility of FairCom DB is Verizon, who recently chose FairCom DB as an in-memory database for its Verizon Intelligent Network Control Platform Transaction Server Migration. FairCom DB is an advanced database engine that gives you a Continuum of Control to achieve unprecedented performance with the lowest total cost of ownership (TCO). You do not conform to FairCom DB…FairCom DB conforms to you. With FairCom DB, you are not forced to conform your needs to meet the limitations of the database.
  • 36
    Macrometa

    We deliver a geo-distributed real-time database, stream processing and compute runtime for event-driven applications across up to 175 worldwide edge data centers. App & API builders love our platform because we solve the hardest problems of sharing mutable state across 100s of global locations, with strong consistency & low latency. Macrometa enables you to surgically extend your existing infrastructure to bring part of or your entire application closer to your end users. This allows you to improve performance, user experience, and comply with global data governance laws. Macrometa is a serverless, streaming NoSQL database, with integrated pub/sub and stream data processing and compute engine. Create stateful data infrastructure, stateful functions & containers for long running workloads, and process data streams in real time. You do the code, we do all the ops and orchestration.
  • 37
    Google Cloud Bigtable
    Google Cloud Bigtable is a fully managed, scalable NoSQL database service for large analytical and operational workloads. Fast and performant: Use Cloud Bigtable as the storage engine that grows with you from your first gigabyte to petabyte-scale for low-latency applications as well as high-throughput data processing and analytics. Seamless scaling and replication: Start with a single node per cluster, and seamlessly scale to hundreds of nodes dynamically supporting peak demand. Replication also adds high availability and workload isolation for live serving apps. Simple and integrated: Fully managed service that integrates easily with big data tools like Hadoop, Dataflow, and Dataproc. Plus, support for the open source HBase API standard makes it easy for development teams to get started.
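
    A hedged sketch using the google-cloud-bigtable client library; the project, instance, and table IDs are placeholders, and the table with a "stats" column family is assumed to exist.

    ```python
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("my-instance").table("events")

    # Write one cell into the "stats" column family.
    row = table.direct_row(b"device#1234#2024-01-01")
    row.set_cell("stats", b"temp", b"21.5")
    row.commit()

    # Read the row back by key.
    row_data = table.read_row(b"device#1234#2024-01-01")
    print(row_data.cells["stats"][b"temp"][0].value)  # b'21.5'
    ```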
  • 38
    GridGain

    GridGain Systems

    The enterprise-grade platform built on Apache Ignite that provides in-memory speed and massive scalability for data-intensive applications and real-time data access across datastores and applications. Upgrade from Ignite to GridGain with no code changes and deploy your clusters securely at global scale with zero downtime. Perform rolling upgrades of your production clusters with no impact on application availability. Replicate across globally distributed data centers to load balance workloads and prevent downtime from regional outages. Secure your data at rest and in motion, and ensure compliance with security and privacy standards. Easily integrate with your organization's authentication and authorization system. Enable full data and user activity auditing. Create automated schedules for full and incremental backups. Restore your cluster to the last stable state with snapshots and point-in-time recovery.
  • 39
    OrigoDB

    Origo

    OrigoDB enables you to build high quality, mission critical systems with real-time performance at a fraction of the time and cost. This is not marketing gibberish! Please read on for a no nonsense description of our features. Get in touch if you have questions or download and try it out today! In-memory operations are orders of magnitude faster than disk operations. A single OrigoDB engine can execute millions of read transactions per second and thousands of write transactions per second with synchronous command journaling to a local SSD. This is the #1 reason we built OrigoDB. A single object oriented domain model is far simpler than the full stack including a relational model, object/relational mapping, data access code, views and stored procedures. That's a lot of waste that can be eliminated! The OrigoDB engine is 100% ACID out of the box. Commands execute one at a time, transitioning the in-memory model from one consistent state to the next.
    Starting Price: €200 per GB RAM per server
  • 40
    Oracle Coherence
    Oracle Coherence is the industry leading in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data. As data volumes and customer expectations increase, driven by the “internet of things”, social, mobile, cloud and always-connected devices, so does the need to handle more data in real-time, offload over-burdened shared data services and provide availability guarantees. The latest release of Oracle Coherence, 14.1.1, adds a patented scalable messaging implementation, support for polyglot grid-side programming on GraalVM, distributed tracing in the grid, and certification on JDK 11. Coherence stores each piece of data within multiple members (one primary and one or more backup copies), and doesn't consider any mutating operation complete until the backup(s) are successfully created. This ensures that your data grid can tolerate the failure at any level: from single JVM, to whole data center.
  • 41
    DataStax

    The open, multi-cloud stack for modern data apps, built on open-source Apache Cassandra™. Global scale and 100% uptime without vendor lock-in. Deploy on multi-cloud, on-prem, open-source, and Kubernetes. Elastic and pay-as-you-go for improved TCO. Start building faster with Stargate APIs for NoSQL, real-time, reactive, JSON, REST, and GraphQL, and skip the complexity of multiple OSS projects and APIs that don’t scale. Ideal for commerce, mobile, AI/ML, IoT, microservices, social, gaming, and richly interactive applications that must scale up and scale down with demand. Get building modern data applications with Astra, a database-as-a-service powered by Apache Cassandra™. Use REST, GraphQL, and JSON with your favorite full-stack framework to build richly interactive apps that are elastic and viral-ready from day 1, with a pay-as-you-go Apache Cassandra DBaaS that scales effortlessly and affordably.
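
    Because the stack is built on Apache Cassandra™, a generic cassandra-driver sketch applies; the contact point, keyspace, and table below are hypothetical (Astra connections would use a secure connect bundle instead).

    ```python
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS shop
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.execute(
        "CREATE TABLE IF NOT EXISTS shop.carts (user_id text PRIMARY KEY, items int)"
    )
    session.execute("INSERT INTO shop.carts (user_id, items) VALUES (%s, %s)", ("u-1", 3))
    print(session.execute("SELECT * FROM shop.carts WHERE user_id = %s", ("u-1",)).one())

    cluster.shutdown()
    ```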
  • 42
    Azure Table Storage
    Use Azure Table storage to store petabytes of semi-structured data and keep costs down. Unlike many data stores—on-premises or cloud-based—Table storage lets you scale up without having to manually shard your dataset. Availability also isn’t a concern: using geo-redundant storage, stored data is replicated three times within a region—and an additional three times in another region, hundreds of miles away. Table storage is excellent for flexible datasets—web app user data, address books, device information, and other metadata—and lets you build cloud applications without locking down the data model to particular schemas. Because different rows in the same table can have a different structure—for example, order information in one row, and customer information in another—you can evolve your application and table schema without taking it offline. Table storage embraces a strong consistency model.
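
    A hedged sketch using the azure-data-tables SDK; the connection string and table name are placeholders. Note how the two entities carry different properties, illustrating the schemaless rows described above.

    ```python
    from azure.data.tables import TableServiceClient

    service = TableServiceClient.from_connection_string("<storage-connection-string>")
    table = service.create_table_if_not_exists("appdata")

    # Rows in the same table can have different structures.
    table.create_entity({"PartitionKey": "orders", "RowKey": "o-1", "Total": 19.99})
    table.create_entity({"PartitionKey": "users", "RowKey": "u-7", "Email": "a@example.com"})

    print(table.get_entity(partition_key="orders", row_key="o-1"))
    ```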
  • 43
    IBM Cloud Databases
    IBM Cloud Databases are open source data stores for enterprise application development. Built on a Kubernetes foundation, they offer a database platform for serverless applications. They are designed to scale storage and compute resources seamlessly without being constrained by the limits of a single server. Natively integrated and available in the IBM Cloud console, these databases are now available through a consistent consumption, pricing, and interaction model. They aim to provide a cohesive experience for developers that include access control, backup orchestration, encryption key management, auditing, monitoring, and logging.
  • 44
    Azure Cosmos DB

    Microsoft

    Azure Cosmos DB is a fully managed NoSQL database service for modern app development with guaranteed single-digit millisecond response times and 99.999-percent availability backed by SLAs, automatic and instant scalability, and open source APIs for MongoDB and Cassandra. Enjoy fast writes and reads anywhere in the world with turnkey multi-master global distribution. Reduce time to insight by running near-real time analytics and AI on the operational data within your Azure Cosmos DB NoSQL database. Azure Synapse Link for Azure Cosmos DB seamlessly integrates with Azure Synapse Analytics without data movement or diminishing the performance of your operational data store.
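
    A hedged sketch with the azure-cosmos Python SDK; the endpoint, key, database, container, and partition key are placeholders, and the account is assumed to use the Core (NoSQL) API.

    ```python
    from azure.cosmos import CosmosClient, PartitionKey

    client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
    db = client.create_database_if_not_exists("appdb")
    orders = db.create_container_if_not_exists(
        id="orders", partition_key=PartitionKey(path="/customerId")
    )

    orders.upsert_item({"id": "o-1", "customerId": "c-9", "total": 42.0})
    print(orders.read_item(item="o-1", partition_key="c-9"))
    ```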
  • 45
    Apache HBase

    The Apache Software Foundation

    Use Apache HBase™ when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware. Automatic failover support between RegionServers. Easy to use Java API for client access. Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data encoding options. Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia; or via JMX.
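
    A small sketch using happybase, a Python client that goes through the Thrift gateway mentioned above; the table and column family names are illustrative and must already exist.

    ```python
    import happybase

    conn = happybase.Connection("localhost", port=9090)  # Thrift gateway
    table = conn.table("web_metrics")

    # Row key plus "family:qualifier" columns, with byte values.
    table.put(b"page#home", {b"cf:views": b"1024"})
    print(table.row(b"page#home"))  # {b'cf:views': b'1024'}

    conn.close()
    ```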
  • 46
    Redis

    Redis Labs

    Redis Labs is the home of Redis, and Redis Enterprise is the best version of Redis. Go beyond cache; try Redis Enterprise free in the cloud using NoSQL and data caching with the world’s fastest in-memory database. Run Redis at scale with enterprise-grade resiliency, massive scalability, ease of management, and operational simplicity. DevOps love Redis in the cloud. Developers get access to enhanced data structures, a variety of modules, and rapid innovation with faster time to market. CIOs love the confidence of working with 99.999% uptime, best-in-class security, and expert support from the creators of Redis. Implement active-active geo-distribution with built-in conflict resolution for simple and complex data types, and perform reads and writes in multiple geo regions against the same data set. Redis Enterprise offers flexible deployment options: cloud, on-premises, and hybrid.
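
    A minimal redis-py sketch that works against open source Redis or Redis Enterprise; the host, port, and key names are illustrative.

    ```python
    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    r.set("session:42", "alice")        # plain key-value / caching usage
    print(r.get("session:42"))          # 'alice'

    # One of the richer data structures: a sorted set used as a leaderboard.
    r.zadd("leaderboard", {"alice": 120, "bob": 95})
    print(r.zrevrange("leaderboard", 0, 1, withscores=True))
    ```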
  • 47
    memcached

    You can think of memcached as short-term memory for your applications. memcached allows you to take memory from parts of your system where you have more than you need and make it accessible to areas where you have less than you need. The classic deployment strategy, with a separate cache on each server, is wasteful both in the sense that the total cache size is a fraction of the actual capacity of your web farm, and in the amount of effort required to keep the cache consistent across all of those nodes. With memcached, all of the servers look into the same virtual pool of memory. Also, as the demand for your application grows to the point where you need more servers, the data that must be regularly accessed generally grows too; a deployment strategy where these two aspects of your system scale together just makes sense.
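
    A small sketch with pymemcache, one common Python memcached client (an assumption); the server address, key, and TTL are illustrative.

    ```python
    from pymemcache.client.base import Client

    mc = Client(("localhost", 11211))

    # Cache an expensive-to-compute value for 60 seconds, then read it back.
    mc.set("user:42:profile", b"rendered-profile-html", expire=60)
    print(mc.get("user:42:profile"))
    ```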
  • 48
    GigaSpaces

    Smart DIH is an operational data hub that powers real-time modern applications. It unleashes the power of customers’ data by transforming data silos into assets, turning organizations into data-driven enterprises. Smart DIH consolidates data from multiple heterogeneous systems into a highly performant data layer. Low-code tools empower data professionals to deliver data microservices in hours, shortening development cycles and ensuring data consistency across all digital channels. XAP Skyline is a cloud-native, in-memory data grid (IMDG) and developer framework designed for mission-critical, cloud-native apps. XAP Skyline delivers maximal throughput, microsecond latency, and scale, while maintaining transactional consistency. It provides extreme performance, significantly reducing data access time, which is crucial for real-time decisioning and transactional applications. XAP Skyline is used in financial services, retail, and other industries where speed and scalability are critical.
  • 49
    XAP

    GigaSpaces

    GigaSpaces XAP, an event-driven, distributed development platform, delivers extreme processing for mission-critical applications. XAP provides high availability, resilience, and boundless scale under any load. XAP Skyline, an in-memory distributed technology for mission-critical applications running in cloud-native environments, unites data and business logic within the Kubernetes cluster. With XAP Skyline, developers can ensure that data-driven applications achieve the highest levels of performance and serve hundreds of thousands of concurrent users while delivering sub-second response times. XAP Skyline delivers low latency, scalability, and resilience. This developer platform is used in financial services, retail, and other industries where speed and scalability are critical.
  • 50
    KeyDB

    KeyDB maintains full compatibility with Redis modules, API, and protocol. Seamlessly drop in KeyDB and maintain full compatibility with your existing clients, scripts, and configurations. Multi-master mode uses a single replicated dataset across many nodes to serve both read and write operations, and nodes can be replicated cross-region to offer sub-millisecond latencies to local clients. Cluster mode allows unlimited read and write scaling by splitting the dataset across shards; this enables unlimited scaling and also supports high availability through replica nodes. KeyDB offers new community-driven commands that enable you to do more with your data. Add your own commands and functionality using JavaScript with the ModJS module. ModJS lets you write functions in JavaScript that are loaded with the module and can then be called by KeyDB directly from your client.