Best Data Management Software for Linux - Page 15

Compare the Top Data Management Software for Linux as of October 2025 - Page 15

  • 1
    ArcServe Live Migration
    Migrate data, applications and workloads to the cloud without downtime. Arcserve Live Migration was designed to eliminate disruption during your cloud transformation. Easily move data, applications and workloads to the cloud or target destination of your choice while keeping your business fully up and running. Remove complexity by orchestrating the cutover to the target destination. Manage the entire cloud migration process from a central console. Arcserve Live Migration simplifies the process of migrating data, applications and workloads. Its highly flexible architecture enables you to move virtually any type of data or workload to cloud, on-premises or remote locations, such as the edge, with support for virtual, cloud and physical systems. Arcserve Live Migration automatically synchronizes files, databases, and applications on Windows and Linux systems with a second physical or virtual environment located on-premises, at a remote location, or in the cloud.
  • 2
    Life.io Engage
    Life.io Engage™ replaces tired customer transactions by creating meaningful and ongoing customer interactions. At each stage of the customer relationship, we educate, engage, reward, and delight your customers. These thoughtful interactions reveal important data and insights that help you move the needle on your most important metrics: conversion, lead generation, placement, wallet share, persistency, and NPS. Available as a desktop or mobile app, the Engage platform can stand on its own or seamlessly integrate with Life.io Grow™ and Life.io Empower™ as well as your existing technology. Successful engagement revolves around offering real value to the user. Built around a framework of holistic well-being, Engage offers classes, real-life stories, and short articles on personal finance, health, fitness, and emotional well-being, and encourages positive change through dynamic, original content, fun programs, and quizzes.
  • 3
    Symas LMDB

    Symas Corporation

    Symas LMDB is an extraordinarily fast, memory-efficient database we developed for the OpenLDAP Project. With memory-mapped files, it has the read performance of a pure in-memory database while retaining the persistence of standard disk-based databases. Bottom line, with only 32KB of object code, LMDB may seem tiny. But it's the right 32KB. Compact and efficient are two sides of a coin; that's part of what makes LMDB so powerful. Symas offers fixed-price commercial support to those using LMDB in their applications. Development occurs in the OpenLDAP Project's git repo in the mdb.master branch. Symas LMDB has been written about, talked about, and utilized in a variety of impressive products and publications.
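
    As a rough illustration of the kind of memory-mapped key/value access LMDB provides, here is a minimal sketch using the third-party py-lmdb Python bindings; the database path, map size, and keys are illustrative, not taken from Symas documentation.

      import lmdb  # py-lmdb bindings, installed separately from LMDB itself

      # Open (or create) a memory-mapped environment; map_size caps the DB size.
      env = lmdb.open("/tmp/example-lmdb", map_size=10 * 1024 * 1024)

      # Writes happen inside a write transaction.
      with env.begin(write=True) as txn:
          txn.put(b"user:1", b"alice")

      # Reads run in a read transaction served directly from the memory map.
      with env.begin() as txn:
          print(txn.get(b"user:1"))  # b'alice'

      env.close()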
  • 4
    Alibaba Cloud TSDB
    Time Series Database (TSDB) supports high-speed data reading and writing. It offers high compression ratios for cost-efficient data storage. The service also supports precision reduction, interpolation, multi-metric aggregate computing, and visualization of query results. TSDB reduces storage costs and improves the efficiency of data writing, query, and analysis, enabling you to handle large amounts of data points and collect data more frequently. The service has been widely applied to systems in different industries, such as IoT monitoring systems, enterprise energy management systems (EMSs), production security monitoring systems, and power supply monitoring systems. TSDB optimizes database architectures and algorithms so it can read or write millions of data points within seconds, and it applies an efficient compression algorithm that reduces the size of each data point to 2 bytes, saving more than 90% in storage costs.
  • 5
    JanusGraph

    JanusGraph is a scalable graph database optimized for storing and querying graphs containing hundreds of billions of vertices and edges distributed across a multi-machine cluster. JanusGraph is a project under The Linux Foundation, and includes participants from Expero, Google, GRAKN.AI, Hortonworks, IBM and Amazon. Elastic and linear scalability for a growing data and user base. Data distribution and replication for performance and fault tolerance. Multi-datacenter high availability and hot backups. All functionality is totally free. No need to buy commercial licenses. JanusGraph is fully open source under the Apache 2 license. JanusGraph is a transactional database that can support thousands of concurrent users executing complex graph traversals in real time. Support for ACID and eventual consistency. In addition to online transactional processing (OLTP), JanusGraph supports global graph analytics (OLAP) with its Apache Spark integration.
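
    Since JanusGraph speaks Gremlin through Apache TinkerPop, a hedged sketch of a traversal with the gremlinpython driver looks like the following; the server address, labels, and property names are illustrative and assume a JanusGraph Server listening on its default WebSocket endpoint.

      from gremlin_python.process.anonymous_traversal import traversal
      from gremlin_python.process.graph_traversal import __
      from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

      conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
      g = traversal().withRemote(conn)

      # Create two vertices and an edge, then traverse from one to the other.
      alice = g.addV("person").property("name", "alice").next()
      bob = g.addV("person").property("name", "bob").next()
      g.V(alice).addE("knows").to(__.V(bob)).iterate()

      print(g.V().has("person", "name", "alice").out("knows").values("name").toList())

      conn.close()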
  • 6
    Nebula Graph
    The graph database built for super large-scale graphs with millisecond latency. We are continuing to collaborate with the community to prepare, popularize and promote the graph database. Nebula Graph only allows authenticated access via role-based access control. Nebula Graph supports multiple storage engine types, and its query language can be extended to support new algorithms. Nebula Graph provides low-latency reads and writes while still maintaining high throughput to simplify the most complex data sets. With a shared-nothing distributed architecture, Nebula Graph offers linear scalability. Nebula Graph's SQL-like query language is easy to understand and powerful enough to meet complex business needs. With horizontal scalability and a snapshot feature, Nebula Graph guarantees high availability even in case of failures. Large Internet companies like JD, Meituan, and Xiaohongshu have deployed Nebula Graph in production environments.
  • 7
    Cayley

    Cayley is an open-source database for Linked Data, inspired by the graph database behind Google's Knowledge Graph (formerly Freebase). Cayley is designed for ease of use and for storing complex data, with a built-in query editor, visualizer and REPL. Cayley supports multiple query languages: Gizmo, a query language inspired by Gremlin; a GraphQL-inspired query language; and MQL, a simplified version for Freebase fans. Cayley is modular (easy to connect to your favorite programming languages and back-end stores), production ready (well tested and used by various companies for their production workloads), and fast (optimized specifically for use in applications). Rough performance testing shows that, on 2014 consumer hardware and an average disk, 134m quads in LevelDB is no problem and a multi-hop intersection query (films starring X and Y) takes ~150ms. Cayley is configured by default to run in memory (that's what the memstore backend means).
  • 8
    Sparksee

    Sparsity Technologies

    Sparksee (formerly known as DEX) makes space and performance compatible, with a small footprint and fast analysis of large networks. It is natively available for .NET, C++, Python, Objective-C and Java, and covers the whole spectrum of operating systems. The graph is represented through bitmap data structures that allow high compression rates. Each of the bitmaps is partitioned into chunks that fit into disk pages to improve I/O locality. Using bitmaps, operations are computed with binary logic instructions that simplify execution on pipelined processors. Full native indexing allows extremely fast access to each of the graph data structures. Node adjacencies are represented by bitmaps to minimize their footprint. The number of times each data page is brought into memory is minimized with advanced I/O policies. Each value in the database is represented only once, avoiding unnecessary replication.
  • 9
    TiMi

    TIMi

    With TIMi, companies can capitalize on their corporate data to develop new ideas and make critical business decisions faster and easier than ever before. The heart of TIMi's integrated platform: TIMi's ultimate real-time AUTO-ML engine, 3D VR segmentation and visualization, and unlimited self-service business intelligence. TIMi is several orders of magnitude faster than any other solution at the two most important analytical tasks: the handling of datasets (data cleaning, feature engineering, creation of KPIs) and predictive modeling. TIMi is an “ethical solution”: no “lock-in” situation, just excellence. We guarantee you can work with complete peace of mind and without unexpected extra costs. Thanks to an original and unique software infrastructure, TIMi is optimized to offer you the greatest flexibility for the exploration phase and the highest reliability during the production phase. TIMi is the ultimate “playground” that allows your analysts to test the craziest ideas!
  • 10
    DataPreparator

    DataPreparator is a free software tool designed to assist with common tasks of data preparation (or data preprocessing) in data analysis and data mining. DataPreparator can assist you with exploring and preparing data in various ways prior to data analysis or data mining. It includes operators for cleaning, discretization, numeration, scaling, attribute selection, missing values, outliers, statistics, visualization, balancing, sampling, row selection, and several other tasks. Data access from text files, relational databases, and Excel workbooks. Handling of large volumes of data (since data sets are not stored in computer memory, with the exception of Excel workbooks and result sets of some databases whose drivers do not support data streaming). Standalone tool, independent of any other tools. User-friendly graphical user interface. Operator chaining to create sequences of preprocessing transformations (operator tree). Creation of a model tree for test/execution data.
  • 11
    Dqlite

    Canonical

    Dqlite is a fast, embedded, persistent SQL database with Raft consensus that is perfect for fault-tolerant IoT and edge devices. Dqlite (“distributed SQLite”) extends SQLite across a cluster of machines, with automatic failover and high availability to keep your application running. It uses C-Raft, an optimised Raft implementation in C, to gain high-performance transactional consensus and fault tolerance while preserving SQLite's outstanding efficiency and tiny footprint. C-Raft is tuned to minimize transaction latency. C-Raft and dqlite are both written in C for maximum cross-platform portability. Published under the LGPLv3 license with a static linking exception for maximum compatibility. Includes a common CLI pattern for database initialization and for voting members joining and departing. Minimal, tunable delay for failover with automatic leader election. Disk-backed database with in-memory options and SQLite transactions.
  • 12
    MySQL Workbench
    MySQL Workbench is a unified visual tool for database architects, developers, and DBAs. MySQL Workbench provides data modeling, SQL development, and comprehensive administration tools for server configuration, user administration, backup, and much more. MySQL Workbench is available on Windows, Linux and Mac OS X. MySQL Workbench enables a DBA, developer, or data architect to visually design, model, generate, and manage databases. It includes everything a data modeler needs for creating complex ER models, forward and reverse engineering, and also delivers key features for performing difficult change management and documentation tasks that normally require much time and effort. MySQL Workbench delivers visual tools for creating, executing, and optimizing SQL queries. The SQL Editor provides color syntax highlighting, auto-complete, reuse of SQL snippets, and execution history of SQL. The Database Connections Panel enables developers to easily manage standard database connections.
  • 13
    jBASE

    The future of your PICK system requires a database platform that continually evolves to meet the needs of today’s developers. jBASE is now officially certified for Docker containers, including built-in support for the MongoDB NoSQL database, and standard APIs for Salesforce, Avalara, and dozens of other platforms. Plus new enhancements to Objects that make life easier for developers. We are continuing to invest in jBASE because we believe in PICK! While others see a decline, we’ve seen 6 years of consecutive growth. We care about your long-term success and haven’t had a maintenance price increase in decades. We play well with others by collaborating and making jBASE integrate with modern technologies like VSCode, Mongo, Docker, and Salesforce. The migration routes from other PICK databases have been vastly simplified, licensing now supports flexible CPU and SaaS-based models, and our in-line operating system approach means our scalability, speed and stability are unmatched.
  • 14
    Sedna

    Sedna is a free native XML database which provides a full range of core database services: persistent storage, ACID transactions, security, indices, and hot backup. Flexible XML processing facilities include a W3C XQuery implementation, tight integration of XQuery with full-text search facilities, and a node-level update language. The Sedna distribution provides a number of easy examples that can be run directly from the command line, along with documentation describing how to run them. The example set is based on the XMark XML benchmark and allows you to investigate the features of Sedna easily; it includes a bulk load of a sample XML document and a number of sample XQuery queries and updates to that document.
  • 15
    LevelDB

    Google

    LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values. Keys and values are arbitrary byte arrays. Data is stored sorted by key. Callers can provide a custom comparison function to override the sort order. Multiple changes can be made in one atomic batch. Users can create a transient snapshot to get a consistent view of data. Forward and backward iteration is supported over the data. Data is automatically compressed using the Snappy compression library. External activity (file system operations etc.) is relayed through a virtual interface so users can customize the operating system interactions. The project's benchmarks use a database with a million entries, each with a 16-byte key and a 100-byte value; values compress to about half their original size. The benchmarks report the performance of reading sequentially in both the forward and reverse direction, and also the performance of a random lookup.
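
    A minimal sketch of these operations using the third-party plyvel Python bindings (the database path and keys are illustrative):

      import plyvel

      db = plyvel.DB("/tmp/example-leveldb", create_if_missing=True)

      # Keys and values are arbitrary byte strings; data is stored sorted by key.
      db.put(b"user:001", b"alice")
      db.put(b"user:002", b"bob")

      # Multiple changes applied as one atomic batch.
      with db.write_batch() as batch:
          batch.put(b"user:003", b"carol")
          batch.delete(b"user:002")

      print(db.get(b"user:001"))  # b'alice'

      # Forward iteration over a key prefix, in sorted order.
      for key, value in db.iterator(prefix=b"user:"):
          print(key, value)

      db.close()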
  • 16
    rsync

    rsync is an open source utility that provides fast incremental file transfer. rsync is freely available under the GNU General Public License. The GPG signing key used to sign the release files is available from the public PGP key-server network. If you have automatic key-fetching enabled, just running a normal "gpg --verify" will grab the key automatically; or feel free to grab the GPG key for Wayne Davison manually. rsync is a file transfer program for Unix systems. rsync uses the "rsync algorithm", which provides a very fast method for bringing remote files into sync. It does this by sending just the differences in the files across the link, without requiring that both sets of files are present at one of the ends of the link beforehand. It optionally preserves symbolic links, hard links, file ownership, permissions, devices and times. Internal pipelining reduces latency for multiple files.
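
    For example, a backup script might call rsync from Python as sketched below; the source path and destination host are illustrative. The -a (archive) flag preserves symbolic links, permissions, times, ownership and devices, and -z compresses data during transfer.

      import subprocess

      # Mirror a local directory to a remote host, deleting files that
      # no longer exist locally.
      subprocess.run(
          ["rsync", "-az", "--delete",
           "/data/reports/",
           "backup@backup.example.com:/srv/backups/reports/"],
          check=True,
      )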
  • 17
    PoINT Data Replicator

    PoINT Software & Systems

    Today, organizations typically store unstructured data in file systems and increasingly in object and cloud storage. Cloud and object storage have numerous advantages, particularly with regard to inactive data. This leads to the requirement to migrate or replicate files (e.g. from legacy NAS) to cloud or object storage. As more and more data is stored in cloud and object storage, an underestimated security risk has emerged: in most cases, data stored in the cloud or in on-premises object storage is not backed up, because it is believed to be secure. This assumption is negligent and risky. The high availability and redundancy offered by cloud services and object storage products do not protect against human error, ransomware, malware, or technology failure. Thus, cloud and object data also need backup or replication, most appropriately to a separate storage technology, at a different location, and in the original format in which the data is stored in the cloud or object storage.
  • 18
    IBM ProtecTIER
    ProtecTIER® is a disk-based data storage system. It uses data deduplication technology to store data to disk arrays. With Feature Code 9022, the ProtecTIER Virtual Tape Library (VTL) service emulates traditional automated tape libraries. With Feature Code 9024, a stand-alone TS7650G can be configured as FSI. Several software applications run on various TS7650G components and configurations. The ProtecTIER Manager workstation is a customer-supplied workstation that runs the ProtecTIER Manager software. The ProtecTIER Manager software provides the management GUI interface to the TS7650G. The ProtecTIER VTL service emulates traditional tape libraries. By emulating tape libraries, ProtecTIER VTL provides the capability to transition to disk backup without having to replace your entire backup environment. Your existing backup application can access virtual robots to move virtual cartridges between virtual slots and drives.
  • 19
    Apache Kudu

    The Apache Software Foundation

    A Kudu cluster stores tables that look just like the tables you're used to from relational (SQL) databases. A table can be as simple as a binary key and value, or as complex as a few hundred different strongly-typed attributes. Just like in SQL, every table has a primary key made up of one or more columns. This might be a single column like a unique user identifier, or a compound key such as a (host, metric, timestamp) tuple for a machine time-series database. Rows can be efficiently read, updated, or deleted by their primary key. Kudu's simple data model makes it a breeze to port legacy applications or build new ones: there is no need to worry about how to encode your data into binary blobs or make sense of a huge database full of hard-to-interpret JSON. Tables are self-describing, so you can use standard tools like SQL engines or Spark to analyze your data. Kudu's APIs are designed to be easy to use.
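
    As a rough sketch (not official documentation), defining a table with a compound primary key and inserting a row with the kudu-python client might look like the following; the master address, table name, and columns are illustrative, and exact client details can vary between Kudu releases.

      from datetime import datetime
      import kudu
      from kudu.client import Partitioning

      client = kudu.connect(host="kudu-master.example.com", port=7051)

      # Compound primary key: (host, metric, timestamp).
      builder = kudu.schema_builder()
      builder.add_column("host").type(kudu.string).nullable(False)
      builder.add_column("metric").type(kudu.string).nullable(False)
      builder.add_column("timestamp").type(kudu.unixtime_micros).nullable(False)
      builder.add_column("value").type(kudu.double)
      builder.set_primary_keys(["host", "metric", "timestamp"])
      schema = builder.build()

      partitioning = Partitioning().add_hash_partitions(column_names=["host"], num_buckets=4)
      client.create_table("metrics", schema, partitioning)

      table = client.table("metrics")
      session = client.new_session()
      session.apply(table.new_insert({"host": "web01", "metric": "cpu",
                                      "timestamp": datetime.utcnow(), "value": 0.42}))
      session.flush()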
  • 20
    Apache Parquet

    The Apache Software Foundation

    We created Parquet to make the advantages of compressed, efficient columnar data representation available to any project in the Hadoop ecosystem. Parquet is built from the ground up with complex nested data structures in mind, and uses the record shredding and assembly algorithm described in the Dremel paper. We believe this approach is superior to simple flattening of nested namespaces. Parquet is built to support very efficient compression and encoding schemes. Multiple projects have demonstrated the performance impact of applying the right compression and encoding scheme to the data. Parquet allows compression schemes to be specified on a per-column level, and is future-proofed to allow adding more encodings as they are invented and implemented. Parquet is built to be used by anyone. The Hadoop ecosystem is rich with data processing frameworks, and we are not interested in playing favorites.
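
    A brief sketch of per-column compression using the pyarrow library (column names, codecs, and the file path are illustrative):

      import pyarrow as pa
      import pyarrow.parquet as pq

      table = pa.table({
          "user_id": [1, 2, 3],
          "country": ["DE", "US", "US"],
          "spend": [10.5, 3.2, 7.9],
      })

      # Compression can be specified per column.
      pq.write_table(
          table,
          "users.parquet",
          compression={"user_id": "zstd", "country": "snappy", "spend": "snappy"},
      )

      # The columnar layout lets readers fetch only the columns they need.
      print(pq.read_table("users.parquet", columns=["country", "spend"]))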
  • 21
    Hypertable

    Hypertable delivers scalable database capacity at maximum performance to speed up your big data application and reduce your hardware footprint. Hypertable delivers maximum efficiency and superior performance over the competition which translates into major cost savings. A proven scalable design that powers hundreds of Google services. All the benefits of open source with a strong and thriving community. C++ implementation for optimum performance. 24/7/365 support for your business-critical big data application. Unparalleled access to Hypertable brain power by the employer of all core Hypertable developers. Hypertable was designed for the express purpose of solving the scalability problem, a problem that is not handled well by a traditional RDBMS. Hypertable is based on a design developed by Google to meet their scalability requirements and solves the scale problem better than any of the other NoSQL solutions out there.
  • 22
    InfiniDB

    Database of Databases

    InfiniDB is a column-store DBMS optimized for OLAP workloads. It has a distributed architecture to support massively parallel processing (MPP). It uses MySQL as its front end, so users familiar with MySQL can quickly migrate to InfiniDB; for the same reason, users can connect to InfiniDB using any MySQL connector. InfiniDB applies MVCC for concurrency control. It uses the term System Change Number (SCN) to indicate a version of the system. In its Block Resolution Manager (BRM), it utilizes three structures, the version buffer, the version substitution structure, and the version buffer block manager, to manage multiple versions. InfiniDB applies deadlock detection to resolve conflicts. InfiniDB uses MySQL as its front end and supports all MySQL syntax, including foreign keys. InfiniDB is a columnar DBMS. For each column, InfiniDB applies range partitioning and stores the minimum and maximum value of each partition in a small structure called the extent map.
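
    Because of the MySQL front end, any standard MySQL client library can query InfiniDB; a minimal sketch with PyMySQL (host, credentials, schema and query are illustrative):

      import pymysql

      conn = pymysql.connect(host="infinidb.example.com", user="analyst",
                             password="secret", database="warehouse")
      try:
          with conn.cursor() as cur:
              # A typical OLAP-style aggregation over the column store.
              cur.execute("SELECT region, SUM(revenue) FROM sales GROUP BY region")
              for region, total in cur.fetchall():
                  print(region, total)
      finally:
          conn.close()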
  • 23
    qikkDB

    QikkDB is a GPU-accelerated columnar database, delivering stellar performance for complex polygon operations and big data analytics. When you count your data in billions and want to see real-time results, you need qikkDB. We support the Windows and Linux operating systems. We use Google Test as the testing framework; there are hundreds of unit tests and tens of integration tests in the project. For development on Windows, Microsoft Visual Studio 2019 is recommended, and the dependencies are CUDA 10.2 at minimum, CMake 3.15 or newer, vcpkg, and Boost. For development on Linux, the dependencies are CUDA 10.2 at minimum, CMake 3.15 or newer, and Boost. This project is licensed under the Apache License, Version 2.0. You can use an installation script or a Dockerfile to install qikkDB.
  • 24
    RRDtool

    RRDtool is the open source industry-standard, high-performance data logging and graphing system for time series data. RRDtool can be easily integrated into shell scripts and Perl, Python, Ruby, Lua or Tcl applications.
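
    A minimal sketch of that integration using the rrdtool Python bindings (the file name, data source definition, and values are illustrative):

      import rrdtool

      # One GAUGE data source sampled every 300 s, keeping a day of 5-minute averages.
      rrdtool.create(
          "temperature.rrd",
          "--step", "300",
          "DS:temp:GAUGE:600:-40:80",
          "RRA:AVERAGE:0.5:1:288",
      )

      rrdtool.update("temperature.rrd", "N:21.5")  # N = timestamp "now"

      rrdtool.graph(
          "temperature.png",
          "--start", "-1d",
          "DEF:t=temperature.rrd:temp:AVERAGE",
          "LINE2:t#0000FF:temperature",
      )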
  • 25
    Amadea

    ISoft

    Amadea technology relies on the fastest real-time calculation and modeling engine on the market. Speed up the creation, deployment and automation of your analytics projects within the same integrated environment. Data quality is the key to analytical projects: thanks to the ISoft real-time calculation engine, the fastest on the market, Amadea allows companies to prepare and use massive and/or complex data in real time, regardless of the volume. ISoft started from a simple observation: successful analytical projects must involve the business users at every stage. Built on a no-code interface accessible to all types of users, Amadea allows everyone involved in analytical projects to take part. Because Amadea has the fastest real-time calculation engine on the market, it lets you specify, prototype and build your data applications simultaneously. Amadea incorporates the fastest real-time data analysis engine on the market: 10 million rows per second per core for standard calculations.
  • 26
    IBM InfoSphere Optim Data Privacy
    IBM InfoSphere® Optim™ Data Privacy provides extensive capabilities to effectively mask sensitive data across non-production environments, such as development, testing, QA or training. To protect confidential data, this single offering provides a variety of transformation techniques that substitute sensitive information with realistic, fully functional masked data. Examples of masking techniques include substrings, arithmetic expressions, random or sequential number generation, date aging, and concatenation. The contextually accurate masking capabilities help masked data retain a similar format to the original information. Apply a range of masking techniques on demand to transform personally identifying information and confidential corporate data in applications, databases and reports. Data masking features help you prevent misuse of information by masking, obfuscating, and privatizing personal information that is disseminated across non-production environments.
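
    As a purely conceptual illustration (not Optim's implementation), masking techniques such as substrings, random number generation, date aging, and concatenation can be sketched in a few lines of Python; the account number and dates below are invented examples.

      import random
      from datetime import date, timedelta

      def mask_account(account: str) -> str:
          # Substring + concatenation: keep the last 4 digits, randomize the rest,
          # so the masked value stays realistic and fully functional in format.
          prefix = "".join(str(random.randint(0, 9)) for _ in account[:-4])
          return prefix + account[-4:]

      def age_date(d: date, max_days: int = 180) -> date:
          # Date aging: shift the date by a bounded random offset.
          return d + timedelta(days=random.randint(-max_days, max_days))

      print(mask_account("4556123498761234"))  # e.g. 8302719455121234
      print(age_date(date(1984, 7, 14)))       # a nearby but different date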
  • 27
    Redpanda

    Redpanda Data

    Breakthrough data streaming capabilities that let you deliver customer experiences never before possible. Compatible with the Kafka API and ecosystem. Predictable low latencies with zero data loss. Up to 10x faster than Kafka. Enterprise-grade support and hotfixes. Automated backups to S3/GCS. 100% freedom from routine Kafka operations. Support for AWS and GCP. Redpanda was designed from the ground up to be easily installed to get streaming up and running quickly. After you see its power, put Redpanda to the test in production and use the more advanced Redpanda features. We manage provisioning, monitoring, and upgrades, without any access to your cloud credentials; sensitive data never leaves your environment. Provisioned, operated, and maintained for you, with configurable instance types, so you can expand the cluster as your needs grow.
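
    Because Redpanda is Kafka API compatible, standard Kafka clients work unchanged; a minimal sketch with the kafka-python library (broker address and topic are illustrative):

      from kafka import KafkaProducer, KafkaConsumer

      producer = KafkaProducer(bootstrap_servers="localhost:9092")
      producer.send("clickstream", b'{"user": 1, "page": "/home"}')
      producer.flush()

      consumer = KafkaConsumer(
          "clickstream",
          bootstrap_servers="localhost:9092",
          auto_offset_reset="earliest",
          consumer_timeout_ms=5000,
      )
      for record in consumer:
          print(record.value)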
  • 28
    Axibase Enterprise Reporter (AER)
    Axibase Enterprise Reporter (AER) is a unified IT reporting solution for performance monitoring and capacity planning based on linked data and self-service concepts. The linked data architecture implemented in AER allows it to deliver reporting capabilities on top of underlying monitoring systems simultaneously, without copying the data. AER is pre-integrated with IBM Tivoli, Microsoft System Center Operations Manager, HP Openview and Performance Manager, BMC ProactiveNet, VMWare vCenter, Oracle Enterprise Manager, SAP HANA, NetApp OnCommand, WhatsUp, Dynatrace, Entuity and other solutions. In addition, AER provides the universal adapter for integration with any monitoring system or a custom data source that supports JDBC connectivity. Leveraging AER as a single point of access to IT infrastructure metrics, systems administrators and application support teams are able to execute and automate performance monitoring and capacity planning tasks with minimal effort.
  • 29
    solidDB

    UNICOM Systems

    solidDB is known worldwide for delivering data with extreme speed. There are millions of deployments of solidDB in telecommunications networks, enterprise applications, and embedded software & systems. Market leaders such as Cisco, HP, Alcatel, Nokia and Siemens rely on it for their mission-critical applications. By keeping critical data in memory, rather than on disk, solidDB can perform significantly faster than conventional databases. It helps applications achieve throughput of hundreds of thousands to millions of transactions per second with response times measured in microseconds. Beyond game-changing performance, solidDB also provides built-in data availability features that help sustain uptime, prevent data loss and accelerate recovery. Additionally, solidDB supports administrators with the flexibility to tailor the software to precise application needs and features designed to simplify deployment and administration, helping drive down the total cost of ownership (TCO).
  • 30
    eMite

    eMite is the operational intelligence platform that combines advanced analytics, data correlation, KPI management and threshold alerting into a single, out-of-the-box browser-based solution that provides actionable insights from both real-time and historical data. eMite provides a very flexible and powerful data onboarding ETL (extract, transform, load) framework using several technologies to extract data, including APIs, XML, JSON, SQL, and others. eMite has developed over 80 pre-built adaptors to automatically ingest data from common third-party solutions from vendors like Salesforce, Microsoft, Oracle, Atlassian, Snare, and Genesys. eMite also provides adaptors to onboard data from more generic data sources like a database or an Excel file. eMite includes a KPI (Key Performance Indicator) management system, allowing users to build custom KPIs that are relevant to their operations.