Best Data Management Software - Page 98

Compare the Top Data Management Software as of February 2026 - Page 98

  • 1
    KX Insights
    KX Insights is a cloud-native platform for critical real-time performance and continuous actionable intelligence. Using complex event processing, high-speed analytics and machine learning interfaces, it enables fast decision-making and automated responses to events in fractions of a second. It’s not just storage and compute elasticity that have moved to the cloud. It’s everything: data, tools, development, security, connectivity, operations, maintenance. KX can help you leverage that power to make smarter, more insightful decisions by integrating real-time analytics into your business operations. KX Insights leverages industry standards to ensure openness and interoperability with other technologies in order to deliver insights faster and more cost-effectively. It operates a microservices-based architecture for capturing, storing and processing high-volume, high-velocity data using cloud standards, services, and protocols.
  • 2
    KX Streaming Analytics
    KX Streaming Analytics provides the ability to ingest, store, process, and analyze historic and time series data to make analytics, insights, and visualizations instantly available. To help ensure your applications and users are productive quickly, the platform provides the full lifecycle of data services, including query processing, tiering, migration, archiving, data protection, and scaling. Our advanced analytics and visualization tools, used widely across finance and industry, enable you to define and perform queries, calculations, aggregations, machine learning and AI on any streaming and historical data. Deployable across multiple hardware environments, data can come from real-time business events and high-volume sources including sensors, clickstreams, radio-frequency identification, GPS systems, social networking sites, and mobile devices.
  • 3
    Versio.io
    Versio.io is enterprise software for managing the detection and post-processing of changes across an enterprise. Our unique and innovative approaches have enabled us to build a completely new kind of enterprise product. Relationships can exist between assets and configurations and are an important extension of this information, which the original data sources only partially provide. In Versio.io, the topology service can automatically recognise and map such relationships, so relationships and dependencies between instances from any data source can be mapped. All business-relevant assets and configuration items from all levels of an organisation can be captured, historicised, topologised, and stored in a central repository.
  • 4
    OneTick
    OneMarketData
    Its performance, superior features, and unmatched functionality have led OneTick to be embraced by leading banks, brokerages, data vendors, exchanges, hedge funds, market makers, and mutual funds. OneTick is the premier enterprise-wide solution for tick data capture, streaming analytics, data management, and research. Its proprietary time series database is a unified, multi-asset-class platform that includes a fully integrated streaming analytics engine and built-in business logic, eliminating the need for multiple disparate systems and keeping total cost of ownership low.
  • 5
    OpenTSDB
    OpenTSDB consists of a Time Series Daemon (TSD) as well as a set of command line utilities. Interaction with OpenTSDB is primarily achieved by running one or more of the independent TSDs. There is no master and no shared state, so you can run as many TSDs as required to handle any load you throw at them. Each TSD uses the open source database HBase or the hosted Google Bigtable service to store and retrieve time-series data. The data schema is highly optimized for fast aggregations of similar time series and minimal storage space. Users of the TSD never need to access the underlying store directly. You can communicate with the TSD via a simple telnet-style protocol, an HTTP API, or a simple built-in GUI, as sketched below. The first step in using OpenTSDB is to send time series data to the TSDs; a number of tools exist to pull data from various sources into OpenTSDB.
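    A minimal sketch of pushing one data point through the HTTP API (POST /api/put), assuming a TSD listening on its default port 4242; the metric name, value, and tags are illustrative:

      import time
      import requests

      # One data point for the TSD's HTTP API; metric name, value, and tags are illustrative.
      point = {
          "metric": "sys.cpu.user",
          "timestamp": int(time.time()),
          "value": 42.5,
          "tags": {"host": "webserver01"},
      }

      # /api/put accepts a single object or a list of them.
      resp = requests.post("http://localhost:4242/api/put", json=[point], timeout=5)
      resp.raise_for_status()  # the TSD answers 204 No Content on success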
  • 6
    Machbase
    Machbase is a time-series database that stores and analyzes large volumes of sensor data from various facilities in real time, a DBMS solution built to process and analyze big data at high speed. Experience the speed of Machbase: it enables real-time processing, storage, and analysis of sensor data. It offers high-speed storage and querying of sensor data by embedding the DBMS in edge devices, strong data storage and extraction performance from a DBMS running on a single server, multi-node cluster configurations with the advantages of availability and scalability, and a total edge computing management solution covering devices, connectivity, and data.
  • 7
    Blueflood
    Blueflood is a high throughput, low latency, multi-tenant distributed metric processing system behind Rackspace Metrics, used in production by the Rackspace Monitoring and Rackspace public cloud teams to store metrics generated by their systems. Other large-scale deployments of Blueflood are listed on the community wiki. Data from Blueflood can be used to construct dashboards, generate reports or graphs, or for any other use involving time-series data. It focuses on near-real-time data, queryable mere milliseconds after ingestion. You send metrics to the ingestion service and query them from the query service; in the background, rollups are batch-processed offline so that queries over large time periods return quickly. A sketch of both services follows.
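    A rough sketch of that ingest-then-query flow in Python, assuming a local Blueflood deployment with the ingestion service on port 19000 and the query service on port 20000; the tenant id, metric name, and field names follow Blueflood's documented JSON format but should be verified against the current docs:

      import time
      import requests

      TENANT = "100"  # illustrative tenant id
      INGEST_URL = f"http://localhost:19000/v2.0/{TENANT}/ingest"
      QUERY_URL = f"http://localhost:20000/v2.0/{TENANT}/views/example.metric.one"

      now_ms = int(time.time() * 1000)

      # Send one metric to the ingestion service.
      payload = [{
          "metricName": "example.metric.one",
          "metricValue": 42,
          "collectionTime": now_ms,     # milliseconds since the epoch
          "ttlInSeconds": 172800,
      }]
      requests.post(INGEST_URL, json=payload, timeout=5).raise_for_status()

      # Query the raw (full-resolution) points for the last hour from the query service.
      params = {"from": now_ms - 3_600_000, "to": now_ms + 1, "resolution": "FULL"}
      print(requests.get(QUERY_URL, params=params, timeout=5).json())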
  • 8
    RRDtool
    RRDtool is the open source industry-standard, high-performance data logging and graphing system for time series data. RRDtool can be easily integrated into shell scripts and Perl, Python, Ruby, Lua, or Tcl applications; a small example follows.
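    A minimal sketch of driving the rrdtool command line from Python via subprocess; the data source name, step, and retention settings are illustrative:

      import subprocess
      import time

      # Create a round-robin database with one GAUGE data source sampled every 60 s
      # and roughly ten hours of raw averages (600 rows); all names are illustrative.
      subprocess.run([
          "rrdtool", "create", "temperature.rrd",
          "--step", "60",
          "DS:temp:GAUGE:120:U:U",
          "RRA:AVERAGE:0.5:1:600",
      ], check=True)

      # Store one sample (timestamp:value) ...
      subprocess.run(
          ["rrdtool", "update", "temperature.rrd", f"{int(time.time())}:21.5"],
          check=True,
      )

      # ... and graph the last hour.
      subprocess.run([
          "rrdtool", "graph", "temperature.png", "--start", "-1h",
          "DEF:t=temperature.rrd:temp:AVERAGE",
          "LINE2:t#0000FF:temperature",
      ], check=True)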
  • 9
    Hawkular Metrics
    Hawkular Metrics is a scalable, asynchronous, multi-tenant, long-term metrics storage engine that uses Cassandra as the data store and REST as the primary interface (see the sketch below). Hawkular Metrics is all about scalability: you can run a single instance backed by a single Cassandra node, or scale out Cassandra to multiple nodes to handle increasing loads. The Hawkular Metrics server employs a stateless architecture, which makes it easy to scale out as well. Deployment options range from the simplest setup, a single Cassandra node paired with a single Hawkular Metrics node, up to clusters that run more Hawkular Metrics nodes than Cassandra nodes.
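    A minimal sketch of writing one gauge reading over the REST interface; the host, tenant, metric id, and the exact endpoint path (which has changed between releases) are assumptions to check against the docs of the deployed version:

      import time
      import requests

      BASE = "http://localhost:8080/hawkular/metrics"   # illustrative host and context path
      HEADERS = {"Hawkular-Tenant": "my-tenant"}        # metrics are scoped per tenant

      # One gauge data point, timestamped in milliseconds.
      payload = [{
          "id": "cpu.load",
          "data": [{"timestamp": int(time.time() * 1000), "value": 0.42}],
      }]

      resp = requests.post(f"{BASE}/gauges/raw", json=payload, headers=HEADERS, timeout=5)
      resp.raise_for_status()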
  • 10
    Heroic
    Heroic is an open-source monitoring system originally built at Spotify to address problems faced with large-scale gathering and near real-time analysis of metrics. Heroic uses a small set of components, each responsible for a very specific thing; consumers, for example, are the component responsible for consuming metrics. It offers indefinite retention, as long as you have the hardware to spend on it. When building Heroic it was quickly realized that navigating hundreds of millions of time series without context is hard. Heroic supports federating requests, which allows multiple independent Heroic clusters to serve clients through a single global interface; this can be used to reduce geographical traffic by allowing a cluster to operate completely isolated within its zone.
  • 11
    Proficy Historian
    Proficy Historian is a best-in-class historian software solution that collects industrial time-series and alarms and events (A&E) data at very high speed, stores it efficiently and securely, distributes it, and allows for fast retrieval and analysis, driving greater business value. With decades of experience and thousands of successful customer installations around the world, Proficy Historian changes the way companies perform and compete by making data available for asset and process performance analysis. The most recent Proficy Historian enhances usability, configurability, and maintainability with significant architectural improvements. Take advantage of the solution's simple yet powerful features to unlock new value from your equipment, process data, and business models, including remote collector management with an improved user experience and horizontal scalability that enables enterprise-wide data visibility.
  • 12
    Circonus IRONdb
    Circonus IRONdb makes it easy to handle and store unlimited volumes of telemetry data, handling billions of metric streams. It enables users to identify areas of opportunity and challenge in real time, providing forensic, predictive, and automated analytics capabilities that no other product can match. Rely on machine learning to automatically set a “new normal” as your data and operations dynamically change. Circonus IRONdb integrates with Grafana, which has native support for our analytics query language, and is also compatible with other visualization apps such as Graphite-web. Circonus IRONdb keeps your data safe by storing multiple copies in a cluster of IRONdb nodes. System administrators typically spend significant time maintaining clustering and keeping it working; Circonus IRONdb allows operators to set and forget their cluster and stop wasting resources manually managing their time series data store.
  • 13
    KairosDB
    Data can be pushed into KairosDB via multiple protocols, including Telnet, REST, and Graphite, and other mechanisms such as plugins can also be used. KairosDB stores time series in Cassandra, the popular and performant NoSQL datastore; the schema consists of 3 column families. The REST API provides operations to list existing metric names, list tag names and values, store metric data points, and query for metric data points. With a default install, KairosDB serves up a query page for querying data within the data store; it is designed primarily for development purposes. Aggregators perform an operation on data points and downsample them; standard functions like min, max, sum, count, mean, and more are available. Import and export are available on the KairosDB server from the command line, and internal metrics written to the data store can be used to monitor the server's performance. A short REST example follows.
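    A minimal sketch of the REST API, one write and one aggregated query, assuming KairosDB on its default port 8080; the metric name, tag, and sampling interval are illustrative:

      import time
      import requests

      BASE = "http://localhost:8080"   # default KairosDB port; adjust for your install

      # Push one data point over the REST API (metric name and tag are illustrative).
      datapoints = [{
          "name": "sensor.temperature",
          "datapoints": [[int(time.time() * 1000), 21.5]],   # [timestamp_ms, value]
          "tags": {"site": "lab"},
      }]
      requests.post(f"{BASE}/api/v1/datapoints", json=datapoints, timeout=5).raise_for_status()

      # Query the last hour, downsampled to 1-minute averages with the 'avg' aggregator.
      query = {
          "start_relative": {"value": "1", "unit": "hours"},
          "metrics": [{
              "name": "sensor.temperature",
              "aggregators": [{"name": "avg", "sampling": {"value": "1", "unit": "minutes"}}],
          }],
      }
      print(requests.post(f"{BASE}/api/v1/datapoints/query", json=query, timeout=5).json())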
  • 14
    QuestDB
    QuestDB is a relational column-oriented database designed for time series and event data. It uses SQL with extensions for time series to assist with real-time analytics. The documentation covers core concepts of QuestDB, including setup steps, usage guides, and reference material for syntax, APIs, and configuration, and describes the architecture of QuestDB, how it stores and queries data, and the features and capabilities unique to the system. The designated timestamp is a core feature that enables time-oriented language capabilities and partitioning. The symbol type makes storing and retrieving repetitive strings efficient. The storage model describes how QuestDB stores records and partitions within tables. Indexes can be used for faster read access on specific columns, and partitions can give significant performance benefits on calculations and queries. SQL extensions allow performant time series analysis with a concise syntax, as sketched below.
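    A minimal sketch of one such extension, SAMPLE BY, run over QuestDB's HTTP query endpoint (default port 9000); the trades table and its columns are illustrative:

      import requests

      # Aggregate an illustrative 'trades' table into hourly averages per symbol.
      # SAMPLE BY groups rows into time buckets using the table's designated timestamp.
      sql = """
      SELECT timestamp, symbol, avg(price) AS avg_price
      FROM trades
      WHERE timestamp > dateadd('d', -1, now())
      SAMPLE BY 1h
      """

      resp = requests.get("http://localhost:9000/exec", params={"query": sql}, timeout=10)
      resp.raise_for_status()

      result = resp.json()
      for row in result["dataset"]:       # rows come back as arrays in column order
          print(row)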
  • 15
    Minitab Connect
    The best insights are based on the most complete, most accurate, and most timely data. Minitab Connect empowers data users from across the enterprise with self-serve tools to transform diverse data into a governed network of data pipelines, feed analytics initiatives and foster organization-wide collaboration. Users can effortlessly blend and explore data from databases, cloud and on-premise apps, unstructured data, spreadsheets, and more. Flexible, automated workflows accelerate every step of the data integration process, while powerful data preparation and visualization tools help yield transformative insights. Flexible, intuitive data integration tools let users connect and blend data from a variety of internal and external sources, like data warehouses, data lakes, IoT devices, SaaS applications, cloud storage, spreadsheets, and email.
  • 16
    DataSentics
    Making data science and machine learning have a real impact on organizations. We are an AI product studio, a group of 100 experienced data scientists and data engineers with experience from both the agile world of digital start-ups and major international corporations. We don't end with nice slides and dashboards; the result that counts is an automated data solution in production, integrated into a real process. We employ not report clickers but data scientists and data engineers, with a strong focus on productionalizing data science solutions in the cloud to high standards of CI and automation. Our aim is to build the greatest concentration of the smartest and most creative data scientists and engineers by being the most exciting and fulfilling place for them to work in Central Europe, and to give them the freedom to use our critical mass of expertise to find and iterate on the most promising data-driven opportunities, both for our clients and our own products.
  • 17
    AXIS Suite
    Abaco Systems
    Software tools and libraries that help you make your application faster, stronger, and better. The suite is optimized for high performance and includes a graphical user interface for use within your application, a graphical user interface tool to facilitate application development, GPU-focused image processing, general processing, and display, and an application programming interface for your application. Add visualization and controls to your embedded application in minutes, even with no GUI experience. It is the most valuable tool you'll ever use to demystify application performance and determinism, with simplified inter-thread communication, point-to-point data movement and message passing, and an associated GUI to build the application framework and monitor performance. Control how your application maps to hardware and visualize how hardware resources are utilized in real time.
  • 18
    Acodis
    Intelligent document processing automates the processing of data within documents, contextualizing the document, understanding the information, extracting it, and sending it to the right place. With Acodis, you can do all of this in just a few seconds. The world is full of unstructured data hidden in documents, and it will be for a long time to come. That's why we built Acodis, so that you can extract data from any document, in any language. Get structured data from any document with machine learning, in seconds. Build and combine document processing workflows with a few clicks, no coding required. Once you capture and automate your document's data, integrate the process into your existing systems. Acodis offers an easy-to-use interface, which lets your team automate document-related processes and make faster decisions based on machine learning. Use the REST client in the programming language of your choice to integrate it with your existing business tools.
  • 19
    Retina
    Predict future value from day one. Retina is the customer intelligence solution that provides accurate customer lifetime value metrics early in the customer journey. Optimize marketing budgets in real-time, drive more predictable repeat revenue, and elevate brand equity with the most accurate CLV metrics. Align customer acquisition around CLV with improved targeting, ad relevance, conversion rates & customer loyalty. Build lookalike audiences based on your best customers. Focus on customer behavior instead of demographics. Pinpoint attributes that make leads more likely to convert. Uncover product features that drive valuable customer behavior. Create customer journeys that positively impact lifetime value. Implement changes to boost the value of your customer base. Using a sample of your customer data, Retina delivers individual customer lifetime value calculations to qualified customers before you buy.
  • 20
    SylLab
    SylLab Systems
    SylLab Systems provides embedded compliance for enterprise data security. Privacy compliance and cybersecurity are expensive and difficult to implement, and many organizations get it wrong. Architecture changes, lawyers, and consultants are a significant expenditure when facing privacy regulations (HIPAA, GDPR, PDPA, CCPA). Request a demo to learn more. Privacy regulations are expanding beyond the current framework of IT infrastructure, and adapting to such change is costly, time-consuming, and requires legal and development expertise. There is a better, more structured approach to data governance that responds and adapts to your complex IT environment, whether it's on-cloud or on-premise. Take control of your compliance workflow and shape it according to business logic. Learn more about the solution trusted by large financial institutions across the globe.
  • 21
    Evolution AI
    We provide a sample of extracted data so you can quickly make an informed decision, and get your project off the ground in less than 24 hours. Costly human intervention is kept to a minimum. Our AI algorithms extract data from documents with 99.5%+ accuracy, guaranteed by SLA. Our clients value the accuracy provided by human oversight combined with the cost-effectiveness of artificial intelligence. Evolution AI leads a research consortium funded by the UK government, including university, government, and corporate members, which has allowed us to develop several breakthrough algorithms. We have trained our models on one of the largest data sets of labeled documents ever assembled, containing over 25 million documents. Evolution AI allows data extraction from complex documents without defining any rules or writing code; using our simple point-and-click interface, we can quickly identify any data point you wish to extract from a document.
  • 22
    Smart Engines
    Green AI-powered scanner SDK for ID cards, passports, driver's licenses, residence permits, visas, and other IDs, more than 1,834 types in total. It provides an eco-friendly, fast, and precise scanning SDK for smartphone, web, desktop, or server that works fully autonomously. It extracts data from photos and scans, as well as from the video stream of a smartphone or web camera, and is robust to capture conditions. No data transfer: ID scanning is performed on-device and on-premise. It automatically scans machine-readable zones (MRZ), all types of credit cards (embossed, indent-printed, and flat-printed), and barcodes (PDF417, QR code, AZTEC, DataMatrix, and others) on the fly with a smartphone's camera. It provides high-quality MRZ, barcode, and credit card scanning in mobile applications on-device regardless of lighting conditions, and supports card scanning of 21 payment systems.
  • 23
    Sybrin AI
    Sybrin AI is a fully integrated technology stack powered by computer vision, machine learning, and data science designed to intelligently automate business processes. A comprehensive framework for extracting and understanding data from non-traditional data sources, documents, images, and video. Seamless, real-time ID capture and extraction of any ID document across the globe. Sybrin intelligent document capture is designed to enable the integration of image capture, clean up, recognition, and data extraction into your application. Verify that the person behind a remote interaction is a real person and is physically present through active or passive liveness detection using image processing techniques and neural networks to prevent spoof attacks. Sybrin Identity Verification validates the identity of the person who is actioning the transaction by matching the person’s identity document details against a live selfie and third-party database.
  • 24
    IRISXtract
    Companies receive tons of documents and information on a daily basis, both paper and electronic. Processing these documents is time-consuming and resource-intensive. IRISXtract™ automatically classifies documents and extracts essential data, transferring the relevant information to your business process applications faster and more efficiently than any manual processing. Our software ensures paperless processing of the best quality, in every language, for every document and every process. Classification relies on an innovative AI-based engine that uses statistical operators, based on certain features and characteristic values, to analyze documents. Data extraction is based on a free-form, full-text approach that requires no templates, manual configuration, or complicated training.
  • 25
    Zuva DocAI
    Everything you need to capture critical data across your organization. Access context-aware machine learning models to extract relevant information from your documents. Use our specialized classifiers to identify business document types. Distinguish across employee contracts, leases, supply agreements, and more. Quickly identify the language your document is written in. Know if your documents are in English, Portuguese, German and other languages. Create and retrieve OCR text and images from over 20 file types including email, word documents, and PDFs. Use any AI model from our library of 1000+ built-in clause and provision models, trained by our in-house team of experts to decrease initial uplift. Zuva DocAI is powered by Zuva’s patented ML technology trusted by top law firms and enterprises to identify, extract, and analyze content in documents with unparalleled accuracy. Build your own AI applications that meet your unique needs.
  • 26
    Azure HDInsight
    Run popular open-source frameworks—including Apache Hadoop, Spark, Hive, Kafka, and more—using Azure HDInsight, a customizable, enterprise-grade service for open-source analytics. Effortlessly process massive amounts of data and get all the benefits of the broad open-source project ecosystem with the global scale of Azure. Easily migrate your big data workloads and processing to the cloud. Open-source projects and clusters are easy to spin up quickly without the need to install hardware or manage infrastructure. Big data clusters reduce costs through autoscaling and pricing tiers that allow you to pay for only what you use. Enterprise-grade security and industry-leading compliance, with more than 30 certifications, help protect your data. Optimized components for open-source technologies such as Hadoop and Spark keep you up to date.
  • 27
    Azure Data Lake Storage
    Eliminate data silos with a single storage platform. Optimize costs with tiered storage and policy management. Authenticate access using Azure Active Directory (Azure AD) and role-based access control (RBAC), and help protect data with security features like encryption at rest and advanced threat protection. Highly secure, with flexible mechanisms for protection across data access, encryption, and network-level control. A single storage platform for ingestion, processing, and visualization that supports the most common analytics frameworks. Cost optimization via independent scaling of storage and compute, lifecycle policy management, and object-level tiering. Meet any capacity requirement and manage data with ease with the Azure global infrastructure. Run large-scale analytics queries at consistently high performance; a short access sketch follows.
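    A minimal sketch of Azure AD authentication and a file upload with the Python SDK (the azure-identity and azure-storage-file-datalake packages); the account name, file system, and paths are illustrative:

      from azure.identity import DefaultAzureCredential
      from azure.storage.filedatalake import DataLakeServiceClient

      # Authenticate with Azure AD (environment variables, managed identity, CLI login, etc.).
      credential = DefaultAzureCredential()
      service = DataLakeServiceClient(
          account_url="https://mydatalake.dfs.core.windows.net",  # illustrative account
          credential=credential,
      )

      # Write a small JSON record into an illustrative 'raw' file system (container).
      file_system = service.get_file_system_client("raw")
      file_client = file_system.get_file_client("events/2024/01/records.json")
      file_client.upload_data(b'{"event": "example"}\n', overwrite=True)

      # Read it back.
      print(file_client.download_file().readall())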
  • 28
    Azure Databricks
    Unlock insights from all your data and build artificial intelligence (AI) solutions with Azure Databricks: set up your Apache Spark™ environment in minutes, autoscale, and collaborate on shared projects in an interactive workspace. Azure Databricks supports Python, Scala, R, Java, and SQL, as well as data science frameworks and libraries including TensorFlow, PyTorch, and scikit-learn. Azure Databricks provides the latest versions of Apache Spark and allows you to seamlessly integrate with open source libraries. Spin up clusters and build quickly in a fully managed Apache Spark environment with the global scale and availability of Azure. Clusters are set up, configured, and fine-tuned to ensure reliability and performance without the need for monitoring. Take advantage of autoscaling and auto-termination to improve total cost of ownership (TCO). A small PySpark sketch follows.
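    A small PySpark sketch of the kind of code that runs in an Azure Databricks notebook, where the SparkSession is already provided as spark; creating it explicitly keeps the sketch self-contained, and the file path and column names are illustrative:

      from pyspark.sql import SparkSession, functions as F

      # In a Databricks notebook `spark` already exists; this line is only needed elsewhere.
      spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

      # The CSV path and the 'timestamp' column are illustrative assumptions.
      events = spark.read.option("header", "true").csv("/mnt/raw/events.csv")

      daily = (
          events
          .withColumn("event_date", F.to_date("timestamp"))
          .groupBy("event_date")
          .count()
          .orderBy("event_date")
      )

      daily.show()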
  • 29
    Moonoia docBrain
    The docBrain platform brings together machine learning, data science, solution engineering, and DevOps for document-centric production use. Deep learning technology allows you to train AI models from the ground up and create unique solutions that address your specific document challenges. Use docBrain's pre-trained models to access years' worth of learning and ensure a minimum return on investment prior to any training. Whether you train the AI yourself or use the models off the shelf, the solutions you deploy with docBrain will easily integrate with your business systems. docBrain was created in-house to solve Moonoia's own document processing challenges, caused mainly by error-prone and costly manual data validation that was slowing down end-to-end processes and making automation impossible. Market-available OCR technologies were unable to achieve the accuracy levels required for straight-through processing, especially for handwritten, unstructured, or low-quality documents.
  • 30
    Datafold
    Prevent data outages by identifying and fixing data quality issues before they get into production. Go from 0 to 100% test coverage of your data pipelines in a day. Know the impact of each code change with automatic regression testing across billions of rows. Automate change management, improve data literacy, achieve compliance, and reduce incident response time. Don't let data incidents take you by surprise; be the first to know with automated anomaly detection. Datafold's easily adjustable ML model adapts to seasonality and trend patterns in your data to construct dynamic thresholds. Save hours spent trying to understand data: use the Data Catalog to find relevant datasets and fields and explore distributions easily with an intuitive UI, with interactive full-text search, data profiling, and consolidation of metadata in one place.