Alternatives to KX Streaming Analytics

Compare KX Streaming Analytics alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to KX Streaming Analytics in 2024. Compare features, ratings, user reviews, pricing, and more from KX Streaming Analytics competitors and alternatives in order to make an informed decision for your business.

  • 1
    StarTree
    StarTree Cloud is a fully-managed real-time analytics platform designed for OLAP at massive speed and scale for user-facing applications. Powered by Apache Pinot, StarTree Cloud provides enterprise-grade reliability and advanced capabilities such as tiered storage, scalable upserts, plus additional indexes and connectors. It integrates seamlessly with transactional databases and event streaming platforms, ingesting data at millions of events per second and indexing it for lightning-fast query responses. StarTree Cloud is available on your favorite public cloud or for private SaaS deployment.
    • Gain critical real-time insights to run your business
    • Seamlessly integrate data streaming and batch data
    • High performance in throughput and low-latency at petabyte scale
    • Fully-managed cloud service
    • Tiered storage to optimize cloud performance & spend
    • Fully-secure & enterprise-ready
  • 2
    Striim
    Data integration for your hybrid cloud. Modern, reliable data integration across your private and public cloud, all in real time with change data capture and data streams. Built by the executive and technical team from GoldenGate Software, Striim brings decades of experience in mission-critical enterprise workloads. Striim scales out as a distributed platform in your environment or in the cloud, and scalability is fully configurable by your team. Striim is fully secure, with HIPAA and GDPR compliance, and built from the ground up for modern enterprise workloads in the cloud or on-premises. Drag and drop to create data flows between your sources and targets. Process, enrich, and analyze your streaming data with real-time SQL queries.
  • 3
    Rockset
    Real-Time Analytics on Raw Data. Live ingest from S3, Kafka, DynamoDB & more. Explore raw data as SQL tables. Build amazing data-driven applications & live dashboards in minutes. Rockset is a serverless search and analytics engine that powers real-time apps and live dashboards. Operate directly on raw data, including JSON, XML, CSV, Parquet, XLSX or PDF. Plug data from real-time streams, data lakes, databases, and data warehouses into Rockset. Ingest real-time data without building pipelines. Rockset continuously syncs new data as it lands in your data sources without the need for a fixed schema. Use familiar SQL, including joins, filters, and aggregations. It’s blazing fast, as Rockset automatically indexes all fields in your data. Serve fast queries that power the apps, microservices, live dashboards, and data science notebooks you build. Scale without worrying about servers, shards, or pagers.
  • 4
    Amazon Timestream
    Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day up to 1,000 times faster and at as little as 1/10th the cost of relational databases. Amazon Timestream saves you time and cost in managing the lifecycle of time series data by keeping recent data in memory and moving historical data to a cost optimized storage tier based upon user defined policies. Amazon Timestream’s purpose-built query engine lets you access and analyze recent and historical data together, without needing to specify explicitly in the query whether the data resides in the in-memory or cost-optimized tier. Amazon Timestream has built-in time series analytics functions, helping you identify trends and patterns in your data in near real-time.
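    As a hedged illustration of the query engine described above, the Python sketch below uses boto3 to run one SQL statement that Timestream serves across its in-memory and cost-optimized tiers without the query naming either; the "iot"."sensor_readings" table, region, and column names are assumptions.

```python
# Hedged sketch: querying Amazon Timestream with boto3.
# Database/table/column names and the region are placeholders.
import boto3

query_client = boto3.client("timestream-query", region_name="us-east-1")

# One query spans recent (in-memory) and historical (magnetic) data;
# Timestream picks the storage tier, the SQL does not mention it.
sql = """
SELECT device_id, AVG(measure_value::double) AS avg_temp
FROM "iot"."sensor_readings"
WHERE measure_name = 'temperature' AND time > ago(1h)
GROUP BY device_id
"""

response = query_client.query(QueryString=sql)
for row in response["Rows"]:
    print([datum.get("ScalarValue") for datum in row["Data"]])
```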
  • 5
    Circonus IRONdb
    Circonus IRONdb makes it easy to handle and store unlimited volumes of telemetry data, easily handling billions of metric streams. Circonus IRONdb enables users to identify areas of opportunity and challenge in real time, providing forensic, predictive, and automated analytics capabilities that no other product can match. Rely on machine learning to automatically set a “new normal” as your data and operations dynamically change. Circonus IRONdb integrates with Grafana, which has native support for our analytics query language. We are also compatible with other visualization apps, such as Graphite-web. Circonus IRONdb keeps your data safe by storing multiple copies of your data in a cluster of IRONdb nodes. System administrators typically manage clustering, often spending significant time maintaining it and keeping it working. Circonus IRONdb allows operators to set and forget their cluster, and stop wasting resources manually managing their time series data store.
  • 6
    Kinetica
    A scalable cloud database for real-time analysis on large and streaming datasets. Kinetica is designed to harness modern vectorized processors to be orders of magnitude faster and more efficient for real-time spatial and temporal workloads. Track and gain intelligence from billions of moving objects in real-time. Vectorization unlocks new levels of performance for analytics on spatial and time series data at scale. Ingest and query at the same time to act on real-time events. Kinetica's lockless architecture and distributed ingestion ensure data is available to query as soon as it lands. Vectorized processing enables you to do more with less. More power allows for simpler data structures, which lead to lower storage costs, more flexibility, and less time engineering your data. Vectorized processing opens the door to amazingly fast analytics and detailed visualization of moving objects at scale.
  • 7
    Warp 10
    Warp 10 is a modular open source platform that collects, stores, and analyzes data from sensors. Shaped for the IoT with a flexible data model, Warp 10 provides a unique and powerful framework to simplify your processes from data collection to analysis and visualization, with support for geolocated data in its core model (called Geo Time Series). Warp 10 is both a time series database and a powerful analytics environment, allowing you to perform statistics, feature extraction for training models, filtering and cleaning of data, detection of patterns and anomalies, synchronization, and even forecasting. The analysis environment can be implemented within a large ecosystem of software components such as Spark, Kafka Streams, Hadoop, Jupyter, Zeppelin, and many more. It can also access data stored in many existing solutions, such as relational or NoSQL databases, search engines, and S3-type object storage systems.
  • 8
    SAS Event Stream Processing
    Streaming data from operations, transactions, sensors and IoT devices is valuable – when it's well-understood. Event stream processing from SAS includes streaming data quality and analytics – and a vast array of SAS and open source machine learning and high-frequency analytics for connecting, deciphering, cleansing and understanding streaming data – in one solution. No matter how fast your data moves, how much data you have, or how many data sources you’re pulling from, it’s all under your control via a single, intuitive interface. You can define patterns and address scenarios from all aspects of your business, giving you the power to stay agile and tackle issues as they arise.
  • 9
    Evam Continuous Intelligence Platform
    Evam's Continuous Intelligence Platform combines multiple products for processing and visualizing real-time data. It runs real-time machine learning models on streaming data while enriching the real-time data with a smart in-memory caching mechanism. EVAM empowers telecommunications, financial services, retail, transportation, and travel companies to maximize their business value through a continuous intelligence platform with machine learning capabilities. EVAM processes real-time data and designs and orchestrates customer journeys visually with advanced analytical models, machine learning, and artificial intelligence algorithms. EVAM enables enterprises to engage their customers using their data across all channels, including legacy ones, in real time. Collect billions of events and process them in real time. Understand each customer's needs and attract, engage, and retain them more effectively.
  • 10
    BangDB
    BangDB natively integrates AI, streaming, graph, and analytics within the database itself to enable users to deal with complex data of different kinds, such as text, images, videos, and objects, for real-time data processing and analysis. Ingest or stream any data, process it, train models, make predictions, find patterns, take action, and automate all of this to enable use cases such as IoT monitoring, fraud or disruption prevention, log analysis, lead generation, 1-on-1 personalization, and many more. Today's use cases require different kinds of data to be ingested, processed, and queried at the same time for a given problem. BangDB supports most of the useful data formats to allow users to solve these problems in a simple manner. The rise of real-time data pushes for real-time streaming and predictive data analytics for advanced and optimized business operations.
  • 11
    Prometheus
    Power your metrics and alerting with a leading open-source monitoring solution. Prometheus fundamentally stores all data as time series: streams of timestamped values belonging to the same metric and the same set of labeled dimensions. Besides stored time series, Prometheus may generate temporary derived time series as the result of queries. Prometheus provides a functional query language called PromQL (Prometheus Query Language) that lets the user select and aggregate time series data in real time. The result of an expression can either be shown as a graph, viewed as tabular data in Prometheus's expression browser, or consumed by external systems via the HTTP API. Prometheus is configured via command-line flags and a configuration file: the command-line flags configure immutable system parameters (such as storage locations and the amount of data to keep on disk and in memory), while the configuration file defines scrape jobs and which rule files to load. Download: https://sourceforge.net/projects/prometheus.mirror/
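    As a small, hedged illustration of PromQL and the HTTP API described above, the Python sketch below issues an instant query against a local server; the server address and the http_requests_total metric are assumptions.

```python
# Hedged sketch: an instant PromQL query over Prometheus' HTTP API.
# The server URL and metric name are placeholders.
import requests

PROM_URL = "http://localhost:9090"

# Per-second request rate over the last 5 minutes, aggregated by "job".
resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": "sum by (job) (rate(http_requests_total[5m]))"},
)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    labels = series["metric"]           # e.g. {"job": "api-server"}
    timestamp, value = series["value"]  # [unix_timestamp, "12.34"]
    print(labels, value)
```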
  • 12
    Amazon Kinesis
    Easily collect, process, and analyze video and data streams in real time. Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the processing can begin. Amazon Kinesis enables you to ingest, buffer, and process streaming data in real-time, so you can derive insights in seconds or minutes instead of hours or days.
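    A minimal, hedged sketch of the ingestion side with boto3; the stream name "clickstream" and the event fields are placeholders.

```python
# Hedged sketch: putting a record onto a Kinesis data stream with boto3.
# Stream name and event payload are placeholders.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {"user_id": "u-123", "page": "/pricing", "ts": 1714000000}

# Records sharing a partition key land on the same shard, which preserves
# per-key ordering for downstream consumers.
kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],
)
```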
  • 13
    JaguarDB
    JaguarDB enables fast ingestion of time series data coupled with location-based data, and it can also index in both dimensions, space and time. Back-filling time series data (inserting large volumes of data with past timestamps) is also fast. Normally a time series is a series of data points indexed in time order. In JaguarDB, a time series has a broader meaning: it is both a sequence of data points and a series of tick tables holding aggregated data values at specified time spans. For example, a time series table in JaguarDB can have a base table storing data points in time order, plus tick tables such as 5-minute, 15-minute, hourly, daily, weekly, and monthly tables to store aggregated data within those time spans. The format for the RETENTION is the same as the TICK format, except that it can have any number of retention periods. The RETENTION specifies how long the data points in the base table should be kept.
  • 14
    Axibase Time Series Database
    Parallel query engine with time- and symbol-indexed data access. Extended SQL syntax with advanced filtering and aggregations. Consolidate quotes, trades, snapshots, and reference data in one place. Strategy backtesting on high-frequency data. Quantitative and market microstructure research. Granular transaction cost analysis and rollup reporting. Market surveillance and anomaly detection. Non-transparent ETF/ETN decomposition. FAST, SBE, and proprietary protocols. Plain text protocol. Consolidated and direct feeds. Built-in latency monitoring tools. End-of-day archives. ETL from institutional and retail financial data platforms. Parallel SQL engine with syntax extensions. Advanced filtering by trading session, auction stage, index composition. Optimized aggregates for OHLCV and VWAP calculations. Interactive SQL console with auto-completion. API endpoint for programmatic integration. Scheduled SQL reporting with email, file, and web delivery. JDBC and ODBC drivers.
  • 15
    Apache Druid
    Apache Druid is an open source distributed data store. Druid's core design combines ideas from data warehouses, time series databases, and search systems to create a high-performance real-time analytics database for a broad range of use cases. Druid merges key characteristics of each of these three systems into its ingestion layer, storage format, querying layer, and core architecture. Druid stores and compresses each column individually, and only needs to read the ones needed for a particular query, which supports fast scans, rankings, and groupBys. Druid creates inverted indexes for string values for fast search and filter. Out-of-the-box connectors for Apache Kafka, HDFS, AWS S3, stream processors, and more. Druid intelligently partitions data based on time, so time-based queries are significantly faster than in traditional databases. Scale up or down by just adding or removing servers, and Druid automatically rebalances. Fault-tolerant architecture routes around server failures.
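    As a hedged sketch of how the time partitioning above is typically exploited, the Python snippet below posts a time-bounded SQL query to Druid's SQL endpoint; the router URL and the "wikipedia" datasource are assumptions.

```python
# Hedged sketch: a time-bounded SQL query against Apache Druid.
# Router URL and datasource name are placeholders.
import requests

DRUID_SQL = "http://localhost:8888/druid/v2/sql"

# Time-partitioned segments make time-bounded GROUP BY queries cheap.
query = """
SELECT TIME_FLOOR(__time, 'PT1H') AS hr, COUNT(*) AS edits
FROM wikipedia
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
GROUP BY 1
ORDER BY 1
"""

resp = requests.post(DRUID_SQL, json={"query": query})
resp.raise_for_status()
for row in resp.json():
    print(row["hr"], row["edits"])
```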
  • 16
    QuestDB
    QuestDB is a relational column-oriented database designed for time series and event data. It uses SQL with extensions for time series to assist with real-time analytics. A designated timestamp is a core feature that enables time-oriented language capabilities and partitioning. The symbol type makes storing and retrieving repetitive strings efficient. The storage model determines how QuestDB stores records and partitions within tables. Indexes can be used for faster read access on specific columns, and partitions can bring significant performance benefits on calculations and queries. SQL extensions allow performant time series analysis with a concise syntax.
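    A hedged sketch of the SQL extensions mentioned above, using QuestDB's HTTP /exec endpoint (default port 9000); the trades table, its columns, and its designated timestamp are assumptions.

```python
# Hedged sketch: SAMPLE BY downsampling over QuestDB's REST API.
# Table and column names are placeholders; the table is assumed to have
# a designated timestamp so SAMPLE BY applies.
import requests

sql = """
SELECT timestamp, symbol, avg(price) AS avg_price
FROM trades
WHERE timestamp > dateadd('h', -1, now())
SAMPLE BY 1m
"""

resp = requests.get("http://localhost:9000/exec", params={"query": sql})
resp.raise_for_status()
result = resp.json()

columns = [c["name"] for c in result["columns"]]
for row in result["dataset"]:
    print(dict(zip(columns, row)))
```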
  • 17
    Alibaba Cloud TSDB
    Time Series Database (TSDB) supports high-speed data reading and writing, and offers high compression ratios for cost-efficient data storage. The service also supports precision reduction, interpolation, multi-metric aggregate computing, and visualization of query results. TSDB reduces storage costs and improves the efficiency of data writing, query, and analysis, enabling you to handle large amounts of data points and collect data more frequently. The service has been widely applied to systems in different industries, such as IoT monitoring systems, enterprise energy management systems (EMSs), production security monitoring systems, and power supply monitoring systems. With optimized database architectures and algorithms, TSDB can read or write millions of data points within seconds, and it applies an efficient compression algorithm to reduce the size of each data point to 2 bytes, saving more than 90% in storage costs.
  • 18
    Informatica Data Engineering Streaming
    AI-powered Informatica Data Engineering Streaming enables data engineers to ingest, process, and analyze real-time streaming data for actionable insights. An advanced serverless deployment option with an integrated metering dashboard cuts admin overhead. Rapidly build intelligent data pipelines with CLAIRE®-powered automation, including automatic change data capture (CDC). Ingest data from thousands of databases, millions of files, and streaming events. Efficiently ingest databases, files, and streaming data for real-time data replication and streaming analytics. Find and inventory all data assets throughout your organization. Intelligently discover and prepare trusted data for advanced analytics and AI/ML projects.
  • 19
    Azure Time Series Insights
    Azure Time Series Insights Gen2 is an open and scalable end-to-end IoT analytics service featuring best-in-class user experiences and rich APIs to integrate its powerful capabilities into your existing workflow or application. You can use it to collect, process, store, query, and visualize data at Internet of Things (IoT) scale: data that's highly contextualized and optimized for time series. Azure Time Series Insights Gen2 is designed for ad hoc data exploration and operational analysis, allowing you to uncover hidden trends, spot anomalies, and conduct root-cause analysis. It's an open and flexible offering that meets the broad needs of industrial IoT deployments.
    Starting Price: $36.208 per unit per month
  • 20
    VictoriaMetrics
    VictoriaMetrics is a fast and scalable open source time series database and monitoring solution. It's designed to be user-friendly, allowing users to build a monitoring platform without scalability issues and with minimal operational burden. VictoriaMetrics is ideal for solving use cases with large amounts of time series data for IT infrastructure, APM, Kubernetes, IoT sensors, automotive vehicles, industrial telemetry, financial data, and other enterprise-level workloads. VictoriaMetrics is powered by several components, making it the perfect solution for collecting metrics (both push and pull models), running queries, and generating alerts. With VictoriaMetrics, you can store millions of data points per second on a single instance or scale to a high-load monitoring system across multiple data centers. Plus, it's designed to store 10x more data using the same compute and storage resources as existing solutions, making it a highly efficient choice.
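    A hedged sketch of the push model mentioned above, writing samples to a single-node instance through the InfluxDB line protocol endpoint it exposes (default port 8428), then reading them back via the Prometheus-compatible query API; metric and tag names are placeholders.

```python
# Hedged sketch: push samples to VictoriaMetrics via its InfluxDB
# line protocol endpoint, then query them back through the
# Prometheus-compatible API. Names and ports are assumptions.
import time
import requests

VM_URL = "http://localhost:8428"

# One line per sample: measurement,tags field=value timestamp(ns)
ts_ns = time.time_ns()
lines = "\n".join([
    f"cpu_usage,host=web-01,region=eu value=0.42 {ts_ns}",
    f"cpu_usage,host=web-02,region=eu value=0.65 {ts_ns}",
])
requests.post(f"{VM_URL}/write", data=lines).raise_for_status()

# By default the series is typically exposed as measurement_field,
# i.e. cpu_usage_value here.
resp = requests.get(f"{VM_URL}/api/v1/query", params={"query": "cpu_usage_value"})
print(resp.json()["data"]["result"])
```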
  • 21
    InfluxDB (InfluxData)
    InfluxDB is a purpose-built data platform designed to handle all time series data, from users, sensors, applications and infrastructure — seamlessly collecting, storing, visualizing, and turning insight into action. With a library of more than 250 open source Telegraf plugins, importing and monitoring data from any system is easy. InfluxDB empowers developers to build transformative IoT, monitoring and analytics services and applications. InfluxDB’s flexible architecture fits any implementation — whether in the cloud, at the edge or on-premises — and its versatility, accessibility and supporting tools (client libraries, APIs, etc.) make it easy for developers at any level to quickly build applications and services with time series data. Optimized for developer efficiency and productivity, the InfluxDB platform gives builders time to focus on the features and functionalities that give their internal projects value and their applications a competitive edge.
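    A hedged sketch of writing and querying a point with the official influxdb-client package for Python (InfluxDB 2.x API); the URL, token, org, and bucket are placeholders.

```python
# Hedged sketch: write one point and query it back with Flux.
# URL, token, org, and bucket are placeholders.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

# Write a "temperature" measurement tagged by sensor id.
write_api = client.write_api(write_options=SYNCHRONOUS)
point = Point("temperature").tag("sensor", "s-42").field("celsius", 21.7)
write_api.write(bucket="iot", record=point)

# Query the last hour back out with Flux.
flux = '''
from(bucket: "iot")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "temperature")
'''
for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_value())
```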
  • 22
    OneTick (OneMarketData)
    Its performance, superior features, and unmatched functionality have led OneTick to be embraced by leading banks, brokerages, data vendors, exchanges, hedge funds, market makers, and mutual funds. OneTick is the premier enterprise-wide solution for tick data capture, streaming analytics, data management, and research. OneTick's proprietary time series database is a unified, multi-asset class platform that includes a fully integrated streaming analytics engine and built-in business logic to eliminate the need for multiple disparate systems. The system provides the lowest total cost of ownership available.
  • 23
    Materialize
    Materialize is a reactive database that delivers incremental view updates. We help developers easily build with streaming data using standard SQL. Materialize can connect to many different external sources of data without pre-processing. Connect directly to streaming sources like Kafka, Postgres databases, CDC, or historical sources of data like files or S3. Materialize allows you to query, join, and transform data sources in standard SQL and presents the results as incrementally updated materialized views. Queries are maintained and continually updated as new data streams in. With incrementally updated views, developers can easily build data visualizations or real-time applications. Building with streaming data can be as simple as writing a few lines of SQL, as sketched below.
    Starting Price: $0.98 per hour
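    A hedged sketch of those few lines of SQL, sent over the Postgres wire protocol with psycopg2; the connection details and the page_views source are assumptions.

```python
# Hedged sketch: define and read an incrementally maintained view in
# Materialize over the Postgres wire protocol. Connection details and
# the "page_views" source are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=6875, user="materialize", dbname="materialize"
)
conn.autocommit = True
cur = conn.cursor()

# Kept up to date incrementally as new events arrive from the source.
cur.execute("""
    CREATE MATERIALIZED VIEW views_per_page AS
    SELECT page, count(*) AS total
    FROM page_views
    GROUP BY page
""")

# Reading the view returns the latest incrementally computed results.
cur.execute("SELECT page, total FROM views_per_page ORDER BY total DESC LIMIT 5")
for page, total in cur.fetchall():
    print(page, total)
```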
  • 24
    DeltaStream
    DeltaStream is a unified serverless stream processing platform that integrates with streaming storage services. Think of it as the compute layer on top of your streaming storage. It provides the functionality of streaming analytics (stream processing) and streaming databases, along with additional features, to deliver a complete platform to manage, process, secure, and share streaming data. DeltaStream provides a SQL-based interface where you can easily create stream processing applications such as streaming pipelines, materialized views, microservices, and many more. It has a pluggable processing engine and currently uses Apache Flink as its primary stream processing engine. DeltaStream is more than just a query processing layer on top of Kafka or Kinesis. It brings relational database concepts to the data streaming world, including namespacing and role-based access control, enabling you to securely access, process, and share your streaming data regardless of where it is stored.
  • 25
    Apama
    Apama Streaming Analytics allows organizations to analyze and act on IoT and fast-moving data in real time, responding to events intelligently the moment they happen. Apama Community Edition is a freemium version of Apama by Software AG that can be used to learn about, develop, and put streaming analytics applications into production. The Software AG Data & Analytics Platform is an end-to-end, modular, and integrated set of world-class capabilities optimized for high-speed data management and analytics on real-time data, offering out-of-the-box integration and connectivity to all key enterprise data sources. Choose the capabilities you need: streaming, predictive, and visual analytics, along with messaging for easy integration with other enterprise apps and an in-memory data store for extremely fast access. Integrate historical and other data for comparison, ideal when building models or enriching customer and other vital data.
  • 26
    Oracle Stream Analytics
    Oracle Stream Analytics allows users to process and analyze large scale real-time information by using sophisticated correlation patterns, enrichment, and machine learning. It offers real-time actionable business insight on streaming data and automates action to drive today’s agile businesses. Visual GEOProcessing with GEOFence relationship spatial analytics. New Expressive Patterns Library, including Spatial, Statistical, General industry and Anomaly detection, streaming machine learning. Abstracted visual façade to interrogate live real time streaming data and perform intuitive in-memory real time business analytics.
  • 27
    Azure Data Explorer
    Azure Data Explorer is a fast, fully managed data analytics service for real-time analysis on large volumes of data streaming from applications, websites, IoT devices, and more. Ask questions and iteratively explore data on the fly to improve products, enhance customer experiences, monitor devices, and boost operations. Quickly identify patterns, anomalies, and trends in your data. Explore new questions and get answers in minutes. Run as many queries as you need, thanks to the optimized cost structure. Explore new possibilities with your data cost-effectively. Focus on insights, not infrastructure, with the easy-to-use, fully managed data analytics service. Respond quickly to fast-flowing and rapidly changing data. Azure Data Explorer simplifies analytics from all forms of streaming data.
    Starting Price: $0.11 per hour
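    A hedged sketch of running a Kusto (KQL) query with the azure-kusto-data package; the cluster URL, database, and Telemetry table are placeholders.

```python
# Hedged sketch: a KQL query against Azure Data Explorer.
# Cluster URL, database, and table/column names are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://mycluster.westeurope.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

# Events per hour over the last day.
query = """
Telemetry
| where Timestamp > ago(1d)
| summarize events = count() by bin(Timestamp, 1h)
| order by Timestamp asc
"""

response = client.execute("mydatabase", query)
for row in response.primary_results[0]:
    print(row["Timestamp"], row["events"])
```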
  • 28
    Embiot (Telchemy)
    Embiot® is a compact, high performance IoT analytics software agent for IoT gateway and smart sensor applications. This edge computing application is small enough to integrate directly into devices, smart sensors and gateways, but powerful enough to calculate complex analytics from large amounts of raw data at high speed. Internally, Embiot uses a stream processing model to enable it to handle sensor data that arrives at different rates and out of order. It has a simple intuitive configuration language and a rich set of math, stats and AI functions making it fast and easy to solve your analytics problems. Embiot supports a range of input methods including MODBUS, MQTT, REST/XML, REST/JSON, Name/Value and CSV. Embiot is able to send output reports to multiple destinations concurrently in REST, MQTT and custom text formats. For security, Embiot supports TLS selectively on any input or output stream, HTTP and MQTT authentication.
  • 29
    Digital Twin Streaming Service
    ScaleOut Digital Twin Streaming Service™: easily build and deploy real-time digital twins for streaming analytics, connect to many data sources with Azure and AWS IoT hubs, Kafka, and more, and maximize situational awareness with live, aggregate analytics. Introducing a breakthrough cloud service that simultaneously tracks telemetry from millions of data sources with "real-time" digital twins, enabling immediate, deep introspection with state-tracking and highly targeted, real-time feedback for thousands of devices. A powerful UI simplifies deployment and displays aggregate analytics in real time to maximize situational awareness. Ideal for a wide range of applications, including the Internet of Things (IoT), real-time intelligent monitoring, logistics, and financial services. Simplified pricing makes getting started fast and easy. Combined with the ScaleOut Digital Twin Builder software toolkit, the ScaleOut Digital Twin Streaming Service enables the next generation in stream processing.
  • 30
    Visual KPI (Transpara)
    Real-time operations monitoring and data visualization, including KPIs, dashboards, trends, analytics, hierarchy and alerts. Aggregates all of your data sources (industrial, IoT, business, external, etc.) on the fly without moving the data, and displays it in real-time with obvious context on any device.
  • 31
    SQLstream (Guavus, a Thales company)
    SQLstream ranks #1 for IoT stream processing & analytics (ABI Research). Used by Verizon, Walmart, Cisco, & Amazon, our technology powers applications across data centers, the cloud, & the edge. Thanks to sub-ms latency, SQLstream enables live dashboards, time-critical alerts, & real-time action. Smart cities can optimize traffic light timing or reroute ambulances & fire trucks. Security systems can shut down hackers & fraudsters right away. AI / ML models, trained by streaming sensor data, can predict equipment failures. With lightning performance, up to 13M rows / sec / CPU core, companies have drastically reduced their footprint & cost. Our efficient, in-memory processing permits operations at the edge that are otherwise impossible. Acquire, prepare, analyze, & act on data in any format from any source. Create pipelines in minutes not months with StreamLab, our interactive, low-code GUI dev environment. Export SQL scripts & deploy with the flexibility of Kubernetes.
  • 32
    Apache Spark (Apache Software Foundation)
    Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
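    A hedged sketch of a Spark Structured Streaming job in PySpark; it uses the built-in "rate" test source so it runs without external infrastructure, whereas a production job would typically read from Kafka or files.

```python
# Hedged sketch: a windowed streaming aggregation with PySpark.
# The "rate" source is a built-in test source emitting rows per second.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-demo").getOrCreate()

stream = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Count events per 10-second event-time window, tolerating 30s of lateness.
counts = (
    stream
    .withWatermark("timestamp", "30 seconds")
    .groupBy(F.window("timestamp", "10 seconds"))
    .count()
)

query = (
    counts.writeStream
    .outputMode("update")
    .format("console")
    .start()
)
query.awaitTermination()
```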
  • 33
    kdb+ (Kx Systems)
    A high-performance cross-platform historical time-series columnar database featuring:
    • An in-memory compute engine
    • A real-time streaming processor
    • An expressive query and programming language called q
  • 34
    Blueflood
    Blueflood is a high-throughput, low-latency, multi-tenant distributed metric processing system behind Rackspace Metrics, currently used in production by the Rackspace Monitoring team and Rackspace public cloud team to store metrics generated by their systems. In addition to Rackspace Metrics, other large-scale deployments of Blueflood can be found on the community wiki. Data from Blueflood can be used to construct dashboards, generate reports and graphs, or support any other use involving time series data. It focuses on near-real-time data, with data queryable mere milliseconds after ingestion. You send metrics to the ingestion service and query your metrics from the query service. In the background, rollups are batch-processed offline so that queries over large time periods are returned quickly.
  • 35
    CrateDB
    The enterprise database for time series, documents, and vectors. Store any type of data and combine the simplicity of SQL with the scalability of NoSQL. CrateDB is an open source distributed database running queries in milliseconds, whatever the complexity, volume and velocity of data.
  • 36
    KairosDB
    Data can be pushed into KairosDB via multiple protocols, such as Telnet, REST, and Graphite; other mechanisms, such as plugins, can also be used. KairosDB stores time series in Cassandra, the popular and performant NoSQL datastore, using a schema that consists of three column families. The API provides operations to list existing metric names, list tag names and values, store metric data points, and query for metric data points. With a default install, KairosDB serves up a query page where you can query data within the data store; it is designed primarily for development purposes. Aggregators perform an operation on data points and down-sample them. Standard functions like min, max, sum, count, mean, and more are available. Import and export are available on the KairosDB server from the command line. Internal metrics about the data store can be used to monitor the server's performance.
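    A hedged sketch of the REST API mentioned above, pushing one data point and querying it back with aggregation; the host, port, metric, and tag names are placeholders.

```python
# Hedged sketch: push a data point to KairosDB's REST API and query it
# back averaged into 5-minute buckets. Names and port are placeholders.
import time
import requests

KAIROS = "http://localhost:8080"
now_ms = int(time.time() * 1000)

# Timestamps are in milliseconds; at least one tag is required.
datapoints = [{
    "name": "office.temperature",
    "datapoints": [[now_ms, 21.5]],
    "tags": {"room": "east-wing"},
}]
requests.post(f"{KAIROS}/api/v1/datapoints", json=datapoints).raise_for_status()

# Query the last hour with an "avg" aggregator (down-sampling).
query = {
    "start_relative": {"value": 1, "unit": "hours"},
    "metrics": [{
        "name": "office.temperature",
        "aggregators": [{"name": "avg", "sampling": {"value": 5, "unit": "minutes"}}],
    }],
}
resp = requests.post(f"{KAIROS}/api/v1/datapoints/query", json=query)
print(resp.json()["queries"][0]["results"][0]["values"])
```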
  • 37
    Riak TS
    Riak® TS is the only enterprise-grade NoSQL time series database optimized specifically for IoT and time series data. It ingests, transforms, stores, and analyzes massive amounts of time series data. Riak TS is engineered to be faster than Cassandra. The Riak TS masterless architecture is designed to read and write data even in the event of hardware failures or network partitions. Data is evenly distributed across the Riak ring and, by default, there are three replicas of your data, ensuring at least one copy is available for read operations. Riak TS is a distributed system with no central coordinator, which makes it easy to set up and operate. The masterless architecture makes it easy to add and remove nodes from a cluster, and you can achieve predictable and near-linear scale by adding nodes using commodity hardware.
  • 38
    SiriDB (Cesbit)
    SiriDB is designed with performance in mind; inserts and queries are answered in the blink of an eye. The custom query language gives you the ability to speed up your development. SiriDB is scalable on the fly and has no downtime while updating or expanding your database. Its scalable design enables you to enlarge the database time after time without losing speed, and it takes full advantage of all available resources by distributing your time series data over all pools. SiriDB is developed to deliver unprecedented performance without downtime. A SiriDB cluster distributes time series across multiple pools, and each pool supports active replicas for load balancing and redundancy. When one of the replicas is not available, the database is still accessible.
  • 39
    QuasarDB
    Quasar's brain is QuasarDB, a high-performance, distributed, column-oriented time series database management system designed from the ground up to deliver real-time performance on petascale use cases. Up to 20X less disk usage: QuasarDB's ingestion and compression capabilities are unmatched. Up to 10,000X faster feature extraction: QuasarDB can extract features in real time from raw data, thanks to the combination of a built-in map/reduce query engine, an aggregation engine that leverages SIMD on modern CPUs, and stochastic indexes that use virtually no disk space. The most cost-effective time series solution, thanks to its ultra-efficient resource usage, its ability to leverage object storage (S3), unique compression technology, and a fair pricing model. Quasar runs everywhere, from 32-bit ARM devices to high-end Intel servers, and from edge computing to the cloud or on-premises.
  • 40
    TIBCO Streaming
    Analyze, continuously query, and act on IoT and other streaming data at lightning fast speeds. Take real-time operations and analytics to the next level with intelligent applications that deploy quickly for taking action based on new decisions and models, all without extra overhead. TIBCO® Streaming software is enterprise-grade, cloud-ready streaming analytics for quickly building real-time applications at a fraction of the cost and risk of alternatives.
  • 41
    IBM Streams
    IBM Streams evaluates a broad range of streaming data (unstructured text, video, audio, geospatial, and sensor data), helping organizations spot opportunities and risks and make decisions in real time. Make sense of your data, turning fast-moving volumes and varieties into insight with IBM® Streams. Combine Streams with other IBM Cloud Pak® for Data capabilities, built on an open, extensible architecture. Help enable data scientists to collaboratively build models to apply to stream flows, and analyze massive amounts of data in real time. Acting upon your data and deriving true value is easier than ever.
  • 42
    Gathr
    The only all-in-one data pipeline platform. Built from the ground up for a cloud-first world, Gathr is the only platform to handle all your data integration and engineering needs: ingestion, ETL, ELT, CDC, streaming analytics, data preparation, machine learning, advanced analytics, and more. With Gathr, anyone can build and deploy pipelines in minutes, irrespective of skill level. Create ingestion pipelines in minutes, not weeks. Ingest data from any source and deliver it to any destination. Build applications quickly with a wizard-based approach. Replicate data in real time using a templatized CDC app. Native integration for all sources and targets. Best-in-class capabilities with everything you need to succeed today and tomorrow. Choose between free and pay-per-use plans, or customize as per your requirements.
  • 43
    StreamSets
    StreamSets DataOps Platform: the data integration platform to build, run, monitor, and manage smart data pipelines that deliver continuous data for DataOps and power modern analytics and hybrid integration. Only StreamSets provides a single design experience for all design patterns for 10x greater developer productivity; smart data pipelines that are resilient to change for 80% fewer breakages; and a single pane of glass for managing and monitoring all pipelines across hybrid and cloud architectures to eliminate blind spots and control gaps. With StreamSets, you can deliver the continuous data that drives the connected enterprise.
    Starting Price: $1000 per month
  • 44
    KX Insights
    KX Insights is a cloud-native platform for critical real-time performance and continuous actionable intelligence. Using complex event processing, high-speed analytics and machine learning interfaces, it enables fast decision-making and automated responses to events in fractions of a second. It’s not just storage and compute elasticity that have moved to the cloud. It’s everything: data, tools, development, security, connectivity, operations, maintenance. KX can help you leverage that power to make smarter, more insightful decisions by integrating real-time analytics into your business operations. KX Insights leverages industry standards to ensure openness and interoperability with other technologies in order to deliver insights faster and more cost-effectively. It operates a microservices-based architecture for capturing, storing and processing high-volume, high-velocity data using cloud standards, services, and protocols.
  • 45
    BlackLynx Accelerated Analytics
    BlackLynx's accelerators deliver analytics power where it's needed, without requiring specialized skills. No matter what your analytics ecosystem includes, you can power data-driven business with powerful, easy-to-use heterogeneous computing. BlackStack software and electronics integration dramatically accelerate processing speeds for sensors deployed within ground, naval, space-based, or airborne assets. Our software enables customers to accelerate relevant AI/ML algorithms and other computing functions with a focus on real-time sensor processing, including signal detection, video sensors, missiles, radar, thermal, and other object detection capabilities. BlackStack software dramatically accelerates processing speeds for real-time data analytics. We empower our customers to probe enterprise-scale volumes of unstructured and fast-changing data to collect, filter, and organize vast amounts of intelligence information or cybersecurity forensic data.
  • 46
    Esper Enterprise Edition
    Esper Enterprise Edition is a distributable platform for linear and elastic horizontal scalability and fault-tolerant event processing. EPL editor and debugger; hot deployment; detailed metric and memory use reporting with break-down and summary per EPL. Data Push for multi-tier CEP-to-browser delivery; management of logical and physical subscribers and subscriptions. Web-based user interface for managing all aspects of multiple distributed engine instances with JavaScript and HTML 5. Composable, configurable, and interactive displays of distributed event streams or series: charts, gauges, timelines, grids. JDBC-compliant client and server endpoints for interoperability. Esper Enterprise Edition is a closed-source commercial product by EsperTech; the source code is made available to support customers only.
  • 47
    Oracle Cloud Infrastructure Streaming
    Streaming service is a real-time, serverless, Apache Kafka-compatible event streaming platform for developers and data scientists. Streaming is tightly integrated with Oracle Cloud Infrastructure (OCI), Database, GoldenGate, and Integration Cloud. The service also provides out-of-the-box integrations for hundreds of third-party products across categories such as DevOps, databases, big data, and SaaS applications. Data engineers can easily set up and operate big data pipelines. Oracle handles all infrastructure and platform management for event streaming, including provisioning, scaling, and security patching. With the help of consumer groups, Streaming can provide state management for thousands of consumers. This helps developers easily build applications at scale.
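    A hedged sketch of producing through the Kafka-compatible endpoint with kafka-python; the bootstrap server, SASL username format, topic, and auth token are assumptions based on a typical OCI setup, not taken from this listing.

```python
# Hedged sketch: publish to an OCI Streaming stream via its
# Kafka-compatible endpoint. Endpoint, username format, token, and
# topic are placeholders.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="cell-1.streaming.us-ashburn-1.oci.oraclecloud.com:9092",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="tenancy/user@example.com/ocid1.streampool.oc1..xxxx",
    sasl_plain_password="<auth-token>",
)

# Messages are produced exactly as they would be to any Kafka topic.
producer.send("sensor-events", key=b"device-42", value=b'{"temp": 21.7}')
producer.flush()
```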
  • 48
    Lenses (Lenses.io)
    Enable everyone to discover and observe streaming data. Sharing, documenting and cataloging your data can increase productivity by up to 95%. Then from data, build apps for production use cases. Apply a data-centric security model to cover all the gaps of open source technology, and address data privacy. Provide secure and low-code data pipeline capabilities. Eliminate all darkness and offer unparalleled observability in data and apps. Unify your data mesh and data technologies and be confident with open source in production. Lenses is the highest rated product for real-time stream analytics according to independent third party reviews. With feedback from our community and thousands of engineering hours invested, we've built features that ensure you can focus on what drives value from your real time data. Deploy and run SQL-based real time applications over any Kafka Connect or Kubernetes infrastructure including AWS EKS.
    Starting Price: $49 per month
  • 49
    Machbase
    Machbase, a time series database that stores and analyzes large volumes of sensor data from various facilities in real time, is the only DBMS solution that can process and analyze big data at high speed. Experience the amazing speed of Machbase! It is the most innovative product that enables real-time processing, storage, and analysis of sensor data. High-speed storage and querying of sensor data by embedding the DBMS in edge devices. Best-in-class data storage and extraction performance from a DBMS running on a single server. Configure a multi-node cluster for availability and scalability. A total edge computing management solution covering devices, connectivity, and data.
  • 50
    OpenTSDB
    OpenTSDB consists of a Time Series Daemon (TSD) as well as a set of command-line utilities. Interaction with OpenTSDB is primarily achieved by running one or more of the independent TSDs. There is no master and no shared state, so you can run as many TSDs as required to handle any load you throw at them. Each TSD uses the open source database HBase or the hosted Google Bigtable service to store and retrieve time series data. The data schema is highly optimized for fast aggregations of similar time series to minimize storage space. Users of the TSD never need to access the underlying store directly. You can communicate with the TSD via a simple telnet-style protocol, an HTTP API, or a simple built-in GUI. The first step in using OpenTSDB is to send time series data to the TSDs; a number of tools exist to pull data from various sources into OpenTSDB.
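    A hedged sketch of the telnet-style protocol mentioned above, sending a single data point to a TSD on the default port 4242; the metric and tag names are placeholders.

```python
# Hedged sketch: send one data point to OpenTSDB using the telnet-style
# "put" command. Metric, tags, and host/port are placeholders.
import socket
import time

metric = "sys.cpu.user"
timestamp = int(time.time())
value = 42.5
tags = "host=web-01 cpu=0"

# Format: put <metric> <unix_timestamp> <value> <tag1=val1> [<tag2=val2> ...]
line = f"put {metric} {timestamp} {value} {tags}\n"

with socket.create_connection(("localhost", 4242)) as sock:
    sock.sendall(line.encode("ascii"))
```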