Best Data Management Software for Apache Kafka - Page 5

Compare the Top Data Management Software that integrates with Apache Kafka as of July 2025 - Page 5

This is a list of Data Management software that integrates with Apache Kafka. Use the filters on the left to narrow the results further. View the products that work with Apache Kafka in the table below.

  • 1
lakeFS

Treeverse

lakeFS enables you to manage your data lake the way you manage your code. Run parallel pipelines for experimentation and CI/CD for your data, simplifying the lives of the engineers, data scientists, and analysts who are transforming the world with data. lakeFS is an open source platform that delivers resilience and manageability to object-storage-based data lakes. With lakeFS you can build repeatable, atomic, and versioned data lake operations, from complex ETL jobs to data science and analytics. lakeFS supports AWS S3, Azure Blob Storage, and Google Cloud Storage (GCS) as its underlying storage service. It is API-compatible with S3 and works seamlessly with modern data frameworks such as Spark, Hive, AWS Athena, and Presto. lakeFS provides a Git-like branching and committing model that scales to exabytes of data by utilizing S3, GCS, or Azure Blob for storage.
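The Git-like branching model can be pictured with a toy in-memory sketch. This is illustrative only, not the lakeFS API; real clients talk to lakeFS through its S3-compatible endpoint or the lakectl CLI, and the class and method names below are invented for the example:

```python
class ToyVersionedStore:
    """Toy model of Git-like branching over object keys (not the lakeFS API)."""

    def __init__(self):
        self.branches = {"main": {}}  # branch name -> {object key -> content}

    def create_branch(self, name, source="main"):
        # A new branch starts as a snapshot of its source branch.
        self.branches[name] = dict(self.branches[source])

    def put(self, branch, key, content):
        self.branches[branch][key] = content

    def commit_merge(self, source, target):
        # Atomically publish the source branch's objects onto the target.
        self.branches[target].update(self.branches[source])


store = ToyVersionedStore()
store.put("main", "events/2025-07-01.parquet", "v1")
store.create_branch("experiment")
store.put("experiment", "events/2025-07-01.parquet", "v2-reprocessed")
# main stays isolated from the experiment branch until the merge:
assert store.branches["main"]["events/2025-07-01.parquet"] == "v1"
store.commit_merge("experiment", "main")
assert store.branches["main"]["events/2025-07-01.parquet"] == "v2-reprocessed"
```

The point of the sketch is the isolation guarantee: experimental pipelines write to a branch, and "main" only changes through an atomic merge.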
  • 2
    Eclipse Streamsheets
Build professional applications to automate workflows, continuously monitor operations, and control processes in real time. Your solutions run 24/7 on servers in the cloud and on the edge. Thanks to the spreadsheet user interface, you do not have to be a software developer: instead of writing program code, you drag and drop data, fill cells with formulas, and design charts in a way you already know. All the protocols you need to connect to sensors and machines, such as MQTT, REST, and OPC UA, are on board. Streamsheets natively processes data streams such as MQTT and Kafka: pick up a topic stream, transform it, and publish it back out into the streaming world. REST opens up the web; Streamsheets lets you connect to any web service or let them connect to you. Streamsheets run in the cloud, on your own servers, and on edge devices such as a Raspberry Pi.
  • 3
Apache Kylin

Apache Software Foundation

Apache Kylin™ is an open source, distributed analytical data warehouse for big data, designed to provide OLAP (Online Analytical Processing) capability in the big data era. By renovating multi-dimensional cube and precalculation technology on Hadoop and Spark, Kylin achieves near-constant query speed regardless of ever-growing data volume. Reducing query latency from minutes to sub-second, Kylin brings online analytics back to big data. Kylin can analyze more than 10 billion rows in under a second, so there is no more waiting on reports for critical decisions. Kylin connects data on Hadoop to BI tools like Tableau, PowerBI/Excel, MSTR, QlikSense, Hue, and SuperSet, making BI on Hadoop faster than ever. As an analytical data warehouse, Kylin offers ANSI SQL on Hadoop/Spark and supports most ANSI SQL query functions. Thanks to the low resource consumption of each query, Kylin can support thousands of interactive queries at the same time.
  • 4
witboost

Agile Lab

witboost is a modular, scalable, fast, and efficient data management system that helps your company truly become data-driven while reducing time-to-market, IT expenditures, and overheads. witboost comprises a series of modules: building blocks that can work as standalone solutions to address a single need or problem, or be combined to create the perfect data management ecosystem for your company. Each module improves a specific data engineering function, guaranteeing a blazingly fast and smooth implementation and thus dramatically reducing time-to-market, time-to-value, and consequently the TCO of your data engineering infrastructure. Smart cities need digital twins to predict needs and avoid unforeseen problems, gathering data from thousands of sources and managing ever more complex telematics.
  • 5
Rawcubes

The only software that combines data intelligence through a knowledge graph with multi-cloud data strategies to enable better business insights. Is a lack of insightful data preventing you from running successful campaigns? Uncover the intelligence and learn what your customers want. Get a 360-degree view of business operations through a single, end-to-end analysis using our proprietary product, DataBlaze. Empower your data experts with data strategy models; no need to write code, and no human errors. Leverage pre-built ML models to aid insurers in accurately evaluating and managing property risk. Rawcubes helps businesses utilize their data by leveraging our data platforms, pre-built domain knowledge graph, and analytical models to enable better business insights. Rawcubes provides world-class data management software, business analytical models, and access to a team of data scientists and data engineers if you need expert advice or just want to bounce around an idea or two.
  • 6
Apache Pinot

Apache Software Foundation

Pinot is designed to answer OLAP queries with low latency on immutable data. It offers pluggable indexing technologies: sorted index, bitmap index, and inverted index. Joins are currently not supported, but this limitation can be overcome by using Trino or PrestoDB for querying. Pinot provides a SQL-like language that supports selection, aggregation, filtering, group-by, order-by, and distinct queries on data. A Pinot deployment consists of both offline and real-time tables; use a real-time table only to cover segments for which offline data may not be available yet. Detect the right anomalies by customizing the anomaly detection flow and notification flow.
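The offline/real-time split can be sketched as a time-boundary routing rule. This is illustrative pseudologic for the hybrid-table idea, not Pinot's actual broker code:

```python
def route_by_time_boundary(timestamps, boundary):
    """Split record timestamps between offline and real-time tables.

    In a Pinot-style hybrid table, history is served from offline
    segments and only data newer than the offline boundary is served
    from real-time segments. Illustrative only, not Pinot broker code.
    """
    offline = [t for t in timestamps if t <= boundary]
    realtime = [t for t in timestamps if t > boundary]
    return offline, realtime


offline, realtime = route_by_time_boundary([100, 200, 300, 400], boundary=250)
assert offline == [100, 200]    # answered from offline segments
assert realtime == [300, 400]   # answered from real-time segments
```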
  • 7
Apache Hudi

Apache Software Foundation

Hudi is a rich platform for building streaming data lakes with incremental data pipelines on a self-managing database layer, optimized for lake engines and regular batch processing. Hudi maintains a timeline of all actions performed on the table at different instants of time, which helps provide instantaneous views of the table while also efficiently supporting retrieval of data in the order of arrival. A Hudi instant consists of an action type, an instant time, and a state. Hudi provides efficient upserts by consistently mapping a given hoodie key to a file id via an indexing mechanism. This mapping between record key and file group/file id never changes once the first version of a record has been written to a file. In short, the mapped file group contains all versions of a group of records.
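The stable key-to-file-group mapping can be sketched with a toy index. This is a simplified stand-in for the idea, not Hudi's actual indexing code, and the class name is invented for the example:

```python
import hashlib


class ToyUpsertIndex:
    """Toy Hudi-style index: a record key maps to one file group, forever."""

    def __init__(self, num_file_groups=4):
        self.num_file_groups = num_file_groups
        self.index = {}  # record key -> file group id

    def file_group_for(self, key):
        # The first write pins the key to a file group; every later
        # upsert of the same key is routed to the same file group, so
        # that file group holds all versions of the record.
        if key not in self.index:
            digest = hashlib.md5(key.encode()).hexdigest()
            self.index[key] = int(digest, 16) % self.num_file_groups
        return self.index[key]


idx = ToyUpsertIndex()
first = idx.file_group_for("user-42")
# An upsert of the same key lands in the same file group:
assert idx.file_group_for("user-42") == first
```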
  • 8
Heroic

Heroic is an open-source monitoring system originally built at Spotify to address the problems of large-scale gathering and near real-time analysis of metrics. Heroic uses a small set of components, each responsible for a very specific thing. It offers indefinite retention, as long as you have the hardware to spend, and consumers are the component responsible for consuming metrics. When building Heroic, it was quickly realized that navigating hundreds of millions of time series without context is hard. Heroic supports federating requests, which allows multiple independent Heroic clusters to serve clients through a single global interface. This can be used to reduce geographical traffic by allowing each cluster to operate completely isolated within its zone.
  • 9
    Circonus IRONdb
    Circonus IRONdb makes it easy to handle and store unlimited volumes of telemetry data, easily handling billions of metric streams. Circonus IRONdb enables users to identify areas of opportunity and challenge in real time, providing forensic, predictive, and automated analytics capabilities that no other product can match. Rely on machine learning to automatically set a “new normal” as your data and operations dynamically change. Circonus IRONdb integrates with Grafana, which has native support for our analytics query language. We are also compatible with other visualization apps, such as Graphite-web. Circonus IRONdb keeps your data safe by storing multiple copies of your data in a cluster of IRONdb nodes. System administrators typically manage clustering, often spending significant time maintaining it and keeping it working. Circonus IRONdb allows operators to set and forget their cluster, and stop wasting resources manually managing their time series data store.
  • 10
QuestDB

QuestDB is a relational, column-oriented database designed for time series and event data. It uses SQL with extensions for time series to assist with real-time analytics. These pages cover core concepts of QuestDB, including setup steps, usage guides, and reference documentation for syntax, APIs, and configuration. This section describes the architecture of QuestDB, how it stores and queries data, and introduces features and capabilities unique to the system. The designated timestamp is a core feature that enables time-oriented language capabilities and partitioning. The symbol type makes storing and retrieving repetitive strings efficient. The storage model describes how QuestDB stores records and partitions within tables. Indexes can be used for faster read access on specific columns, and partitions can deliver significant performance benefits on calculations and queries. SQL extensions allow performant time series analysis with a concise syntax.
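One such SQL extension is SAMPLE BY, which groups rows into fixed time buckets. The sketch below shows the equivalent bucketing logic in plain Python; the table and column names in the commented query are hypothetical:

```python
from collections import defaultdict

# A QuestDB SAMPLE BY query aggregates rows into fixed time buckets, e.g.:
#   SELECT timestamp, avg(price) FROM trades SAMPLE BY 1h;
# ("trades" and "price" are hypothetical names for illustration.)
# The same bucketing logic, sketched in plain Python:


def sample_by(rows, bucket_seconds):
    """rows: (epoch_seconds, value) pairs -> {bucket start: average value}."""
    buckets = defaultdict(list)
    for ts, value in rows:
        buckets[ts - ts % bucket_seconds].append(value)
    return {start: sum(vs) / len(vs) for start, vs in buckets.items()}


hourly = sample_by([(0, 10.0), (1800, 20.0), (3600, 30.0)], bucket_seconds=3600)
assert hourly == {0: 15.0, 3600: 30.0}  # two readings averaged in hour one
```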
  • 11
    IBM Event Streams
    IBM Event Streams is a fully managed event streaming platform built on Apache Kafka, designed to help enterprises process and respond to real-time data streams. With capabilities for machine learning integration, high availability, and secure cloud deployment, it enables organizations to create intelligent applications that react to events as they happen. The platform supports multi-cloud environments, disaster recovery, and geo-replication, making it ideal for mission-critical workloads. IBM Event Streams simplifies building and scaling real-time, event-driven solutions, ensuring data is processed quickly and efficiently.
  • 12
StreamFlux

Fractal

Data is crucial when it comes to building, streamlining, and growing your business. However, getting full value out of data can be a challenge: many organizations face poor access to data, incompatible tools, spiraling costs, and slow results. Simply put, leaders who can turn raw data into real results will thrive in today's landscape. The key is empowering everyone across your business to analyze, build, and collaborate on end-to-end AI and machine learning solutions in one place, fast. Streamflux is a one-stop shop for your data analytics and AI challenges. Our self-serve platform gives you the freedom to build end-to-end data solutions, use models to answer complex questions, and assess user behaviors. Whether you're predicting customer churn and future revenue or generating recommendations, you can go from raw data to genuine business impact in days, not months.
  • 13
Kyrah

Kyrah facilitates enterprise data management across your cloud data estate: data exploration, storage asset grouping, security policy enforcement, and permissions management. It makes all changes transparent, secure, and GDPR-compliant with its easily configurable and fully automated change-request mechanism. And last, but not least, it offers full auditability of all events through an activity log. Kyrah provides seamless self-service data provisioning with a shopping-cart-style checkout, and a single pane of glass for your enterprise data estate via a storage map with a data usage heatmap. It enables faster time to market by unifying people, processes, and data provisioning in a single platform and user interface, helps an organization understand its data landscape with features such as a data usage heatmap and data sensitivity, and enforces data compliance so organizations meet data sovereignty requirements and avoid potential fines.
  • 14
Redpanda

Redpanda Data

Breakthrough data streaming capabilities that let you deliver customer experiences never before possible. Redpanda is compatible with the Kafka API and ecosystem, and offers predictable low latencies with zero data loss, up to 10x faster performance than Kafka, enterprise-grade support and hotfixes, automated backups to S3/GCS, 100% freedom from routine Kafka operations, and support for AWS and GCP. Redpanda was designed from the ground up to be easily installed, to get streaming up and running quickly. After you see its power, put Redpanda to the test in production and use the more advanced Redpanda features. We manage provisioning, monitoring, and upgrades, without any access to your cloud credentials; sensitive data never leaves your environment. Provisioned, operated, and maintained for you, with configurable instance types and the ability to expand the cluster as your needs grow.
  • 15
Samza

Apache Software Foundation

Samza allows you to build stateful applications that process data in real time from multiple sources, including Apache Kafka. Battle-tested at scale, it supports flexible deployment options, running on YARN or as a standalone library. Samza provides extremely low latencies and high throughput to analyze your data instantly, and scales to several terabytes of state with features like incremental checkpoints and host affinity. Samza is easy to operate, with flexible deployment options: YARN, Kubernetes, or standalone. The same code can process both batch and streaming data. It integrates with several sources, including Kafka, HDFS, AWS Kinesis, Azure Event Hubs, key-value stores, and Elasticsearch.
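The stateful, batch-or-stream idea can be sketched in a few lines. This is a conceptual illustration only; real Samza tasks implement the StreamTask interface on the JVM and checkpoint their local state incrementally:

```python
def process_stream(events, state=None):
    """Toy stateful stream task: count page views per user.

    Sketches the idea behind a stateful Samza job; the `state` dict
    stands in for Samza's checkpointed local state store.
    """
    state = dict(state or {})
    for event in events:
        user = event["user"]
        state[user] = state.get(user, 0) + 1
    return state


# The same code handles a historical batch replay or a live stream:
state = process_stream([{"user": "a"}, {"user": "b"}, {"user": "a"}])
assert state == {"a": 2, "b": 1}
state = process_stream([{"user": "a"}], state)  # resume from prior state
assert state["a"] == 3
```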
  • 16
    Red Hat OpenShift Streams
    Red Hat® OpenShift® Streams for Apache Kafka is a managed cloud service that provides a streamlined developer experience for building, deploying, and scaling new cloud-native applications or modernizing existing systems. Red Hat OpenShift Streams for Apache Kafka makes it easy to create, discover, and connect to real-time data streams no matter where they are deployed. Streams are a key component for delivering event-driven and data analytics applications. The combination of seamless operations across distributed microservices, large data transfer volumes, and managed operations allows teams to focus on team strengths, speed up time to value, and lower operational costs. OpenShift Streams for Apache Kafka includes a Kafka ecosystem and is part of a family of cloud services—and the Red Hat OpenShift product family—which helps you build a wide range of data-driven solutions.
  • 17
Shapelets

Powerful computing at your fingertips. Parallel computing and groundbreaking algorithms, so what are you waiting for? Designed to empower data scientists in business, Shapelets gives you the fastest computing in an all-inclusive time-series platform. Shapelets provides analytical features such as causality, discord and motif discovery, forecasting, and clustering. Run, extend, and integrate your own algorithms into the Shapelets platform to make the most of big data analysis. Shapelets integrates seamlessly with any data collection and storage solution. It also integrates with MS Office and any other visualization tool to simplify and share insights without requiring technical acumen. Our UI works with the server to bring you interactive visualizations. You can make the most of your metadata and represent it in the many different visual graphs provided by our modern interface. Shapelets enables users from the oil, gas, and energy industry to perform real-time analysis of operational data.
  • 18
Baffle

    Baffle provides universal data protection from any source to any destination to control who can see what data. Enterprises continue to battle cybersecurity threats such as ransomware, as well as breaches and losses of their data assets in public and private clouds. New data management restrictions and considerations on how it must be protected have changed how data is stored, retrieved, and analyzed. Baffle’s aim is to render data breaches and data losses irrelevant by assuming that breaches will happen. We provide a last line of defense by ensuring that unprotected data is never available to an attacker. Our data protection solutions protect data as soon as it is produced and keep it protected even while it is being processed. Baffle's transparent data security mesh for both on-premises and cloud data offers several data protection modes. Protect data on-the-fly as it moves from a source data store to a cloud database or object storage, ensuring safe consumption of sensitive data.
  • 19
Meltano

Meltano provides the ultimate flexibility in deployment options. Own your data stack, end to end. An ever-growing library of 300+ connectors has been running in production for years. Run workflows in isolated environments, execute end-to-end tests, and version control everything. Open source gives you the power to build your ideal data stack. Define your entire project as code and collaborate confidently with your team. The Meltano CLI enables you to rapidly create your project, making it easy to start replicating data. Meltano is designed to be the best way to run dbt to manage your transformations. Your entire data stack is defined in your project, making it simple to deploy to production. Validate your changes in development before moving to CI, and in staging before moving to production.
  • 20
Feast

Tecton

Make your offline data available for real-time predictions without having to build custom pipelines. Ensure data consistency between offline training and online inference, eliminating train-serve skew. Standardize data engineering workflows under one consistent framework. Teams use Feast as the foundation of their internal ML platforms. Feast doesn't require the deployment and management of dedicated infrastructure; instead, it reuses existing infrastructure and spins up new resources when needed. Feast is a good fit if you are not looking for a managed solution and are willing to manage and maintain your own implementation, you have engineers able to support the implementation and management of Feast, you want to run pipelines that transform raw data into features in a separate system and integrate with it, or you have unique requirements and want to build on top of an open source solution.
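The train-serve consistency idea can be sketched as a single shared feature definition. This is a conceptual illustration, not the Feast API (real Feast projects define FeatureViews and materialize them to offline and online stores); the function and field names are invented for the example:

```python
def clicks_per_view(raw):
    """One shared feature transformation used by both training and serving.

    Because the offline (training) and online (serving) paths share a
    single definition, the computed feature cannot drift between them,
    which is the essence of eliminating train-serve skew.
    """
    return {"clicks_per_view": raw["clicks"] / max(raw["views"], 1)}


raw = {"clicks": 30, "views": 120}
offline_training_row = clicks_per_view(raw)  # built for the training set
online_serving_row = clicks_per_view(raw)    # served at prediction time
assert offline_training_row == online_serving_row
assert offline_training_row["clicks_per_view"] == 0.25
```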
  • 21
    Semarchy xDI
Experience Semarchy's flexible unified data platform to empower better business decisions enterprise-wide. Integrate all your data with xDI, the high-performance, agile, and extensible data integration platform for all styles and use cases. Its single technology federates all forms of data integration, and its mapping converts business rules into deployable code. xDI has an extensible and open architecture supporting on-premise, cloud, hybrid, and multi-cloud environments.
  • 22
rudol

Unify your data catalog, reduce communication overhead, and enable quality control for any member of your company, all without deploying or installing anything. rudol is a data quality platform that helps companies understand all their data sources, no matter where they come from; reduces excessive communication in reporting processes or urgencies; and enables data quality diagnosis and issue prevention across the whole company, through easy steps. With rudol, each organization can add data sources from a growing list of providers and BI tools with a standardized structure, including MySQL, PostgreSQL, Airflow, Redshift, Snowflake, Kafka, S3*, BigQuery*, MongoDB*, Tableau*, PowerBI*, and Looker* (* in development). So, regardless of where the data is coming from, people can understand where and how it is stored, read and collaborate on its documentation, or easily contact data owners using our integrations.
    Starting Price: $0
  • 23
Benerator

Describe your data model at an abstract level in XML and involve your business people, as no developer skills are necessary. Use a wide range of function libraries to fake realistic data, write your own extensions in JavaScript or Java, and integrate your data processes into GitLab CI or Jenkins. Generate, anonymize, and migrate with Benerator's model-driven data toolkit. Define processes to anonymize or pseudonymize data in plain XML at an abstract level without the need for developer skills. Stay GDPR-compliant with your data and protect the privacy of your customers. Mask and obfuscate sensitive data for BI, test, development, or training purposes. Combine data from various sources (subsetting) while keeping data integrity. Migrate and transform your data in multi-system landscapes. Reuse your testing data models to migrate production environments. Keep your data consistent and reliable in a microservices architecture.
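The XML descriptor approach can be pictured with a minimal sketch. This is illustrative only; the element and attribute names below are simplified assumptions and should be checked against the Benerator documentation:

```xml
<setup>
  <!-- Generate 1,000 synthetic customers; names and attributes here are
       illustrative assumptions, not a verified descriptor. -->
  <generate type="customer" count="1000" consumer="CSVEntityExporter">
    <attribute name="name" pattern="[A-Z][a-z]{4,9}"/>
    <attribute name="age" type="int" min="18" max="90"/>
  </generate>
</setup>
```

The descriptor is declarative: non-developers adjust counts, patterns, and value ranges without touching program code.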
  • 24
Acryl Data

    No more data catalog ghost towns. Acryl Cloud drives fast time-to-value via Shift Left practices for data producers and an intuitive UI for data consumers. Continuously detect data quality incidents in real-time, automate anomaly detection to prevent breakages, and drive fast resolution when they do occur. Acryl Cloud supports both push-based and pull-based metadata ingestion for easy maintenance, ensuring information is trustworthy, up-to-date, and definitive. Data should be operational. Go beyond simple visibility and use automated Metadata Tests to continuously expose data insights and surface new areas for improvement. Reduce confusion and accelerate resolution with clear asset ownership, automatic detection, streamlined alerts, and time-based lineage for tracing root causes.
  • 25
    APERIO DataWise
Data is used in every aspect of a processing plant or facility; it underlies most operational processes, most business decisions, and most environmental events. Failures are often attributed to this same data, whether as operator error, bad sensors, safety or environmental events, or poor analytics. This is where APERIO can alleviate these problems. Data integrity is a key element of Industry 4.0, the foundation upon which more advanced applications, such as predictive models, process optimization, and custom AI tools, are developed. APERIO DataWise is the industry-leading provider of reliable, trusted data. Automate the quality of your PI data or digital twins continuously and at scale. Ensure validated data across the enterprise to improve asset reliability. Empower operators to make better decisions. Detect threats to operational data to ensure operational resilience. Accurately monitor and report sustainability metrics.
  • 26
Kestra

    Kestra is an open-source, event-driven orchestrator that simplifies data operations and improves collaboration between engineers and business users. By bringing Infrastructure as Code best practices to data pipelines, Kestra allows you to build reliable workflows and manage them with confidence. Thanks to the declarative YAML interface for defining orchestration logic, everyone who benefits from analytics can participate in the data pipeline creation process. The UI automatically adjusts the YAML definition any time you make changes to a workflow from the UI or via an API call. Therefore, the orchestration logic is defined declaratively in code, even if some workflow components are modified in other ways.
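The declarative YAML interface can be pictured with a minimal flow sketch. This is illustrative only; the task `type` identifier is an assumption that varies by Kestra version, so check the Kestra documentation for exact plugin class names:

```yaml
# Illustrative sketch of a Kestra flow definition (not a verified example).
id: hello_pipeline
namespace: company.team

tasks:
  - id: say_hello
    type: io.kestra.plugin.core.log.Log   # assumed plugin class name
    message: Hello from a declaratively defined pipeline
```

Because the whole flow lives in this YAML document, edits made in the UI or via the API are reflected back into the same declarative definition.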
  • 27
SecuPi

    SecuPi provides an overarching data-centric security platform, delivering fine-grained access control (ABAC), Database Activity Monitoring (DAM) and de-identification using FPE encryption, physical and dynamic masking and deletion (RTBF). SecuPi offers wide coverage across packaged and home-grown applications, direct access tools, big data, and cloud environments. One data security platform for monitoring, controlling, encrypting, and classifying data across all cloud & on-prem platforms seamlessly with no code changes. Agile and efficient configurable platform to meet current & future regulatory and audit requirements. No source-code changes with fast & cost-efficient implementation. SecuPi’s fine-grain data access controls protect sensitive data so users get access only to data they are entitled to view, and no more. Seamlessly integrate with Starburst/Trino for automated enforcement of data access policies and data protection operations.
  • 28
VeloDB

Powered by Apache Doris, VeloDB is a modern data warehouse for lightning-fast analytics on real-time data at scale. It supports push-based micro-batch and pull-based streaming data ingestion within seconds, and its storage engine provides real-time upsert, append, and pre-aggregation. VeloDB delivers unparalleled performance in both real-time data serving and interactive ad-hoc queries. It handles not just structured but also semi-structured data, supports not just real-time analytics but also batch processing, and can not only run queries against internal data but also work as a federated query engine to access external data lakes and databases. Its distributed design supports linear scalability. Whether deployed on-premises or as a cloud service, with storage and compute separated or integrated, resource usage can be flexibly and efficiently adjusted according to workload requirements. VeloDB is built on and fully compatible with open source Apache Doris, and supports the MySQL protocol, functions, and SQL for easy integration with other data tools.
  • 29
Baidu Palo

Baidu AI Cloud

Palo helps enterprises create a PB-level MPP-architecture data warehouse service within minutes and import massive data from RDS, BOS, and BMR, so they can perform multi-dimensional analytics on big data. Palo is compatible with mainstream BI tools: data analysts can analyze and visualize data and gain insights quickly to assist decision-making. It has an industry-leading MPP query engine with column storage, intelligent indexing, and vectorized execution. It also provides in-database analytics, window functions, and other advanced analytics functions. You can create materialized views and change a table's structure without suspending service, and it supports flexible and efficient data recovery.
  • 30
Validio

See how your data assets are used, and get important insights about them, such as popularity, utilization, quality, and schema coverage. Find and filter the data you need based on metadata tags and descriptions. Drive data governance and ownership across your organization. Stream-lake-warehouse lineage facilitates data ownership and collaboration, and an automatically generated field-level lineage map helps you understand the entire data ecosystem. Anomaly detection learns from your data and seasonality patterns, with automatic backfill from historical data. Machine learning-based thresholds are trained per data segment, on actual data instead of metadata only.
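The per-segment, data-trained thresholds can be sketched with a simple statistical band. This is a simplified stand-in for the kind of learned thresholds the text describes (a mean ± k·stdev rule per segment); real systems layer seasonality models on top:

```python
import statistics


def per_segment_thresholds(history, k=3.0):
    """Compute a (mean - k*stdev, mean + k*stdev) band for each segment.

    history: {segment name: list of observed metric values}.
    Values falling outside a segment's band would be flagged as anomalies.
    """
    thresholds = {}
    for segment, values in history.items():
        mean = statistics.mean(values)
        stdev = statistics.pstdev(values)
        thresholds[segment] = (mean - k * stdev, mean + k * stdev)
    return thresholds


bands = per_segment_thresholds({"us": [10, 12, 11, 13], "eu": [100, 98, 102, 100]})
low, high = bands["us"]
assert low < 10 and high > 13   # normal points fall inside the band
assert not (low < 50 < high)    # a sudden value of 50 would be flagged
```

Training one band per segment, rather than one global band, is what lets each segment settle on its own "new normal" as its data shifts.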