Business Software for Apache Spark - Page 4

Top Software that integrates with Apache Spark as of July 2025 - Page 4

  • 1
    Mage Sensitive Data Discovery
    Uncover hidden sensitive data locations within your enterprise through Mage's patented Sensitive Data Discovery module. Find data hidden in all types of data stores and in the most obscure locations, whether structured, unstructured, Big Data, or in the cloud. Leverage the power of artificial intelligence and natural language processing to uncover data in the most complex of locations. Ensure efficient identification of sensitive data, with minimal false positives, through a patented approach to data discovery. Configure additional data classifications beyond the 70+ out-of-the-box classifications covering all popular PII and PHI data. Schedule sample, full, or even incremental scans through a simplified discovery process.
  • 2
    Deep.BI

    Deep.BI enables media, insurance, e-commerce, and banking enterprises to effectively increase revenues by anticipating specific user behaviors, then automating actions to convert those users into paying customers and retain them. It is a predictive customer data platform with real-time user scoring, built on Deep.BI's next-gen enterprise data warehouse. We help digital businesses and platforms improve their products, content, and distribution. Deep.BI's platform collects extensive data about product usage and content consumption and provides real-time, actionable insights. Those insights are generated within seconds through the Deep.Conveyor data pipeline, made available for analysis in the Deep.Explorer business intelligence platform, augmented by the Deep.Score event scoring engine built with custom AI algorithms for your use case, and readied for automation through the Deep.Conductor high-speed API and AI model serving platform.
  • 3
    Metabase

    Meet the easy, open source way for everyone in your company to ask questions and learn from data. Connect to your data and get it in front of your team. Dashboards are easy to build, share, and explore. Anyone on your team can get answers to questions about your data with just a few clicks, whether it's the CEO or customer support. When the questions get more complicated, SQL and the notebook editor are there for the data-savvy. Visual joins, multiple aggregations, and filtering steps give you the tools to dig deeper into your data. Add variables to your queries to create interactive visualizations that users can tweak and explore. Set up alerts and scheduled reports to get the right data in front of the right people at the right time. Start in a couple of clicks with the hosted version, or use Docker to get up and running on your own for free. Connect to your existing data, invite your team, and you have a BI solution that would usually take a sales call.
  • 4
    Apache HBase
    The Apache Software Foundation

    Use Apache HBase™ when you need random, real-time read/write access to your Big Data. This project's goal is the hosting of very large tables (billions of rows × millions of columns) atop clusters of commodity hardware. Automatic failover support between RegionServers. Easy-to-use Java API for client access. Thrift gateway and a RESTful web service that supports XML, Protobuf, and binary data encoding options. Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX. A client sketch follows below.
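    A minimal sketch of client access through the Thrift gateway mentioned above, using the third-party happybase library; the host, table name, and column family are hypothetical placeholders and the table is assumed to exist.

    ```python
    # Sketch: random, real-time read/write over HBase's Thrift gateway
    # via happybase. Hostname, table, and column family are placeholders.
    import happybase

    # Connect to an HBase Thrift server (default port 9090).
    connection = happybase.Connection('hbase-thrift.example.com')
    table = connection.table('metrics')  # assumes this table exists

    # Random real-time write: one row keyed by entity and timestamp.
    table.put(b'sensor-42|2025-07-01T00:00:00Z', {b'cf:temp': b'21.5'})

    # Random real-time read of the same row.
    row = table.row(b'sensor-42|2025-07-01T00:00:00Z')
    print(row[b'cf:temp'])
    ```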
  • 5
    Hadoop
    Apache Software Foundation

    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models (see the word-count sketch below). It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failure. A wide variety of companies and organizations use Hadoop for both research and production. Users are encouraged to add themselves to the Hadoop PoweredBy wiki page. Apache Hadoop 3.3.4 incorporates a number of significant enhancements over the previous major release line (hadoop-3.2).
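    A minimal sketch of the "simple programming models" idea: a word count written for Hadoop Streaming, where one Python script serves as both mapper and reducer. The invocation in the comment is the standard streaming pattern; the script name is a hypothetical placeholder.

    ```python
    # wordcount.py: run via Hadoop Streaming, e.g.
    #   hadoop jar hadoop-streaming.jar \
    #     -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
    #     -input /data/in -output /data/out
    import sys
    from itertools import groupby

    def map_stdin():
        # Emit "word\t1" for every word on stdin.
        for line in sys.stdin:
            for word in line.split():
                print(f"{word}\t1")

    def reduce_stdin():
        # Streaming delivers input sorted by key; sum counts per word.
        pairs = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
        for word, group in groupby(pairs, key=lambda kv: kv[0]):
            print(f"{word}\t{sum(int(count) for _, count in group)}")

    if __name__ == "__main__":
        map_stdin() if sys.argv[1] == "map" else reduce_stdin()
    ```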
  • 6
    Amazon EMR
    Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. With EMR you can run petabyte-scale analysis at less than half the cost of traditional on-premises solutions and over 3x faster than standard Apache Spark. For short-running jobs, you can spin clusters up and down and pay per second for the instances used (see the sketch below). For long-running workloads, you can create highly available clusters that automatically scale to meet demand. If you have existing on-premises deployments of open source tools such as Apache Spark and Apache Hive, you can also run EMR clusters on AWS Outposts. Analyze data using open source ML frameworks such as Apache Spark MLlib, TensorFlow, and Apache MXNet. Connect to Amazon SageMaker Studio for large-scale model training, analysis, and reporting.
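    A minimal sketch of the "spin up, run, spin down" pattern for a short-running Spark job, using the boto3 SDK and assuming the default EMR roles already exist; the release label, instance types, and S3 script path are hypothetical placeholders.

    ```python
    # Sketch: launch a transient EMR cluster that runs one Spark step
    # and terminates itself, so you pay per second of use.
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    response = emr.run_job_flow(
        Name="nightly-spark-job",
        ReleaseLabel="emr-6.15.0",
        Applications=[{"Name": "Spark"}],
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            # Auto-terminate once the step finishes.
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        Steps=[{
            "Name": "spark-step",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://my-bucket/jobs/etl.py"],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print(response["JobFlowId"])
    ```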
  • 7
    Google Cloud Bigtable
    Google Cloud Bigtable is a fully managed, scalable NoSQL database service for large analytical and operational workloads. Fast and performant: Use Cloud Bigtable as the storage engine that grows with you from your first gigabyte to petabyte-scale for low-latency applications as well as high-throughput data processing and analytics. Seamless scaling and replication: Start with a single node per cluster, and seamlessly scale to hundreds of nodes dynamically supporting peak demand. Replication also adds high availability and workload isolation for live serving apps. Simple and integrated: Fully managed service that integrates easily with big data tools like Hadoop, Dataflow, and Dataproc. Plus, support for the open source HBase API standard makes it easy for development teams to get started.
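    A minimal sketch of a write and a low-latency point read with the google-cloud-bigtable Python client; the project, instance, table, and column family names are hypothetical placeholders and the table is assumed to exist.

    ```python
    # Sketch: one cell write and one point read against Cloud Bigtable.
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    instance = client.instance("my-instance")
    table = instance.table("user-events")

    # Write one cell into column family "cf".
    row = table.direct_row(b"user#1234")
    row.set_cell("cf", b"last_login", b"2025-07-01T00:00:00Z")
    row.commit()

    # Low-latency point read of the same row.
    result = table.read_row(b"user#1234")
    print(result.cells["cf"][b"last_login"][0].value)
    ```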
  • 8
    Azure Data Factory
    Integrate data silos with Azure Data Factory, a service built for all data integration needs and skill levels. Easily construct ETL and ELT processes code-free within the intuitive visual environment, or write your own code (see the sketch below). Visually integrate data sources using more than 90 natively built and maintenance-free connectors at no added cost. Focus on your data; the serverless integration service does the rest. Data Factory provides a data integration and transformation layer that works across your digital transformation initiatives. Data Factory can help independent software vendors (ISVs) enrich their SaaS apps with integrated hybrid data so they can deliver data-driven user experiences. Pre-built connectors and integration at scale enable you to focus on your users while Data Factory takes care of the rest.
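    A minimal sketch of the "write your own code" path: triggering an existing pipeline with the azure-mgmt-datafactory SDK. The subscription ID, resource group, factory, pipeline name, and parameter are hypothetical placeholders.

    ```python
    # Sketch: kick off a Data Factory pipeline run and check its status.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    adf = DataFactoryManagementClient(
        credential=DefaultAzureCredential(),
        subscription_id="00000000-0000-0000-0000-000000000000",
    )

    # Start a run of an already-deployed pipeline, with runtime parameters.
    run = adf.pipelines.create_run(
        resource_group_name="analytics-rg",
        factory_name="my-data-factory",
        pipeline_name="copy-sales-data",
        parameters={"window_start": "2025-07-01"},
    )

    status = adf.pipeline_runs.get("analytics-rg", "my-data-factory", run.run_id)
    print(status.status)  # e.g. "InProgress" or "Succeeded"
    ```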
  • 9
    Alibaba Log Service
    Log Service is a complete real-time data logging service developed by Alibaba Group. Log Service supports the collection, consumption, shipping, search, and analysis of logs, and improves the capacity for processing and analyzing large volumes of logs. It completes data collection from more than 30 data sources within five minutes, and deploys reliable, high-availability service nodes in data centers around the world. It fully supports real-time and offline computing, and seamlessly connects to Alibaba Cloud software, open source software, and commercial software. You can set access permissions for individual rows so that the same report is displayed differently for each user role.
  • 10
    IBM Databand
    Monitor your data health and pipeline performance. Gain unified visibility into pipelines running on cloud-native tools like Apache Airflow, Apache Spark, Snowflake, BigQuery, and Kubernetes. An observability platform purpose-built for data engineers. Data engineering is only getting more challenging as demands from business stakeholders grow, and Databand can help you catch up. More pipelines mean more complexity: data engineers are working with more complex infrastructure than ever while pushing faster release cycles. It's harder to understand why a process has failed, why it's running late, and how changes affect the quality of data outputs. Data consumers are frustrated by inconsistent results, poor model performance, and delays in data delivery. Not knowing exactly what data is being delivered, or precisely where failures come from, leads to a persistent lack of trust. Meanwhile, pipeline logs, errors, and data quality metrics are captured and stored in independent, isolated systems.
  • 11
    Molecula

    Molecula is an enterprise feature store that simplifies, accelerates, and controls big data access to power machine-scale analytics and AI. Continuously extracting features, reducing the dimensionality of data at the source, and routing real-time feature changes into a central store enables millisecond queries, computation, and feature re-use across formats and locations without copying or moving raw data. The Molecula feature store provides data engineers, data scientists, and application developers a single access point to graduate from reporting and explaining with human-scale data to predicting and prescribing real-time business outcomes with all data. Enterprises spend a lot of money preparing, aggregating, and making numerous copies of their data for every project before they can make decisions with it. Molecula brings an entirely new paradigm for continuous, real-time data analysis to be used for all your mission-critical applications.
  • 12
    JanusGraph

    JanusGraph is a scalable graph database optimized for storing and querying graphs containing hundreds of billions of vertices and edges distributed across a multi-machine cluster. JanusGraph is a project under The Linux Foundation and includes participants from Expero, Google, GRAKN.AI, Hortonworks, IBM, and Amazon. Elastic and linear scalability for a growing data and user base. Data distribution and replication for performance and fault tolerance. Multi-datacenter high availability and hot backups. All functionality is completely free; there is no need to buy commercial licenses, as JanusGraph is fully open source under the Apache 2 license. JanusGraph is a transactional database that can support thousands of concurrent users executing complex graph traversals in real time (see the sketch below), with support for both ACID and eventual consistency. In addition to online transactional processing (OLTP), JanusGraph supports global graph analytics (OLAP) through its Apache Spark integration.
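    A minimal OLTP sketch using the gremlinpython driver against a JanusGraph server's Gremlin endpoint; the hostname and the "knows" edge label are hypothetical placeholders.

    ```python
    # Sketch: insert two vertices plus an edge, then traverse in real time.
    from gremlin_python.process.anonymous_traversal import traversal
    from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

    conn = DriverRemoteConnection("ws://janusgraph.example.com:8182/gremlin", "g")
    g = traversal().withRemote(conn)

    # Create two person vertices and a "knows" edge between them.
    alice = g.addV("person").property("name", "alice").next()
    bob = g.addV("person").property("name", "bob").next()
    g.V(alice).addE("knows").to(bob).iterate()

    # Real-time traversal: who does alice know?
    print(g.V(alice).out("knows").values("name").toList())
    conn.close()
    ```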
  • 13
    TIMi

    With TIMi, companies can capitalize on their corporate data to develop new ideas and make critical business decisions faster and more easily than ever before. The heart of TIMi's integrated platform: TIMi's ultimate real-time auto-ML engine, 3D VR segmentation and visualization, and unlimited self-service business intelligence. TIMi is several orders of magnitude faster than any other solution at the two most important analytical tasks: the handling of datasets (data cleaning, feature engineering, creation of KPIs) and predictive modeling. TIMi is an "ethical solution": there is no lock-in, just excellence. We guarantee you can work with complete peace of mind, without unexpected extra costs. Thanks to an original and unique software infrastructure, TIMi is optimized to offer you the greatest flexibility during the exploration phase and the highest reliability during the production phase. TIMi is the ultimate "playground" that allows your analysts to test their craziest ideas!
  • 14
    Delta Lake

    Delta Lake is an open source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Data lakes typically have multiple data pipelines reading and writing data concurrently, and, lacking transactions, data engineers have to go through a tedious process to ensure data integrity. Delta Lake brings ACID transactions to your data lakes and provides serializability, the strongest isolation level. Learn more at Diving into Delta Lake: Unpacking the Transaction Log. In big data, even the metadata itself can be "big data"; Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle it all. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease. Delta Lake also provides snapshots of data, enabling developers to access and revert to earlier versions for audits, rollbacks, or reproducing experiments (see the sketch below).
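    A minimal sketch of a transactional write plus time travel with PySpark, assuming the delta-spark package is on the classpath; /tmp/events is a hypothetical path.

    ```python
    # Sketch: two transactional writes to a Delta table, then a read
    # of the table "as of" its first version (time travel).
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("delta-demo")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    # Two concurrent-safe, ACID appends to the same table path.
    spark.range(0, 5).write.format("delta").mode("overwrite").save("/tmp/events")
    spark.range(5, 10).write.format("delta").mode("append").save("/tmp/events")

    # Snapshot / time travel: read the table as of version 0.
    v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events")
    v0.show()
    ```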
  • 15
    Kylo
    Teradata

    Kylo is an open source, enterprise-ready data lake management software platform for self-service data ingest and data preparation, with integrated metadata management, governance, security, and best practices inspired by Think Big's 150+ big data implementation projects. Self-service data ingest with data cleansing, validation, and automatic profiling. Wrangle data with visual SQL and interactive transforms through a simple user interface. Search and explore data and metadata, view lineage, and profile statistics. Monitor the health of feeds and services in the data lake. Track SLAs and troubleshoot performance. Design batch or streaming pipeline templates in Apache NiFi and register them with Kylo to enable user self-service. Organizations can expend significant engineering effort moving data into Hadoop yet still struggle to maintain governance and data quality. Kylo dramatically simplifies data ingest by shifting it to data owners through a simple guided UI.
  • 16
    Privacera

    At the intersection of data governance, privacy, and security, Privacera's unified data access governance platform maximizes the value of data by providing secure data access control and governance across hybrid and multi-cloud environments. The hybrid platform centralizes access and natively enforces policies across multiple cloud services (AWS, Azure, Google Cloud, Databricks, Snowflake, Starburst, and more) to democratize trusted data enterprise-wide without compromising compliance with regulations such as GDPR, CCPA, LGPD, or HIPAA. Trusted by Fortune 500 customers across finance, insurance, retail, healthcare, media, and the public and federal sectors, Privacera is the industry's leading data access governance platform, delivering unmatched scalability, elasticity, and performance. Headquartered in Fremont, California, Privacera was founded in 2016 by the creators of Apache Ranger™ and Apache Atlas™ to manage cloud data privacy and security.
  • 17
    MLflow

    MLflow is an open source platform for managing the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. MLflow currently offers four components: record and query experiments (code, data, config, and results); package data science code in a format that reproduces runs on any platform; deploy machine learning models in diverse serving environments; and store, annotate, discover, and manage models in a central repository. The MLflow Tracking component is an API and UI for logging parameters, code versions, metrics, and output files when running your machine learning code, and for later visualizing the results. MLflow Tracking lets you log and query experiments using the Python, REST, R, and Java APIs (see the sketch below). An MLflow Project is a format for packaging data science code in a reusable and reproducible way, based primarily on conventions. In addition, the Projects component includes an API and command-line tools for running projects.
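    A minimal sketch of the Python Tracking API described above; the experiment name, parameter, and metric values are illustrative.

    ```python
    # Sketch: log a parameter and a per-step metric under one tracked run.
    import mlflow

    mlflow.set_experiment("demo-experiment")

    with mlflow.start_run():
        # Parameters are logged once per run; metrics can be logged per step.
        mlflow.log_param("alpha", 0.5)
        for step, loss in enumerate([0.9, 0.7, 0.55]):
            mlflow.log_metric("loss", loss, step=step)
    ```

    Runs logged this way show up in the MLflow Tracking UI, where parameters and metric curves can be compared across experiments.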
  • 18
    Mage Static Data Masking
    Mage™ Static Data Masking (SDM) and Test Data Management (TDM) capabilities fully integrate with Imperva's Data Security Fabric (DSF), delivering complete protection for all sensitive or regulated data. They integrate seamlessly with an organization's existing IT framework and its existing application development, testing, and data flows, without requiring any additional architectural changes.
  • 19
    Mage Dynamic Data Masking
    The Mage™ Dynamic Data Masking module of the Mage data security platform was designed with end-customer needs in mind. It was developed working alongside our customers to address their specific needs and requirements, and as a result it has evolved to meet virtually every use case an enterprise could have. Most other solutions on the market either arrived through an acquisition or were built for only a single use case. Mage™ Dynamic Data Masking delivers adequate protection for sensitive production data exposed to application and database users, while integrating seamlessly with an organization's existing IT framework without requiring any additional architectural changes.
  • 20
    Acxiom Real Identity
    Real Identity™ delivers sub-second decisions to power relevant messages in real time. Real Identity enables the world's biggest brands to accurately identify and ethically connect with people anytime, anywhere to create relevant experiences. Engage people with reach, scale, and precision across every interaction. Manage and maintain identity across your enterprise by leveraging 50 years of data and identity expertise combined with the latest artificial intelligence and machine learning techniques. The adtech environment requires speed and access to identity and data to enable personalization and decisioning use cases. In a cookieless world, first-party data signals will drive these functions while the conversation continues among people, brands, and publishers. By delivering experiences that matter across all channels, you can wow your customers and prospects while staying ahead of regulations and ahead of your competition.
  • 21
    Okera

    Okera, the Universal Data Authorization company, helps modern, data-driven enterprises accelerate innovation, minimize data security risks, and demonstrate regulatory compliance. The Okera Dynamic Access Platform automatically enforces universal fine-grained access control policies. This allows employees, customers, and partners to use data responsibly, while protecting them from inappropriately accessing data that is confidential, personally identifiable, or regulated. Okera’s robust audit capabilities and data usage intelligence deliver the real-time and historical information that data security, compliance, and data delivery teams need to respond quickly to incidents, optimize processes, and analyze the performance of enterprise data initiatives. Okera began development in 2016 and now dynamically authorizes access to hundreds of petabytes of sensitive data for the world’s most demanding F100 companies and regulatory agencies. The company is headquartered in San Francisco.
  • 22
    Tonic

    Tonic automatically creates mock data that preserves key characteristics of secure datasets so that developers, data scientists, and salespeople can work conveniently without breaching privacy. Tonic mimics your production data to create de-identified, realistic, and safe data for your test environments. With Tonic, your data is modeled from your production data to help you tell an identical story in your testing environments: safe, useful data created to mimic your real-world data, at scale. Generate data that looks, acts, and feels just like your production data and safely share it across teams, businesses, and international borders. PII/PHI identification, obfuscation, and transformation. Proactively protect your sensitive data with automatic scanning, alerts, de-identification, and mathematical guarantees of data privacy. Advanced subsetting across diverse database types. Collaboration, compliance, and data workflows, all perfectly automated. A generic sketch of the obfuscation technique follows below.
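    Tonic's own API is not shown here; as a plainly labeled stand-in, this sketch illustrates the underlying technique (replacing direct identifiers with realistic fakes while keeping analytical fields intact) using the third-party Faker library. The field names and record are hypothetical.

    ```python
    # Sketch: PII obfuscation that preserves row structure, via Faker.
    # This is an illustration of the technique, not Tonic's API.
    from faker import Faker

    fake = Faker()
    Faker.seed(42)  # deterministic output so test runs are reproducible

    def de_identify(record: dict) -> dict:
        # Swap direct identifiers for realistic-looking fakes,
        # keeping non-sensitive analytical fields untouched.
        return {
            **record,
            "name": fake.name(),
            "email": fake.email(),
            "ssn": fake.ssn(),
        }

    prod_row = {"name": "Jane Doe", "email": "jane@corp.com",
                "ssn": "123-45-6789", "plan": "premium", "spend": 1023.50}
    print(de_identify(prod_row))
    ```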
  • 23
    HPE Ezmeral
    Hewlett Packard Enterprise

    Run, manage, control and secure the apps, data and IT that run your business, from edge to cloud. HPE Ezmeral advances digital transformation initiatives by shifting time and resources from IT operations to innovations. Modernize your apps. Simplify your Ops. And harness data to go from insights to impact. Accelerate time-to-value by deploying Kubernetes at scale with integrated persistent data storage for app modernization on bare metal or VMs, in your data center, on any cloud or at the edge. Harness data and get insights faster by operationalizing the end-to-end process to build data pipelines. Bring DevOps agility to the machine learning lifecycle, and deliver a unified data fabric. Boost efficiency and agility in IT Ops with automation and advanced artificial intelligence. And provide security and control to eliminate risk and reduce costs. HPE Ezmeral Container Platform provides an enterprise-grade platform to deploy Kubernetes at scale for a wide range of use cases.
  • 24
    NVIDIA RAPIDS
    The RAPIDS suite of software libraries, built on CUDA-X AI, gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces (see the sketch below). RAPIDS also focuses on common data preparation tasks for analytics and data science, including a familiar DataFrame API that integrates with a variety of machine learning algorithms for end-to-end pipeline acceleration without paying the typical serialization costs. RAPIDS also includes support for multi-node, multi-GPU deployments, enabling vastly accelerated processing and training on much larger datasets. Accelerate your Python data science toolchain with minimal code changes and no new tools to learn. Increase machine learning model accuracy by iterating on models faster and deploying them more frequently.
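    A minimal sketch of the pandas-like cuDF DataFrame API, assuming a RAPIDS installation and an NVIDIA GPU; the column names and values are illustrative.

    ```python
    # Sketch: a familiar DataFrame groupby, executed entirely on the GPU.
    import cudf

    gdf = cudf.DataFrame({
        "segment": ["a", "b", "a", "b"],
        "revenue": [10.0, 20.0, 30.0, 40.0],
    })

    # Same API shape as pandas, but the aggregation runs on the GPU.
    totals = gdf.groupby("segment")["revenue"].sum()
    print(totals)

    # Hand off to CPU-side tools when needed.
    pdf = totals.to_pandas()
    ```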
  • 25
    Jovian

    Start coding instantly with an interactive Jupyter notebook running on the cloud. No installation or setup required. Start with a blank notebook, follow along with a tutorial, or use a starter template. Manage all your projects on Jovian. Just run jovian.commit() to capture snapshots, record versions, and generate shareable links for your notebooks (see the sketch below). Showcase your best work on your Jovian profile: feature projects, notebooks, collections, activities, and more. Track changes in code, outputs, graphs, tables, logs, and more with simple, intuitive, and visual notebook diffs. Share your work online, or collaborate privately with your team. Let others build upon your experiments and contribute back. Collaborators can discuss and comment on specific parts of your notebooks with a powerful cell-level commenting interface. A flexible comparison dashboard lets you sort, filter, archive, and do much more to analyze ML experiments and results.
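    A minimal sketch of the jovian.commit() flow mentioned above, intended to be run from inside a Jupyter notebook; the project name is a hypothetical placeholder.

    ```python
    # Sketch: snapshot the current notebook, record a version,
    # and get back a shareable link (run inside a notebook cell).
    import jovian

    jovian.commit(project="my-username/churn-analysis")
    ```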
  • 26
    Apache Bigtop
    Apache Software Foundation

    Bigtop is an Apache Foundation project for infrastructure engineers and data scientists looking for comprehensive packaging, testing, and configuration of the leading open source big data components. Bigtop supports a wide range of components/projects, including, but not limited to, Hadoop, HBase, and Spark. Bigtop packages Hadoop RPMs and DEBs so that you can manage and maintain your Hadoop cluster. Bigtop provides an integrated smoke testing framework, alongside a suite of over 50 test files. Bigtop provides Vagrant recipes, raw images, and (work-in-progress) Docker recipes for deploying Hadoop from scratch. Bigtop supports many operating systems, including Debian, Ubuntu, CentOS, Fedora, openSUSE, and many others. Bigtop includes tools and a framework for testing at various levels (packaging, platform, runtime, etc.) for both initial deployments and upgrade scenarios, covering the entire data platform rather than just individual components.
  • 27
    OPAQUE
    OPAQUE Systems

    OPAQUE Systems offers a leading confidential AI platform that enables organizations to securely run AI, machine learning, and analytics workflows on sensitive data without compromising privacy or compliance. Their technology allows enterprises to unleash AI innovation risk-free by leveraging confidential computing and cryptographic verification, ensuring data sovereignty and regulatory adherence. OPAQUE integrates seamlessly into existing AI stacks via APIs, notebooks, and no-code solutions, eliminating the need for costly infrastructure changes. The platform provides verifiable audit trails and attestation for complete transparency and governance. Customers like Ant Financial have benefited by using previously inaccessible data to improve credit risk models. With OPAQUE, companies accelerate AI adoption while maintaining uncompromising security and control.
  • 28
    NVMesh
    Excelero

    Excelero delivers low-latency distributed block storage for web-scale applications. NVMesh enables shared NVMe across any network and supports any local or distributed file system. The solution features an intelligent management layer that abstracts the underlying hardware with CPU offload, creates logical volumes with redundancy, and provides centralized, intelligent management and monitoring. Applications get the latency, throughput, and IOPS of a local NVMe device with the convenience of centralized storage, while avoiding proprietary hardware lock-in and reducing overall storage TCO. NVMesh features a distributed block layer that allows unmodified applications to utilize pooled NVMe storage devices across a network at local speeds and latencies. Distributed NVMe storage resources are pooled with the ability to create arbitrary, dynamic block volumes that can be utilized by any host running the NVMesh block client.
  • 29
    lakeFS
    Treeverse

    lakeFS enables you to manage your data lake the way you manage your code: run parallel pipelines for experimentation and CI/CD for your data. It simplifies the lives of the engineers, data scientists, and analysts who are transforming the world with data. lakeFS is an open source platform that delivers resilience and manageability to object-storage-based data lakes. With lakeFS you can build repeatable, atomic, and versioned data lake operations, from complex ETL jobs to data science and analytics. lakeFS supports AWS S3, Azure Blob Storage, and Google Cloud Storage (GCS) as its underlying storage service. It is API-compatible with S3 (see the sketch below) and works seamlessly with all modern data frameworks such as Spark, Hive, AWS Athena, Presto, etc. lakeFS provides a Git-like branching and committing model that scales to exabytes of data by utilizing S3, GCS, or Azure Blob for storage.
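    A minimal sketch of the S3-compatible access pattern, assuming a running lakeFS endpoint: the repository appears as the bucket and the branch as the leading path element. The endpoint, credentials, repository, and branch names are hypothetical placeholders.

    ```python
    # Sketch: boto3 pointed at a lakeFS S3 gateway, writing to an
    # experiment branch while reading the same path on main.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://lakefs.example.com",  # lakeFS S3 gateway
        aws_access_key_id="AKIA...",
        aws_secret_access_key="...",
    )

    # Write to an experiment branch without touching main.
    s3.put_object(Bucket="my-repo", Key="experiment-1/data/events.parquet",
                  Body=b"...parquet bytes...")

    # Read the same logical path as it exists on main.
    obj = s3.get_object(Bucket="my-repo", Key="main/data/events.parquet")
    ```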
  • 30
    Prodea

    Launch secure, scalable, and globally compliant connected products with services within six months. Prodea provides the only IoT platform-as-a-service (PaaS) specifically designed for manufacturers of mass-market consumer home products. It comprises three main services: the IoT Service X-Change Platform, for quickly launching connected products with services across global markets with minimal development; Insight™ Data Services, for gaining key insights from user and product usage data; and the EcoAdaptor™ Service, for enhancing product value through cloud-to-cloud integration and interoperability with other products and services. Prodea has helped its global brand customers launch 100+ connected products, in less than six months on average, across six continents. This was made possible by the Prodea X5 Program, which was designed to work with our three main cloud services to help brands evolve their systems.