Best Application Development Software for Apache Spark

Compare the Top Application Development Software that integrates with Apache Spark as of June 2025

This is a list of Application Development software that integrates with Apache Spark. Use the filters on the left to narrow the results further, and view the products that work with Apache Spark in the table below.

What is Application Development Software for Apache Spark?

Application development software is a type of software used to create applications and software programs. It typically includes code editors, compilers, and debuggers that allow developers to write, compile, and debug code. It also includes libraries of pre-written code that developers can use to create more complex and powerful applications. Compare and read user reviews of the best Application Development software for Apache Spark currently available using the table below. This list is updated regularly.

  • 1
    Kubernetes

    Kubernetes

    Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience running production workloads at Google, combined with best-of-breed ideas and practices from the community. Designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your ops team. Whether you are testing locally or running a global enterprise, Kubernetes' flexibility grows with you to deliver your applications consistently and easily no matter how complex your needs are. Kubernetes is open source, giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure and letting you effortlessly move workloads to where they matter to you.
    Starting Price: Free
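    To make the Spark integration concrete, here is a minimal PySpark sketch that points a SparkSession at a Kubernetes cluster; the API server address, container image, and namespace are placeholder assumptions, and production jobs are more commonly submitted with spark-submit in cluster mode.
```python
# Minimal sketch: requesting Spark executors from Kubernetes (client mode).
# The master URL, image, and namespace below are placeholder assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spark-on-k8s-sketch")
    .master("k8s://https://kubernetes.example.com:6443")  # Kubernetes API server
    .config("spark.kubernetes.container.image", "apache/spark:3.5.1")
    .config("spark.kubernetes.namespace", "spark-jobs")
    .config("spark.executor.instances", "2")
    .getOrCreate()
)

# Simple sanity check that runs work on the executor pods.
print(spark.range(1_000_000).selectExpr("sum(id) AS total").collect())
spark.stop()
```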
  • 2
    Sematext Cloud

    Sematext Group

    Sematext Cloud is an innovative, all-in-one platform for infrastructure monitoring, application performance monitoring, log management, real user monitoring, and synthetic monitoring, providing unified, real-time observability of your entire technology stack. It's used by organizations of all sizes and across a wide range of industries, with the goal of driving collaboration between engineering and business teams, reducing the time of root-cause analysis, understanding user behavior, and tracking key business metrics. The main capabilities range from log monitoring to APM, server monitoring, database monitoring, network monitoring, uptime monitoring, website monitoring, and container monitoring. Find complete details on our website, or better: start a free demo, no email address required.
    Starting Price: $0
  • 3
    Jupyter Notebook

    Project Jupyter

    The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.
  • 4
    SingleStore

    SingleStore

    SingleStore (formerly MemSQL) is a distributed, highly scalable SQL database that can run anywhere. We deliver maximum performance for transactional and analytical workloads with familiar relational models. SingleStore is a scalable SQL database that ingests data continuously to perform operational analytics for the front lines of your business. Ingest millions of events per second with ACID transactions while simultaneously analyzing billions of rows of data in relational SQL, JSON, geospatial, and full-text search formats. SingleStore delivers ultimate data ingestion performance at scale and supports built-in batch loading and real-time data pipelines. SingleStore lets you achieve ultra-fast query response across both live and historical data using familiar ANSI SQL. Perform ad hoc analysis with business intelligence tools, run machine learning algorithms for real-time scoring, and perform geoanalytic queries in real time.
    Starting Price: $0.69 per hour
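    As a rough sketch of how Spark can query SingleStore, the example below reads a table over SingleStore's MySQL-compatible wire protocol using Spark's generic JDBC source (a MySQL JDBC driver must be on the Spark classpath); the endpoint, credentials, and table name are assumptions, and the dedicated SingleStore Spark connector is the usual choice for production pipelines.
```python
# Minimal sketch: loading a SingleStore table into a Spark DataFrame over JDBC.
# Endpoint, credentials, and table name are placeholder assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("singlestore-jdbc-sketch").getOrCreate()

events = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://singlestore.example.com:3306/analytics")  # MySQL-compatible endpoint
    .option("dbtable", "events")
    .option("user", "spark_reader")
    .option("password", "change-me")
    .load()
)

# Aggregate the ingested rows with Spark.
events.groupBy("event_type").count().show()
```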
  • 5
    JupyterLab

    Jupyter

    Project Jupyter exists to develop open-source software, open standards, and services for interactive computing across dozens of programming languages. JupyterLab is a web-based interactive development environment for Jupyter notebooks, code, and data. JupyterLab is flexible: configure and arrange the user interface to support a wide range of workflows in data science, scientific computing, and machine learning. JupyterLab is extensible and modular: write plugins that add new components and integrate with existing ones. The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more. Jupyter supports over 40 programming languages, including Python, R, Julia, and Scala.
  • 6
    Instaclustr

    Instaclustr

    Instaclustr is the Open Source-as-a-Service company, delivering reliability at scale. We operate an automated, proven, and trusted managed environment, providing database, analytics, search, and messaging services. We enable companies to focus internal development and operational resources on building cutting-edge customer-facing applications. Instaclustr works with cloud providers including AWS, Heroku, Azure, IBM Cloud, and Google Cloud Platform. The company has SOC 2 certification and provides 24/7 customer support.
    Starting Price: $20 per node per month
  • 7
    PubSub+ Platform
    Solace PubSub+ Platform helps enterprises design, deploy, and manage event-driven systems across hybrid cloud, multi-cloud, and IoT environments so they can be more event-driven and operate in real time. The PubSub+ Platform includes the powerful PubSub+ Event Brokers, event management capabilities with PubSub+ Event Portal, and monitoring and integration capabilities, all available via a single cloud console. PubSub+ allows easy creation of an event mesh, an interconnected network of event brokers, allowing for seamless and dynamic data movement across highly distributed network environments. PubSub+ Event Brokers can be deployed as fully managed cloud services, as self-managed software in private cloud or on-premises environments, or as turnkey hardware appliances for unparalleled performance and low TCO. PubSub+ Event Portal is a complimentary toolset for design and governance of event-driven systems, including both Solace and Kafka-based event broker environments.
  • 8
    Prophecy

    Prophecy

    Prophecy enables many more users, including visual ETL developers and data analysts, to build data pipelines. All you need to do is point and click and write a few SQL expressions to create your pipelines. As you use the low-code designer to build your workflows, you are developing high-quality, readable code for Spark and Airflow that is committed to your Git. Prophecy gives you a gem builder so you can quickly develop and roll out your own frameworks; examples are data quality, encryption, and new sources and targets that extend the built-in ones. Prophecy provides best practices and infrastructure as managed services, making your life and operations simple! With Prophecy, your workflows are high performance and use the scale-out performance and scalability of the cloud.
    Starting Price: $299 per month
  • 9
    Apache Mesos

    Apache Software Foundation

    Mesos is built using the same principles as the Linux kernel, only at a different level of abstraction. The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenter and cloud environments. Native support for launching containers with Docker and AppC images. Support for running cloud-native and legacy applications in the same cluster with pluggable scheduling policies. HTTP APIs for developing new distributed applications, for operating the cluster, and for monitoring. Built-in web UI for viewing cluster state and navigating container sandboxes.
  • 10
    IBM Analytics for Apache Spark
    IBM Analytics for Apache Spark is a flexible and integrated Spark service that empowers data science professionals to ask bigger, tougher questions, and deliver business value faster. It’s an easy-to-use, always-on managed service with no long-term commitment or risk, so you can begin exploring right away. Access the power of Apache Spark with no lock-in, backed by IBM’s open-source commitment and decades of enterprise experience. A managed Spark service with Notebooks as a connector means coding and analytics are easier and faster, so you can spend more of your time on delivery and innovation. A managed Apache Spark service gives you easy access to the power of built-in machine learning libraries without the headaches, time, and risk associated with managing a Spark cluster independently.
  • 11
    Progress DataDirect

    Progress Software

    Empowering applications with enterprise data is our passion here at Progress DataDirect. We offer cloud and on-premises data connectivity solutions across relational, NoSQL, Big Data, and SaaS data sources. Performance, reliability, and security are at the heart of everything we design for thousands of enterprises and the leading vendors in analytics, BI, and data management. Minimize your development costs with our portfolio of high-value connectors for a variety of data sources. Enjoy 24/7 world-class support and security for greater peace of mind. Connect with affordable, easy-to-use, and time-saving drivers for faster SQL access to your data. As a leader in data connectivity, keeping up with the evolving trends in the space is our mission. But if we haven’t built the connector you need yet, reach out and we’ll help you develop the right solution. Embed connectivity in an application or service.
  • 12
    Foundational

    Foundational

    Identify code and optimization issues in real time, prevent data incidents pre-deploy, and govern data-impacting code changes end to end, from the operational database to the user-facing dashboard. Automated, column-level data lineage, from the operational database all the way to the reporting layer, ensures every dependency is analyzed. Foundational automates data contract enforcement by analyzing every repository from upstream to downstream, directly from source code. Use Foundational to proactively find and prevent code and data issues and to create controls and guardrails. Foundational can be set up in minutes with no code changes required.
  • 13
    Saagie

    Saagie

    The Saagie cloud data factory is a turnkey platform that lets you create and manage all your data & AI projects in a single interface, deployable in just a few clicks. Develop your use cases and test your AI models in a secure way with the Saagie data factory. Get your data and AI projects off the ground with a single interface and centralize your teams to make rapid progress. Whatever your maturity level, from your first data project to a data & AI-driven strategy, the Saagie platform is there for you. Simplify your workflows, boost your productivity, and make more informed decisions by unifying your work on a single platform. Transform your raw data into powerful insights by orchestrating your data pipelines. Get quick access to the information you need to make more informed decisions. Simplify the management and scalability of your data and AI infrastructure. Accelerate the time-to-production of your AI, machine learning, and deep learning models.
  • 14
    Astro

    Astronomer

    For data teams looking to increase the availability of trusted data, Astronomer provides Astro, a modern data orchestration platform, powered by Apache Airflow, that enables the entire data team to build, run, and observe data pipelines-as-code. Astronomer is the commercial developer of Airflow, the de facto standard for expressing data flows as code, used by hundreds of thousands of teams across the world.
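    To make the pipelines-as-code idea concrete, here is a minimal Airflow DAG sketch that submits a PySpark job with the SparkSubmitOperator from the Apache Spark provider package; the connection ID, script path, and schedule are illustrative assumptions rather than Astro-specific settings.
```python
# Minimal sketch: an Airflow DAG that submits a PySpark application.
# The connection ID, script path, and schedule are placeholder assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_spark_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform_events = SparkSubmitOperator(
        task_id="transform_events",
        application="/opt/airflow/jobs/transform_events.py",  # PySpark script to run
        conn_id="spark_default",                               # Spark connection defined in Airflow
        conf={"spark.executor.memory": "2g"},
    )
```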
  • 15
    Azure Data Factory
    Integrate data silos with Azure Data Factory, a service built for all data integration needs and skill levels. Easily construct ETL and ELT processes code-free within the intuitive visual environment, or write your own code. Visually integrate data sources using more than 90 natively built and maintenance-free connectors at no added cost. Focus on your data; the serverless integration service does the rest. Data Factory provides a data integration and transformation layer that works across your digital transformation initiatives. Data Factory can help independent software vendors (ISVs) enrich their SaaS apps with integrated hybrid data to deliver data-driven user experiences. Pre-built connectors and integration at scale enable you to focus on your users while Data Factory takes care of the rest.
  • 16
    HPE Ezmeral

    Hewlett Packard Enterprise

    Run, manage, control and secure the apps, data and IT that run your business, from edge to cloud. HPE Ezmeral advances digital transformation initiatives by shifting time and resources from IT operations to innovations. Modernize your apps. Simplify your Ops. And harness data to go from insights to impact. Accelerate time-to-value by deploying Kubernetes at scale with integrated persistent data storage for app modernization on bare metal or VMs, in your data center, on any cloud or at the edge. Harness data and get insights faster by operationalizing the end-to-end process to build data pipelines. Bring DevOps agility to the machine learning lifecycle, and deliver a unified data fabric. Boost efficiency and agility in IT Ops with automation and advanced artificial intelligence. And provide security and control to eliminate risk and reduce costs. HPE Ezmeral Container Platform provides an enterprise-grade platform to deploy Kubernetes at scale for a wide range of use cases.
  • 17
    Apache Bigtop

    Apache Software Foundation

    Bigtop is an Apache Foundation project for infrastructure engineers and data scientists looking for comprehensive packaging, testing, and configuration of the leading open source big data components. Bigtop supports a wide range of components/projects, including, but not limited to, Hadoop, HBase, and Spark. Bigtop packages Hadoop RPMs and DEBs, so that you can manage and maintain your Hadoop cluster. Bigtop provides an integrated smoke testing framework, alongside a suite of over 50 test files. Bigtop provides vagrant recipes, raw images, and (work-in-progress) docker recipes for deploying Hadoop from zero. Bigtop supports many operating systems, including Debian, Ubuntu, CentOS, Fedora, openSUSE, and many others. Bigtop includes tools and a framework for testing at various levels (packaging, platform, runtime, etc.) for both initial deployments as well as upgrade scenarios for the entire data platform, not just the individual components.
  • 18
    Apache Zeppelin
    Web-based notebook that enables data-driven, interactive data analytics and collaborative documents with SQL, Scala, and more. The IPython interpreter provides a user experience comparable to Jupyter Notebook. This release includes note-level dynamic forms, a note revision comparator, and the ability to run paragraphs sequentially instead of the simultaneous paragraph execution of previous releases. The interpreter lifecycle manager automatically terminates interpreter processes on idle timeout, so resources are released when they're not in use.
  • 19
    PySpark

    PySpark

    PySpark is an interface for Apache Spark in Python. It not only allows you to write Spark applications using Python APIs, but also provides the PySpark shell for interactively analyzing your data in a distributed environment. PySpark supports most of Spark’s features, such as Spark SQL, DataFrame, Streaming, MLlib (Machine Learning), and Spark Core. Spark SQL is a Spark module for structured data processing. It provides a programming abstraction called DataFrame and can also act as a distributed SQL query engine. The streaming feature, which runs on top of Spark, enables powerful interactive and analytical applications across both streaming and historical data, while inheriting Spark’s ease of use and fault tolerance characteristics.
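    A small, self-contained example of the DataFrame API described above (the data and column names are made up for illustration):
```python
# Small illustrative example of PySpark's DataFrame API.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-dataframe-example").getOrCreate()

# Build a DataFrame from in-memory rows (illustrative data).
orders = spark.createDataFrame(
    [("alice", "books", 12.50), ("bob", "games", 30.00), ("alice", "games", 5.25)],
    ["customer", "category", "amount"],
)

# Typical transformations: filter, group, aggregate, sort.
summary = (
    orders.filter(F.col("amount") > 1.0)
    .groupBy("customer")
    .agg(F.sum("amount").alias("total_spent"))
    .orderBy(F.desc("total_spent"))
)
summary.show()
spark.stop()
```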
  • 20
    SQL

    SQL

    SQL is a domain-specific programming language used for accessing, managing, and manipulating relational databases and relational database management systems.
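    In the Spark context, the same SQL can be run against DataFrames registered as temporary views; a minimal sketch with illustrative table and column names:
```python
# Minimal sketch: running SQL against Spark data via a temporary view.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-example").getOrCreate()

people = spark.createDataFrame(
    [("ada", 36), ("grace", 45), ("alan", 41)],
    ["name", "age"],
)
people.createOrReplaceTempView("people")

# Standard SQL, executed by Spark SQL's distributed query engine.
spark.sql("SELECT name, age FROM people WHERE age > 40 ORDER BY age DESC").show()
spark.stop()
```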
  • 21
    AI Squared

    AI Squared

    Empower data scientists and application developers to collaborate on ML projects. Build, load, optimize and test models and integrations before publishing to end-users for integration into live applications. Reduce data science workload and improve decision-making by storing and sharing ML models across the organization. Publish updates to automatically push changes to models in production. Drive efficiency by instantly providing ML-powered insights within any web-based business application. Our self-service, drag-and-drop browser extension enables analysts and business users to integrate models into any web-based application with zero code.
  • 22
    Deequ

    Deequ

    Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets. We are happy to receive feedback and contributions. Deequ depends on Java 8. Deequ version 2.x only runs with Spark 3.1, and vice versa. If you rely on a previous Spark version, please use a Deequ 1.x version (legacy version is maintained in legacy-spark-3.0 branch). We provide legacy releases compatible with Apache Spark versions 2.2.x to 3.0.x. The Spark 2.2.x and 2.3.x releases depend on Scala 2.11 and the Spark 2.4.x, 3.0.x, and 3.1.x releases depend on Scala 2.12. Deequ's purpose is to "unit-test" data to find errors early, before the data gets fed to consuming systems or machine learning algorithms. In the following, we will walk you through a toy example to showcase the most basic usage of our library.
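    As a rough Python companion to the toy example mentioned above, here is a minimal sketch using PyDeequ, the Python wrapper around Deequ; the sample data, constraint thresholds, and Spark session setup are assumptions for illustration.
```python
# Minimal sketch: data quality checks with PyDeequ (Python wrapper for Deequ).
# Sample data and constraint thresholds are assumptions for illustration.
from pyspark.sql import SparkSession
import pydeequ
from pydeequ.checks import Check, CheckLevel
from pydeequ.verification import VerificationResult, VerificationSuite

spark = (
    SparkSession.builder
    .config("spark.jars.packages", pydeequ.deequ_maven_coord)  # pulls the matching Deequ jar
    .config("spark.jars.excludes", pydeequ.f2j_maven_coord)
    .getOrCreate()
)

df = spark.createDataFrame(
    [(1, "thingA", 13.0), (2, "thingB", None), (3, None, 5.0)],
    ["id", "name", "value"],
)

check = Check(spark, CheckLevel.Error, "basic data quality")
result = (
    VerificationSuite(spark)
    .onData(df)
    .addCheck(check.hasSize(lambda size: size >= 3).isComplete("id").isUnique("id"))
    .run()
)

# Inspect the outcome of each constraint as a DataFrame.
VerificationResult.checkResultsAsDataFrame(spark, result).show(truncate=False)
```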
  • 23
    RunCode

    RunCode

    RunCode offers online developer workspaces, which are environments that allow you to work on code projects in a web browser. These workspaces provide you with a full development environment, including a code editor, a terminal, and access to a range of tools and libraries. They are designed to be easy to use and allow you to get started quickly without the need to set up a local development environment on your own computer.
    Starting Price: $20/month/user
  • 24
    Apache Mahout

    Apache Software Foundation

    Apache Mahout is a powerful, scalable, and versatile machine learning library designed for distributed data processing. It offers a comprehensive set of algorithms for various tasks, including classification, clustering, recommendation, and pattern mining. Built on top of the Apache Hadoop ecosystem, Mahout leverages MapReduce and Spark to enable data processing on large-scale datasets. Apache Mahout(TM) is a distributed linear algebra framework and mathematically expressive Scala DSL designed to let mathematicians, statisticians, and data scientists quickly implement their own algorithms. Apache Spark is the recommended out-of-the-box distributed backend, though Mahout can be extended to other distributed backends. Matrix computations are a fundamental part of many scientific and engineering applications, including machine learning, computer vision, and data analysis. Apache Mahout is designed to handle large-scale data processing by leveraging the power of Hadoop and Spark.
  • 25
    Precisely Connect
    Integrate data seamlessly from legacy systems into next-gen cloud and data platforms with one solution. Connect helps you take control of your data from mainframe to cloud. Integrate data through batch and real-time ingestion for advanced analytics, comprehensive machine learning and seamless data migration. Connect leverages the expertise Precisely has built over decades as a leader in mainframe sort and IBM i data availability and security to lead the industry in accessing and integrating complex data. Access to all your enterprise data for the most critical business projects is ensured by support for a wide range of sources and targets for all your ELT and CDC needs.