Alternatives to MLlib
Compare MLlib alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to MLlib in 2026. Compare features, ratings, user reviews, pricing, and more from MLlib competitors and alternatives to make an informed decision for your business.
-
1
Apache Spark
Apache Software Foundation
Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources. -
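As a rough illustration of how those libraries compose, here is a minimal PySpark sketch (the data and names are invented for the example) that registers a DataFrame with Spark SQL and then fits an MLlib model on the query result:

```python
# Minimal sketch: Spark SQL and MLlib combined in one application.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("spark-demo").getOrCreate()

df = spark.createDataFrame([(1.0, 2.0), (2.0, 4.1), (3.0, 6.2)], ["x", "y"])
df.createOrReplaceTempView("points")
train = spark.sql("SELECT x, y AS label FROM points")  # Spark SQL

features = VectorAssembler(inputCols=["x"], outputCol="features").transform(train)
model = LinearRegression().fit(features)               # MLlib
print(model.coefficients)
spark.stop()
```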
2
Apache PredictionIO
Apache
Apache PredictionIO® is an open-source machine learning server built on top of a state-of-the-art open-source stack for developers and data scientists to create predictive engines for any machine learning task. It lets you quickly build and deploy an engine as a web service on production with customizable templates. Respond to dynamic queries in real-time once deployed as a web service, evaluate and tune multiple engine variants systematically, and unify data from multiple platforms in batch or in real-time for comprehensive predictive analytics. Speed up machine learning modeling with systematic processes and pre-built evaluation measures, and use machine learning and data processing libraries such as Spark MLlib and OpenNLP. Implement your own machine learning models and seamlessly incorporate them into your engine. Simplify data infrastructure management. Apache PredictionIO® can be installed as a full machine learning stack, bundled with Apache Spark, MLlib, HBase, Akka HTTP, etc. Starting Price: Free -
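For a sense of what responding to dynamic queries looks like, here is a hedged sketch of querying a deployed engine over HTTP; the /queries.json path is PredictionIO's standard endpoint, while the port and the query fields depend on your engine template:

```python
# Hypothetical client for a deployed PredictionIO engine.
import requests

resp = requests.post(
    "http://localhost:8000/queries.json",  # default engine port; adjust per deployment
    json={"user": "u1", "num": 4},         # query schema is template-specific
)
print(resp.json())                          # e.g. a list of scored recommendations
```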
3
Apache Mahout
Apache Software Foundation
Apache Mahout is a powerful, scalable, and versatile machine learning library designed for distributed data processing. It offers a comprehensive set of algorithms for various tasks, including classification, clustering, recommendation, and pattern mining. Built on top of the Apache Hadoop ecosystem, Mahout leverages MapReduce and Spark to enable data processing on large-scale datasets. Apache Mahout™ is a distributed linear algebra framework and mathematically expressive Scala DSL designed to let mathematicians, statisticians, and data scientists quickly implement their own algorithms. Apache Spark is the recommended out-of-the-box distributed back-end, and Mahout can be extended to other distributed backends. Matrix computations are a fundamental part of many scientific and engineering applications, including machine learning, computer vision, and data analysis. -
4
Amazon EMR
Amazon
Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open-source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. With EMR you can run petabyte-scale analysis at less than half of the cost of traditional on-premises solutions and over 3x faster than standard Apache Spark. For short-running jobs, you can spin up and spin down clusters and pay per second for the instances used. For long-running workloads, you can create highly available clusters that automatically scale to meet demand. If you have existing on-premises deployments of open-source tools such as Apache Spark and Apache Hive, you can also run EMR clusters on AWS Outposts. Analyze data using open-source ML frameworks such as Apache Spark MLlib, TensorFlow, and Apache MXNet. Connect to Amazon SageMaker Studio for large-scale model training, analysis, and reporting. -
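As a sketch of the spin-up/spin-down workflow, the following uses boto3's EMR client; the region, instance types, and IAM role names are assumptions for illustration:

```python
# Hedged sketch: launching a transient EMR cluster with Spark via boto3.
import boto3

emr = boto3.client("emr", region_name="us-east-1")
resp = emr.run_job_flow(
    Name="spark-job",
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,  # spin down when steps finish
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(resp["JobFlowId"])
```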
5
PySpark
PySpark
PySpark is an interface for Apache Spark in Python. It not only allows you to write Spark applications using Python APIs, but also provides the PySpark shell for interactively analyzing your data in a distributed environment. PySpark supports most of Spark’s features such as Spark SQL, DataFrame, Streaming, MLlib (Machine Learning) and Spark Core. Spark SQL is a Spark module for structured data processing. It provides a programming abstraction called DataFrame and can also act as distributed SQL query engine. Running on top of Spark, the streaming feature in Apache Spark enables powerful interactive and analytical applications across both streaming and historical data, while inheriting Spark’s ease of use and fault tolerance characteristics. -
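A minimal sketch of the two styles mentioned above, the DataFrame API and the distributed SQL engine, over the same hypothetical input file:

```python
# Same aggregation expressed through the DataFrame API and through SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pyspark-demo").getOrCreate()
df = spark.read.json("logs.json")  # hypothetical input file

# DataFrame API
df.filter(df.status == 500).groupBy("path").count().show()

# Distributed SQL engine over the same data
df.createOrReplaceTempView("logs")
spark.sql("SELECT path, COUNT(*) AS c FROM logs GROUP BY path ORDER BY c DESC").show()
```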
6
E-MapReduce
Alibaba
EMR is an all-in-one enterprise-ready big data platform that provides cluster, job, and data management services based on open-source ecosystems, such as Hadoop, Spark, Kafka, Flink, and Storm. Alibaba Cloud Elastic MapReduce (EMR) is a big data processing solution that runs on the Alibaba Cloud platform. EMR is built on Alibaba Cloud ECS instances and is based on open-source Apache Hadoop and Apache Spark. EMR allows you to use the Hadoop and Spark ecosystem components, such as Apache Hive, Apache Kafka, Flink, Druid, and TensorFlow, to analyze and process data. You can use EMR to process data stored on different Alibaba Cloud data storage services, such as Object Storage Service (OSS), Log Service (SLS), and Relational Database Service (RDS). You can quickly create clusters without the need to configure hardware and software. All maintenance operations are completed in its web interface. -
7
Spark Streaming
Apache Software Foundation
Spark Streaming brings Apache Spark's language-integrated API to stream processing, letting you write streaming jobs the same way you write batch jobs. It supports Java, Scala and Python. Spark Streaming recovers both lost work and operator state (e.g. sliding windows) out of the box, without any extra code on your part. By running on Spark, Spark Streaming lets you reuse the same code for batch processing, join streams against historical data, or run ad-hoc queries on stream state. Build powerful interactive applications, not just analytics. Spark Streaming is developed as part of Apache Spark. It thus gets tested and updated with each Spark release. You can run Spark Streaming on Spark's standalone cluster mode or other supported cluster resource managers. It also includes a local run mode for development. In production, Spark Streaming uses ZooKeeper and HDFS for high availability. -
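The classic word-count sketch below shows the language-integrated DStream API; the socket source on port 9999 is an assumption (e.g. fed by `nc -lk 9999`):

```python
# Streaming word count written the same way as a batch job.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="streaming-demo")
ssc = StreamingContext(sc, 1)  # 1-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()  # print each batch's counts

ssc.start()
ssc.awaitTermination()
```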
8
Deeplearning4j
Deeplearning4j
DL4J takes advantage of the latest distributed computing frameworks, including Apache Spark and Hadoop, to accelerate training. On multi-GPU setups, its performance is equal to Caffe's. The libraries are completely open source, Apache 2.0, and maintained by the developer community and the Konduit team. Deeplearning4j is written in Java and is compatible with any JVM language, such as Scala, Clojure, or Kotlin. The underlying computations are written in C, C++, and CUDA. Keras serves as the Python API. Eclipse Deeplearning4j is the first commercial-grade, open-source, distributed deep-learning library written for Java and Scala. Integrated with Hadoop and Apache Spark, DL4J brings AI to business environments for use on distributed GPUs and CPUs. There are a lot of parameters to adjust when you're training a deep-learning network. We've done our best to explain them, so that Deeplearning4j can serve as a DIY tool for Java, Scala, Clojure, and Kotlin programmers. -
9
Azure Databricks
Microsoft
Unlock insights from all your data and build artificial intelligence (AI) solutions with Azure Databricks: set up your Apache Spark™ environment in minutes, autoscale, and collaborate on shared projects in an interactive workspace. Azure Databricks supports Python, Scala, R, Java, and SQL, as well as data science frameworks and libraries including TensorFlow, PyTorch, and scikit-learn. Azure Databricks provides the latest versions of Apache Spark and allows you to seamlessly integrate with open source libraries. Spin up clusters and build quickly in a fully managed Apache Spark environment with the global scale and availability of Azure. Clusters are set up, configured, and fine-tuned to ensure reliability and performance without the need for monitoring. Take advantage of autoscaling and auto-termination to improve total cost of ownership (TCO). -
10
Deequ
Deequ
Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets. We are happy to receive feedback and contributions. Deequ depends on Java 8. Deequ version 2.x only runs with Spark 3.1, and vice versa. If you rely on a previous Spark version, please use a Deequ 1.x version (the legacy version is maintained in the legacy-spark-3.0 branch). We provide legacy releases compatible with Apache Spark versions 2.2.x to 3.0.x. The Spark 2.2.x and 2.3.x releases depend on Scala 2.11 and the Spark 2.4.x, 3.0.x, and 3.1.x releases depend on Scala 2.12. Deequ's purpose is to "unit-test" data to find errors early, before the data gets fed to consuming systems or machine learning algorithms. In the following, we will walk you through a toy example to showcase the most basic usage of our library. -
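In that spirit, here is a hedged sketch of a basic verification using PyDeequ, the Python wrapper around Deequ; the column names and constraints are invented, and the Scala API follows the same builder pattern:

```python
# Minimal data-quality check with PyDeequ; assumes a SparkSession `spark`
# configured with the Deequ jar and a DataFrame `df` with id/price columns.
from pydeequ.checks import Check, CheckLevel
from pydeequ.verification import VerificationResult, VerificationSuite

check = Check(spark, CheckLevel.Error, "basic integrity checks")
result = (
    VerificationSuite(spark)
    .onData(df)
    .addCheck(check.isComplete("id").isNonNegative("price"))  # "unit tests for data"
    .run()
)
VerificationResult.checkResultsAsDataFrame(spark, result).show()
```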
11
IBM Analytics Engine
IBM
IBM Analytics Engine provides an architecture for Hadoop clusters that decouples the compute and storage tiers. Instead of a permanent cluster formed of dual-purpose nodes, the Analytics Engine allows users to store data in an object storage layer such as IBM Cloud Object Storage and spins up clusters of compute nodes when needed. Separating compute from storage helps to improve the flexibility, scalability, and maintainability of big data analytics platforms. Build on an ODPi-compliant stack with pioneering data science tools within the broader Apache Hadoop and Apache Spark ecosystem. Define clusters based on your application's requirements. Choose the appropriate software pack, version, and size of the cluster. Use as long as required and delete as soon as an application finishes its jobs. Configure clusters with third-party analytics libraries and packages. Deploy workloads from IBM Cloud services like machine learning. Starting Price: $0.014 per hour
-
12
Apache Phoenix
Apache Software Foundation
Apache Phoenix enables OLTP and operational analytics in Hadoop for low-latency applications by combining the best of both worlds: the power of standard SQL and JDBC APIs with full ACID transaction capabilities, and the flexibility of late-bound, schema-on-read capabilities from the NoSQL world, leveraging HBase as its backing store. Apache Phoenix is fully integrated with other Hadoop products such as Spark, Hive, Pig, Flume, and MapReduce. Become the trusted data platform for OLTP and operational analytics for Hadoop through well-defined, industry-standard APIs. Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets. Direct use of the HBase API, along with coprocessors and custom filters, results in performance on the order of milliseconds for small queries, or seconds for tens of millions of rows. Starting Price: Free -
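For example, a Python client can issue that standard SQL through the Phoenix Query Server via the phoenixdb package; the endpoint and the table are assumptions for illustration:

```python
# Hedged sketch: SQL over HBase through Apache Phoenix's Query Server.
import phoenixdb

conn = phoenixdb.connect("http://localhost:8765/", autocommit=True)  # default PQS port
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name VARCHAR)")
cur.execute("UPSERT INTO users VALUES (?, ?)", (1, "alice"))  # Phoenix upsert semantics
cur.execute("SELECT id, name FROM users")
print(cur.fetchall())  # regular result sets, backed by HBase scans
```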
13
Spark NLP
John Snow Labs
Experience the power of large language models like never before, unleashing the full potential of Natural Language Processing (NLP) with Spark NLP, the open source library that delivers scalable LLMs. The full code base is open under the Apache 2.0 license, including pre-trained models and pipelines. The only NLP library built natively on Apache Spark. The most widely used NLP library in the enterprise. Spark ML provides a set of machine learning applications that can be built using two main components, estimators and transformers. An estimator has a fit method that trains on a piece of data to produce a model. A transformer is generally the result of a fitting process and applies changes to the target dataset. These components have been embedded to be applicable to Spark NLP. Pipelines are a mechanism for combining multiple estimators and transformers in a single workflow. They allow multiple chained transformations along a machine-learning task. Starting Price: Free -
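A minimal sketch of that estimator/transformer pipeline pattern in Spark NLP's Python API (the input sentence is invented):

```python
# Document assembly and tokenization chained in a Spark ML Pipeline.
import sparknlp
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer

spark = sparknlp.start()

document = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")

pipeline = Pipeline(stages=[document, tokenizer])
df = spark.createDataFrame([("Spark NLP runs natively on Apache Spark.",)], ["text"])
pipeline.fit(df).transform(df).select("token.result").show(truncate=False)
```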
14
IBM Analytics for Apache Spark
IBM
IBM Analytics for Apache Spark is a flexible and integrated Spark service that empowers data science professionals to ask bigger, tougher questions, and deliver business value faster. It’s an easy-to-use, always-on managed service with no long-term commitment or risk, so you can begin exploring right away. Access the power of Apache Spark with no lock-in, backed by IBM’s open-source commitment and decades of enterprise experience. A managed Spark service with Notebooks as a connector means coding and analytics are easier and faster, so you can spend more of your time on delivery and innovation. A managed Apache Spark service gives you easy access to the power of built-in machine learning libraries without the headaches, time, and risk associated with managing a Spark cluster independently.
-
15
Azure HDInsight
Microsoft
Run popular open-source frameworks—including Apache Hadoop, Spark, Hive, Kafka, and more—using Azure HDInsight, a customizable, enterprise-grade service for open-source analytics. Effortlessly process massive amounts of data and get all the benefits of the broad open-source project ecosystem with the global scale of Azure. Easily migrate your big data workloads and processing to the cloud. Open-source projects and clusters are easy to spin up quickly without the need to install hardware or manage infrastructure. Big data clusters reduce costs through autoscaling and pricing tiers that allow you to pay for only what you use. Enterprise-grade security and industry-leading compliance with more than 30 certifications helps protect your data. Optimized components for open-source technologies such as Hadoop and Spark keep you up to date. -
16
Apache Bigtop
Apache Software Foundation
Bigtop is an Apache Foundation project for Infrastructure Engineers and Data Scientists looking for comprehensive packaging, testing, and configuration of the leading open source big data components. Bigtop supports a wide range of components/projects, including, but not limited to, Hadoop, HBase, and Spark. Bigtop packages Hadoop RPMs and DEBs, so that you can manage and maintain your Hadoop cluster. Bigtop provides an integrated smoke testing framework, alongside a suite of over 50 test files. Bigtop provides Vagrant recipes, raw images, and (work-in-progress) Docker recipes for deploying Hadoop from zero. Bigtop supports many operating systems, including Debian, Ubuntu, CentOS, Fedora, openSUSE, and many others. Bigtop includes tools and a framework for testing at various levels (packaging, platform, runtime, etc.) for both initial deployments as well as upgrade scenarios for the entire data platform, not just the individual components. -
17
Wallaroo.AI
Wallaroo.AI
Wallaroo facilitates the last mile of your machine learning journey, getting ML into your production environment to impact the bottom line with incredible speed and efficiency. Wallaroo is purpose-built from the ground up to be the easy way to deploy and manage ML in production, unlike Apache Spark or heavyweight containers. Run ML with up to 80% lower cost and easily scale to more data, more models, and more complex models. Wallaroo is designed to enable data scientists to quickly and easily deploy their ML models against live data, whether to testing environments, staging, or production. Wallaroo supports the largest set of machine learning training frameworks possible. You’re free to focus on developing and iterating on your models while letting the platform take care of deployment and inference at speed and scale. -
18
Yandex Data Proc
Yandex
You select the size of the cluster, node capacity, and a set of services, and Yandex Data Proc automatically creates and configures Spark and Hadoop clusters and other components. Collaborate by using Zeppelin notebooks and other web apps via a UI proxy. You get full control of your cluster with root permissions for each VM. Install your own applications and libraries on running clusters without having to restart them. Yandex Data Proc uses instance groups to automatically increase or decrease computing resources of compute subclusters based on CPU usage indicators. Data Proc allows you to create managed Hive clusters, which can reduce the probability of failures and losses caused by metadata unavailability. Save time on building ETL pipelines and pipelines for training and developing models, as well as describing other iterative tasks. The Data Proc operator is already built into Apache Airflow. Starting Price: $0.19 per hour -
19
RoyalCyber eCatalyst
RoyalCyber
eCatalyst is a proprietary product that can be integrated with ecommerce software like Hybris, Magento, etc., and uses events generated by the sites to provide predictions such as personalized predictions, similar product predictions, complementary predictions, and contextual predictions to the users. A proprietary decision-making engine that provides predictions and suggestions about products based on the event traffic of the products on a website. Uses advanced statistical and machine learning techniques for predictions and suggestions. It’s built on top of a Big Data architecture and uses HBase and Apache Spark. Highly customizable, providing intelligent and personalized recommendations to customers. Captures all events in real time and provides contextual recommendations to the user. Built on top of a Big Data architecture, providing great scalability without compromising performance. -
20
Oracle Machine Learning
Oracle
Machine learning uncovers hidden patterns and insights in enterprise data, generating new value for the business. Oracle Machine Learning accelerates the creation and deployment of machine learning models for data scientists using reduced data movement, AutoML technology, and simplified deployment. Increase data scientist and developer productivity and reduce their learning curve with familiar open source-based Apache Zeppelin notebook technology. Notebooks support SQL, PL/SQL, Python, and markdown interpreters for Oracle Autonomous Database so users can work with their language of choice when developing models. A no-code user interface supporting AutoML on Autonomous Database to improve both data scientist productivity and non-expert user access to powerful in-database algorithms for classification and regression. Data scientists gain integrated model deployment from the Oracle Machine Learning AutoML User Interface. -
21
Apache Eagle
Apache Software Foundation
Apache Eagle (called Eagle in the following) is an open source analytics solution for identifying security and performance issues instantly on big data platforms, e.g. Apache Hadoop, Apache Spark, etc. It analyzes data activities, YARN applications, JMX metrics, daemon logs, and more, and provides a state-of-the-art alert engine to identify security breaches and performance issues and surface insights. Big data platforms normally generate huge amounts of operational logs and metrics in real time. Eagle was founded to solve hard problems in securing and tuning performance for big data platforms by ensuring metrics and logs are always available and alerting immediately even under huge traffic. Stream operational logs and data activities into the Eagle platform, including but not limited to audit logs, map/reduce jobs, YARN resource usage, JMX metrics, and various daemon logs. Generate alerts, show historical trends, and correlate alerts with raw data. -
22
HugeGraph
HugeGraph
HugeGraph is a fast, highly scalable graph database. Billions of vertices and edges can be easily stored into and queried from HugeGraph thanks to its excellent OLTP capability. As it complies with the Apache TinkerPop 3 framework, various complicated graph queries can be accomplished through Gremlin (a powerful graph traversal language). Its features include schema metadata management (VertexLabel, EdgeLabel, PropertyKey, and IndexLabel); multi-type indexes supporting exact queries, range queries, and complex condition-combination queries; and a plug-in backend store driver framework, currently supporting RocksDB, Cassandra, ScyllaDB, HBase, and MySQL, with other backend store drivers easy to add if needed. It also integrates with Hadoop/Spark. HugeGraph relies on the TinkerPop framework; its storage structure references Titan and its schema definition references DataStax. -
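Because HugeGraph speaks TinkerPop 3, a standard Gremlin client can query it; this sketch uses gremlinpython, and the server endpoint and vertex label are assumptions:

```python
# Hypothetical Gremlin traversal against a HugeGraph server's TinkerPop endpoint.
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")  # endpoint is an assumption
g = traversal().withRemote(conn)

count = g.V().hasLabel("person").count().next()  # count all 'person' vertices
print(count)
conn.close()
```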
23
Google Cloud Dataproc
Google
Dataproc makes open source data and analytics processing fast, easy, and more secure in the cloud. Build custom OSS clusters on custom machines faster. Whether you need extra memory for Presto or GPUs for Apache Spark machine learning, Dataproc can help accelerate your data and analytics processing by spinning up a purpose-built cluster in 90 seconds. Easy and affordable cluster management. With autoscaling, idle cluster deletion, per-second pricing, and more, Dataproc can help reduce the total cost of ownership of OSS so you can focus your time and resources elsewhere. Security built in by default. Encryption by default helps ensure no piece of data is unprotected. With the Jobs API and Component Gateway, you can define permissions for Cloud IAM clusters without having to set up networking or gateway nodes. -
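As a sketch of spinning up a purpose-built cluster programmatically, here is the google-cloud-dataproc Python client; the project, region, cluster name, and machine types are placeholders:

```python
# Hypothetical Dataproc cluster creation with the google-cloud-dataproc client.
from google.cloud import dataproc_v1

project, region = "my-project", "us-central1"  # placeholders
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)
cluster = {
    "project_id": project,
    "cluster_name": "spark-ml-demo",
    "config": {
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
        "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
    },
}
op = client.create_cluster(
    request={"project_id": project, "region": region, "cluster": cluster}
)
print(op.result().cluster_name)  # blocks until the cluster is provisioned
```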
24
IBM Data Refinery
IBM
Available in IBM Watson® Studio and Watson™ Knowledge Catalog, the data refinery tool saves data preparation time by quickly transforming large amounts of raw data into consumable, quality information that’s ready for analytics. Interactively discover, cleanse, and transform your data with over 100 built-in operations. No coding skills are required. Understand the quality and distribution of your data using dozens of built-in charts, graphs, and statistics. Automatically detect data types and business classifications. Access and explore data residing in a wide spectrum of data sources within your organization or the cloud. Automatically enforce policies set by data governance professionals. Schedule data flow executions for repeatable outcomes. Monitor results and receive notifications. Easily scale out via Apache Spark to apply transformation recipes to full data sets. No management of Apache Spark clusters needed.
-
25
Stackable
Stackable
The Stackable data platform was designed with openness and flexibility in mind. It provides you with a curated selection of the best open source data apps like Apache Kafka, Apache Druid, Trino, and Apache Spark. While other current offerings either push their proprietary solutions or deepen vendor lock-in, Stackable takes a different approach. All data apps work together seamlessly and can be added or removed in no time. Based on Kubernetes, it runs everywhere, on-prem or in the cloud. stackablectl and a Kubernetes cluster are all you need to run your first Stackable Data Platform. Within minutes, you will be ready to start working with your data. Configure your one-line startup command. Similar to kubectl, stackablectl is designed to easily interface with the Stackable Data Platform. Use the command line utility to deploy and manage Stackable data apps on Kubernetes. With stackablectl, you can create, delete, and update components. Starting Price: Free -
26
BigBI
BigBI
BigBI enables data specialists to build their own powerful big data pipelines interactively and efficiently, without any coding! BigBI unleashes the power of Apache Spark, enabling scalable processing of real big data (up to 100X faster), integration of traditional data (SQL, batch files) with modern data sources, including semi-structured data (JSON, NoSQL DBs, Elastic, Hadoop) and unstructured data (text, audio, video), and integration of streaming data, cloud data, AI/ML, and graphs. -
27
ML.NET
Microsoft
ML.NET is a free, open source, and cross-platform machine learning framework designed for .NET developers to build custom machine learning models using C# or F# without leaving the .NET ecosystem. It supports various machine learning tasks, including classification, regression, clustering, anomaly detection, and recommendation systems. ML.NET integrates with other popular ML frameworks like TensorFlow and ONNX, enabling additional scenarios such as image classification and object detection. It offers tools like Model Builder and the ML.NET CLI, which utilize Automated Machine Learning (AutoML) to simplify the process of building, training, and deploying high-quality models. These tools automatically explore different algorithms and settings to find the best-performing model for a given scenario. Starting Price: Free -
28
Oracle Cloud Infrastructure Data Flow
Oracle
Oracle Cloud Infrastructure (OCI) Data Flow is a fully managed Apache Spark service to perform processing tasks on extremely large data sets without infrastructure to deploy or manage. This enables rapid application delivery because developers can focus on app development, not infrastructure management. OCI Data Flow handles infrastructure provisioning, network setup, and teardown when Spark jobs are complete. Storage and security are also managed, which means less work is required for creating and managing Spark applications for big data analysis. With OCI Data Flow, there are no clusters to install, patch, or upgrade, which saves time and operational costs for projects. OCI Data Flow runs each Spark job in private dedicated resources, eliminating the need for upfront capacity planning. With OCI Data Flow, IT only needs to pay for the infrastructure resources that Spark jobs use while they are running. Starting Price: $0.0085 per GB per hour
-
29
JanusGraph
JanusGraph
JanusGraph is a scalable graph database optimized for storing and querying graphs containing hundreds of billions of vertices and edges distributed across a multi-machine cluster. JanusGraph is a project under The Linux Foundation, and includes participants from Expero, Google, GRAKN.AI, Hortonworks, IBM and Amazon. Elastic and linear scalability for a growing data and user base. Data distribution and replication for performance and fault tolerance. Multi-datacenter high availability and hot backups. All functionality is totally free. No need to buy commercial licenses. JanusGraph is fully open source under the Apache 2 license. JanusGraph is a transactional database that can support thousands of concurrent users executing complex graph traversals in real time. Support for ACID and eventual consistency. In addition to online transactional processing (OLTP), JanusGraph supports global graph analytics (OLAP) with its Apache Spark integration. -
30
iomete
iomete
Modern lakehouse built on top of Apache Iceberg and Apache Spark. Includes: serverless lakehouse, serverless Spark jobs, SQL editor, advanced data catalog, and built-in BI (or connect a third-party BI tool, e.g. Tableau or Looker). iomete has an extreme value proposition, with compute prices equal to AWS on-demand pricing. No mark-ups. AWS users get our platform basically for free. Starting Price: Free -
31
Pepperdata
Pepperdata, Inc.
Pepperdata autonomous cost optimization for data-intensive workloads such as Apache Spark is the only solution that delivers 30-47% greater cost savings continuously and in real time with no application changes or manual tuning. Deployed on more than 20,000 clusters, Pepperdata Capacity Optimizer provides resource optimization and full-stack observability in some of the largest and most complex environments in the world, enabling customers to run Spark on 30% less infrastructure on average. Over the last decade, Pepperdata has helped top enterprises such as Citibank, Autodesk, Royal Bank of Canada, members of the Fortune 10, and mid-sized companies save over $250 million. -
32
Apache Hive
Apache Software Foundation
The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive. Apache Hive is an open source project run by volunteers at the Apache Software Foundation. Previously it was a subproject of Apache® Hadoop®, but has now graduated to become a top-level project of its own. We encourage you to learn about the project and contribute your expertise. Traditional SQL queries must be implemented in the MapReduce Java API to execute SQL applications and queries over distributed data. Hive provides the necessary SQL abstraction to integrate SQL-like queries (HiveQL) into the underlying Java without the need to implement queries in the low-level Java API. -
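As a small illustration of that SQL abstraction, a client can submit HiveQL through HiveServer2, for example with the PyHive package; the host, port, username, and table are assumptions:

```python
# Hedged sketch: running a HiveQL query via HiveServer2 with PyHive.
from pyhive import hive

conn = hive.Connection(host="localhost", port=10000, username="demo")  # HiveServer2 defaults
cur = conn.cursor()
# Hive compiles this into distributed work over data in distributed storage.
cur.execute("SELECT category, COUNT(*) FROM products GROUP BY category")
for category, n in cur.fetchall():
    print(category, n)
```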
33
Apache HBase
The Apache Software Foundation
Use Apache HBase™ when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows by millions of columns -- atop clusters of commodity hardware. Automatic failover support between RegionServers. Easy-to-use Java API for client access. Thrift gateway and a RESTful web service that supports XML, Protobuf, and binary data encoding options. Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX. -
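For instance, the Thrift gateway mentioned above can be reached from Python with the happybase package; the table name and column family are assumptions:

```python
# Hedged sketch of random read/write access via HBase's Thrift gateway.
import happybase

conn = happybase.Connection("localhost")   # Thrift server, default port 9090
table = conn.table("t1")                    # assumes table t1 with family cf exists

table.put(b"row1", {b"cf:col": b"value"})  # random write
print(table.row(b"row1"))                   # random realtime read
conn.close()
```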
34
Alibaba Cloud Machine Learning Platform for AI
Alibaba Cloud
An end-to-end platform that provides various machine learning algorithms to meet your data mining and analysis requirements. Machine Learning Platform for AI provides end-to-end machine learning services, including data processing, feature engineering, model training, model prediction, and model evaluation. Machine Learning Platform for AI combines all of these services to make AI more accessible than ever. Machine Learning Platform for AI provides a visualized web interface allowing you to create experiments by dragging and dropping different components onto the canvas. Machine learning modeling is a simple, step-by-step procedure, improving efficiency and reducing costs when creating an experiment. Machine Learning Platform for AI provides more than one hundred algorithm components, covering such scenarios as regression, classification, clustering, text analysis, finance, and time series. Starting Price: $1.872 per hour -
35
Apache Sentry
Apache Software Foundation
Apache Sentry™ is a system for enforcing fine-grained, role-based authorization to data and metadata stored on a Hadoop cluster. Apache Sentry successfully graduated from the Incubator in March 2016 and is now a Top-Level Apache project. Apache Sentry is a granular, role-based authorization module for Hadoop. Sentry provides the ability to control and enforce precise levels of privileges on data for authenticated users and applications on a Hadoop cluster. Sentry currently works out of the box with Apache Hive, Hive Metastore/HCatalog, Apache Solr, Impala, and HDFS (limited to Hive table data). Sentry is designed to be a pluggable authorization engine for Hadoop components. It allows you to define authorization rules to validate a user or application’s access requests for Hadoop resources. Sentry is highly modular and can support authorization for a wide variety of data models in Hadoop. -
36
Apache Kylin
Apache Software Foundation
Apache Kylin™ is an open source, distributed Analytical Data Warehouse for Big Data; it was designed to provide OLAP (Online Analytical Processing) capability in the big data era. By renovating the multi-dimensional cube and precalculation technology on Hadoop and Spark, Kylin is able to achieve near constant query speed regardless of the ever-growing data volume. Reducing query latency from minutes to sub-second, Kylin brings online analytics back to big data. Kylin can analyze 10+ billion rows in less than a second. No more waiting on reports for critical decisions. Kylin connects data on Hadoop to BI tools like Tableau, PowerBI/Excel, MSTR, QlikSense, Hue, and SuperSet, making BI on Hadoop faster than ever. As an Analytical Data Warehouse, Kylin offers ANSI SQL on Hadoop/Spark and supports most ANSI SQL query functions. Kylin can support thousands of interactive queries at the same time, thanks to the low resource consumption of each query. -
37
Horovod
Horovod
Horovod was originally developed by Uber to make distributed deep learning fast and easy to use, bringing model training time down from days and weeks to hours and minutes. With Horovod, an existing training script can be scaled up to run on hundreds of GPUs in just a few lines of Python code. Horovod can be installed on-premise or run out-of-the-box in cloud platforms, including AWS, Azure, and Databricks. Horovod can additionally run on top of Apache Spark, making it possible to unify data processing and model training into a single pipeline. Once Horovod has been configured, the same infrastructure can be used to train models with any framework, making it easy to switch between TensorFlow, PyTorch, MXNet, and future frameworks as machine learning tech stacks continue to evolve. Starting Price: Free -
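The "few lines of Python" claim looks roughly like this sketch with the Keras API; the model and data are invented, and you would launch it with something like `horovodrun -np 4 python train.py`:

```python
# Minimal Horovod + Keras scaling sketch.
import horovod.tensorflow.keras as hvd
import tensorflow as tf

hvd.init()  # one process per GPU/worker

# Pin each process to its own GPU, if any are present.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(loss="mse", optimizer=opt)

x = tf.random.uniform((64, 4))
y = tf.random.uniform((64, 1))
model.fit(x, y, epochs=1,
          callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)])  # sync initial state
```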
38
Hadoop
Apache Software Foundation
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failures. A wide variety of companies and organizations use Hadoop for both research and production. Users are encouraged to add themselves to the Hadoop PoweredBy wiki page. Apache Hadoop 3.3.4 incorporates a number of significant enhancements over the previous major release line (hadoop-3.2). -
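To make the "simple programming models" concrete, here is the canonical word-count mapper for Hadoop Streaming; a companion reducer would sum the emitted 1s, and the file names are illustrative:

```python
#!/usr/bin/env python3
# mapper.py - emits (word, 1) pairs on stdout for Hadoop Streaming;
# Hadoop shuffles the pairs by key and a reducer.py sums the counts.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```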
39
DeepNLP
SparkCognition
SparkCognition, a leading industrial AI company, has developed a natural language processing solution that automates workflows of unstructured data within organizations so humans can focus on high-value business decisions. The DeepNLP product uses advanced machine learning techniques to automate the retrieval of information, the classification of documents, and content analytics. The DeepNLP product integrates into existing workflows to enable organizations to better respond to changes in their business and quickly get answers to specific queries or analytics that support decision-making. -
40
StudyFetch
StudyFetch
StudyFetch is a revolutionary new platform that allows you to upload your course materials and create interactive study sets. You can study with an AI tutor, create flashcards, generate notes, take practice tests, and more. Spark.e, our AI tutor, allows you to interact directly with your study materials. You can ask questions, create flashcards, take practice tests, and customize your learning experience. StudyFetch's AI, Spark.e, utilizes advanced machine learning algorithms to offer a tailored, interactive tutoring experience. Once you upload your study materials, Spark.e scans and indexes them, making the content searchable and accessible for real-time queries. -
41
PI.EXCHANGE
PI.EXCHANGE
Easily connect your data to the engine, either through uploading a file or connecting to a database. Then, start analyzing your data through visualizations, or prepare your data for machine learning modeling with the data wrangling actions with repeatable recipes. Get the most out of your data by building machine learning models, using regression, classification or clustering algorithms - all without any code. Uncover insights into your data, using the feature importance, prediction explanation, and what-if tools. Make predictions and integrate them seamlessly into your existing systems through our connectors, ready to go so you can start taking action. Starting Price: $39 per month -
42
Amazon MSK
Amazon
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data. Apache Kafka is an open-source platform for building real-time streaming data pipelines and applications. With Amazon MSK, you can use native Apache Kafka APIs to populate data lakes, stream changes to and from databases, and power machine learning and analytics applications. Apache Kafka clusters are challenging to set up, scale, and manage in production. When you run Apache Kafka on your own, you need to provision servers, configure Apache Kafka manually, replace servers when they fail, orchestrate server patches and upgrades, architect the cluster for high availability, ensure data is durably stored and secured, set up monitoring and alarms, and carefully plan scaling events to support load changes. Starting Price: $0.0543 per hour -
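Because MSK exposes native Apache Kafka APIs, a standard client works unchanged; this sketch uses kafka-python, with the bootstrap broker string as a placeholder copied from the MSK console:

```python
# Hedged sketch: producing to an MSK cluster with a native Kafka client.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="b-1.mycluster.kafka.us-east-1.amazonaws.com:9092"  # placeholder
)
producer.send("clickstream", b'{"user": "u1", "page": "/home"}')  # topic is an assumption
producer.flush()  # block until the message is acknowledged
```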
43
MXNet
The Apache Software Foundation
A hybrid front-end seamlessly transitions between Gluon eager imperative mode and symbolic mode to provide both flexibility and speed. Scalable distributed training and performance optimization in research and production is enabled by the dual parameter server and Horovod support. Deep integration into Python and support for Scala, Julia, Clojure, Java, C++, R and Perl. A thriving ecosystem of tools and libraries extends MXNet and enables use-cases in computer vision, NLP, time series and more. Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision-making process have stabilized in a manner consistent with other successful ASF projects. Join the MXNet scientific community to contribute, learn, and get answers to your questions. -
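A minimal sketch of that hybrid front-end: the same Gluon network runs eagerly for debugging, then switches to the symbolic execution path with a single hybridize() call (layer sizes and input are invented):

```python
# Imperative Gluon code compiled to a symbolic graph with hybridize().
from mxnet import nd
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(64, activation="relu"), nn.Dense(10))
net.initialize()

x = nd.random.uniform(shape=(1, 32))
y_imperative = net(x)   # eager mode, easy to debug
net.hybridize()         # switch to the symbolic execution path
y_symbolic = net(x)     # same result, graph-optimized
```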
44
Apache Mesos
Apache Software Foundation
Mesos is built using the same principles as the Linux kernel, only at a different level of abstraction. The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenter and cloud environments. Native support for launching containers with Docker and AppC images. Support for running cloud native and legacy applications in the same cluster with pluggable scheduling policies. HTTP APIs for developing new distributed applications, for operating the cluster, and for monitoring. Built-in Web UI for viewing cluster state and navigating container sandboxes. -
45
Apache Accumulo
Apache Software Foundation
With Apache Accumulo, users can store and manage large data sets across a cluster. Accumulo uses Apache Hadoop's HDFS to store its data and Apache ZooKeeper for consensus. While many users interact directly with Accumulo, several open source projects use Accumulo as their underlying store. To learn more about Accumulo, take the Accumulo tour, read the user manual, and run the Accumulo example code. Feel free to contact us if you have any questions. Accumulo has a programming mechanism (called Iterators) that can modify key/value pairs at various points in the data management process. Every Accumulo key/value pair has its own security label, which limits query results based on user authorizations. Accumulo runs on a cluster using one or more HDFS instances. Nodes can be added or removed as the amount of data stored in Accumulo changes. -
46
Weka
University of Waikato
Weka is a collection of machine learning algorithms for data mining tasks. It contains tools for data preparation, classification, regression, clustering, association rules mining, and visualization. Found only on the islands of New Zealand, the weka is a flightless bird with an inquisitive nature. Weka is open source software issued under the GNU General Public License. We have put together several free online courses that teach machine learning and data mining using Weka. The videos for the courses are available on YouTube. An exciting and potentially far-reaching development in computer science is the invention and application of methods of machine learning (ML). These enable a computer program to automatically analyze a large body of data and decide what information is most relevant. This crystallized information can then be used to automatically make predictions or to help people make decisions faster. -
47
IBM Databand
IBM
Monitor your data health and pipeline performance. Gain unified visibility for pipelines running on cloud-native tools like Apache Airflow, Apache Spark, Snowflake, BigQuery, and Kubernetes. An observability platform purpose-built for data engineers. Data engineering is only getting more challenging as demands from business stakeholders grow. Databand can help you catch up. More pipelines, more complexity. Data engineers are working with more complex infrastructure than ever and pushing higher speeds of release. It’s harder to understand why a process has failed, why it’s running late, and how changes affect the quality of data outputs. Data consumers are frustrated with inconsistent results, model performance, and delays in data delivery. Not knowing exactly what data is being delivered, or precisely where failures are coming from, leads to a persistent lack of trust. Pipeline logs, errors, and data quality metrics are captured and stored in independent, isolated systems. -
48
Apache Storm
Apache Software Foundation
Apache Storm is a free and open source distributed realtime computation system. Apache Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Apache Storm is simple, can be used with any programming language, and is a lot of fun to use! Apache Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Apache Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate. Apache Storm integrates with the queueing and database technologies you already use. An Apache Storm topology consumes streams of data and processes those streams in arbitrarily complex ways, repartitioning the streams between each stage of the computation however needed. Read more in the tutorial. -
49
QC Ware Forge
QC Ware
Unique and efficient turn-key algorithms for data scientists. Powerful circuit building blocks for quantum engineers. Turn-key algorithm implementations for data scientists, financial analysts, and engineers. Explore problems in binary optimization, machine learning, linear algebra, and Monte Carlo sampling on simulators and real quantum hardware. No prior experience with quantum computing is required. Use NISQ data loader circuits to load classical data into quantum states to use with your algorithms. Use circuit building blocks for linear algebra with distance estimation and matrix multiplication circuits. Use our circuit building blocks to create your own algorithms. Get a significant performance boost for D-Wave hardware and use the latest improvements for gate-based approaches. Try out quantum data loaders and algorithms with guaranteed speed-ups on clustering, classification, and regression. Starting Price: $2,500 per hour -
50
Beaker Notebook
Two Sigma Open Source
BeakerX is a collection of kernels and extensions to the Jupyter interactive computing environment. It provides JVM support, Spark cluster support, polyglot programming, interactive plots, tables, forms, publishing, and more. All of BeakerX’s JVM languages plus Python and JavaScript have APIs for interactive time-series, scatter plots, histograms, heatmaps, and treemaps. The widgets remain interactive in both notebooks saved to disk and notebooks published to the web. They include unique features for handling many points, nanosecond resolution, zooming, and exporting. BeakerX’s table widget automatically recognizes pandas data frames and allows you to search, sort, drag, filter, format, select, graph, hide, pin, and export to CSV or clipboard. This makes connecting to spreadsheets quick and easy. BeakerX has a Spark magic with GUIs for configuration, status, progress, and interrupt of Spark jobs. You can either use the GUI or create your own SparkSession with code.