Alternatives to MPI for Python (mpi4py)

Compare MPI for Python (mpi4py) alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to MPI for Python (mpi4py) in 2025. Compare features, ratings, user reviews, pricing, and more from MPI for Python (mpi4py) competitors and alternatives in order to make an informed decision for your business.

  • 1
    GASP

    AeroSoft

    GASP is a structured/unstructured, multi-block CFD flow solver which solves the Reynolds-Averaged Navier-Stokes (RANS) equations as well as the heat conduction equation for solid bodies. Hierarchical-tree based organization. Pre- and post-processing in one interface. Solves the steady and unsteady 3-D Reynolds-Averaged Navier-Stokes (RANS) equations and subsets. Multi-block structured/unstructured grid topology. Unstructured mesh support for tetrahedra, hexahedra, prisms, and pyramids. Integration with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. Uncoupling of systems including turbulence and chemistry for improved computational efficiency. Support for most parallel computers, including clusters. Integrated domain decomposition is transparent to the user.
  • 2
    AWS ParallelCluster
    AWS ParallelCluster is an open-source cluster management tool that simplifies the deployment and management of High-Performance Computing (HPC) clusters on AWS. It automates the setup of required resources, including compute nodes, a shared filesystem, and a job scheduler, supporting multiple instance types and job submission queues. Users can interact with ParallelCluster through a graphical user interface, command-line interface, or API, enabling flexible cluster configuration and management. The tool integrates with job schedulers like AWS Batch and Slurm, facilitating seamless migration of existing HPC workloads to the cloud with minimal modifications. AWS ParallelCluster is available at no additional charge; users only pay for the AWS resources consumed by their applications. With AWS ParallelCluster, you can use a simple text file to model, provision, and dynamically scale the resources needed for your applications in an automated and secure manner.
  • 3
    statsmodels

    statsmodels is a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests and statistical data exploration. An extensive list of result statistics is available for each estimator. The results are tested against existing statistical packages to ensure that they are correct. The package is released under the open-source Modified BSD (3-clause) license. statsmodels supports specifying models using R-style formulas and pandas DataFrames. Have a look at dir(results) to see available results. Attributes are described in results.__doc__ and results methods have their own docstrings. You can also use numpy arrays instead of formulas. The easiest way to install statsmodels is to install it as part of the Anaconda distribution, a cross-platform distribution for data analysis and scientific computing. This is the recommended installation method for most users.
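    As a brief illustration of the formula interface described above (the DataFrame and column names here are made up for the example):

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical data: model wage as a function of education and experience.
      df = pd.DataFrame({
          "wage": [22.0, 35.5, 41.0, 28.3, 52.1],
          "education": [12, 16, 18, 14, 20],
          "experience": [1, 5, 9, 3, 12],
      })

      # R-style formula with a pandas DataFrame, as described above.
      results = smf.ols("wage ~ education + experience", data=df).fit()
      print(results.summary())   # extensive list of result statistics
      print(dir(results))        # discover the available result attributes and methods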
  • 4
    AWS Parallel Computing Service
    AWS Parallel Computing Service (AWS PCS) is a managed service that simplifies running and scaling high-performance computing workloads and building scientific and engineering models on AWS using Slurm. It enables the creation of complete, elastic environments that integrate computing, storage, networking, and visualization tools, allowing users to focus on research and innovation without the burden of infrastructure management. AWS PCS offers managed updates and built-in observability features, enhancing cluster operations and maintenance. Users can build and deploy scalable, reliable, and secure HPC clusters through the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDK. The service supports various use cases, including tightly coupled workloads like computer-aided engineering, high-throughput computing such as genomics analysis, accelerated computing with GPUs, and custom silicon like AWS Trainium and AWS Inferentia.
    Starting Price: $0.5977 per hour
  • 5
    Nextflow

    Seqera Labs

    Data-driven computational pipelines. Nextflow enables scalable and reproducible scientific workflows using software containers. It allows the adaptation of pipelines written in the most common scripting languages. Its fluent DSL simplifies the implementation and deployment of complex parallel and reactive workflows on clouds and clusters. Nextflow is built around the idea that Linux is the lingua franca of data science. Nextflow allows you to write a computational pipeline by making it simpler to put together many different tasks. You may reuse your existing scripts and tools and you don't need to learn a new language or API to start using it. Nextflow supports Docker and Singularity containers technology. This, along with the integration of the GitHub code-sharing platform, allows you to write self-contained pipelines, manage versions, and rapidly reproduce any former configuration. Nextflow provides an abstraction layer between your pipeline's logic and the execution layer.
  • 6
    Torch

    Torch is a scientific computing framework with wide support for machine learning algorithms that puts GPUs first. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation. The goal of Torch is to have maximum flexibility and speed in building your scientific algorithms while making the process extremely simple. Torch comes with a large ecosystem of community-driven packages in machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking among others, and builds on top of the Lua community. At the heart of Torch are the popular neural network and optimization libraries which are simple to use, while having maximum flexibility in implementing complex neural network topologies. You can build arbitrary graphs of neural networks, and parallelize them over CPUs and GPUs in an efficient manner.
  • 7
    VSim

    Tech-X

    VSim is the Multiphysics Simulation Software used by design engineers and research scientists who require precise solutions for challenging problems. VSim’s unique combination of Finite-Difference Time-Domain (FDTD), Particle-in-Cell (PIC), and Charged Fluid (Finite Volume) methods delivers accurate results for a variety of situations, including plasma modeling. As a parallel software application, VSim can efficiently handle problems at scale, and simulations run quickly using algorithms designed for high-performance computing systems. Trusted by researchers in 30 countries and used by engineers in industries from aerospace to semiconductor manufacturing, VSim provides documented accuracy and results that users can trust. Created by a team of computational scientists, Tech-X’s code has been cited thousands of times in the scientific literature, and VSim can be found at many of the world’s top research institutions.
  • 8
    Rocks

    Rocks is an open source Linux cluster distribution that enables end users to easily build computational clusters, grid endpoints, and visualization tiled-display walls. Since May 2000, the Rocks group has been addressing the difficulties of deploying manageable clusters with the goal of making clusters easy to deploy, manage, upgrade, and scale. The latest update, Rocks 7.0, codenamed Manzanita, is a 64-bit-only release based upon CentOS 7.4, with all updates applied as of December 1, 2017. Rocks includes many tools, such as the Message Passing Interface (MPI), which are integral components that turn a group of computers into a cluster. Installations can be customized with additional software packages at install time by using special user-supplied CDs. The Spectre/Meltdown security vulnerabilities affect (nearly) all hardware and are addressed by OS updates.
  • 9
    Graph Engine

    Microsoft

    Graph Engine (GE) is a distributed in-memory data processing engine, underpinned by a strongly-typed RAM store and a general distributed computation engine. The distributed RAM store provides a globally addressable high-performance key-value store over a cluster of machines. Through the RAM store, GE enables fast random access over a large distributed data set. The capability of fast data exploration and distributed parallel computing makes GE a natural large graph processing platform. GE supports both low-latency online query processing and high-throughput offline analytics on billion-node graphs. Schema matters when data needs to be processed efficiently: strongly-typed data modeling is crucial for compact data storage, fast data access, and clear data semantics. GE is good at managing billions of run-time objects of varied sizes; every byte counts as the number of objects grows large. GE provides fast memory allocation and reallocation with high memory utilization.
  • 10
    ruffus

    Ruffus is a computation pipeline library for Python. It is open source, powerful and user-friendly, and widely used in science and bioinformatics. Ruffus is designed to allow scientific and other analyses to be automated with the minimum of fuss and the least effort. It is suitable for the simplest of tasks, yet handles even fiendishly complicated pipelines which would cause make or scons to go cross-eyed and recursive. No "clever magic", no pre-processing; just an unambitious, lightweight syntax that tries to do this one small thing well. Ruffus is available under the permissive MIT free software license, which permits free use and inclusion even within proprietary software. It is good practice to run your pipeline in a temporary, “working” directory away from your original data. Ruffus is a lightweight Python module for building computational pipelines, and requires Python 2.6 or higher, or Python 3.0 or higher.
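    A minimal sketch of a two-stage Ruffus pipeline, assuming the decorator-based API (@transform, suffix, pipeline_run); the file names are illustrative:

      from ruffus import transform, suffix, pipeline_run

      # Stage 1: count the words in each input file (file names are illustrative).
      @transform(["sample1.txt", "sample2.txt"], suffix(".txt"), ".counts")
      def count_words(input_file, output_file):
          with open(input_file) as src, open(output_file, "w") as dst:
              dst.write(str(len(src.read().split())))

      # Stage 2: summarize the counts produced by the previous task.
      @transform(count_words, suffix(".counts"), ".summary")
      def summarize(input_file, output_file):
          with open(input_file) as src, open(output_file, "w") as dst:
              dst.write("word count: " + src.read() + "\n")

      if __name__ == "__main__":
          pipeline_run([summarize])   # re-runs only out-of-date stages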
  • 11
    DeepSpeed

    Microsoft

    DeepSpeed is an open source deep learning optimization library for PyTorch. It's designed to reduce computing power and memory use, and to train large distributed models with better parallelism on existing computer hardware. DeepSpeed is optimized for low-latency, high-throughput training. DeepSpeed can train DL models with over a hundred billion parameters on the current generation of GPU clusters. It can also train models of up to 13 billion parameters on a single GPU. DeepSpeed is developed by Microsoft and aims to offer distributed training for large-scale models. It's built on top of PyTorch, which specializes in data parallelism.
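    A minimal sketch of wrapping a PyTorch model with deepspeed.initialize; the model and the ZeRO/optimizer settings below are placeholders rather than a recommended configuration, and the script would normally be started with the deepspeed launcher:

      import torch
      import deepspeed

      model = torch.nn.Linear(1024, 1024)      # placeholder model
      ds_config = {                            # illustrative settings only
          "train_batch_size": 8,
          "fp16": {"enabled": True},
          "zero_optimization": {"stage": 2},
          "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
      }

      # initialize() returns an engine that manages the distributed training details.
      engine, optimizer, _, _ = deepspeed.initialize(
          model=model, model_parameters=model.parameters(), config=ds_config
      )

      inputs = torch.randn(8, 1024).to(engine.device)
      loss = engine(inputs).pow(2).mean()
      engine.backward(loss)   # DeepSpeed-managed backward pass
      engine.step()           # optimizer step (gradient partitioning, etc.)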
  • 12
    Dask

    Dask is open source and freely available. It is developed in coordination with other community projects like NumPy, pandas, and scikit-learn. Dask uses existing Python APIs and data structures to make it easy to switch from NumPy, pandas, and scikit-learn to their Dask-powered equivalents. Dask's schedulers scale to thousand-node clusters and its algorithms have been tested on some of the largest supercomputers in the world. But you don't need a massive cluster to get started. Dask ships with schedulers designed for use on personal machines. Many people use Dask today to scale computations on their laptop, using multiple cores for computation and their disk for excess storage. Dask exposes lower-level APIs letting you build custom systems for in-house applications. This helps open source leaders parallelize their own packages and helps business leaders scale custom business logic.
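    For example, the NumPy-like interface builds a lazy task graph and only runs it when asked; the array sizes here are arbitrary:

      import dask.array as da

      # A 10,000 x 10,000 array split into 1,000 x 1,000 chunks; operations on it
      # build a task graph instead of computing immediately.
      x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
      result = (x + x.T).mean(axis=0)

      # .compute() executes the graph on the default scheduler (threads on a laptop,
      # or a distributed cluster if one is configured).
      print(result.compute()[:5])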
  • 13
    Signals Inventa

    PerkinElmer Informatics

    Signals Inventa, formerly known as Signals Lead Discovery, is a next-generation data management system for the analysis of scientific results. Powered by PerkinElmer’s innovative Signals Data Factory technology, it enables you to efficiently access and analyze all scientific results collected throughout the research and development lifecycle. No matter what you make, Signals Inventa helps you decide which tests offer the best results, what you should make next, what to test further, and lots more. With Signals Inventa, data is normalized, staged, and ready to explore. Signals Inventa expands scientific understanding with a range of scientific-analytical methods, including R-group decomposition, chemical clustering, matched molecular pair analysis, maximum common substructure, BLAST search of both internal and external databases, and sequence alignment.
  • 14
    AGVortex

    AGVortex is a CAE program for mathematical modeling of fluid and gas flows around airfoils. It implements an innovative solver based on vorticity dynamics. This approach allows an LES turbulence model to be resolved on multi-core processors and clusters using parallel computing, while requiring much less computing power. A smaller number of equations and unknown functions in the flow dynamics speeds up the calculation and lowers the requirements on computational resources, so, technologically, 3D modeling with an LES turbulence model is possible today. The application consists of a 3D editor, a control panel, and a modeling area. Planned improvements include new solid types and turbulence models based on vorticity. The trial version has limitations on grid size and maximum Reynolds number, and restrictions in the settings. System requirements: Windows x64, the VC++ redistributable, and OpenGL drivers.
  • 15
    Tencent Cloud GPU Service
    Cloud GPU Service is an elastic computing service that provides GPU computing power with high-performance parallel computing capabilities. As a powerful tool at the IaaS layer, it delivers high computing power for deep learning training, scientific computing, graphics and image processing, video encoding and decoding, and other highly intensive workloads. Improve your business efficiency and competitiveness with high-performance parallel computing capabilities. Set up your deployment environment quickly with auto-installed GPU drivers, CUDA, and cuDNN and preinstalled driver images. Accelerate distributed training and inference by using TACO Kit, an out-of-the-box computing acceleration engine provided by Tencent Cloud.
  • 16
    Frost 3D Universal
    Frost 3D software allows you to develop scientific models of permafrost thermal regimes under the thermal influence of pipelines, production wells, hydraulic constructions, etc., taking into account the thermal stabilization of the ground. The software package is based on ten years’ experience in the fields of programming, computational geometry, numerical methods, 3D visualization, and parallelization of computational algorithms. Creation of 3D computational domains with surface topography and soil lithology; 3D reconstruction of pipelines, boreholes, basements, and foundations of buildings; Import of 3D objects including Wavefront (OBJ), StereoLitho (STL), 3D Studio Max (3DS) and Frost 3D Objects (F3O); Library of thermophysical properties of the ground, building elements, climatic factors and the parameters of cooling units; Specification of thermal and hydrological properties of 3D objects and heat transfer parameters on the surfaces of objects.
  • 17
    ScaleCloud

    ScaleMatrix

    Data-intensive AI, IoT and HPC workloads requiring multiple parallel processes have always run best on expensive high-end processors or accelerators, such as graphics processing units (GPUs). Moreover, when running compute-intensive workloads on cloud-based solutions, businesses and research organizations have had to accept tradeoffs, many of which were problematic. For example, the processors and other hardware in cloud environments are often too old for the latest applications, or energy expenditure is high enough to raise environmental concerns. In other cases, certain aspects of cloud solutions have simply been frustrating to deal with, such as limited flexibility to customize cloud environments for business needs, or trouble finding right-sized billing models or support.
  • 18
    PanGu-α

    Huawei

    PanGu-α is developed under the MindSpore framework and trained on a cluster of 2048 Ascend 910 AI processors. The training parallelism strategy is implemented based on MindSpore Auto-parallel, which composes five parallelism dimensions to scale the training task to 2048 processors efficiently: data parallelism, op-level model parallelism, pipeline model parallelism, optimizer model parallelism, and rematerialization. To enhance the generalization ability of PanGu-α, we collect 1.1TB of high-quality Chinese data from a wide range of domains to pretrain the model. We empirically test the generation ability of PanGu-α in various scenarios including text summarization, question answering, dialogue generation, etc. Moreover, we investigate the effect of model scale on the few-shot performance across a broad range of Chinese NLP tasks. The experimental results demonstrate the superior capabilities of PanGu-α in performing various tasks under few-shot or zero-shot settings.
  • 19
    OpenTuner

    Program autotuning has been demonstrated in many domains to achieve better or more portable performance. However, autotuners themselves are often not very portable between projects because using a domain-informed search space representation is critical to achieving good results and because no single search technique performs best for all problems. OpenTuner is a new framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy-to-use interface for communicating with the tuned program. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well receive larger testing budgets, while techniques that perform poorly are disabled.
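    A small sketch of a custom autotuner, assuming the MeasurementInterface/ConfigurationManipulator API used in the OpenTuner examples; the tuned command and its parameter are illustrative:

      import opentuner
      from opentuner import ConfigurationManipulator, IntegerParameter
      from opentuner import MeasurementInterface, Result

      class BlockSizeTuner(MeasurementInterface):
          """Tunes one integer parameter of an external benchmark (illustrative)."""

          def manipulator(self):
              m = ConfigurationManipulator()
              m.add_parameter(IntegerParameter("block_size", 1, 4096))
              return m

          def run(self, desired_result, input, limit):
              cfg = desired_result.configuration.data
              # './my_benchmark' and its flag are placeholders for the tuned program.
              result = self.call_program("./my_benchmark --block-size %d" % cfg["block_size"])
              return Result(time=result["time"])

      if __name__ == "__main__":
          BlockSizeTuner.main(opentuner.default_argparser().parse_args())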
  • 20
    Semantic UI React
    Semantic UI React is the official React integration for Semantic UI. jQuery-free, declarative API, augmentation, shorthand props, sub-components, auto controlled state. jQuery is a DOM manipulation library. It reads from and writes to the DOM. React uses a virtual DOM (a JavaScript representation of the real DOM). React only writes patch updates to the DOM, but never reads from it. It is not feasible to keep real DOM manipulations in sync with React's virtual DOM. Because of this, all jQuery functionality has been re-implemented in React. Control the rendered HTML tag, or render one component as another component. Extra props are passed to the component you are rendering as. Augmentation is powerful. You can compose component features and props without adding extra nested components. Shorthand props generate markup for you, making many use cases a breeze. All object props are spread on the child components.
  • 21
    regon

    litex.regon is a frontend for the Polish REGON database, a simple, Pythonic wrapper around its SOAP API. To access the API, one needs a user key issued by the REGON administrators. REGONAPI accepts one argument, the service URL (also provided by the REGON administrators). After login, one can start querying the database by: a single REGON number (either 9 or 14 digits long), a single 10-digit KRS number, a single NIP (a 10-digit string), a collection of REGONs (all of them have to be either 14 or 9 digits long), a collection of KRSs, or a collection of NIPs. Only one parameter is used in the query; if multiple ones are passed, the first from the above list is taken into account. Additionally, a detailed parameter can be passed; detailed=True causes the search method to fetch the default detailed report. If one knows the REGON of a business entity and a detailed report name, a full report can be fetched directly.
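    A rough sketch of the flow described above, assuming the REGONAPI constructor, login, and search calls behave as stated; the service URL, user key, and identifiers are placeholders:

      from litex.regon import REGONAPI

      # The service URL and user key are issued by the REGON administrators (placeholders here).
      api = REGONAPI("SERVICE_URL_FROM_REGON_ADMINISTRATORS")
      api.login("YOUR_USER_KEY")

      # Query by a single 9- or 14-digit REGON; detailed=True also fetches the
      # default detailed report, as described above.
      entities = api.search(regon="123456785", detailed=True)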
  • 22
    Spread.NET

    GrapeCity

    Explore the possibilities of your .NET enterprise apps with these dependency-free .NET spreadsheet components. .NET spreadsheet components are advanced software components that allow professional developers to add complete Excel-like functionality to their desktop applications. The .NET spreadsheet control includes support for Excel import/export, full cell customization, an extensive calculation engine with over 450 functions and more, all with zero dependencies on Excel. Leverage the extensive .NET spreadsheet API and powerful calculation engine to create analysis, budgeting, dashboard, data collection and management, scientific, and financial applications. Every platform of Spread.NET ensures maximum performance and speed for your enterprise apps, and its modular structure means you only need to add what you use to your .NET spreadsheet apps.
    Starting Price: $1499.00/year/user
  • 23
    imageio

    Imageio is a Python library that provides an easy interface to read and write a wide range of image data, including animated images, volumetric data, and scientific formats. It is cross-platform, runs on Python 3.5+ (and PyPy), and is easy to install. Imageio is written in pure Python and depends on NumPy and Pillow. For some formats, imageio needs additional libraries/executables (e.g. ffmpeg), which imageio helps you to download/install. If something doesn’t work as it should, you need to know where to search for causes. The overview on this page aims to help you in this regard by giving you an idea of how things work, and hence where things may go sideways.
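    A short example, assuming the v3 interface (imageio.v3); "imageio:chelsea.png" is one of the sample images bundled with the library:

      import imageio.v3 as iio

      # Read an image into a NumPy array and write it back in another format.
      img = iio.imread("imageio:chelsea.png")
      print(img.shape, img.dtype)     # e.g. (300, 451, 3) uint8
      iio.imwrite("chelsea.jpg", img)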
  • 24
    Galactica
    Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. Galactica is a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%.
  • 25
    XRCLOUD

    GPU cloud computing is a GPU-based computing service with real-time, high-speed parallel computing and floating-point computing capacity. It is ideal for various scenarios such as 3D graphics applications, video decoding, deep learning, and scientific computing. GPU instances can be managed just like a standard ECS with speed and ease, which effectively relieves computing pressure. The RTX6000 GPU contains thousands of computing units and shows substantial advantages in parallel computing; for optimized deep learning, massive computations can be completed in a short time. GPU Direct seamlessly supports the transmission of big data across networks. With a built-in acceleration framework, you can focus on core tasks thanks to quick deployment and fast instance distribution. We offer optimal cloud performance at a transparent, cost-effective price. You may choose on-demand billing, and you can also get more discounts by subscribing to resources.
    Starting Price: $4.13 per month
  • 26
    Healnet

    Healx

    Rare diseases are often not well studied and there is a limited understanding of many of the aspects necessary to support a drug discovery program. Our AI platform, Healnet, overcomes these challenges by analyzing millions of drug and disease data points to find novel connections that could be turned into new treatment opportunities. By applying frontier technologies across the discovery and development pipeline, we can run multiple stages in parallel and at scale. One disease, one target, one drug: it's an overly simple model, yet it's the one used by nearly all pharmaceutical companies. The next generation of drug discovery is AI-powered, parallel and hypothesis-free, bringing together these three key drug discovery paradigms.
  • 27
    Mako

    Mako provides a familiar, non-XML syntax that compiles into Python modules for maximum performance. Mako's syntax and API borrow from the best ideas of many others, including Django and Jinja2 templates, Cheetah, Myghty, and Genshi. Conceptually, Mako is an embedded Python (i.e. Python Server Page) language, which refines the familiar ideas of componentized layout and inheritance to produce one of the most straightforward and flexible models available, while also maintaining close ties to Python calling and scoping semantics. As templates are ultimately compiled into Python bytecode, Mako's approach is extremely efficient and was originally written to be just as fast as Cheetah. Today, Mako is very close in speed to Jinja2, which uses a similar approach and for which Mako was an inspiration. Templates can access variables from their enclosing scope as well as the template's request context.
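    A minimal example of the template syntax; the template text and variables are made up:

      from mako.template import Template

      # Expressions use ${...}; control structures use % lines. The template is
      # compiled into Python bytecode the first time it is rendered.
      tmpl = Template("""\
      Hello, ${name}!
      % for item in items:
        - ${item}
      % endfor
      """)
      print(tmpl.render(name="world", items=["fast", "familiar", "non-XML"]))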
  • 28
    AHED (Advanced Heat Exchanger Design)
    HRS-AHED is a new software solution for the calculation of shell and tube heat exchangers, featuring a fluids and mixing assistant, sensible heat / condensation calculations, single-pass and multi-pass units with or without baffles, and many more features: a fluid database, customizable geometries, project sharing, batch calculation, vibration analysis, reporting, and much more. To ensure the best calculation results, we have conducted a thorough search through the scientific literature to make sure the best and most recent methods for calculation in heat transfer engineering are used in our software. Many heat exchanger projects have been designed successfully with AHED, making it an industrially proven software solution.
    Starting Price: €50/3months/user
  • 29
    Substrate

    Substrate is the platform for agentic AI. Elegant abstractions and high-performance components, optimized models, vector database, code interpreter, and model router. Substrate is the only compute engine designed to run multi-step AI workloads. Describe your task by connecting components and let Substrate run it as fast as possible. We analyze your workload as a directed acyclic graph and optimize the graph, for example, merging nodes that can be run in a batch. The Substrate inference engine automatically schedules your workflow graph with optimized parallelism, reducing the complexity of chaining multiple inference APIs. No more async programming, just connect nodes and let Substrate parallelize your workload. Our infrastructure guarantees your entire workload runs in the same cluster, often on the same machine. You won’t spend fractions of a second per task on unnecessary data roundtrips and cross-region HTTP transport.
  • 30
    Syncfusion Essential Studio
    Includes more than 1,600 components and frameworks for Windows Forms, WPF, ASP.NET (Web Forms, MVC, Core), UWP, WinUI, Xamarin, Flutter, JavaScript, Angular, Blazor, Vue and React. Includes top requested components such as charts, grids, schedulers, diagrams, maps, gauges, docking, ribbons, and many more! Working with the industry’s best and brightest minds to streamline your business. Includes more than 1,700 components and frameworks for major platforms. A wide range of product demos and training, including video tutorials, documentation, and KBs. Every control is fine-tuned to work with a high volume of data. Create powerful apps by viewing and editing Excel, PDF, Word, and PowerPoint files. Truly unlimited dedicated support system via the public forum, feature & feedback page, live chat, and support tickets. Easy integration of tools to blend Syncfusion controls with your project.
    Starting Price: $495 one-time payment
  • 31
    PyQtGraph

    PyQtGraph is a pure-python graphics and GUI library built on PyQt/PySide and NumPy. It is intended for use in mathematics/scientific/engineering applications. Despite being written entirely in python, the library is very fast due to its heavy leverage of NumPy for number crunching and Qt's GraphicsView framework for fast display. PyQtGraph is distributed under the MIT open-source license. Basic 2D plotting in interactive view boxes. Line and scatter plots. Data can be panned/scaled by mouse. Fast drawing for real-time data display and interaction. Displays most data types (int or float; any bit depth; RGB, RGBA, or luminance). Functions for slicing multidimensional images at arbitrary angles (great for MRI data). Rapid update for video display or real-time interaction. Image display with interactive lookup tables and level control. Mesh rendering with isosurface generation. Interactive viewports rotate/zoom with mouse. Basic 3D scenegraph for easier programming.
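    A minimal interactive plot, assuming pyqtgraph 0.12 or newer (for pg.exec()) and an installed Qt binding:

      import numpy as np
      import pyqtgraph as pg

      # Open a plot window; pan/zoom with the mouse, as described above.
      app = pg.mkQApp("Example")
      win = pg.plot(np.cumsum(np.random.normal(size=1000)), title="Random walk")
      win.setLabel("bottom", "sample")
      pg.exec()   # start the Qt event loop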
  • 32
    Amazon EC2 UltraClusters
    Amazon EC2 UltraClusters enable you to scale to thousands of GPUs or purpose-built machine learning accelerators, such as AWS Trainium, providing on-demand access to supercomputing-class performance. They democratize supercomputing for ML, generative AI, and high-performance computing developers through a simple pay-as-you-go model without setup or maintenance costs. UltraClusters consist of thousands of accelerated EC2 instances co-located in a given AWS Availability Zone, interconnected using Elastic Fabric Adapter (EFA) networking in a petabit-scale nonblocking network. This architecture offers high-performance networking and access to Amazon FSx for Lustre, a fully managed shared storage built on a high-performance parallel file system, enabling rapid processing of massive datasets with sub-millisecond latencies. EC2 UltraClusters provide scale-out capabilities for distributed ML training and tightly coupled HPC workloads, reducing training times.
  • 33
    CoresHub

    Coreshub provides GPU cloud services, AI training clusters, parallel file storage, and image repositories, delivering secure, reliable, and high-performance cloud-based AI training and inference environments. The platform offers a range of solutions, including computing power market, model inference, and various industry-specific applications. Coreshub's core team comprises experts from Tsinghua University, leading AI companies, IBM, renowned venture capital firms, and major internet corporations, bringing extensive AI technical expertise and ecosystem resources. The platform emphasizes an independent and open cooperative ecosystem, actively collaborating with AI model suppliers and hardware manufacturers. Coreshub's AI computing platform enables unified scheduling and intelligent management of diverse heterogeneous computing power, meeting AI computing operation, maintenance, and management needs in a one-stop manner.
    Starting Price: $0.24 per hour
  • 34
    Apache Mahout

    Apache Software Foundation

    Apache Mahout is a powerful, scalable, and versatile machine learning library designed for distributed data processing. It offers a comprehensive set of algorithms for various tasks, including classification, clustering, recommendation, and pattern mining. Built on top of the Apache Hadoop ecosystem, Mahout leverages MapReduce and Spark to enable data processing on large-scale datasets. Apache Mahout(TM) is a distributed linear algebra framework and mathematically expressive Scala DSL designed to let mathematicians, statisticians, and data scientists quickly implement their own algorithms. Apache Spark is the recommended out-of-the-box distributed back-end or can be extended to other distributed backends. Matrix computations are a fundamental part of many scientific and engineering applications, including machine learning, computer vision, and data analysis. Apache Mahout is designed to handle large-scale data processing by leveraging the power of Hadoop and Spark.
  • 35
    NumPy

    Fast and versatile, the NumPy vectorization, indexing, and broadcasting concepts are the de-facto standards of array computing today. NumPy offers comprehensive mathematical functions, random number generators, linear algebra routines, Fourier transforms, and more. NumPy supports a wide range of hardware and computing platforms, and plays well with distributed, GPU, and sparse array libraries. The core of NumPy is well-optimized C code. Enjoy the flexibility of Python with the speed of compiled code. NumPy’s high level syntax makes it accessible and productive for programmers from any background or experience level. NumPy brings the computational power of languages like C and Fortran to Python, a language much easier to learn and use. With this power comes simplicity: a solution in NumPy is often clear and elegant.
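    A small example of the vectorization and broadcasting described above (the data is made up):

      import numpy as np

      # Vectorization and broadcasting: no explicit Python loops.
      temps_c = np.array([[12.1, 14.3, 11.8],
                          [18.4, 20.0, 19.2]])      # 2 days x 3 sensors
      offsets = np.array([0.5, -0.2, 0.1])          # per-sensor calibration, broadcast over rows
      temps_f = (temps_c + offsets) * 9 / 5 + 32    # elementwise, runs in optimized C code

      print(temps_f.mean(axis=0))                   # per-sensor mean in Fahrenheit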
  • 36
    Shapelets

    Powerful computing at your fingertips. Parallel computing, groundbreaking algorithms, so what are you waiting for? Designed to empower data scientists in business. Get the fastest computing in an all-inclusive time-series platform. Shapelets provides you with analytical features such as causality, discords and motif discovery, forecasting, clustering, etc. Run, extend and integrate your own algorithms into the Shapelets platform to make the most of Big Data analysis. Shapelets integrates seamlessly with any data collection and storage solution. It also integrates with MS Office and any other visualization tool to simplify and share insights without any technical acumen. Our UI works with the server to bring you interactive visualizations. You can make the most of your metadata and represent it in the many different visual graphs provided by our modern interface. Shapelets enables users from the oil, gas, and energy industry to perform real-time analysis of operational data.
  • 37
    Edison Scientific

    Edison Scientific is an AI platform designed to automate and accelerate scientific research, enabling users to move from hypothesis to validated results within a single environment. The platform integrates literature synthesis, data analysis, and molecular design workflows, allowing research teams to complete end-to-end scientific investigations at dramatically increased speed. At its core is Kosmos, an autonomous research system that performs hundreds of research tasks in parallel, transforming multimodal datasets into comprehensive reports with validated findings and publication-ready figures. Kosmos synthesizes scientific literature, public databases, and proprietary datasets, identifies novel therapeutic targets, uncovers biological mechanisms, and supports the iterative design and optimization of molecular candidates. Validated in real research settings, Kosmos has demonstrated the ability to achieve results that typically require months of human effort in a single day.
  • 38
    Ansys HPC
    With the Ansys HPC software suite, you can use today’s multicore computers to perform more simulations in less time. These simulations can be bigger, more complex and more accurate than ever using high-performance computing (HPC). The various Ansys HPC licensing options let you scale to whatever computational level of simulation you require, from single-user or small user group options for entry-level parallel processing up to virtually unlimited parallel capacity. For large user groups, Ansys facilitates highly scalable, multiple parallel processing simulations for the most challenging projects when needed. Apart from parallel computing, Ansys also offers solutions for parametric computing, which enables you to more fully explore the design parameters (size, weight, shape, materials, mechanical properties, etc.) of your product early in the development process.
  • 39
    pygame

    Pygame is a set of Python modules designed for writing video games. Pygame adds functionality on top of the excellent SDL library. This allows you to create fully featured games and multimedia programs in the python language. Pygame is highly portable and runs on nearly every platform and operating system. Pygame is free. Released under the LGPL license, you can create open-source, freeware, shareware, and commercial games with it. With dual-core CPUs common, and 8-core CPUs cheaply available on desktop systems, making use of multi-core CPUs allows you to do more in your game. Selected pygame functions release the dreaded python GIL, which is something you can do from C code. Uses optimized C and assembly code for core functions. C code is often 10-20 times faster than python code, and assembly code can easily be 100x or more times faster than python code. Comes with many operating systems. Just an apt-get, emerge, pkg_add, or just install away.
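    A minimal pygame program showing the usual init/event-loop/draw cycle:

      import pygame

      # Open a window and run a simple event loop until it is closed.
      pygame.init()
      screen = pygame.display.set_mode((640, 480))
      clock = pygame.time.Clock()

      running = True
      while running:
          for event in pygame.event.get():
              if event.type == pygame.QUIT:
                  running = False
          screen.fill((30, 30, 30))
          pygame.draw.circle(screen, (200, 80, 80), (320, 240), 50)
          pygame.display.flip()
          clock.tick(60)   # cap the frame rate at 60 FPS

      pygame.quit()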
  • 40
    Elastic GPU Service
    Elastic computing instances with GPU computing accelerators are suitable for scenarios such as artificial intelligence (specifically deep learning and machine learning), high-performance computing, and professional graphics processing. Elastic GPU Service provides a complete service system that combines software and hardware to help you flexibly allocate resources, elastically scale your system, improve computing power, and lower the cost of your AI-related business. It applies to scenarios such as deep learning, video encoding and decoding, video processing, scientific computing, graphical visualization, and cloud gaming. Elastic GPU Service provides GPU-accelerated computing capabilities and ready-to-use, scalable GPU computing resources. GPUs have unique advantages in performing mathematical and geometric computing, especially floating-point and parallel computing. GPUs provide 100 times the computing power of their CPU counterparts.
    Starting Price: $69.51 per month
  • 41
    AWS HPC

    Amazon

    AWS High Performance Computing (HPC) services empower users to execute large-scale simulations and deep learning workloads in the cloud, providing virtually unlimited compute capacity, high-performance file systems, and high-throughput networking. This suite of services accelerates innovation by offering a broad range of cloud-based tools, including machine learning and analytics, enabling rapid design and testing of new products. Operational efficiency is maximized through on-demand access to compute resources, allowing users to focus on complex problem-solving without the constraints of traditional infrastructure. AWS HPC solutions include Elastic Fabric Adapter (EFA) for low-latency, high-bandwidth networking, AWS Batch for scaling computing jobs, AWS ParallelCluster for simplified cluster deployment, and Amazon FSx for high-performance file systems. These services collectively provide a flexible and scalable environment tailored to diverse HPC workloads.
  • 42
    Adaptive Modeler
    Altreva Adaptive Modeler is a software application for forecasting stocks, ETFs, forex currency pairs, cryptocurrencies, commodities or other markets. Based on unique and innovative technology, it creates market simulation models in which thousands of virtual traders apply their own trading strategies to real-world market data to trade, compete and adapt on a virtual market. Their collective behavior is used to generate one-step-ahead forecasts and trading signals. Models coevolve in parallel with the real market without overfitting to historical data. This results in better adaptation to changing market conditions and more consistent performance.
    Starting Price: $495.00/one-time/user
  • 43
    Scilab

    Scilab Enterprises

    Numerical analysis, or scientific computing, is the study of approximation techniques for numerically solving mathematical problems. Scilab provides graphics functions to visualize, annotate and export data and offers many ways to create and customize various types of plots and charts. Scilab is a high-level programming language for scientific programming. It enables rapid prototyping of algorithms, without having to deal with the complexity of other, lower-level programming languages such as C and Fortran (memory management, variable definition). This is natively handled by Scilab, which results in a few lines of code for complex mathematical operations, where other languages would require much longer code. It also comes with advanced data structures such as polynomials, matrices and graphic handles, and provides an easily operable development environment.
  • 44
    Slurm
    Slurm Workload Manager, formerly known as Simple Linux Utility for Resource Management (SLURM), is a free, open-source job scheduler and cluster management system for Linux and Unix-like kernels. It's designed to manage compute jobs on high performance computing (HPC) clusters and high throughput computing (HTC) environments, and is used by many of the world's supercomputers and computer clusters.
  • 45
    Tambo

    Tambo is an open source AI orchestration framework focused on React front-end apps, letting developers build rich, generative UI assistants that respond to natural language. With Tambo, you register React components and tools once, and the system handles when and how to display UI components (forms, dashboards, charts, etc.), manage state, and call APIs/tools as needed. It supports features like message-thread history, streaming UI/content, suggested actions, authentication, and integration with Model Context Protocol (MCP) servers for context or external data. There’s a pre-built component library to accelerate development (e.g., control bars, message threads, generative forms), CLI tools, hosting via Tambo Cloud, and self-hosting options. Plans range from a free tier (with message/usage limits and community support) to paid tiers that offer higher message volumes, team seats, SSO/RBAC, SLAs, observability, and more.
  • 46
    yarl

    Python Software Foundation

    All URL parts (scheme, user, password, host, port, path, query, and fragment) are accessible via properties. All URL manipulations produce a new URL object. Strings passed to the constructor and to modification methods are automatically encoded, giving a canonical representation as a result. Regular properties are percent-decoded; use the raw_ versions to get the encoded strings. A human-readable representation of the URL is available via .human_repr(). PyPI contains binary wheels for Linux, Windows and macOS. If you want to install yarl on another operating system (like Alpine Linux, which is not manylinux-compliant because of the missing glibc and therefore cannot be used with our wheels), the tarball will be used to compile the library from the source code. It requires a C compiler and Python headers installed. Please note that the pure-Python (uncompiled) version is much slower. However, PyPy always uses a pure-Python implementation, and, as such, it is unaffected by this variable.
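    A short example of the property access and immutable manipulation described above:

      from yarl import URL

      url = URL("https://example.com/path/to page?q=müller#frag")

      # Parts are available as (percent-decoded) properties; raw_ keeps the encoded form.
      print(url.scheme, url.host, url.path, url.fragment)
      print(url.query["q"])
      print(url.raw_path)
      print(url.human_repr())       # human-readable representation

      # Manipulations return a new URL object; the original is unchanged.
      new = url.with_scheme("http").with_query(q="smith")
      print(new)
      print(url)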
  • 47
    Pylons

    Python Software Foundation

    The Pylons web framework is designed for building web applications and sites in an easy and concise manner. They can range from as small as a single Python module to a substantial directory layout for larger and more complex web applications. Pylons comes with project templates that help bootstrap a new web application project, or you can start from scratch and set things up exactly as desired. A framework to make writing web applications in Python easy. Utilizes a minimalist, component-based philosophy that makes it easy to expand on. Harness existing knowledge about Python. Extensible application design. Fast and efficient, an incredibly small per-request call stack provides top performance. Uses existing and well-tested Python packages. The Pylons 1.0 series is stable and production-ready but in maintenance-only mode. The Pylons Project now maintains the Pyramid web framework for future development, and Pylons 1.0 users should strongly consider Pyramid for their next project.
  • 48
    cryptography

    cryptography includes both high-level recipes and low-level interfaces to common cryptographic algorithms such as symmetric ciphers, message digests, and key derivation functions. Encrypt with cryptography’s high-level symmetric encryption recipe. cryptography is broadly divided into two levels. One level provides safe cryptographic recipes that require little to no configuration choices; these are safe and easy to use and don’t require developers to make many decisions. The other level is low-level cryptographic primitives. These are often dangerous and can be used incorrectly. They require making decisions and having an in-depth knowledge of the cryptographic concepts at work. Because of the potential danger in working at this level, this is referred to as the “hazardous materials” or “hazmat” layer. These live in the cryptography.hazmat package, and their documentation will always contain an admonition at the top.
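    The high-level symmetric recipe mentioned above (Fernet) in a few lines; the key here is generated on the fly and would normally be stored securely:

      from cryptography.fernet import Fernet

      # "Recipes" layer: no algorithm or parameter choices to make.
      key = Fernet.generate_key()      # keep this safe; losing it means losing the data
      f = Fernet(key)

      token = f.encrypt(b"my deep dark secret")
      print(token)                     # URL-safe ciphertext with timestamp and MAC
      print(f.decrypt(token))          # b'my deep dark secret'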
  • 49
    Ansys LS-DYNA
    Ansys LS-DYNA is the industry-leading explicit simulation software used for applications like drop tests, impact and penetration, smashes and crashes, occupant safety, and more. Ansys LS-DYNA is the most used explicit simulation program in the world and is capable of simulating the response of materials to short periods of severe loading. Its many elements, contact formulations, material models and other controls can be used to simulate complex models with control over all the details of the problem. LS-DYNA delivers a diverse array of analyses with extremely fast and efficient parallelization. Engineers can tackle simulations involving material failure and look at how the failure progresses through a part or through a system. Models with large amounts of parts or surfaces interacting with each other are also easily handled, and the interactions and load passing between complex behaviors are modeled accurately.
  • 50
    zope.interface

    Python Software Foundation

    This package is intended to be independently reusable in any Python project. It is maintained by the Zope Toolkit project. This package provides an implementation of “object interfaces” for Python. Interfaces are a mechanism for labeling objects as conforming to a given API or contract, so this package can be considered an implementation of Design By Contract support in Python. Interfaces are objects that specify (document) the external behavior of objects that “provide” them. An interface specifies behavior through informal documentation in a doc string, attribute definitions, and invariants, which are conditions that must hold for objects that provide the interface. Attribute definitions specify specific attributes; they define the attribute name and provide documentation and constraints on attribute values. Attribute definitions can take a number of forms.
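    A small example of declaring and providing an interface:

      from zope.interface import Attribute, Interface, implementer
      from zope.interface.verify import verifyObject

      class IGreeter(Interface):
          """Something that can produce a greeting."""

          name = Attribute("The name used in the greeting")

          def greet():
              """Return a greeting string (interface methods take no 'self')."""

      @implementer(IGreeter)
      class Greeter:
          def __init__(self, name):
              self.name = name

          def greet(self):
              return "Hello, %s!" % self.name

      obj = Greeter("world")
      print(IGreeter.providedBy(obj))   # True
      verifyObject(IGreeter, obj)       # raises an error if the contract is not met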