Compare the Top HPC Software for Cloud as of October 2024

What is HPC Software for Cloud?

High-performance computing (HPC) software and systems enable advanced, rapid data processing and calculations by harnessing and aggregating the power of computer clusters. Compare and read user reviews of the best HPC software for Cloud currently available using the list below. This list is updated regularly.

  • 1
    UberCloud

    Simr (formerly UberCloud)

    Simr (formerly UberCloud) is a cutting-edge platform for Simulation Operations Automation (SimOps). It streamlines and automates complex simulation workflows, enhancing productivity and collaboration. Leveraging cloud-based infrastructure, Simr offers scalable, cost-effective solutions for industries like automotive, aerospace, and electronics. Trusted by leading global companies, Simr empowers engineers to innovate efficiently and effectively. Simr supports a variety of CFD, FEA, and other CAE software, including Ansys, COMSOL, Abaqus, CST, STAR-CCM+, MATLAB, Lumerical, and more. Simr automates simulation workloads on every major cloud, including Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
  • 2
    Covalent

    Agnostiq

    Covalent’s serverless HPC architecture lets you scale jobs from your laptop to your HPC cluster or cloud with ease. Covalent is a Pythonic workflow tool for computational scientists, AI/ML software engineers, and anyone who needs to run experiments on limited or expensive computing resources, including quantum computers, HPC clusters, GPU arrays, and cloud services. Covalent enables a researcher to run computation tasks on an advanced hardware platform, such as a quantum computer or serverless HPC cluster, using a single line of code. The latest release of Covalent includes two new feature sets and three major enhancements. True to its modular nature, Covalent now allows users to define custom pre- and post-hooks on electrons to facilitate use cases from setting up remote environments (using DepsPip) to running custom functions; see the sketch below.
    Starting Price: Free
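    To make the workflow model concrete, here is a minimal sketch based on Covalent's documented Python API; the package pin and the example computation are illustrative, not prescriptive.

      # pip install covalent; start the server with `covalent start`
      import covalent as ct

      # Electrons are individual tasks; deps_pip provisions the remote
      # environment before the task runs (the pin below is illustrative).
      @ct.electron(deps_pip=ct.DepsPip(packages=["numpy==1.26.4"]))
      def square(x):
          import numpy as np
          return float(np.square(x))

      @ct.electron
      def total(values):
          return sum(values)

      # A lattice composes electrons into a workflow graph.
      @ct.lattice
      def workflow(xs):
          return total([square(x) for x in xs])

      # Dispatch the workflow and block until the result is ready.
      dispatch_id = ct.dispatch(workflow)(xs=[1, 2, 3])
      result = ct.get_result(dispatch_id, wait=True)
      print(result.result)  # 14.0

    Pointing the same tasks at different hardware is then a matter of attaching a different executor to an electron rather than rewriting the task code.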
  • 3
    TotalView

    Perforce

    TotalView debugging software provides the specialized tools you need to quickly debug, analyze, and scale high-performance computing (HPC) applications. This includes highly dynamic, parallel, and multicore applications that run on diverse hardware — from desktops to supercomputers. Improve HPC development efficiency, code quality, and time-to-market with TotalView’s powerful tools for faster fault isolation, improved memory optimization, and dynamic visualization. Simultaneously debug thousands of threads and processes. Purpose-built for multicore and parallel computing, TotalView delivers a set of tools providing unprecedented control over processes and thread execution, along with deep visibility into program states and data.
  • 4
    Intel DevCloud
    Intel® DevCloud offers complimentary access to a wide range of Intel® architectures to help you get instant hands-on experience with Intel® software and execute your edge, AI, high-performance computing (HPC), and rendering workloads. With preinstalled Intel® optimized frameworks, tools, and libraries, you have everything you need to fast-track your learning and project prototyping. Learn, prototype, test, and run your workloads for free on a cluster of the latest Intel® hardware and software. Learn through a new suite of curated experiences, including market vertical samples, Jupyter Notebook tutorials, and more. Build your solution in JupyterLab and test with bare metal or develop your containerized solution. Quickly bring it to Intel DevCloud for testing. Optimize your solution for a specific target edge device with the deep learning workbench and take advantage of the new, more robust telemetry dashboard.
    Starting Price: Free
  • 5
    Google Cloud GPUs
    Speed up compute jobs like machine learning and HPC. A wide selection of GPUs to match a range of performance and price points. Flexible pricing and machine customizations to optimize your workload. High-performance GPUs on Google Cloud for machine learning, scientific computing, and 3D visualization. NVIDIA K80, P100, P4, T4, V100, and A100 GPUs provide a range of compute options to cover each cost and performance need. Optimally balance the processor, memory, high-performance disk, and up to 8 GPUs per instance for your individual workload. All with per-second billing, so you pay only for what you use. Run GPU workloads on Google Cloud Platform, where you have access to industry-leading storage, networking, and data analytics technologies. Compute Engine provides GPUs that you can add to your virtual machine instances; a sketch of doing so programmatically follows below. Learn what you can do with GPUs and what types of GPU hardware are available.
    Starting Price: $0.160 per GPU
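    As an illustration, the following hedged sketch attaches a GPU to a new Compute Engine instance using the google-cloud-compute Python client; the project, zone, machine type, image, and GPU type are placeholders, and exact field values should be checked against current documentation.

      # pip install google-cloud-compute; assumes application-default credentials
      from google.cloud import compute_v1

      PROJECT, ZONE = "my-project", "us-central1-a"  # placeholders

      instance = compute_v1.Instance(
          name="gpu-vm",
          machine_type=f"zones/{ZONE}/machineTypes/n1-standard-8",
          # Attach one T4; the type is a placeholder for any listed GPU.
          guest_accelerators=[compute_v1.AcceleratorConfig(
              accelerator_count=1,
              accelerator_type=f"zones/{ZONE}/acceleratorTypes/nvidia-tesla-t4",
          )],
          # GPU VMs cannot live-migrate, so they must stop for host maintenance.
          scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
          disks=[compute_v1.AttachedDisk(
              boot=True,
              auto_delete=True,
              initialize_params=compute_v1.AttachedDiskInitializeParams(
                  source_image="projects/debian-cloud/global/images/family/debian-12",
              ),
          )],
          network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
      )

      op = compute_v1.InstancesClient().insert(
          project=PROJECT, zone=ZONE, instance_resource=instance
      )
      op.result()  # block until the VM is created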
  • 6
    PowerFLOW

    Dassault Systèmes

    By leveraging our unique, inherently transient Lattice Boltzmann-based physics, the PowerFLOW CFD solution performs simulations that accurately predict real-world conditions. Using the PowerFLOW suite, engineers evaluate product performance early in the design process, prior to any prototype being built, when the impact of change is most significant for design and budgets. PowerFLOW imports fully complex model geometry and accurately and efficiently performs aerodynamic, aeroacoustic, and thermal management simulations. Automated domain discretization and turbulence modeling with wall treatment eliminate the need for manual volume meshing and boundary layer meshing. Confidently run PowerFLOW simulations using a large number of compute cores on common high-performance computing (HPC) platforms.
  • 7
    Ansys HPC
    With the Ansys HPC software suite, you can use today’s multicore computers to perform more simulations in less time. These simulations can be bigger, more complex and more accurate than ever using high-performance computing (HPC). The various Ansys HPC licensing options let you scale to whatever computational level of simulation you require, from single-user or small user group options for entry-level parallel processing up to virtually unlimited parallel capacity. For large user groups, Ansys facilitates highly scalable, multiple parallel processing simulations for the most challenging projects when needed. Apart from parallel computing, Ansys also offers solutions for parametric computing, which enables you to more fully explore the design parameters (size, weight, shape, materials, mechanical properties, etc.) of your product early in the development process.
  • 8
    Arm MAP
    No need to change your code or the way you build it. Profiling for applications running on more than one server and across multiple processes. Clear views of bottlenecks in I/O, compute, threads, and multi-process activity. Deep insight into the actual processor instruction types that affect your performance. View memory usage over time to discover high watermarks and changes across the complete memory footprint. Arm MAP is a unique scalable low-overhead profiler, available standalone or as part of the Arm Forge debug and profile suite. It helps server and HPC code developers accelerate their software by revealing the causes of slow performance. It is used from multicore Linux workstations through to supercomputers. You can profile the realistic test cases you care most about, typically with under 5% runtime overhead. The interactive user interface is clear and intuitive, designed for developers and computational scientists.
  • 9
    Arm Forge
    Build reliable and optimized code for the right results on multiple server and HPC architectures, from the latest compilers and C++ standards to Intel, 64-bit Arm, AMD, OpenPOWER, and NVIDIA GPU hardware. Arm Forge combines Arm DDT, the leading debugger for time-saving high-performance application debugging; Arm MAP, the trusted performance profiler for invaluable optimization advice across native and Python HPC codes; and Arm Performance Reports for advanced reporting capabilities. Arm DDT and Arm MAP are also available as standalone products. Efficient application development for Linux server and HPC, with full technical support from Arm experts. Arm DDT is the debugger of choice for developing C++, C, or Fortran parallel and threaded applications on CPUs and GPUs. Its powerful, intuitive graphical interface helps you easily detect memory bugs and divergent behavior at all scales, making Arm DDT the number one debugger in research, industry, and academia.
  • 10
    Azure HPC Cache
    Keep important work moving. Azure HPC Cache lets your Azure compute resources work more efficiently against your NFS workloads in your network-attached storage (NAS) or in Azure Blob storage. Enable quicker file access by scaling your cache based on workload—improving application performance regardless of storage capacity. Provide low-latency hybrid storage support for both on-premises NAS and Azure Blob storage. Flexibly store data via traditional, on-premises NAS storage and Azure Blob storage. Azure HPC Cache supports hybrid architectures including NFSv3 via Azure NetApp Files, Dell EMC Isilon, Azure Blob storage, and other NAS products. Azure HPC Cache provides an aggregated namespace, so you can present hot data needed by applications in a singular directory structure and reduce client complexity.
  • 11
    FieldView

    Intelligent Light

    Over the past two decades, software technologies have advanced greatly and HPC computing has scaled by orders of magnitude, yet our human ability to comprehend simulation results has remained the same. Simply making plots and movies in the traditional way does not scale when dealing with multi-billion-cell meshes or tens of thousands of timesteps. Automated solution assessment is accelerated when features and quantitative properties can be produced directly via eigen analysis or machine learning. The easy-to-use, industry-standard FieldView desktop is coupled to the powerful VisIt Prime backend.
  • 12
    NVIDIA NGC
    NVIDIA GPU Cloud (NGC) is a GPU-accelerated cloud platform optimized for deep learning and scientific computing. NGC manages a catalog of fully integrated and optimized deep learning framework containers that take full advantage of NVIDIA GPUs in both single-GPU and multi-GPU configurations; the sketch below shows one way to run such a container. NVIDIA Train, Adapt, and Optimize (TAO) is an AI-model-adaptation platform that simplifies and accelerates the creation of enterprise AI applications and services. By fine-tuning pre-trained models with custom data through a UI-based, guided workflow, enterprises can produce highly accurate models in hours rather than months, eliminating the need for large training runs and deep AI expertise. Private registries from NGC allow you to secure, manage, and deploy your own assets to accelerate your journey to AI.
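    To make that concrete, here is a hedged sketch that pulls and runs an NGC framework container with GPU access via the docker Python SDK; the image tag is a placeholder, and the host is assumed to have NVIDIA drivers and the NVIDIA Container Toolkit installed.

      # pip install docker
      import docker

      client = docker.from_env()

      # The tag is illustrative; browse the NGC catalog for current tags.
      image = "nvcr.io/nvidia/pytorch:24.05-py3"

      # count=-1 requests all available GPUs for the container.
      output = client.containers.run(
          image,
          command=["python", "-c",
                   "import torch; print(torch.cuda.get_device_name(0))"],
          device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
          remove=True,
      )
      print(output.decode())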
  • 13
    HPE Pointnext

    Hewlett Packard

    The convergence of HPC and AI puts new demands on HPC storage, as the input/output patterns of the two workloads could not be more different, and it is happening right now. A recent study by the independent analyst firm Intersect360 found that 63% of HPC users today are already running machine learning programs. Hyperion Research forecasts that, at the current course and speed, HPC storage spending in public sector organizations and enterprises will grow 57% faster than spending on HPC compute over the next three years. Seymour Cray once said, "Anyone can build a fast CPU. The trick is to build a fast system." When it comes to HPC and AI, anyone can build fast file storage. The trick is to build a fast, but also cost-effective and scalable, file storage system. We achieve this by embedding the leading parallel file systems into parallel storage products from HPE, with cost effectiveness built in.
  • 14
    ScaleCloud

    ScaleMatrix

    Data-intensive AI, IoT, and HPC workloads requiring multiple parallel processes have always run best on expensive high-end processors or accelerators, such as graphics processing units (GPUs). Moreover, when running compute-intensive workloads on cloud-based solutions, businesses and research organizations have had to accept tradeoffs, many of which were problematic. For example, the processors and other hardware in cloud environments are often too old for the latest applications, or their high energy expenditure raises environmental concerns. In other cases, certain aspects of cloud solutions have simply been frustrating to deal with, such as limited flexibility to customize cloud environments to business needs, or trouble finding right-sized billing models or support.
  • 15
    Azure FXT Edge Filer
    Create cloud-integrated hybrid storage that works with your existing network-attached storage (NAS) and Azure Blob Storage. This on-premises caching appliance optimizes access to data in your datacenter, in Azure, or across a wide-area network (WAN). A combination of software and hardware, Microsoft Azure FXT Edge Filer delivers high throughput and low latency for hybrid storage infrastructure supporting high-performance computing (HPC) workloads. Scale-out clustering provides non-disruptive NAS performance scaling. Join up to 24 FXT nodes per cluster to scale to millions of IOPS and hundreds of GB/s. When you need performance and scale in file-based workloads, Azure FXT Edge Filer keeps your data on the fastest path to processing resources. Managing data storage is easy with Azure FXT Edge Filer. Shift aging data to Azure Blob Storage to keep it easily accessible with minimal latency. Balance on-premises and cloud storage.
  • 16
    Kombyne

    Kombyne

    Kombyne™ is an innovative new SaaS high-performance computing (HPC) workflow tool, initially developed for customers in the defense, automotive, and aerospace industries, as well as academic research. It allows users to subscribe to a range of workflow solutions for HPC CFD jobs, from on-the-fly extract generation and rendering to simulation steering. Interactive monitoring and control are also available, all with minimal simulation disruption and no reliance on VTK. The need for large files is eliminated via extract workflows and real-time visualization. An in-transit workflow uses a separate process that quickly receives data from the solver code and performs visualization and analysis without interfering with the running solver; a toy sketch of this pattern follows below. This process, called an endpoint, can directly output extracts, cutting planes, or point samples for data science, and can render images as well. The endpoint can also act as a bridge to popular visualization codes.
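    The following is a toy Python illustration of the in-transit pattern described above, not Kombyne's actual API: a separate endpoint process consumes solver snapshots and computes lightweight extracts while the solver keeps running.

      # A toy sketch of an in-transit workflow, not Kombyne's API.
      import multiprocessing as mp

      def endpoint(queue):
          """Receives field snapshots and emits lightweight extracts."""
          while (item := queue.get()) is not None:
              step, field = item
              # Stand-in for extract generation or rendering:
              print(f"step {step}: max={max(field):.3f} mean={sum(field)/len(field):.3f}")

      def solver(queue, steps=5):
          field = [0.0] * 1000
          for step in range(steps):
              # Stand-in for a real solver update:
              field = [x + 0.1 * (i % 7) for i, x in enumerate(field)]
              queue.put((step, field))  # hand off a snapshot and keep computing
          queue.put(None)  # signal shutdown

      if __name__ == "__main__":
          q = mp.Queue()
          p = mp.Process(target=endpoint, args=(q,))
          p.start()
          solver(q)
          p.join()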
  • 17
    Arm Allinea Studio
    Arm Allinea Studio is a suite of tools for developing server and HPC applications on Arm-based platforms. It contains Arm-specific compilers and libraries, and debug and optimization tools. Arm Performance Libraries provide optimized standard core math libraries for high-performance computing applications on Arm processors. The library routines are available through both Fortran and C interfaces. Arm Performance Libraries are built with OpenMP across many BLAS, LAPACK, FFT, and sparse routines in order to maximize your performance in multi-processor environments.
  • 18
    Nimbix Supercomputing Suite
    The Nimbix Supercomputing Suite is a set of flexible and secure as-a-service high-performance computing (HPC) solutions. This as-a-service model for HPC, AI, and Quantum in the cloud provides customers with access to one of the broadest HPC and supercomputing portfolios, from hardware to bare metal-as-a-service to the democratization of advanced computing in the cloud across public and private data centers. Nimbix Supercomputing Suite allows you access to HyperHub Application Marketplace, our high-performance marketplace with over 1,000 applications and workflows. Leverage powerful dedicated BullSequana HPC servers as bare metal-as-a-service for the best of infrastructure and on-demand scalability, convenience, and agility. Federated supercomputing-as-a-service offers a unified service console to manage all compute zones and regions in a public or private HPC, AI, and supercomputing federation.
  • 19
    Kao Data

    Kao Data

    Kao Data leads the industry in pioneering the development and operation of data centres engineered for AI and advanced computing. With a hyperscale-inspired, industrial-scale platform, we provide our customers with a secure, scalable, and sustainable home for their compute. With our Harlow campus home to a variety of mission-critical HPC deployments, we are the UK’s number one choice for power-intensive, high-density, GPU-powered computing. With rapid on-ramps into all major cloud providers, we can make your hybrid AI and HPC ambitions a reality.
  • 20
    Fuzzball
    Fuzzball accelerates innovation for researchers and scientists by eliminating the burdens of infrastructure provisioning and management. Fuzzball streamlines and optimizes high-performance computing (HPC) workload design and execution. A user-friendly GUI for designing, editing, and executing HPC jobs. Comprehensive control and automation of all HPC tasks via CLI. Automated data ingress and egress with full compliance logs. Native integration with GPUs and with both on-prem and cloud storage. Human-readable, portable workflow files that execute anywhere. CIQ’s Fuzzball modernizes traditional HPC with an API-first, container-optimized architecture. Operating on Kubernetes, it provides all the security, performance, stability, and convenience found in modern software and infrastructure. Fuzzball not only abstracts the infrastructure layer but also automates the orchestration of complex workflows, driving greater efficiency and collaboration.
  • 21
    Moab HPC Suite

    Adaptive Computing

    Moab® HPC Suite is a workload and resource orchestration platform that automates the scheduling, managing, monitoring, and reporting of HPC workloads at massive scale. Its patented intelligence engine uses multi-dimensional policies and advanced future modeling to optimize workload start and run times on diverse resources. These policies balance high utilization and throughput goals against competing workload priorities and SLA requirements, thereby accomplishing more work in less time and in the right priority order. Moab HPC Suite optimizes the value and usability of HPC systems while reducing management cost and complexity.