NVIDIA Magnum IO (NVIDIA) vs. SolarWinds Storage Resource Monitor (SolarWinds)

About NVIDIA Magnum IO
NVIDIA Magnum IO is NVIDIA's architecture for parallel, intelligent data center I/O. It maximizes storage, network, and multi-node, multi-GPU communication for demanding applications that use large language models, recommender systems, imaging, simulation, and scientific research. Magnum IO combines storage I/O, network I/O, in-network compute, and I/O management to simplify and accelerate data movement, access, and management in multi-GPU, multi-node systems. It supports NVIDIA CUDA-X libraries and exploits a range of NVIDIA GPU and networking hardware topologies to achieve high throughput and low latency.

In multi-GPU, multi-node systems, slow single-threaded CPU performance sits in the critical path of data access from local or remote storage. With storage I/O acceleration, the GPU bypasses the CPU and system memory and reaches remote storage directly over eight 200 Gb/s NICs, for up to 1.6 Tb/s of raw storage bandwidth.
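The storage I/O acceleration described above is exposed to applications through GPUDirect Storage and its cuFile API, which DMAs file data straight into GPU memory. A minimal sketch is below; error handling is elided, the file path is a placeholder, and running it requires a GDS-enabled system with an NVIDIA GPU and libcufile installed.

```cuda
// Sketch: read a file directly into GPU memory with the cuFile API
// (NVIDIA GPUDirect Storage, a Magnum IO component). The path below is
// a placeholder and error checks are omitted for brevity.
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main() {
    const size_t size = 1 << 20;                  // read 1 MiB
    cuFileDriverOpen();                           // initialize the GDS driver

    // O_DIRECT lets the storage stack DMA without CPU page-cache copies
    int fd = open("/mnt/nvme/data.bin", O_RDONLY | O_DIRECT);
    CUfileDescr_t descr = {};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);        // register the file with GDS

    void* devPtr = nullptr;
    cudaMalloc(&devPtr, size);
    cuFileBufRegister(devPtr, size, 0);           // pin the GPU buffer for DMA

    // Transfer goes storage -> NIC/PCIe -> GPU memory, bypassing the CPU path
    cuFileRead(handle, devPtr, size, /*file_offset=*/0, /*dev_offset=*/0);

    cuFileBufDeregister(devPtr);
    cudaFree(devPtr);
    cuFileHandleDeregister(handle);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```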

About SolarWinds Storage Resource Monitor
Storage Resource Monitor, formerly Storage Resource Manager, is a comprehensive, multi-vendor capacity and performance monitoring solution for storage. Scalable and powerful, Storage Resource Monitor provides intuitive dashboards and charts for faster issue diagnosis and troubleshooting. It also maps the physical SAN environment (LUNs) to the virtual machines in a VMware infrastructure, helping pinpoint resource bottlenecks and contention across virtual and storage environments. Core features include multi-vendor storage monitoring, automated storage capacity planning, storage performance monitoring, storage I/O hotspot detection, storage environment reporting, and prebuilt alerts with automatic baselines.
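SRM's monitoring data is also reachable programmatically: SolarWinds exposes Orion platform data through the SolarWinds Information Service (SWIS), queried with SWQL. A hypothetical sketch using the community `orionsdk` Python package follows; the hostname, credentials, and the `Orion.SRM.Pools` entity and field names are illustrative assumptions, not confirmed SRM schema.

```python
# Hypothetical sketch: query SolarWinds SRM storage-pool capacity through
# the SolarWinds Information Service (SWIS) via the community `orionsdk`
# package. The entity and field names below are assumptions for
# illustration, not a confirmed SRM schema.

def srm_capacity_query(min_used_pct: float) -> str:
    """Build a SWQL query for storage pools above a usage threshold."""
    return (
        "SELECT StorageArray, PoolName, CapacityTotal, CapacityUsed "
        "FROM Orion.SRM.Pools "
        f"WHERE PercentCapacityUsed > {min_used_pct}"
    )

def fetch_hot_pools(host: str, user: str, password: str,
                    threshold: float = 80.0):
    """Run the query against a live Orion server (requires network access)."""
    from orionsdk import SwisClient  # pip install orionsdk
    swis = SwisClient(host, user, password)
    return swis.query(srm_capacity_query(threshold))["results"]
```

The query string is built separately from the network call so thresholds can be validated or logged before hitting the server.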

Platforms Supported: NVIDIA Magnum IO
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported: SolarWinds Storage Resource Monitor
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience: NVIDIA Magnum IO
AI researchers, data scientists, and HPC developers needing a tool to eliminate I/O bottlenecks in multi-GPU, multi-node environments

Audience: SolarWinds Storage Resource Monitor
Global businesses and large enterprises

Support: NVIDIA Magnum IO
Phone Support
24/7 Live Support
Online

Support: SolarWinds Storage Resource Monitor
Phone Support
24/7 Live Support
Online

API: NVIDIA Magnum IO
Offers API

API: SolarWinds Storage Resource Monitor
Offers API

Pricing: NVIDIA Magnum IO
No information available.
Free Version
Free Trial

Pricing: SolarWinds Storage Resource Monitor
No information available.
Free Version
Free Trial

Training: NVIDIA Magnum IO
Documentation
Webinars
Live Online
In Person

Training: SolarWinds Storage Resource Monitor
Documentation
Webinars
Live Online
In Person

Company Information: NVIDIA
Founded: 1993
United States
www.nvidia.com/en-us/data-center/magnum-io/

Company Information: SolarWinds
Founded: 2000
United States
www.solarwinds.com
Data Center Management Features
Audit Trail
Behavior-Based Acceleration
Cross Reference System
Device Auto Discovery
Diagnostic Testing
Import / Export Data
JCL Management
Multi-Platform
Multi-User
Power Management
Sarbanes-Oxley Compliance

Integrations: NVIDIA Magnum IO
Apache Spark
CUDA
NVIDIA NetQ
NVIDIA virtual GPU

Integrations: SolarWinds Storage Resource Monitor
SolarWinds Network Performance Monitor (NPM)
SolarWinds Server & Application Monitor
SolarWinds Virtualization Manager
SolarWinds Web Performance Monitor