
About Amazon EC2 Capacity Blocks for ML

Amazon EC2 Capacity Blocks for ML let you reserve accelerated compute instances in Amazon EC2 UltraClusters for your machine learning workloads. The service supports Amazon EC2 P5en and P5e instances (NVIDIA H200 Tensor Core GPUs), P5 instances (NVIDIA H100), and P4d instances (NVIDIA A100), as well as Trn2 and Trn1 instances powered by AWS Trainium. You can reserve these instances for up to six months in cluster sizes of one to 64 instances (up to 512 GPUs or 1,024 Trainium chips), and reservations can be made up to eight weeks in advance. Because the reserved instances are colocated in Amazon EC2 UltraClusters, Capacity Blocks provide low-latency, high-throughput network connectivity for efficient distributed training. The result is predictable access to high-performance compute, so you can plan ML development with confidence, run experiments, build prototypes, and absorb future surges in demand for ML applications.
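
As an illustration of how such a reservation can be made programmatically, here is a minimal sketch using boto3 against the EC2 Capacity Blocks operations (DescribeCapacityBlockOfferings and PurchaseCapacityBlock). It assumes AWS credentials with the relevant EC2 permissions are configured; the instance type, count, duration, date window, and region are illustrative values, not recommendations.

    # Minimal sketch: find and purchase an EC2 Capacity Block with boto3.
    # Assumes configured AWS credentials and permission to call the EC2
    # DescribeCapacityBlockOfferings and PurchaseCapacityBlock operations.
    from datetime import datetime, timezone

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Ask for offerings matching an illustrative request: four p5.48xlarge
    # instances (8x H100 each) for a 24-hour block within a two-week window.
    offerings = ec2.describe_capacity_block_offerings(
        InstanceType="p5.48xlarge",
        InstanceCount=4,
        CapacityDurationHours=24,
        StartDateRange=datetime(2025, 7, 1, tzinfo=timezone.utc),
        EndDateRange=datetime(2025, 7, 14, tzinfo=timezone.utc),
    )["CapacityBlockOfferings"]

    if not offerings:
        raise SystemExit("No Capacity Block offerings matched the request")

    # Pick the cheapest matching offering and purchase it.
    cheapest = min(offerings, key=lambda o: float(o["UpfrontFee"]))
    purchase = ec2.purchase_capacity_block(
        CapacityBlockOfferingId=cheapest["CapacityBlockOfferingId"],
        InstancePlatform="Linux/UNIX",
    )
    reservation = purchase["CapacityReservation"]
    print("Reserved capacity block:", reservation["CapacityReservationId"])

Once the block's start time arrives, instances are typically launched into it by targeting the returned capacity reservation ID with the capacity-block purchase option in RunInstances, or through EKS, ECS, or SageMaker as noted in the Integrations list below.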

About DataCrunch

Up to 8 NVIDIA® H100 80 GB GPUs, each containing 16,896 CUDA cores and 528 Tensor Cores. This is the current flagship silicon from NVIDIA®, unbeaten in raw performance for AI operations. We deploy the SXM5 NVLink module, which offers 3.35 TB/s of memory bandwidth and up to 900 GB/s of P2P bandwidth, paired with fourth-generation AMD EPYC (Genoa) CPUs, up to 384 threads with a boost clock of 3.7 GHz.

For A100 instances we only use the SXM4 NVLink module, which offers a memory bandwidth of over 2 TB/s and up to 600 GB/s of P2P bandwidth, paired with second-generation AMD EPYC (Rome) CPUs, up to 192 threads with a boost clock of 3.3 GHz. The name 8A100.176V is composed as follows: 8x A100 GPUs, 176 CPU core threads, virtualized. Despite having fewer Tensor Cores than the V100, the A100 processes tensor operations faster due to its newer architecture; the V100 configuration uses second-generation AMD EPYC (Rome) CPUs, up to 96 threads with a boost clock of 3.35 GHz.
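
As a quick way to check that a multi-GPU node exposes the hardware described above, the following minimal sketch (not DataCrunch-specific; it assumes a machine with PyTorch installed and CUDA devices visible) lists each GPU and verifies peer-to-peer access between device 0 and the others, which should be reported on NVLink-connected SXM modules.

    # Minimal sketch: inspect the GPUs on a multi-GPU node with PyTorch.
    # Assumes PyTorch is installed and CUDA devices are visible; nothing
    # here is specific to DataCrunch instances.
    import torch

    def inspect_gpus() -> None:
        if not torch.cuda.is_available():
            raise SystemExit("No CUDA devices visible")
        count = torch.cuda.device_count()
        print(f"Visible GPUs: {count}")
        for i in range(count):
            props = torch.cuda.get_device_properties(i)
            mem_gb = props.total_memory / 1024**3
            print(f"  GPU {i}: {props.name}, {mem_gb:.0f} GB, "
                  f"{props.multi_processor_count} SMs")
        # Peer-to-peer access should be available between NVLink-connected GPUs.
        for j in range(1, count):
            ok = torch.cuda.can_device_access_peer(0, j)
            print(f"  P2P GPU0 <-> GPU{j}: {'yes' if ok else 'no'}")

    if __name__ == "__main__":
        inspect_gpus()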

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (Amazon EC2 Capacity Blocks for ML)

Companies in search of a solution to get scalable access to high-performance compute instances for their machine learning training and inference workloads

Audience (DataCrunch)

IT teams searching for a premium dedicated GPU server solution

Support

Phone Support
24/7 Live Support
Online

Support

Phone Support
24/7 Live Support
Online

API

Offers API

API

Offers API

Pricing (Amazon EC2 Capacity Blocks for ML)

No information available.
Free Version
Free Trial

Pricing (DataCrunch)

$3.01 per hour
Free Version
Free Trial

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Training

Documentation
Webinars
Live Online
In Person

Training

Documentation
Webinars
Live Online
In Person

Company Information

Amazon
Founded: 1994
United States
aws.amazon.com/ec2/capacityblocks/

Company Information

DataCrunch
Finland
datacrunch.io

Integrations

AWS Neuron
AWS Nitro System
AWS Trainium
Amazon EC2
Amazon EC2 G5 Instances
Amazon EC2 Inf1 Instances
Amazon EC2 P4 Instances
Amazon EC2 P5 Instances
Amazon EC2 Trn1 Instances
Amazon EC2 Trn2 Instances
Amazon EC2 UltraClusters
Amazon EKS
Amazon Elastic Container Service (Amazon ECS)
Amazon SageMaker
Amazon Web Services (AWS)
Greenovative
PyTorch
TensorFlow
