AWS AI Factories (Amazon) vs. AWS Neuron (Amazon Web Services)

About (AWS AI Factories)
AWS AI Factories is a fully managed solution that embeds high-performance AI infrastructure directly into a customer's own data center. The customer supplies the space and power, and AWS deploys a dedicated, secure AI environment optimized for training and inference. It includes leading AI accelerators (such as AWS Trainium chips or NVIDIA GPUs), low-latency networking, high-performance storage, and integration with AWS AI services such as Amazon SageMaker and Amazon Bedrock, giving immediate access to foundation models and AI tools without separate licensing or contracts. AWS handles deployment, maintenance, and management end to end, eliminating the months-long effort typically required to build comparable infrastructure. Each deployment is isolated and operates like a private AWS Region, meeting strict data sovereignty, compliance, and regulatory requirements; this makes it particularly well suited to sectors that handle sensitive data.
About (AWS Neuron)
AWS Neuron is the SDK for AWS-designed machine learning accelerators. It supports high-performance training on AWS Trainium-based Amazon Elastic Compute Cloud (Amazon EC2) Trn1 instances, and high-performance, low-latency inference on AWS Inferentia-based Amazon EC2 Inf1 instances and AWS Inferentia2-based Amazon EC2 Inf2 instances. With Neuron, you can use popular frameworks such as TensorFlow and PyTorch to train and deploy machine learning (ML) models on Trn1, Inf1, and Inf2 instances with minimal code changes and without lock-in to vendor-specific solutions. The Neuron SDK, which supports the Inferentia and Trainium accelerators, is natively integrated with PyTorch and TensorFlow, so you can keep your existing workflows in these frameworks and get started with only a few lines of code changes. For distributed model training, the Neuron SDK supports libraries such as Megatron-LM and PyTorch Fully Sharded Data Parallel (FSDP).
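A minimal sketch of the "few lines of code changes" described above, assuming PyTorch with the `torch_neuronx` package installed on a Neuron-equipped instance (e.g. Trn1 or Inf2); the model definition and the exact save step are illustrative, not taken from this listing:

```python
import torch
import torch_neuronx  # Neuron's PyTorch integration; requires a Trn1/Inf2 instance

# An ordinary PyTorch model -- nothing Neuron-specific in its definition.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

example_input = torch.rand(1, 128)

# The Neuron-specific change: compile the model for the accelerator.
# In an existing workflow this typically replaces a torch.jit.trace call.
neuron_model = torch_neuronx.trace(model, example_input)

# Inference then runs on the Neuron device through the same call interface.
output = neuron_model(example_input)

# The compiled artifact behaves like a TorchScript module and can be
# saved and reloaded for deployment.
torch.jit.save(neuron_model, "model_neuron.pt")
```

The point of the sketch is that the surrounding training/inference code is unchanged; only the compilation step is swapped in. (No runnable test is included here, since `torch_neuronx.trace` requires Neuron hardware to execute.)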
Platforms Supported (AWS AI Factories)
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook
Platforms Supported (AWS Neuron)
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook
Audience (AWS AI Factories)
Enterprises, regulated industries, and government agencies in search of a solution to build, train, and deploy large models while retaining full control over data location and compliance
Audience (AWS Neuron)
Organizations that need an SDK with a compiler, runtime, and profiling tools for high-performance, cost-effective deep learning acceleration
Support (AWS AI Factories)
Phone Support
24/7 Live Support
Online
Support (AWS Neuron)
Phone Support
24/7 Live Support
Online
API (AWS AI Factories)
Offers API
API (AWS Neuron)
Offers API
Pricing (AWS AI Factories)
No information available.
Free Version
Free Trial
Pricing (AWS Neuron)
No information available.
Free Version
Free Trial
Training (AWS AI Factories)
Documentation
Webinars
Live Online
In Person
Training (AWS Neuron)
Documentation
Webinars
Live Online
In Person
Company Information (Amazon)
Founded: 1994
United States
aws.amazon.com/about-aws/global-infrastructure/ai-factories/
Company Information (Amazon Web Services)
Founded: 2006
United States
aws.amazon.com/machine-learning/neuron/
Integrations (AWS AI Factories)
AWS Trainium
Amazon SageMaker
Amazon Web Services (AWS)
AWS Deep Learning AMIs
AWS Deep Learning Containers
Amazon Bedrock
Amazon EC2
Amazon EC2 Capacity Blocks for ML
Amazon EC2 G5 Instances
Amazon EC2 Inf1 Instances
Integrations (AWS Neuron)
AWS Trainium
Amazon SageMaker
Amazon Web Services (AWS)
AWS Deep Learning AMIs
AWS Deep Learning Containers
Amazon Bedrock
Amazon EC2
Amazon EC2 Capacity Blocks for ML
Amazon EC2 G5 Instances
Amazon EC2 Inf1 Instances