Best Machine Learning Software for Amazon SageMaker - Page 2

Compare the Top Machine Learning Software that integrates with Amazon SageMaker as of August 2025 - Page 2

This is a list of Machine Learning software that integrates with Amazon SageMaker. Use the filters on the left to narrow the results, and view the products that work with Amazon SageMaker below.

  • 1
    Amazon SageMaker Canvas
    Amazon SageMaker Canvas expands access to machine learning (ML) by giving business analysts a visual, point-and-click interface for generating accurate ML predictions on their own, without any ML experience and without writing a single line of code. Use it to connect, prepare, analyze, and explore data; automatically build ML models; run what-if analyses; and generate single or bulk predictions in a few clicks. Canvas boosts collaboration between business analysts and data scientists by letting them share, review, and update ML models across tools, and it can import ML models from anywhere and generate predictions on them directly. With SageMaker Canvas, you import data from disparate sources, select the values you want to predict, automatically prepare and explore the data, quickly build ML models, and then analyze the models and generate accurate predictions.
  • 2
    Amazon SageMaker Edge
    The SageMaker Edge Agent allows you to capture data and metadata based on triggers that you set, so you can retrain existing models with real-world data or build new models. This data can also be used for your own analyses, such as model drift analysis. Three deployment options are offered: AWS IoT Greengrass v2 (~100 MB), a fully integrated AWS IoT deployment mechanism; a smaller built-in deployment mechanism within SageMaker Edge, for customers with limited device capacity; and pluggable support for third-party mechanisms, for customers who already have a preferred one. Amazon SageMaker Edge Manager provides a dashboard in the console so you can understand the performance of models running on each device across your fleet, visually assess overall fleet health, and identify problematic models.
  • 3
    Amazon SageMaker Clarify
    Amazon SageMaker Clarify provides machine learning (ML) developers with purpose-built tools to gain greater insight into their ML training data and models. SageMaker Clarify detects and measures potential bias using a variety of metrics so that ML developers can address it and explain model predictions. It can detect potential bias during data preparation, after model training, and in your deployed model. For instance, you can check for bias related to age in your dataset or in your trained model and receive a detailed report that quantifies different types of potential bias. SageMaker Clarify also includes feature importance scores that help you explain how your model makes predictions, and it produces explainability reports in bulk or in real time through online explainability. You can use these reports to support customer or internal presentations or to identify potential issues with your model.
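    To make the age-bias check described above concrete, here is a minimal sketch using the clarify module of the SageMaker Python SDK; the role ARN, S3 paths, and column names are hypothetical placeholders.

      from sagemaker import Session, clarify

      session = Session()
      processor = clarify.SageMakerClarifyProcessor(
          role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
          instance_count=1,
          instance_type="ml.m5.xlarge",
          sagemaker_session=session,
      )

      # Describe the dataset: where it lives, where reports go, what the label is
      data_config = clarify.DataConfig(
          s3_data_input_path="s3://my-bucket/loans/train.csv",  # hypothetical path
          s3_output_path="s3://my-bucket/clarify-reports",
          label="approved",
          headers=["age", "income", "approved"],
          dataset_type="text/csv",
      )

      # Check whether outcomes differ for applicants over 40 (example facet)
      bias_config = clarify.BiasConfig(
          label_values_or_threshold=[1],
          facet_name="age",
          facet_values_or_threshold=[40],
      )

      # Runs a processing job and writes a detailed pre-training bias report to S3
      processor.run_pre_training_bias(
          data_config=data_config,
          data_bias_config=bias_config,
          methods="all",
      )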
  • 4
    Amazon SageMaker JumpStart
    Amazon SageMaker JumpStart is a machine learning (ML) hub that can help you accelerate your ML journey. With SageMaker JumpStart, you can access built-in algorithms with pretrained models from model hubs, pretrained foundation models for tasks such as article summarization and image generation, and prebuilt solutions for common use cases. In addition, you can share ML artifacts, including ML models and notebooks, within your organization to accelerate ML model building and deployment. SageMaker JumpStart provides hundreds of built-in algorithms with pretrained models from model hubs, including TensorFlow Hub, PyTorch Hub, Hugging Face, and MXNet GluonCV. You can also access built-in algorithms using the SageMaker Python SDK. Built-in algorithms cover common ML tasks, such as data classification (image, text, tabular) and sentiment analysis.
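    As a sketch of the SDK access path mentioned above, the SageMaker Python SDK can deploy a JumpStart model in a few lines. The model ID, role ARN, and request payload below are illustrative; both the ID and the payload schema vary by model.

      from sagemaker.jumpstart.model import JumpStartModel

      # Illustrative model ID; browse available IDs in SageMaker Studio
      model = JumpStartModel(
          model_id="huggingface-text2text-flan-t5-base",
          role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
      )
      predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")

      # Payload schema depends on the chosen model
      print(predictor.predict({"text_inputs": "Summarize: SageMaker JumpStart is ..."}))
      predictor.delete_endpoint()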
  • 5
    Amazon SageMaker Autopilot
    Amazon SageMaker Autopilot eliminates the heavy lifting of building ML models. You simply provide a tabular dataset and select the target column to predict, and SageMaker Autopilot automatically explores different solutions to find the best model. You can then deploy the model to production with just one click or iterate on the recommended solutions to further improve its quality. You can use SageMaker Autopilot even when you have missing data: it automatically fills in missing values, provides statistical insights about the columns in your dataset, and automatically extracts information from non-numeric columns, such as date and time information from timestamps.
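    A minimal sketch of this flow with the SageMaker Python SDK's AutoML class follows; the role ARN, S3 path, and target column are hypothetical.

      from sagemaker.automl.automl import AutoML

      automl = AutoML(
          role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
          target_attribute_name="churn",  # the column Autopilot should predict
          max_candidates=10,
      )

      # Explores candidate pipelines and models on the tabular dataset
      automl.fit(inputs="s3://my-bucket/churn/train.csv", wait=True)

      print(automl.best_candidate()["CandidateName"])

      # One call deploys the best model to a real-time endpoint
      predictor = automl.deploy(initial_instance_count=1, instance_type="ml.m5.large")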
  • 6
    Amazon SageMaker Model Monitor
    With Amazon SageMaker Model Monitor, you can select the data you would like to monitor and analyze without writing any code. SageMaker Model Monitor lets you select data from a menu of options, such as prediction output, and captures metadata such as the timestamp, model name, and endpoint so you can analyze model predictions based on that metadata. For high-volume real-time predictions, you can specify the data-capture sampling rate as a percentage of overall traffic, and the data is stored in your own Amazon S3 bucket. You can also encrypt this data, configure fine-grained security, define data retention policies, and implement access control mechanisms for secure access. SageMaker Model Monitor offers built-in analysis in the form of statistical rules to detect drift in data and model quality, and you can also write custom rules and specify thresholds for each rule.
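    The capture-sampling and baseline behavior described above maps onto the SageMaker Python SDK roughly as follows; the bucket paths, role ARN, and endpoint name are hypothetical.

      from sagemaker.model_monitor import (
          CronExpressionGenerator,
          DataCaptureConfig,
          DefaultModelMonitor,
      )
      from sagemaker.model_monitor.dataset_format import DatasetFormat

      # Capture 20% of live traffic into your own S3 bucket
      # (passed as data_capture_config=... when deploying the endpoint)
      capture_config = DataCaptureConfig(
          enable_capture=True,
          sampling_percentage=20,
          destination_s3_uri="s3://my-bucket/data-capture",  # hypothetical path
      )

      monitor = DefaultModelMonitor(
          role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
          instance_count=1,
          instance_type="ml.m5.xlarge",
      )

      # Compute baseline statistics and constraints from the training data
      monitor.suggest_baseline(
          baseline_dataset="s3://my-bucket/train.csv",
          dataset_format=DatasetFormat.csv(header=True),
          output_s3_uri="s3://my-bucket/baseline",
      )

      # Check captured traffic against the baseline every hour
      monitor.create_monitoring_schedule(
          monitor_schedule_name="hourly-drift-check",
          endpoint_input="my-endpoint",  # hypothetical endpoint name
          output_s3_uri="s3://my-bucket/monitor-reports",
          statistics=monitor.baseline_statistics(),
          constraints=monitor.suggested_constraints(),
          schedule_cron_expression=CronExpressionGenerator.hourly(),
      )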
  • 7
    Amazon SageMaker Pipelines
    Using Amazon SageMaker Pipelines, you can create ML workflows with an easy-to-use Python SDK, then visualize and manage them in Amazon SageMaker Studio. You can be more efficient and scale faster by storing and reusing the workflow steps you create in SageMaker Pipelines, and built-in templates for building, testing, registering, and deploying models let you get started with CI/CD in your ML environment quickly. Many customers have hundreds of workflows, each with a different version of the same model. With the SageMaker Pipelines model registry, you can track these versions in a central repository, where it is easy to choose the right model for deployment based on your business requirements. You can use SageMaker Studio to browse and discover models, or access them through the SageMaker Python SDK.
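    A minimal sketch of a one-step pipeline with the Python SDK follows; the role ARN, bucket paths, and pipeline name are hypothetical, and the XGBoost training step stands in for any estimator.

      import sagemaker
      from sagemaker.estimator import Estimator
      from sagemaker.inputs import TrainingInput
      from sagemaker.workflow.parameters import ParameterString
      from sagemaker.workflow.pipeline import Pipeline
      from sagemaker.workflow.steps import TrainingStep

      role = "arn:aws:iam::123456789012:role/SageMakerRole"  # hypothetical role

      # Pipeline parameter, overridable per execution
      input_data = ParameterString(name="InputData", default_value="s3://my-bucket/train")

      estimator = Estimator(
          image_uri=sagemaker.image_uris.retrieve("xgboost", region="us-east-1", version="1.7-1"),
          role=role,
          instance_count=1,
          instance_type="ml.m5.xlarge",
          output_path="s3://my-bucket/models",  # hypothetical path
      )

      train_step = TrainingStep(
          name="TrainModel",
          estimator=estimator,
          inputs={"train": TrainingInput(s3_data=input_data, content_type="text/csv")},
      )

      pipeline = Pipeline(name="demo-pipeline", parameters=[input_data], steps=[train_step])
      pipeline.upsert(role_arn=role)  # create or update the pipeline definition
      execution = pipeline.start()    # run it; visualize progress in SageMaker Studio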
  • 8
    Amazon SageMaker Model Deployment
    Amazon SageMaker makes it easy to deploy ML models to make predictions (also known as inference) at the best price-performance for any use case. It provides a broad selection of ML infrastructure and model deployment options to help meet all your ML inference needs. It is a fully managed service and integrates with MLOps tools, so you can scale your model deployment, reduce inference costs, manage models more effectively in production, and reduce operational burden. From low latency (a few milliseconds) and high throughput (hundreds of thousands of requests per second) to long-running inference for use cases such as natural language processing and computer vision, you can use Amazon SageMaker for all your inference needs.
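    As a sketch of this range of deployment options, the Python SDK exposes both real-time endpoints and serverless inference through a single deploy call; the container image, artifact path, and role ARN below are hypothetical.

      import sagemaker
      from sagemaker.model import Model
      from sagemaker.serverless import ServerlessInferenceConfig

      model = Model(
          image_uri=sagemaker.image_uris.retrieve("xgboost", region="us-east-1", version="1.7-1"),
          model_data="s3://my-bucket/models/model.tar.gz",  # hypothetical artifact
          role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
      )

      # Real-time endpoint for low-latency, high-throughput traffic
      predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")

      # Alternative: a serverless endpoint for intermittent or unpredictable traffic
      # predictor = model.deploy(
      #     serverless_inference_config=ServerlessInferenceConfig(
      #         memory_size_in_mb=2048, max_concurrency=5
      #     )
      # )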
  • 9
    Robust Intelligence
    The Robust Intelligence Platform integrates seamlessly into your ML lifecycle to eliminate model failures. The platform detects your model's vulnerabilities, prevents aberrant data from entering your AI system, and detects statistical data issues such as drift. At the core of this test-based approach is a single test: each test measures the model's robustness to a specific type of production failure. Stress Testing runs hundreds of these tests to measure model production readiness, and the results are used to auto-configure a custom AI Firewall that protects the model against the specific forms of failure to which it is susceptible. Finally, Continuous Testing runs these tests during production, providing automated root cause analysis for any test failure. Using all three elements of the Robust Intelligence platform together helps ensure ML Integrity.
  • 10
    AWS Neuron
    AWS Neuron is the SDK for running deep learning workloads on AWS Trainium and AWS Inferentia accelerators. It supports high-performance training on AWS Trainium-based Amazon Elastic Compute Cloud (Amazon EC2) Trn1 instances. For model deployment, it supports high-performance, low-latency inference on AWS Inferentia-based Amazon EC2 Inf1 instances and AWS Inferentia2-based Amazon EC2 Inf2 instances. With Neuron, you can use popular frameworks such as TensorFlow and PyTorch to optimally train and deploy machine learning (ML) models on Amazon EC2 Trn1, Inf1, and Inf2 instances with minimal code changes and without being tied to vendor-specific solutions. The Neuron SDK is natively integrated with PyTorch and TensorFlow, so you can continue using your existing workflows in these frameworks and get started with only a few lines of code changes. For distributed model training, the Neuron SDK supports libraries such as Megatron-LM and PyTorch Fully Sharded Data Parallel (FSDP).
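    A minimal sketch of those "few lines of code changes" for PyTorch inference follows; it assumes an Inf2 or Trn1 instance with Neuron's torch-neuronx package installed, and uses an untrained ResNet-50 purely as a compile example.

      import torch
      import torch_neuronx  # Neuron's native PyTorch integration
      from torchvision.models import resnet50

      model = resnet50(weights=None).eval()
      example = torch.rand(1, 3, 224, 224)

      # Compile the model ahead of time for NeuronCores; everything else
      # in the workflow stays standard PyTorch
      neuron_model = torch_neuronx.trace(model, example)
      neuron_model.save("resnet50_neuron.pt")

      loaded = torch.jit.load("resnet50_neuron.pt")
      print(loaded(example).shape)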
  • 11
    WhyLabs
    Enable observability to detect data and ML issues faster, deliver continuous improvements, and avoid costly incidents. Start with reliable data: continuously monitor any data in motion for data quality issues. Pinpoint data and model drift, identify training-serving skew, and proactively retrain. Detect model accuracy degradation by continuously monitoring key performance metrics. Identify risky behavior in generative AI applications, prevent data leakage, and protect these applications from malicious actions. Improve AI applications through user feedback, monitoring, and cross-team collaboration. Integrate in minutes with purpose-built agents that analyze raw data without moving or duplicating it, ensuring privacy and security. Onboard the WhyLabs SaaS Platform for any use case using the proprietary privacy-preserving integration; the platform is security-approved for healthcare and banking.
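    WhyLabs' open source whylogs library illustrates the integration pattern described above: profiles are computed where the data lives, and only statistical summaries leave the process. A minimal sketch, with a hypothetical CSV path:

      import pandas as pd
      import whylogs as why

      df = pd.read_csv("transactions.csv")  # hypothetical dataset

      # Profile the dataframe locally; raw rows are never moved or duplicated
      results = why.log(df)
      profile_view = results.profile().view()
      print(profile_view.to_pandas())  # per-column counts, types, distributions

      # With WhyLabs credentials configured, ship the profile to the platform:
      # results.writer("whylabs").write()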
  • 12
    Amazon EC2 Capacity Blocks for ML
    Amazon EC2 Capacity Blocks for ML enable you to reserve accelerated compute instances in Amazon EC2 UltraClusters for your machine learning workloads. The service supports Amazon EC2 P5en and P5e instances powered by NVIDIA H200 Tensor Core GPUs, P5 instances powered by NVIDIA H100 GPUs, and P4d instances powered by NVIDIA A100 GPUs, as well as Trn2 and Trn1 instances powered by AWS Trainium. You can reserve these instances for up to six months in cluster sizes ranging from one to 64 instances (512 GPUs or 1,024 Trainium chips), providing flexibility for various ML workloads, and reservations can be made up to eight weeks in advance. Because the instances are colocated in Amazon EC2 UltraClusters, Capacity Blocks offer low-latency, high-throughput network connectivity for efficient distributed training. This setup ensures predictable access to high-performance computing resources, allowing you to plan ML development confidently, run experiments, build prototypes, and accommodate future surges in demand for ML applications.
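    As a sketch of how a reservation can be made programmatically, assuming the EC2 Capacity Blocks calls exposed in boto3; the instance type, count, and duration below are illustrative.

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Search upcoming offerings for a 4-instance P5 cluster over two weeks
      offerings = ec2.describe_capacity_block_offerings(
          InstanceType="p5.48xlarge",
          InstanceCount=4,
          CapacityDurationHours=14 * 24,
      )

      # Purchase the first matching offering; capacity is billed up front
      offering_id = offerings["CapacityBlockOfferings"][0]["CapacityBlockOfferingId"]
      ec2.purchase_capacity_block(
          CapacityBlockOfferingId=offering_id,
          InstancePlatform="Linux/UNIX",
      )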
  • 13
    Amazon EC2 UltraClusters
    Amazon EC2 UltraClusters enable you to scale to thousands of GPUs or purpose-built machine learning accelerators, such as AWS Trainium, providing on-demand access to supercomputing-class performance. They democratize supercomputing for ML, generative AI, and high-performance computing developers through a simple pay-as-you-go model, without setup or maintenance costs. UltraClusters consist of thousands of accelerated EC2 instances colocated in a given AWS Availability Zone, interconnected with Elastic Fabric Adapter (EFA) networking in a petabit-scale nonblocking network. This architecture offers high-performance networking and access to Amazon FSx for Lustre, fully managed shared storage built on a high-performance parallel file system, enabling rapid processing of massive datasets with sub-millisecond latencies. EC2 UltraClusters provide scale-out capabilities for distributed ML training and tightly coupled HPC workloads, reducing training times.
  • 14
    Amazon EC2 Trn2 Instances
    Amazon EC2 Trn2 instances, powered by AWS Trainium2 chips, are purpose-built for high-performance deep learning training of generative AI models, including large language models and diffusion models. They offer up to 50% cost-to-train savings over comparable Amazon EC2 instances. Trn2 instances support up to 16 Trainium2 accelerators, providing up to 3 petaflops of FP16/BF16 compute power and 512 GB of high-bandwidth memory. To facilitate efficient data and model parallelism, Trn2 instances feature NeuronLink, a high-speed, nonblocking interconnect, and support up to 1600 Gbps of second-generation Elastic Fabric Adapter (EFAv2) network bandwidth. They are deployed in EC2 UltraClusters, enabling scaling up to 30,000 Trainium2 chips interconnected with a nonblocking petabit-scale network, delivering 6 exaflops of compute performance. The AWS Neuron SDK integrates natively with popular machine learning frameworks like PyTorch and TensorFlow.
  • 15
    CognitiveScale Cortex AI
    Developing AI solutions requires an engineering approach that is resilient, open and repeatable to ensure necessary quality and agility is achieved. Until today these efforts are missing the foundation to address these challenges amid a sea of point tools and fast changing models and data. Collaborative developer platform for automating development and control of AI applications across multiple personas. Derive hyper-detailed customer profiles from enterprise data to predict behaviors in real-time and at scale. Generate AI-powered models designed to continuously learn and achieve clearly defined business outcomes. Enables organizations to explain and prove compliance with applicable rules and regulations. CognitiveScale's Cortex AI Platform addresses enterprise AI use cases through modular platform offerings. Our customers consume and leverage its capabilities as microservices within their enterprise AI initiatives.