Best Data Management Software for Amazon Web Services (AWS) - Page 13

Compare the Top Data Management Software that integrates with Amazon Web Services (AWS) as of November 2025 - Page 13

This is a list of Data Management software that integrates with Amazon Web Services (AWS). The products below all offer integrations with Amazon Web Services (AWS).

  • 1
    1Data

    Sincera

    1Data is a modern, flexible data management platform that enables businesses to integrate, analyze, and act on data through low-code/no-code tools. It helps break down data silos, ingesting data from multiple sources via connectors (databases, APIs, files, messages), and lets users define business rules and workflows to automate analysis and trigger actions. Components include connectors, rules, workflows, a data dictionary, exception management, and metrics. Users can build simple or complex flows of data/events/rules to achieve specific outcomes, track what data exists and how it’s used via the data dictionary, and monitor, manage, or resolve data exceptions. 1Data also supports use cases like ETL/data pipelines, data lakes/warehouses, data standardization, reconciliation and cleansing, and data distribution.
  • 2
    Teleskope

    Teleskope is a modern data protection platform designed to automate data security, privacy, and compliance at enterprise scale. It continuously discovers and catalogs data across cloud, SaaS, structured, and unstructured sources, classifying over 150 entity types such as PII, PHI, PCI, and secrets with high precision and high throughput. Once sensitive data is identified, Teleskope enables automated remediation, such as redaction, masking, encryption, deletion, and access correction, while integrating into developer workflows via its API-first model and supporting deployment as SaaS, managed, or self-hosted. The platform also builds in prevention capabilities, embedding into SDLC pipelines to stop sensitive data from entering production systems, supporting safe AI adoption (without using unchecked sensitive data), handling data subject rights requests (DSARs), and mapping findings to regulatory standards (GDPR, CPRA, PCI-DSS, ISO, NIST, CIS).
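Teleskope's classifiers are proprietary; purely as an illustration of the pattern-based entity classification the paragraph describes, here is a minimal sketch in Python (the three entity patterns are simplified, hypothetical stand-ins for the 150+ types Teleskope detects):

```python
import re

# Hypothetical, simplified patterns -- real classifiers add validation
# (e.g. Luhn checks) and contextual scoring on top of raw matching.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> dict[str, list[str]]:
    """Return every match found in the text, grouped by entity type."""
    findings = {}
    for entity, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[entity] = matches
    return findings

sample = "Contact jane@example.com, SSN 123-45-6789."
print(classify(sample))
# → {'EMAIL': ['jane@example.com'], 'US_SSN': ['123-45-6789']}
```

A regex pass like this is only a first step; production scanners layer checksum validation and surrounding-context analysis on top to keep false positives down at the throughput the vendor claims.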
  • 3
    Decision Moments
    Mindtree Decision Moments is the first data analytics platform to apply continuous learning algorithms to large data pools. Using this innovative sense-and-respond system, companies can uncover compelling insights that improve over time and create more value from their digital transformation. Decision Moments is an agile and customizable data intelligence platform that simplifies technological complexity by easily adapting to fit the requirements of your organization’s existing data analytics investment. And it’s also flexible enough to modify in response to changes in the market, technologies or business needs. To gain the full value and cost savings of a data analytics platform, Decision Moments is powered by Microsoft Azure services, including the Cortana Intelligence Suite, in a cloud-native solution. Mindtree’s Decision Moments provides your key decision makers with the platform they need to make sense of large amounts of data from multiple sources.
  • 4
    Unravel

    Unravel Data

    Unravel makes data work anywhere: on Azure, AWS, GCP, or in your own data center, optimizing performance, automating troubleshooting, and keeping costs in check. Unravel helps you monitor, manage, and improve your data pipelines in the cloud and on-premises to drive more reliable performance in the applications that power your business. Get a unified view of your entire data stack. Unravel collects performance data from every platform, system, and application on any cloud, then uses agentless technologies and machine learning to model your data pipelines from end to end. Explore, correlate, and analyze everything in your modern data and cloud environment. Unravel's data model reveals dependencies, issues, and opportunities: how apps and resources are being used, and what's working and what's not. Don't just monitor performance; quickly troubleshoot and rapidly remediate issues. Leverage AI-powered recommendations to automate performance improvements and lower costs.
  • 5
    Cazena

    Cazena's Instant Data Lake accelerates time to analytics and AI/ML from months to minutes. Powered by its patented automated data platform, Cazena delivers the first SaaS experience for data lakes, with zero operations required. Enterprises need a data lake that easily supports all of their data and tools for analytics, machine learning, and AI. To be effective, a data lake must offer secure data ingestion, flexible data storage, access and identity management, tool integration, optimization, and more. Cloud data lakes are complicated to build yourself, which is why they require expensive teams. Cazena's Instant Cloud Data Lakes are turnkey and instantly production-ready for secure data ingest, storage, and analytics. Everything is automated and supported on Cazena's SaaS Platform, with continuous Ops and self-service access via the Cazena SaaS Console.
  • 6
    Denodo

    Denodo Technologies

    The core technology to enable modern data integration and data management solutions. Quickly connect disparate structured and unstructured sources. Catalog your entire data ecosystem. Data stays in the sources and is accessed on demand, with no need to create another copy. Build data models that suit the needs of the consumer, even across multiple sources. Hide the complexity of your back-end technologies from the end users. The virtual model can be secured and consumed using standard SQL and other formats like REST, SOAP, and OData. Easy access to all types of data. Full data integration and data modeling capabilities. Active Data Catalog and self-service capabilities for data and metadata discovery and data preparation. Full data security and data governance capabilities. Fast, intelligent execution of data queries. Real-time data delivery in any format. Ability to create data marketplaces. Decoupling of business applications from data systems to facilitate data-driven strategies.
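Denodo's virtualization engine is proprietary; as a toy illustration of the virtual-model idea described above, the sketch below uses SQLite as a stand-in: two "sources" keep their data where it is, and consumers query a single view that joins them on demand (all table and column names are hypothetical):

```python
import sqlite3

# Two separate "sources": the main database plays the CRM, an
# attached in-memory database plays the billing system.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS billing")

conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE billing.invoices (customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])
conn.executemany("INSERT INTO billing.invoices VALUES (?, ?)",
                 [(1, 120.0), (1, 80.0), (2, 50.0)])

# The "virtual model": a view joining both sources on demand; no data
# is copied. (TEMP so SQLite allows it to span attached databases.)
conn.execute("""
    CREATE TEMP VIEW customer_totals AS
    SELECT c.name, SUM(i.amount) AS total
    FROM customers c JOIN billing.invoices i ON i.customer_id = c.id
    GROUP BY c.name
""")

print(conn.execute("SELECT * FROM customer_totals ORDER BY name").fetchall())
# → [('Acme', 200.0), ('Globex', 50.0)]
```

The consumer only ever sees `customer_totals`; which back-end holds which table is hidden behind the view, which is the decoupling the paragraph describes.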
  • 7
    Imply

    Imply is a real-time analytics platform built on Apache Druid, designed to handle large-scale, high-performance OLAP (Online Analytical Processing) workloads. It offers real-time data ingestion, fast query performance, and the ability to perform complex analytical queries on massive datasets with low latency. Imply is tailored for organizations that need interactive analytics, real-time dashboards, and data-driven decision-making at scale. It provides a user-friendly interface for data exploration, along with advanced features such as multi-tenancy, fine-grained access controls, and operational insights. With its distributed architecture and scalability, Imply is well-suited for use cases in streaming data analytics, business intelligence, and real-time monitoring across industries.
  • 8
    Commvault HyperScale X
    Accelerate hybrid cloud adoption, scale-out as needed, and manage data workloads from a single intuitive platform. An intuitive scale-out solution that’s fully integrated with Commvault’s Intelligent Data Management platform. Accelerate your digital transformation journey with unmatched scalability, security, and resiliency. Simple, flexible data protection for all workloads including containers, virtual, and databases. Built-in resiliency ensures data availability during concurrent hardware failures. Data reuse via copy data management that provides instant recovery of VMs and live production copies for DevOps and testing. High-performance backup and recovery with automatic load balancing, enhanced RPO, and reduced RTO. Cost-optimized cloud data mobility to move data to, from, within, and between clouds. Disaster recovery testing of replicas directly from the hardware.
  • 9
    DynaCenter
    Race Migration makes data migration easy by automating the migration of server workloads between dissimilar physical, virtual, and cloud platforms. Race Migration's DynaCenter cloud migration tool automatically migrates your existing physical or virtual servers to almost any physical, virtual or cloud platform. Our unique method is faster and more cost-effective than other migration solutions, with no performance impact or risk of vendor lock-in. As a selected Amazon Web Services (AWS) partner, DynaCenter offers simplified installation and configuration through the AWS marketplace to both Virtual Private Cloud (VPC) and the GovCloud. Reduce the inefficiency and risk of manual migrations. DynaCenter offers fully automated, unattended application migration to the cloud. Custom transformation allows you to automate target environment changes or software installs to reduce the time it takes to configure your cloud environments.
  • 10
    ArangoDB

    Natively store data for graph, document and search needs. Utilize feature-rich access with one query language. Map data natively to the database and access it with the best patterns for the job – traversals, joins, search, ranking, geospatial, aggregations – you name it. Polyglot persistence without the costs. Easily design, scale and adapt your architectures to changing needs and with much less effort. Combine the flexibility of JSON with semantic search and graph technology for next generation feature extraction even for large datasets.
  • 11
    Claravine

    Claravine is redefining data integrity for the global enterprise. The Data Standards Cloud makes it easy for teams to standardize, connect, and control data collaboratively, across the organization. Leading brands use Claravine to take greater ownership and control of their data from the start, for better decisions, stickier consumer experiences, and increased ROI.
  • 12
    Talend Data Catalog
    Talend Data Catalog gives your organization a single, secure point of control for your data. With robust tools for search and discovery, and connectors to extract metadata from virtually any data source, Data Catalog makes it easy to protect your data, govern your analytics, manage data pipelines, and accelerate your ETL processes. Data Catalog automatically crawls, profiles, organizes, links, and enriches all your metadata. Up to 80% of the information associated with the data is documented automatically and kept up-to-date through smart relationships and machine learning, continually delivering the most current data to the user. Make data governance a team sport with a secure single point of control where you can collaborate to improve data accessibility, accuracy, and business relevance. Support data privacy and regulatory compliance with intelligent data lineage tracing and compliance tracking.
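Talend's crawler and profiler are far richer, but the kind of automated per-column profiling that feeds catalog metadata can be sketched with a small, hypothetical example (column names and stats chosen for illustration only):

```python
import csv
import io

def profile(csv_text: str) -> dict[str, dict]:
    """Compute simple per-column stats of the kind a catalog crawler
    would record as metadata: row count, null count, distinct count."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    stats = {}
    for col in rows[0].keys():
        values = [r[col] for r in rows]
        non_empty = [v for v in values if v]
        stats[col] = {
            "rows": len(values),
            "nulls": len(values) - len(non_empty),
            "distinct": len(set(non_empty)),
        }
    return stats

data = "id,city\n1,Paris\n2,\n3,Paris\n"
print(profile(data))
# → {'id': {'rows': 3, 'nulls': 0, 'distinct': 3}, 'city': {'rows': 3, 'nulls': 1, 'distinct': 1}}
```

Stats like these are what a catalog keeps "up-to-date" as it re-crawls sources, so users can judge completeness and cardinality before trusting a dataset.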
  • 13
    InfoSum

    InfoSum

    InfoSum

    InfoSum unlocks data's limitless potential. Using patented, privacy-first technology, InfoSum connects customer records between and amongst companies, without ever sharing data. Customers across financial services, content distribution, connected television, eCommerce, gaming, and entertainment all trust InfoSum to seamlessly and compliantly connect their customer data to other partners through privacy-safe, permissioned data networks. There are many applications for InfoSum's technology, from standard 'data-onboarding' to much more sophisticated use cases that allow for the creation of owned identity platforms, the development of new data and advertising products, and the formation of entirely new marketplaces. InfoSum was founded in 2015. The company has multiple patents protecting its invention of the 'non-movement of data.' InfoSum is based in the US, UK, and CE, with offices and customers across Europe and North America. The company is poised for exponential growth.
  • 14
    Skyflow

    Skyflow

    Skyflow

    Skyflow lets you run workflows, logic, and analytics on fully encrypted data. Skyflow leverages multiple encryption and tokenization techniques for optimal security. Manage data residency, access, and policy enforcement, with auditable logs and provenance. Get to compliance in minutes, not weeks. Our trusted infrastructure and simple REST and SQL APIs make it easy. Tokenization for compliance, plus an encrypted data store so you can search, analyze, and use secure data. Run Skyflow in a virtual private cloud you choose. Use it as a secure gateway, a zero trust data store, and more. Replace a difficult-to-maintain patchwork of point solutions with a single cost-effective data vault. Leverage the power of your sensitive data in any workflow or application without ever decrypting the data.
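Skyflow's actual service is consumed through its REST and SQL APIs; purely to illustrate the tokenization-plus-vault concept the paragraph describes (not Skyflow's implementation), here is a toy sketch in Python, with all names hypothetical:

```python
import hashlib
import hmac

class ToyVault:
    """Toy vault-style tokenization: sensitive values are swapped for
    deterministic tokens; originals live only inside the vault, so
    downstream systems handle tokens alone."""

    def __init__(self, key: bytes):
        self._key = key
        self._store = {}  # token -> original value (the "vault")

    def tokenize(self, value: str) -> str:
        # Deterministic: the same input always yields the same token,
        # so joins and analytics still work on tokenized data.
        digest = hmac.new(self._key, value.encode(), hashlib.sha256)
        token = digest.hexdigest()[:16]
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # In a real vault this is gated by access policies and audited.
        return self._store[token]

vault = ToyVault(b"demo-key")
t = vault.tokenize("jane@example.com")
assert vault.tokenize("jane@example.com") == t   # stable token
assert vault.detokenize(t) == "jane@example.com"
```

Deterministic tokens are what make "search, analyze, and use secure data" possible: equality comparisons and joins work without ever exposing the underlying value outside the vault.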
  • 15
    Cortex Data Lake
    Collect, transform and integrate your enterprise’s security data to enable Palo Alto Networks solutions. Radically simplify security operations by collecting, transforming and integrating your enterprise’s security data. Facilitate AI and machine learning with access to rich data at cloud native scale. Significantly improve detection accuracy with trillions of multi-source artifacts. Cortex XDR™ is the industry’s only prevention, detection, and response platform that runs on fully integrated endpoint, network and cloud data. Prisma™ Access protects your applications, remote networks and mobile users in a consistent manner, wherever they are. A cloud-delivered architecture connects all users to all applications, whether they’re at headquarters, branch offices or on the road. The combination of Cortex™ Data Lake and Panorama™ management delivers an economical, cloud-based logging solution for Palo Alto Networks Next-Generation Firewalls. Zero hardware, cloud scale, available anywhere.
  • 16
    GenRocket

    Enterprise synthetic test data solutions. To generate test data that accurately reflects the structure of your application or database, each test data project must be easy to model and maintain as the data model changes throughout the lifecycle of the application. Maintain referential integrity of parent/child/sibling relationships across the data domains within an application database or across multiple databases used by multiple applications. Ensure the consistency and integrity of synthetic data attributes across applications, data sources, and targets. For example, a customer name must always match the same customer ID across multiple transactions simulated by real-time synthetic data generation. Customers want to quickly and accurately create their data model as a test data project. GenRocket offers 10 methods for data model setup: XTS, DDL, Scratchpad, Presets, XSD, CSV, YAML, JSON, Spark Schema, and Salesforce.
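GenRocket's generation engine is proprietary; the referential-integrity idea above (a customer's name always matching the same customer ID across generated transactions) can be illustrated with a small hypothetical sketch:

```python
import random

NAMES = ["Ada", "Grace", "Alan", "Edsger"]  # hypothetical sample names

def generate(num_customers: int, num_txns: int, seed: int = 0):
    """Generate a parent domain (customers) and a child domain
    (transactions) that preserves referential integrity."""
    rng = random.Random(seed)
    # Parent domain: each customer gets a stable id/name pairing.
    customers = [{"id": i + 1, "name": NAMES[i % len(NAMES)]}
                 for i in range(num_customers)]
    # Child domain: every transaction copies name from its parent row,
    # so (id, name) can never drift apart across transactions.
    txns = []
    for t in range(num_txns):
        c = rng.choice(customers)
        txns.append({"txn": t + 1,
                     "customer_id": c["id"],
                     "customer_name": c["name"]})
    return customers, txns

customers, txns = generate(3, 10)
by_id = {c["id"]: c["name"] for c in customers}
assert all(t["customer_name"] == by_id[t["customer_id"]] for t in txns)
```

The point of the sketch is the dependency direction: child rows are always derived from parent rows, never generated independently, which is how parent/child/sibling consistency holds across domains.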
  • 17
    Code Ocean

    The Code Ocean Computational Workbench speeds usability, coding and data tool integration, and DevOps and lifecycle tasks by closing technology gaps with a highly intuitive, ready-to-use user experience. Ready-to-use RStudio, Jupyter, Shiny, Terminal, and Git. Choice of popular languages. Access to any size of data and storage type. Configure and generate Docker environments. One-click access to AWS compute resources. Using the Code Ocean Computational Workbench app panel, researchers share results by generating and publishing easy-to-use, point-and-click web analysis apps for teams of scientists, without any IT support, coding, or use of the command line. Create and deploy interactive analyses that run in standard web browsers: easy to share and collaborate on, reusable, and easy to manage. With an easy-to-use application and repository, researchers can quickly organize, publish, and secure project-based Compute Capsules, data assets, and research results.
  • 18
    OctoData

    SoyHuCe

    OctoData deploys at low cost in cloud hosting and includes personalized support, from defining your needs to using the solution. OctoData is based on innovative open-source technologies and adapts to open up future possibilities. Its Supervisor offers a management interface that lets you quickly capture, store, and exploit a growing quantity and variety of data. With OctoData, prototype and industrialize your massive data recovery solutions in the same environment, including in real time. By exploiting your data, you can obtain precise reports, explore new possibilities, increase your productivity, and improve profitability.
  • 19
    Crux

    Find out why the heavy hitters are using the Crux external data automation platform to scale external data integration, transformation, and observability without increasing headcount. Our cloud-native data integration technology accelerates the ingestion, preparation, observability and ongoing delivery of any external dataset. The result is that we can ensure you get quality data in the right place, in the right format when you need it. Leverage automatic schema detection, delivery schedule inference, and lifecycle management to build pipelines from any external data source quickly. Enhance discoverability throughout your organization through a private catalog of linked and matched data products. Enrich, validate, and transform any dataset to quickly combine it with other data sources and accelerate analytics.
  • 20
    Daft

    Daft is a framework for ETL, analytics, and ML/AI at scale. Its familiar Python dataframe API is built to outperform Spark in performance and ease of use. Daft plugs directly into your ML/AI stack through efficient zero-copy integrations with essential Python libraries such as PyTorch and Ray, and it allows requesting GPUs as a resource for running models. Daft can handle User-Defined Functions (UDFs) on columns, allowing you to apply complex expressions and operations to Python objects with the full flexibility required for ML/AI. Daft runs locally with a lightweight multithreaded backend; when your local machine is no longer sufficient, it scales seamlessly to run out-of-core on a distributed cluster.
  • 21
    ZinkML

    ZinkML Technologies

    ZinkML is a zero-code data science platform designed to address the challenges faced by organizations in leveraging data effectively. By providing a visual and intuitive interface, it eliminates the need for extensive coding expertise, making data science accessible to a broader range of users. ZinkML streamlines the entire data science lifecycle, from data ingestion and preparation to model building, deployment, and monitoring. Users can drag-and-drop components to create complex data pipelines, explore data visually, and build predictive models without writing a single line of code. The platform also offers automated feature engineering, model selection, and hyperparameter tuning, accelerating the model development process. Moreover, ZinkML provides robust collaboration features, enabling teams to work together seamlessly on data science projects. By democratizing data science, we empower companies to extract maximum value from their data and drive better decision-making.
  • 22
    Data Sentinel

    As a business leader, you need to trust your data and be 100% certain that it’s well-governed, compliant, and accurate. Including all data, in all sources, and in all locations, without limitations. Understand your data assets. Audit for risk, compliance, and quality in support of your project. Catalog a complete data inventory across all sources and data types, creating a shared understanding of your data assets. Run a one-time, fast, affordable, and accurate audit of your data. PCI, PII, and PHI audits are fast, accurate, and complete. As a service, with no software to purchase. Measure and audit data quality and data duplication across all of your enterprise data assets, cloud-native and on-premises. Comply with global data privacy regulations at scale. Discover, classify, track, trace and audit privacy compliance. Monitor PII/PCI/PHI data propagation and automate DSAR compliance processes.
  • 23
    Adele

    Adastra

    Adele is an intuitive platform designed to simplify the migration of data pipelines from any legacy system to a target platform. It empowers users with full control over the functional migration process, while its intelligent mapping capabilities offer valuable insights. By reverse-engineering data pipelines, Adele creates data lineage mappings and extracts metadata, enhancing visibility and understanding of data flows.
  • 24
    Syntho

    Syntho typically deploys in the safe environment of our customers, so that (sensitive) data never leaves the customer's safe and trusted environment. Connect to the source data and target environment with our out-of-the-box connectors. Syntho can connect with every leading database and filesystem, supporting 20+ database connectors and 5+ filesystem connectors. Define the type of synthetization you would like to run: realistically mask data or synthesize new values, and automatically detect sensitive data types. Utilize and share the protected data securely, ensuring compliance and privacy are maintained throughout its usage.
  • 25
    AWS Transfer Family
    AWS Transfer Family is a fully managed service that enables businesses to securely transfer files to and from AWS using widely adopted protocols like FTP, SFTP, FTPS, and AS2. It helps organizations migrate and automate their file transfer workflows without the need for complex infrastructure setup. With built-in integration to Amazon S3 and other AWS services, AWS Transfer Family simplifies data management while offering scalability, security, and compliance. The service is ideal for industries that require secure and reliable file transfers, including financial services, healthcare, and manufacturing.
  • 26
    Precisely Connect
    Integrate data seamlessly from legacy systems into next-gen cloud and data platforms with one solution. Connect helps you take control of your data from mainframe to cloud. Integrate data through batch and real-time ingestion for advanced analytics, comprehensive machine learning, and seamless data migration. Connect leverages the expertise Precisely has built over decades as a leader in mainframe sort and IBM i data availability and security to lead the industry in accessing and integrating complex data. Support for a wide range of sources and targets ensures access to all your enterprise data for your most critical business projects, covering all your ELT and CDC needs.