Alternatives to Infor Data Lake

Compare Infor Data Lake alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Infor Data Lake in 2024. Compare features, ratings, user reviews, pricing, and more from Infor Data Lake competitors and alternatives in order to make an informed decision for your business.

  • 1
    Zaloni Arena
    End-to-end DataOps built on an agile platform that improves and safeguards your data assets. Arena is the premier augmented data management platform. Our active data catalog enables self-service data enrichment and consumption to quickly control complex data environments. Customizable workflows increase the accuracy and reliability of every data set. Use machine learning to identify and align master data assets for better data decisioning. Complete lineage with detailed visualizations, alongside masking and tokenization, for superior security. We make data management easy. Arena catalogs your data wherever it is, and our extensible connections enable analytics to happen across your preferred tools. Conquer data sprawl challenges: our software drives business and analytics success while providing the controls and extensibility needed across today’s decentralized, multi-cloud data complexity.
  • 2
    Lentiq
    Lentiq is a collaborative data lake as a service environment that’s built to enable small teams to do big things. Quickly run data science, machine learning and data analysis at scale in the cloud of your choice. With Lentiq, your teams can ingest data in real time and then process, clean and share it. From there, Lentiq makes it possible to build, train and share models internally. Simply put, data teams can collaborate with Lentiq and innovate with no restrictions. Data lakes are storage and processing environments, which provide ML, ETL, schema-on-read querying capabilities and so much more. Are you working on some data science magic? You definitely need a data lake. In the Post-Hadoop era, the big, centralized data lake is a thing of the past. With Lentiq, we use data pools, which are multi-cloud, interconnected mini-data lakes. They work together to give you a stable, secure and fast data science environment.
  • 3
    Huawei Cloud Data Lake Governance Center
    Simplify big data operations and build intelligent knowledge libraries with Data Lake Governance Center (DGC), a one-stop data lake operations platform that manages data design, development, integration, quality, and assets. Build an enterprise-class data lake governance platform with an easy-to-use visual interface. Streamline data lifecycle processes, utilize metrics and analytics, and ensure good governance across your enterprise. Define and monitor data standards, and get real-time alerts. Build data lakes quicker by easily setting up data integrations, models, and cleaning rules, to enable the discovery of new reliable data sources. Maximize the business value of data. With DGC, end-to-end data operations solutions can be designed for scenarios such as smart government, smart taxation, and smart campus. Gain new insights into sensitive data across your entire organization. DGC allows enterprises to define business catalogs, classifications, and terms.
    Starting Price: $428 one-time payment
  • 4
    Informatica Intelligent Data Management Cloud
    Our AI-powered Intelligent Data Platform is the industry's most comprehensive and modular platform. It helps you unleash the value of data across your enterprise—and empowers you to solve your most complex problems. Our platform defines a new standard for enterprise-class data management. We deliver best-in-class products and an integrated platform that unifies them, so you can power your business with intelligent data. Connect to any data from any source—and scale with confidence. You’re backed by a global platform that processes over 15 trillion cloud transactions every month. Future-proof your business with an end-to-end platform that delivers trusted data at scale across data management use cases. Our AI-powered architecture supports integration patterns and allows you to grow and evolve at your own speed. Our solution is modular, microservices-based and API-driven.
  • 5
    Data Lake on AWS
    Many Amazon Web Services (AWS) customers require a data storage and analytics solution that offers more agility and flexibility than traditional data management systems. A data lake is a new and increasingly popular way to store and analyze data because it allows companies to manage multiple data types from a wide variety of sources, and store this data, structured and unstructured, in a centralized repository. The AWS Cloud provides many of the building blocks required to help customers implement a secure, flexible, and cost-effective data lake. These include AWS managed services that help ingest, store, find, process, and analyze both structured and unstructured data. To support our customers as they build data lakes, AWS offers the data lake solution, which is an automated reference implementation that deploys a highly available, cost-effective data lake architecture on the AWS Cloud along with a user-friendly console for searching and requesting datasets.
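    As an illustration of the centralized-repository pattern described above, here is a minimal Python sketch (not part of the AWS data lake solution itself) that lands a structured JSON record and an unstructured document in an S3 bucket using boto3. The bucket name and prefix layout are hypothetical.

    ```python
    # Minimal sketch: landing structured and unstructured objects in a
    # centralized S3 data lake bucket with boto3. Bucket name and prefixes
    # are illustrative assumptions, not part of the AWS solution.
    import json
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-data-lake-raw"  # hypothetical bucket

    # Structured record (e.g., an order event) stored as JSON under a dated prefix.
    order = {"order_id": "1001", "amount": 42.50, "currency": "USD"}
    s3.put_object(
        Bucket=BUCKET,
        Key="raw/orders/dt=2024-01-15/order-1001.json",
        Body=json.dumps(order).encode("utf-8"),
        ServerSideEncryption="AES256",  # encrypt at rest
    )

    # Unstructured asset (e.g., a scanned invoice) stored alongside it.
    with open("invoice-1001.pdf", "rb") as f:
        s3.put_object(
            Bucket=BUCKET,
            Key="raw/invoices/dt=2024-01-15/invoice-1001.pdf",
            Body=f,
            ServerSideEncryption="AES256",
        )
    ```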
  • 6
    Kylo (Teradata)
    Kylo is an open source, enterprise-ready data lake management software platform for self-service data ingest and data preparation, with integrated metadata management, governance, security, and best practices inspired by Think Big's 150+ big data implementation projects. Self-service data ingest with data cleansing, validation, and automatic profiling. Wrangle data with visual SQL and interactive transforms through a simple user interface. Search and explore data and metadata, view lineage, and profile statistics. Monitor the health of feeds and services in the data lake. Track SLAs and troubleshoot performance. Design batch or streaming pipeline templates in Apache NiFi and register them with Kylo to enable user self-service. Organizations can expend significant engineering effort moving data into Hadoop yet struggle to maintain governance and data quality. Kylo dramatically simplifies data ingest by shifting ingest to data owners through a simple guided UI.
  • 7
    AWS Lake Formation
    AWS Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores all your data, both in its original form and prepared for analysis. A data lake lets you break down data silos and combine different types of analytics to gain insights and guide better business decisions. Setting up and managing data lakes today involves a lot of manual, complicated, and time-consuming tasks. This work includes loading data from diverse sources, monitoring those data flows, setting up partitions, turning on encryption and managing keys, defining transformation jobs and monitoring their operation, reorganizing data into a columnar format, deduplicating redundant data, and matching linked records. Once data has been loaded into the data lake, you need to grant fine-grained access to datasets, and audit access over time across a wide range of analytics and machine learning (ML) tools and services.
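    To make the fine-grained access idea concrete, the following hedged Python sketch grants an IAM role column-level SELECT on a cataloged table through the Lake Formation API in boto3. The database, table, column names, and role ARN are hypothetical.

    ```python
    # Minimal sketch: granting column-level SELECT access on a data lake table
    # with AWS Lake Formation via boto3. Database, table, and role ARN are
    # hypothetical placeholders.
    import boto3

    lf = boto3.client("lakeformation")

    lf.grant_permissions(
        Principal={
            "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/AnalystRole"
        },
        Resource={
            "TableWithColumns": {
                "DatabaseName": "sales_db",
                "Name": "orders",
                "ColumnNames": ["order_id", "order_date", "amount"],  # no PII columns
            }
        },
        Permissions=["SELECT"],
    )
    ```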
  • 8
    Cloudera
    Manage and secure the data lifecycle from the Edge to AI in any cloud or data center. Operates across all major public clouds and the private cloud with a public cloud experience everywhere. Integrates data management and analytic experiences across the data lifecycle for data anywhere. Delivers security, compliance, migration, and metadata management across all environments. Open source, open integrations, extensible, & open to multiple data stores and compute architectures. Deliver easier, faster, and safer self-service analytics experiences. Provide self-service access to integrated, multi-function analytics on centrally managed and secured business data while deploying a consistent experience anywhere—on premises or in hybrid and multi-cloud. Enjoy consistent data security, governance, lineage, and control, while deploying the powerful, easy-to-use cloud analytics experiences business users require and eliminating their need for shadow IT solutions.
  • 9
    Alation
    Alation is the first company to bring a data catalog to market. It radically improves how people find, understand, trust, use, and reuse data. Alation pioneered active, non-invasive data governance, which supports both data democratization and compliance at scale, so people have the data they need alongside guidance on how to use it correctly. By combining human insight with AI and machine learning, Alation tackles the toughest challenges in data today. More than 350 enterprises use Alation to make confident, data-driven decisions. American Family Insurance, Exelon, Munich Re, and Pfizer are all proud customers.
  • 10
    NewEvol (Sattrix Software Solutions)
    NewEvol is a technologically advanced product suite that uses data science and advanced analytics to identify abnormalities in the data itself. Supported by visualization, rule-based alerting, automation, and responses, NewEvol becomes a compelling proposition for any small to large enterprise. Machine learning (ML) and security intelligence feeds make NewEvol a more robust system to cater to challenging business demands. NewEvol Data Lake is super easy to deploy and manage. You don’t require a team of expert data administrators. As your company’s data needs grow, it automatically scales and reallocates resources accordingly. NewEvol Data Lake has extensive data ingestion to perform enrichment across multiple sources. It helps you ingest data in multiple formats such as delimited, JSON, XML, PCAP, Syslog, etc. It offers enrichment with the help of a best-of-breed, contextually aware event analytics model.
  • 11
    Cortex Data Lake
    Collect, transform and integrate your enterprise’s security data to enable Palo Alto Networks solutions. Radically simplify security operations by collecting, transforming and integrating your enterprise’s security data. Facilitate AI and machine learning with access to rich data at cloud native scale. Significantly improve detection accuracy with trillions of multi-source artifacts. Cortex XDR™ is the industry’s only prevention, detection, and response platform that runs on fully integrated endpoint, network and cloud data. Prisma™ Access protects your applications, remote networks and mobile users in a consistent manner, wherever they are. A cloud-delivered architecture connects all users to all applications, whether they’re at headquarters, branch offices or on the road. The combination of Cortex™ Data Lake and Panorama™ management delivers an economical, cloud-based logging solution for Palo Alto Networks Next-Generation Firewalls. Zero hardware, cloud scale, available anywhere.
  • 12
    Archon Data Store (Platform 3 Solutions)
    Archon Data Store™ is a powerful and secure open source-based archive lakehouse platform designed to store, manage, and provide insights from massive volumes of data. With its compliance features and minimal footprint, it enables large-scale search, processing, and analysis of structured, unstructured, and semi-structured data across your organization. Archon Data Store combines the best features of data warehouses and data lakes into a single, simplified platform. This unified approach eliminates data silos, streamlining data engineering, analytics, data science, and machine learning workflows. Through metadata centralization, optimized data storage, and distributed computing, Archon Data Store maintains data integrity. Its common approach to data management, security, and governance helps you operate more efficiently and innovate faster. Archon Data Store provides a single platform for archiving and analyzing all your organization's data while delivering operational efficiencies.
  • 13
    Qlik Data Integration
    The Qlik Data Integration platform for managed data lakes automates the process of providing continuously updated, accurate, and trusted data sets for business analytics. Data engineers have the agility to quickly add new sources and ensure success at every step of the data lake pipeline from real-time data ingestion, to refinement, provisioning, and governance. A simple and universal solution for continually ingesting enterprise data into popular data lakes in real-time. A model-driven approach for quickly designing, building, and managing data lakes on-premises or in the cloud. Deliver a smart enterprise-scale data catalog to securely share all of your derived data sets with business users.
  • 14
    TIBCO Cloud Metadata
    One challenge in metadata management is the lack of connection between silos of metadata used in IT, operations, analytics, and compliance. TIBCO Cloud™ Metadata software is a single solution that spans all your metadata: data dictionaries, business glossaries, and data catalogs. Built-in artificial intelligence (AI) and machine learning (ML) algorithms facilitate metadata classification and data lineages (horizontal, vertical, regulatory). Deliver the data context, coherency, and control you need to achieve the highest efficiency, best performance, and smartest decision-making across all your teams and departments. Effective execution, analysis, and governance require accurate and consistent metadata about your operations, analytics, and compliance efforts. Instead of multiple silos, use a single solution. Discover, harvest, and manage metadata from all the applications, databases, data lakes, enterprise data warehouses, APIs, social media, and streaming sources you use.
  • 15
    Datameer
    Datameer revolutionizes data transformation with a low-code approach, trusted by top global enterprises. Craft, transform, and publish data seamlessly with no-code and SQL options, simplifying complex data engineering tasks. Empower your data teams to make informed decisions confidently while saving costs and ensuring responsible self-service analytics. Speed up your analytics workflow by transforming datasets to answer ad-hoc questions and support operational dashboards. Empower everyone on your team to transform data with SQL or drag-and-drop in an intuitive and collaborative workspace. And best of all, everything happens in Snowflake. Datameer is designed and optimized for Snowflake to reduce data movement and increase platform adoption. Some of the problems Datameer solves: analytics that is not accessible, a growing backlog, and long development cycles.
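    Since Datameer publishes transformed datasets directly in Snowflake, a downstream consumer could read one of those tables with the standard Snowflake Python connector. The sketch below is illustrative only; the connection parameters, schema, and table name are hypothetical and not specific to Datameer.

    ```python
    # Hedged sketch: querying a hypothetical dataset published to Snowflake
    # (e.g., by a transformation job) with the Snowflake Python connector.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="example_account",      # hypothetical account identifier
        user="ANALYST",
        password="********",
        warehouse="ANALYTICS_WH",
        database="MARKETING",
        schema="PUBLISHED",
    )

    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT campaign, SUM(spend) AS total_spend "
            "FROM CAMPAIGN_PERFORMANCE GROUP BY campaign ORDER BY total_spend DESC"
        )
        for campaign, total_spend in cur.fetchall():
            print(campaign, total_spend)
    finally:
        conn.close()
    ```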
  • 16
    BryteFlow
    BryteFlow builds highly efficient automated environments for analytics. It converts Amazon S3 into a powerful analytics platform by leveraging the AWS ecosystem intelligently to deliver data at lightning speeds. It complements AWS Lake Formation and automates the Modern Data Architecture, providing performance and productivity. You can completely automate data ingestion with BryteFlow Ingest’s simple point-and-click interface, while BryteFlow XL Ingest is great for the initial full ingest of very large datasets. No coding is needed! With BryteFlow Blend you can merge data from varied sources such as Oracle, SQL Server, Salesforce, and SAP, and transform it to make it ready for analytics and machine learning. BryteFlow TruData reconciles the data at the destination with the source continually, or at a frequency you select. If data is missing or incomplete, you get an alert so you can fix the issue easily.
  • 17
    Dataleyk
    Dataleyk is a secure, fully managed cloud data platform for SMBs. Our mission is to make big data analytics easy and accessible to all. Dataleyk is the missing link in reaching your data-driven goals. Our platform makes it quick and easy to build a stable, flexible, and reliable cloud data lake with near-zero technical knowledge. Bring all of your company data from every single source, explore it with SQL, and visualize it with your favorite BI tool or our advanced built-in graphs. Modernize your data warehousing with Dataleyk. Our state-of-the-art cloud data platform is ready to handle your scalable structured and unstructured data. Data is an asset; Dataleyk is a secure cloud data platform that encrypts all of your data and offers on-demand data warehousing. Zero maintenance, as an objective, may not be easy to achieve, but as an initiative it can be a driver for significant delivery improvements and transformational results.
    Starting Price: €0.1 per GB
  • 18
    Databricks Data Intelligence Platform
    The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data. The winners in every industry will be data and AI companies. From ETL to data warehousing to generative AI, Databricks helps you simplify and accelerate your data and AI goals. Databricks combines generative AI with the unification benefits of a lakehouse to power a Data Intelligence Engine that understands the unique semantics of your data. This allows the Databricks Platform to automatically optimize performance and manage infrastructure in ways unique to your business. The Data Intelligence Engine understands your organization’s language, so search and discovery of new data is as easy as asking a question like you would to a coworker.
  • 19
    ChaosSearch
    Log analytics should not break the bank. Because most logging solutions use one or both of these technologies, the Elasticsearch database and/or the Lucene index, the cost of operation is unreasonably high. ChaosSearch takes a revolutionary approach. We reinvented indexing, which allows us to pass along substantial cost savings to our customers. See for yourself with this price comparison calculator. ChaosSearch is a fully managed SaaS platform that allows you to focus on search and analytics in AWS S3 rather than spend time managing and tuning databases. Leverage your existing AWS S3 infrastructure and let us do the rest. Watch this short video to learn how our unique approach and architecture allow ChaosSearch to address the challenges of today’s data and analytics requirements. ChaosSearch indexes your data as-is for log, SQL, and ML analytics, without transformation, while auto-detecting native schemas. ChaosSearch is an ideal replacement for commonly deployed Elasticsearch solutions.
    Starting Price: $750 per month
  • 20
    Oracle Cloud Infrastructure Data Lakehouse
    A data lakehouse is a modern, open architecture that enables you to store, understand, and analyze all your data. It combines the power and richness of data warehouses with the breadth and flexibility of the most popular open source data technologies you use today. A data lakehouse can be built from the ground up on Oracle Cloud Infrastructure (OCI) to work with the latest AI frameworks and prebuilt AI services like Oracle’s language service. Data Flow is a serverless Spark service that enables our customers to focus on their Spark workloads with zero infrastructure concepts. Oracle customers want to build advanced, machine learning-based analytics over their Oracle SaaS data, or any SaaS data. Our easy-to-use data integration connectors for Oracle SaaS make it easy to create a lakehouse that analyzes all of your data alongside your SaaS data, and reduce time to solution.
  • 21
    PoolParty (Semantic Web Company)
    Integrate an award-winning Semantic AI platform to build smart applications and systems. Use PoolParty to automate metadata creation and make information readily available to be used, shared, and analyzed. PoolParty links unstructured and structured data, and connects data which is scattered across databases. Benefit from the next generation of graph-based data and content analytics with state-of-the-art machine learning techniques. Benefit from your data. PoolParty increases the quality of data, which leads to more precise results from AI applications and improved decision-making. Understand why the world’s biggest companies are using Knowledge Graphs, and why yours should be too. Engage with presentations from experts, partners, and customers to unlock the full potential of semantic technologies and 360-degree views. We have helped over 180 enterprise-level customers master the challenges of information management.
  • 22
    Azure Data Lake
    Azure Data Lake includes all the capabilities required to make it easy for developers, data scientists, and analysts to store data of any size, shape, and speed, and do all types of processing and analytics across platforms and languages. It removes the complexities of ingesting and storing all of your data while making it faster to get up and running with batch, streaming, and interactive analytics. Azure Data Lake works with existing IT investments for identity, management, and security for simplified data management and governance. It also integrates seamlessly with operational stores and data warehouses so you can extend current data applications. We’ve drawn on the experience of working with enterprise customers and running some of the largest scale processing and analytics in the world for Microsoft businesses like Office 365, Xbox Live, Azure, Windows, Bing, and Skype. Azure Data Lake solves many of the productivity and scalability challenges that prevent you from maximizing the value of your data assets.
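    As a small, hedged example of writing into ADLS Gen2 (the storage layer behind Azure Data Lake), the sketch below uploads a JSON file with the azure-storage-file-datalake SDK. The account, filesystem, and path names are hypothetical, and authentication assumes a configured DefaultAzureCredential.

    ```python
    # Minimal sketch: writing a file into an Azure Data Lake Storage Gen2
    # filesystem with the azure-storage-file-datalake SDK. Names below are
    # illustrative assumptions.
    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(
        account_url="https://exampleaccount.dfs.core.windows.net",
        credential=DefaultAzureCredential(),
    )

    filesystem = service.get_file_system_client("raw")  # container / filesystem
    file_client = filesystem.get_file_client("events/2024-01-15/clicks.json")
    file_client.create_file()                            # create (or overwrite) the file

    data = b'{"user_id": "u-42", "page": "/pricing"}\n'
    file_client.append_data(data, offset=0, length=len(data))  # stage the bytes
    file_client.flush_data(len(data))                          # commit the upload
    ```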
  • 23
    Qubole
    Qubole is a simple, open, and secure data lake platform for machine learning, streaming, and ad-hoc analytics. Our platform provides end-to-end services that reduce the time and effort required to run data pipelines, streaming analytics, and machine learning workloads on any cloud. No other platform offers the openness and data workload flexibility of Qubole while lowering cloud data lake costs by over 50 percent. Qubole delivers faster access to petabytes of secure, reliable, and trusted datasets of structured and unstructured data for analytics and machine learning. Users conduct ETL, analytics, and AI/ML workloads efficiently in an end-to-end fashion across best-of-breed open source engines, multiple formats, libraries, and languages, adapted to data volume, variety, SLAs, and organizational policies.
  • 24
    DataLakeHouse.io
    DataLakeHouse.io (DLH.io) Data Sync provides replication and synchronization of data from operational systems (on-premise and cloud-based SaaS) into destinations of your choosing, primarily cloud data warehouses. Built for marketing teams, and indeed any data team at any size of organization, DLH.io enables business cases for building single-source-of-truth data repositories, such as dimensional data warehouses, data vault 2.0, and other machine learning workloads. Use cases are technical and functional, including ELT, ETL, Data Warehouse, Pipeline, Analytics, AI & Machine Learning, Data, Marketing, Sales, Retail, FinTech, Restaurant, Manufacturing, Public Sector, and more. DataLakeHouse.io is on a mission to orchestrate data for every organization, particularly those that want to become data-driven or are continuing their data-driven strategy journey. DataLakeHouse.io (aka DLH.io) enables hundreds of companies to manage their cloud data warehousing and analytics solutions.
  • 25
    IBM InfoSphere Information Governance Catalog
    IBM InfoSphere® Information Governance Catalog is a web-based tool that allows you to explore, understand and analyze information. You can create, manage and share a common business language, document and enact policies and rules, and track data lineage. Combine with IBM Watson® Knowledge Catalog to leverage existing curated data sets and extend your on-premises Information Governance Catalog investment to the cloud. A knowledge catalog allows you to put collected metadata into the hands of knowledge workers so data science and analytics communities can get easy access to the best assets for their purpose while still adhering to enterprise governance requirements. Provides a common business language and vocabulary to enable a deeper understanding of all your data assets, structured, semi-structured and unstructured. Documents governance policies and enacts rules to help you define how information should be structured, stored, transformed and moved.
  • 26
    Hydrolix
    Hydrolix is a streaming data lake that combines decoupled storage, indexed search, and stream processing to deliver real-time query performance at terabyte-scale for a radically lower cost. CFOs love the 4x reduction in data retention costs. Product teams love 4x more data to work with. Spin up resources when you need them and scale to zero when you don’t. Fine-tune resource consumption and performance by workload to control costs. Imagine what you can build when you don’t have to sacrifice data because of budget. Ingest, enrich, and transform log data from multiple sources including Kafka, Kinesis, and HTTP. Return just the data you need, no matter how big your data is. Reduce latency and costs, eliminate timeouts, and brute force queries. Storage is decoupled from ingest and query, allowing each to independently scale to meet performance and budget targets. Hydrolix’s high-density compression (HDX) typically reduces 1TB of stored data to 55GB.
    Starting Price: $2,237 per month
  • 27
    IBM Watson Knowledge Catalog
    Activate business-ready data for AI and analytics with intelligent cataloging, backed by active metadata and policy management. IBM Watson® Knowledge Catalog is a data catalog tool that powers intelligent, self-service discovery of data, models and more. The cloud-based enterprise metadata repository activates information for AI, machine learning (ML) and deep learning. Access, curate, categorize and share data, knowledge assets and their relationships, wherever they reside. Organize, define and manage enterprise data to provide the right context and drive value across needs like regulatory compliance and data monetization. Protect data, manage compliance and audit-readiness, and maintain client trust with active policy management and dynamic masking of sensitive data. Consume and transform data at the speed of business with intuitive dashboards and flows that can be shared with peers or analytics tools.
    Starting Price: $300 per instance
  • 28
    Delta Lake
    Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Data lakes typically have multiple data pipelines reading and writing data concurrently, and data engineers have to go through a tedious process to ensure data integrity due to the lack of transactions. Delta Lake brings ACID transactions to your data lakes. It provides serializability, the strongest isolation level. Learn more at Diving into Delta Lake: Unpacking the Transaction Log. In big data, even the metadata itself can be "big data". Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease. Delta Lake provides snapshots of data, enabling developers to access and revert to earlier versions of data for audits, rollbacks, or to reproduce experiments.
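    The following minimal PySpark sketch shows the two behaviors described above, a transactional write and a time-travel read of an earlier snapshot, using the delta-spark package; the local table path is hypothetical.

    ```python
    # Minimal sketch: an ACID write to a Delta Lake table plus a time-travel read
    # of an earlier snapshot, using PySpark with the delta-spark package installed.
    import pyspark
    from delta import configure_spark_with_delta_pip

    builder = (
        pyspark.sql.SparkSession.builder.appName("delta-demo")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config(
            "spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog",
        )
    )
    spark = configure_spark_with_delta_pip(builder).getOrCreate()

    path = "/tmp/delta/events"  # hypothetical table location

    # Version 0: initial transactional commit.
    spark.range(0, 5).write.format("delta").mode("overwrite").save(path)
    # Version 1: a second commit appends more rows.
    spark.range(5, 10).write.format("delta").mode("append").save(path)

    # Time travel: read the table as of version 0 for audits or reproducibility.
    v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
    print(v0.count())  # 5 rows in the original snapshot
    ```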
  • 29
    Upsolver
    Upsolver makes it incredibly simple to build a governed data lake and to manage, integrate and prepare streaming data for analysis. Define pipelines using only SQL on auto-generated schema-on-read. Easy visual IDE to accelerate building pipelines. Add Upserts and Deletes to data lake tables. Blend streaming and large-scale batch data. Automated schema evolution and reprocessing from previous state. Automatic orchestration of pipelines (no DAGs). Fully-managed execution at scale. Strong consistency guarantee over object storage. Near-zero maintenance overhead for analytics-ready data. Built-in hygiene for data lake tables including columnar formats, partitioning, compaction and vacuuming. 100,000 events per second (billions daily) at low cost. Continuous lock-free compaction to avoid “small files” problem. Parquet-based tables for fast queries.
  • 30
    Onehouse
    The only fully managed cloud data lakehouse designed to ingest from all your data sources in minutes and support all your query engines at scale, for a fraction of the cost. Ingest from databases and event streams at TB-scale in near real-time, with the simplicity of fully managed pipelines. Query your data with any engine, and support all your use cases including BI, real-time analytics, and AI/ML. Cut your costs by 50% or more compared to cloud data warehouses and ETL tools with simple usage-based pricing. Deploy in minutes without engineering overhead with a fully managed, highly optimized cloud service. Unify your data in a single source of truth and eliminate the need to copy data across data warehouses and lakes. Use the right table format for the job, with omnidirectional interoperability between Apache Hudi, Apache Iceberg, and Delta Lake. Quickly configure managed pipelines for database CDC and streaming ingestion.
  • 31
    Openbridge
    Uncover insights to supercharge sales growth using code-free, fully-automated data pipelines to data lakes or cloud warehouses. A flexible, standards-based platform to unify sales and marketing data for automating insights and smarter growth. Say goodbye to messy, expensive manual data downloads. Always know what you’ll pay and only pay for what you use. Fuel your tools with quick access to analytics-ready data. As certified developers, we only work with secure, official APIs. Get started quickly with data pipelines from popular sources. Pre-built, pre-transformed, and ready-to-go data pipelines. Unlock data from Amazon Vendor Central, Amazon Seller Central, Instagram Stories, Facebook, Amazon Advertising, Google Ads, and many others. Code-free data ingestion and transformation processes allow teams to realize value from their data quickly and cost-effectively. Data is always securely stored directly in a trusted, customer-owned data destination like Databricks, Amazon Redshift, etc.
    Starting Price: $149 per month
  • 32
    SAP PowerDesigner
    Build a blueprint of your current enterprise architecture and visualize the impact of change before it happens with SAP PowerDesigner software. With capabilities such as data modeling, link and sync, and metadata management, you can immediately capture architecture layers and requirements, tap into a powerful metadata repository, and share discoveries with your team. Create tight connections between your company’s unique requirements, language, and data models by using link-and-sync technology. Help developers and enterprise architects exchange information through a secure metadata repository that supports all modeling types. Simplify the lives of your users and IT staff with a flexible, wizard-driven system for documentation and Web-based reporting.
  • 33
    SOLIXCloud CDP (Solix Technologies)
    SOLIXCloud CDP delivers cloud data management as-a-service for modern data-driven enterprises. Built on open source, cloud-native technologies, SOLIXCloud CDP helps companies manage and process all of their structured, semi-structured, and unstructured data for advanced analytics, compliance, infrastructure optimization, and data security. With features such as Solix Connect for data ingestion, Solix Data Governance, Solix Metadata Management, and Solix Search, SOLIXCloud CDP offers a comprehensive cloud data management application framework for building and running data-driven applications such as SQL data warehousing, machine learning, and artificial intelligence, while fulfilling the ever-growing data management requirements of complex data regulations, data retention, and consumer data privacy.
  • 34
    Talend Data Fabric
    Talend Data Fabric’s suite of cloud services efficiently handles all your integration and integrity challenges — on-premises or in the cloud, any source, any endpoint. Deliver trusted data at the moment you need it — for every user, every time. Ingest and integrate data, applications, files, events and APIs from any source or endpoint to any location, on-premise and in the cloud, easier and faster with an intuitive interface and no coding. Embed quality into data management and guarantee ironclad regulatory compliance with a thoroughly collaborative, pervasive and cohesive approach to data governance. Make the most informed decisions based on high quality, trustworthy data derived from batch and real-time processing and bolstered with market-leading data cleaning and enrichment tools. Get more value from your data by making it available internally and externally. Extensive self-service capabilities make building APIs easy and improve customer engagement.
  • 35
    IBM Storage Scale
    IBM Storage Scale is software-defined file and object storage that enables organizations to build a global data platform for artificial intelligence (AI), high-performance computing (HPC), advanced analytics, and other demanding workloads. Unlike traditional applications that work with structured data, today’s performance-intensive AI and analytics workloads operate on unstructured data, such as documents, audio, images, videos, and other objects. IBM Storage Scale software provides global data abstraction services that seamlessly connect multiple data sources across multiple locations, including non-IBM storage environments. It’s based on a massively parallel file system and can be deployed on multiple hardware platforms including x86, IBM Power, IBM zSystem mainframes, ARM-based POSIX client, virtual machines, and Kubernetes.
    Starting Price: $19.10 per terabyte
  • 36
    DQLabs
    DQLabs has a decade of experience in providing data-related solutions to Fortune 100 clients across data integration, data governance, data analytics, data visualization, and data science. The platform has all the inbuilt features to run autonomously without manual intervention or configuration. With this AI and ML-powered tool, scalability, governance, and automation from end to end are possible. It also provides easy integration and compatibility with other tools in the data ecosystem. With the use of AI and machine learning, decisioning is possible across all aspects of data management. No more ETL, workflows, and rules – leverage the new world of AI decisioning in data management as the platform learns and reconfigures rules automatically when business strategy shifts and demands new data patterns and trends.
  • 37
    Alex Solutions
    The Alex Platform is your enterprise’s single source of data and business truth. Alex is a foundational pillar of our customers’ data-driven success. From day one of implementation, Alex is designed to start reducing complexity and creating value immediately. Alex Augmented Data Catalog is powered by the industry’s best machine learning, rapidly providing a unified, enterprise-wide data platform. No matter how complex your technical landscape may be, Alex Data Lineage helps you map and understand your data flows in an automated and secure way. Worldwide teams need worldwide coordination. Alex Intelligent Business Glossary’s beautiful UI and rich functionality are perfect for conducting global collaboration. Combat the complexity of the multi-cloud and global enterprise by unifying all definitions, policies, metrics, rules, processes, workflows and more. Power global data governance programs.
  • 38
    Talend Data Catalog
    Talend Data Catalog gives your organization a single, secure point of control for your data. With robust tools for search and discovery, and connectors to extract metadata from virtually any data source, Data Catalog makes it easy to protect your data, govern your analytics, manage data pipelines, and accelerate your ETL processes. Data Catalog automatically crawls, profiles, organizes, links, and enriches all your metadata. Up to 80% of the information associated with the data is documented automatically and kept up-to-date through smart relationships and machine learning, continually delivering the most current data to the user. Make data governance a team sport with a secure single point of control where you can collaborate to improve data accessibility, accuracy, and business relevance. Support data privacy and regulatory compliance with intelligent data lineage tracing and compliance tracking.
  • 39
    Mozart Data
    Mozart Data is the all-in-one modern data platform that makes it easy to consolidate, organize, and analyze data. Start making data-driven decisions by setting up a modern data stack in an hour - no engineering required.
  • 40
    Alibaba Cloud Data Lake Formation
    A data lake is a centralized repository used for big data and AI computing. It allows you to store structured and unstructured data at any scale. Data Lake Formation (DLF) is a key component of the cloud-native data lake framework. DLF provides an easy way to build a cloud-native data lake. It seamlessly integrates with a variety of compute engines and allows you to manage the metadata in data lakes in a centralized manner and control enterprise-class permissions. Systematically collects structured, semi-structured, and unstructured data and supports massive data storage. Uses an architecture that separates computing from storage. You can plan resources on demand at low costs. This improves data processing efficiency to meet the rapidly changing business requirements. DLF can automatically discover and collect metadata from multiple engines and manage the metadata in a centralized manner to solve the data silo issues.
  • 41
    BigLake (Google)
    BigLake is a storage engine that unifies data warehouses and lakes by enabling BigQuery and open-source frameworks like Spark to access data with fine-grained access control. BigLake provides accelerated query performance across multi-cloud storage and open formats such as Apache Iceberg. Store a single copy of data with uniform features across data warehouses & lakes. Fine-grained access control and multi-cloud governance over distributed data. Seamless integration with open-source analytics tools and open data formats. Unlock analytics on distributed data regardless of where and how it’s stored, while choosing the best analytics tools, open source or cloud-native over a single copy of data. Fine-grained access control across open source engines like Apache Spark, Presto, and Trino, and open formats such as Parquet. Performant queries over data lakes powered by BigQuery. Integrates with Dataplex to provide management at scale, including logical data organization.
    Starting Price: $5 per TB
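    Because BigLake tables are queried through BigQuery, with fine-grained access control enforced at query time, a consumer-side sketch looks like any other BigQuery query. The project, dataset, and table names below are hypothetical.

    ```python
    # Hedged sketch: querying a hypothetical BigLake-backed table through the
    # standard google-cloud-bigquery client. Row/column-level policies defined
    # on the table are enforced by BigQuery when the query runs.
    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")

    query = """
        SELECT region, COUNT(*) AS orders
        FROM `example-project.lake_dataset.orders_biglake`
        WHERE order_date >= '2024-01-01'
        GROUP BY region
        ORDER BY orders DESC
    """

    for row in client.query(query).result():
        print(row["region"], row["orders"])
    ```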
  • 42
    Infor OS
    Infor OS is your cloud operating platform for the future, designed to bring productivity, business processes and Artificial Intelligence together, and offer operational insights that were never accessible to a business before. The platform delivers technology that goes beyond enabling business, it drives it, putting the user at the centre of every experience, and serving as a unifying foundation for your entire business ecosystem. The result is a connected, intelligent network that automates, anticipates, predicts, and informs your stakeholders, unifying your business. By marrying business processes to employee communications, Infor OS's collaboration capabilities allow you to contextualise intelligence, make single sign-on a reality, and increase efficiency by enabling employees to work smarter and faster. Quickly develop enterprise capabilities tailored to your needs with the extensibility provided by Infor OS.
  • 43
    Varada
    Varada’s dynamic and adaptive big data indexing solution enables you to balance performance and cost with zero data-ops. Varada’s unique big data indexing technology serves as a smart acceleration layer on your data lake, which remains the single source of truth, and runs in the customer cloud environment (VPC). Varada enables data teams to democratize data by operationalizing the entire data lake while ensuring interactive performance, without the need to move data, model, or manually optimize. Our secret sauce is our ability to automatically and dynamically index relevant data, at the structure and granularity of the source. Varada enables any query to meet continuously evolving performance and concurrency requirements for users and analytics API calls, while keeping costs predictable and under control. The platform seamlessly chooses which queries to accelerate and which data to index. Varada elastically adjusts the cluster to meet demand and optimize cost and performance.
  • 44
    Dremio
    Dremio delivers lightning-fast queries and a self-service semantic layer directly on your data lake storage. No moving data to proprietary data warehouses, no cubes, no aggregation tables or extracts. Just flexibility and control for data architects, and self-service for data consumers. Dremio technologies like Data Reflections, Columnar Cloud Cache (C3) and Predictive Pipelining work alongside Apache Arrow to make queries on your data lake storage very, very fast. An abstraction layer enables IT to apply security and business meaning, while enabling analysts and data scientists to explore data and derive new virtual datasets. Dremio’s semantic layer is an integrated, searchable catalog that indexes all of your metadata, so business users can easily make sense of your data. Virtual datasets and spaces make up the semantic layer, and are all indexed and searchable.
  • 45
    Utilihive (Greenbird Integration Technology)
    Utilihive is a cloud-native big data integration platform, purpose-built for the digital data-driven utility and offered as a managed service (SaaS). Utilihive is the leading enterprise iPaaS, purpose-built for energy and utility usage scenarios. Utilihive provides both the technical infrastructure platform (connectivity, integration, data ingestion, data lake, API management) and pre-configured integration content or accelerators (connectors, data flows, orchestrations, utility data model, energy data services, monitoring and reporting dashboards) to speed up the delivery of innovative data-driven services and simplify operations. Utilities play a vital role in achieving the Sustainable Development Goals and now have the opportunity to build universal platforms that facilitate the data economy in a new world that includes renewable energy. Seamless access to data is crucial to accelerate the digital transformation.
  • 46
    Atlan
    The modern data workspace. Make all your data assets, from data tables to BI reports, instantly discoverable. Our powerful search algorithms combined with an easy browsing experience make finding the right asset a breeze. Atlan auto-generates data quality profiles which make detecting bad data dead easy. From automatic variable type detection and frequency distribution to missing values and outlier detection, we’ve got you covered. Atlan takes the pain away from governing and managing your data ecosystem! Atlan’s bots parse through SQL query history to auto-construct data lineage and auto-detect PII data, allowing you to create dynamic access policies and best-in-class governance. Even non-technical users can directly query across multiple data lakes, warehouses, and DBs using our Excel-like query builder. Native integrations with tools like Tableau and Jupyter make data collaboration come alive.
  • 47
    Datametica
    At Datametica, our birds with unprecedented capabilities help eliminate business risks, cost, time, frustration, and anxiety from the entire process of data warehouse migration to the cloud. Migrate your existing data warehouse, data lake, ETL, and enterprise business intelligence to the cloud environment of your choice using the Datametica automated product suite. Architect an end-to-end migration strategy, with workload discovery, assessment, planning, and cloud optimization. Starting from discovery and assessment of your existing data warehouse through planning the migration strategy, Eagle gives clarity on what needs to be migrated and in what sequence, how the process can be streamlined, and what the timelines and costs are. The holistic view of the workloads and planning reduces the migration risk without impacting the business.
  • 48
    DvSum
    DvSum is an AI-powered data intelligence platform that makes it remarkably easier for your data and analytics teams to discover, monitor, and govern data. With powerful AI-enabled algorithms, DvSum automatically catalogs, classifies, and curates your data and makes it available as an actionable data catalog. Propel your enterprise towards its digital and analytics-enabled transformation goals with DvSum Data Intelligence.
    Starting Price: $1000/ per month
  • 49
    ASG Data Intelligence (ASG Technologies)
    The demand for data-driven insights and innovation has never been higher. Maintaining a competitive edge in today’s global enterprises hinges on the ability to leverage trusted data to make informed and strategic business decisions. Unfortunately, while most companies collect a wealth of data, it too often remains unused because business leaders either can’t find it or simply don’t understand and trust it. ASG Data Intelligence (ASG DI) is the solution for data distrust. It is a metadata-driven platform that makes technical data “smarter” with end-to-end views of the data and its movements (data lineage) combined with business meaning and usage guardrails. Data value is unleashed by making it available, understood and trusted to users across your organization – data scientists, analysts, marketers and more. Build trust in data with improved understanding of its origins, processes and business context.
  • 50
    Scribble Data
    Scribble Data empowers organizations to enrich their raw data and easily transform it to enable reliable and fast decision-making for persistent business problems. Data-driven decision support for your business. A data-to-decision platform that helps you generate high-fidelity insights to automate decision-making. Solve your persistent business decision-making problems instantly with advanced analytics powered by machine learning. Rest easy and focus your energy on critical tasks, while Enrich does the heavy lifting to ensure the availability of reliable and trustworthy data for decision-making. Leverage customized data-driven workflows for easy consumption of data, and reduce your dependence on data science and machine learning engineering teams. Go from concept to operational data product in a few weeks, not months, with feature engineering capabilities that can prepare high-volume and high-complexity data at scale.