Alternatives to SHREWD Platform

Compare SHREWD Platform alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to SHREWD Platform in 2024. Compare features, ratings, user reviews, pricing, and more from SHREWD Platform competitors and alternatives in order to make an informed decision for your business.

  • 1
    Pentaho

    Hitachi Vantara

    Accelerate data-driven transformation powered by intelligent data operations across your edge-to-multi-cloud data fabric. Pentaho lets you automate the daily tasks of collecting, integrating, governing, and analyzing data on an intelligent platform, providing an open and composable foundation for all enterprise data. Schedule your free demo to learn more about Pentaho Integration and Analytics, Data Catalog, and Storage Optimizer.
  • 2
    Centralpoint
    Centralpoint is a Digital Experience Platform included in Gartner's Magic Quadrant. It is used by over 350 clients worldwide, going beyond Enterprise Content Management by securely authenticating all users (AD/SAML, OpenID, OAuth) for self-service interaction. Centralpoint automatically aggregates your information from disparate sources, applying rich metadata against your rules, yielding true knowledge management and allowing you to search and relate disparate sets of data from anywhere. Centralpoint offers the most robust module gallery out of the box and can be installed on-premises or in the cloud. Be sure to see our solutions for automating metadata, automating retention policy management, and simplifying the mash-up of disparate data for the benefit of AI (artificial intelligence). Centralpoint is often used as an intelligent alternative to SharePoint, offering easy migration tools. It can also be used as a secure portal solution for your public sites, intranets, members areas, or extranets.
  • 3
    Wavo

    Wavo

    We’ve released a revolutionary big data platform that gathers all the information about a music business, providing a single source of truth for decisions. Every music business has hundreds of data sources, but they are siloed and fragmented. Our platform identifies and connects them to build a foundation of quality data that can be applied to all daily music business operations. To work efficiently and securely, and to surface valuable insight no one else can, record labels and agencies require a sophisticated data management and governance system, so that data is available, relevant, and usable at all times. As data sources are ingested into Wavo’s Big Data Platform, machine learning is deployed to tag data based on personalized templates, making it easy to access and drill down into important information. This enables everyone in a music business to activate and deliver business-ready data, backed up and organized for immediate value.
  • 4
    DoubleCloud

    DoubleCloud

    Save time and costs by streamlining data pipelines with zero-maintenance open-source solutions. From ingestion to visualization, everything is integrated, fully managed, and highly reliable, so your engineers will love working with data. You choose whether to use any of DoubleCloud’s managed open-source services or leverage the full power of the platform, including data storage, orchestration, ELT, and real-time visualization. We provide leading open-source services like ClickHouse, Kafka, and Airflow, with deployment on Amazon Web Services or Google Cloud. Our no-code ELT tool enables real-time data syncing between systems: fast, serverless, and seamlessly integrated with your existing infrastructure. With our managed open-source data visualization you can simply visualize your data in real time by building charts and dashboards. We’ve designed our platform to make the day-to-day life of engineers more convenient.
    Starting Price: $0.024 per 1 GB per month
  • 5
    Cazena

    Cazena

    Cazena’s Instant Data Lake accelerates time to analytics and AI/ML from months to minutes. Powered by its patented automated data platform, Cazena delivers the first SaaS experience for data lakes, with zero operations required. Enterprises need a data lake that easily supports all of their data and tools for analytics, machine learning, and AI. To be effective, a data lake must offer secure data ingestion, flexible data storage, access and identity management, tool integration, optimization, and more. Cloud data lakes are complicated to build yourself, which is why they require expensive teams. Cazena’s Instant Cloud Data Lakes are turnkey and instantly production-ready for secure data ingest, storage, and analytics. Everything is automated and supported on Cazena’s SaaS platform, with continuous Ops and self-service access via the Cazena SaaS Console.
  • 6
    eDrain

    Eclettica

    Planning. Innovating. Developing. From need to solution: the eDrain DATA CLOUD PLATFORM. eDrain is a tool specialized in data collection, monitoring, and the production of aggregate reporting. It is a system that operates in the big data field, able to integrate the collection of heterogeneous data thanks to a driver-oriented mechanism. The implemented driver engine allows you to integrate a large number of data streams and devices simultaneously. Features include: dashboard customization, adding views, custom widget creation, configuration of new devices, flows, and sensors, custom report configuration, sensor status checks, real-time original data flow, definition of flow logic, analysis rules, and warning thresholds, event configuration, elaboration of actions, creation of new devices and stations, latching of new data streams, and management and verification of alerts.
  • 7
    Delta Lake

    Delta Lake

    Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Data lakes typically have multiple data pipelines reading and writing data concurrently, and data engineers have to go through a tedious process to ensure data integrity, due to the lack of transactions. Delta Lake brings ACID transactions to your data lakes. It provides serializability, the strongest isolation level. Learn more at Diving into Delta Lake: Unpacking the Transaction Log. In big data, even the metadata itself can be "big data". Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease. Delta Lake provides snapshots of data, enabling developers to access and revert to earlier versions of data for audits, rollbacks, or to reproduce experiments.
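    The versioned snapshots described above come from Delta Lake's transaction log. As a conceptual sketch only (real Delta Lake persists JSON commit files under `_delta_log/` next to the Parquet data and processes them with Spark), an append-only log of add/remove file actions can be replayed up to any version:

```python
class ToyDeltaLog:
    """Toy append-only commit log illustrating Delta-style versioned snapshots.

    Conceptual sketch only: not Delta Lake's actual implementation.
    """

    def __init__(self):
        self.commits = []  # each commit is a list of {"add": file} / {"remove": file} actions

    def commit(self, actions):
        self.commits.append(actions)
        return len(self.commits) - 1  # the new version number

    def snapshot(self, version=None):
        """Replay commits up to `version` to compute the set of live data files."""
        if version is None:
            version = len(self.commits) - 1
        live = set()
        for actions in self.commits[: version + 1]:
            for action in actions:
                if "add" in action:
                    live.add(action["add"])
                else:
                    live.discard(action["remove"])
        return live

log = ToyDeltaLog()
v0 = log.commit([{"add": "part-000.parquet"}])
log.commit([{"add": "part-001.parquet"}])
log.commit([{"remove": "part-000.parquet"}, {"add": "part-002.parquet"}])

print(sorted(log.snapshot()))    # ['part-001.parquet', 'part-002.parquet']
print(sorted(log.snapshot(v0)))  # ['part-000.parquet']
```

    Because every snapshot is just a replay of the log to some version, "time travel" (reading an earlier version for audits or rollbacks) falls out for free.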
  • 8
    DataLux

    Vivorbis

    A data management and analytics platform built to address data challenges and enable real-time decision making. DataLux comes with plug-and-play adaptors, providing aggregation of large data sets and the ability to gather and visualise insights in real time. Use the data lake to pre-empt new innovations and store data ready for data modelling. Create portable applications by utilising containerisation in a public or private cloud, or on-premises. Bring multiple time-series market and inferred data sources together, such as stock exchange tick data, stock market policy actions, related and cross-industry news, and alternative datasets, to extract causal information about stock markets, macroeconomics, and more. Shape business decisions and product innovations by providing insights and informing key decisions to improve products. Run interdisciplinary A/B experiments across product development, design, and engineering, from ideation to decision making.
  • 9
    Hopsworks

    Logical Clocks

    Hopsworks is an open-source Enterprise platform for the development and operation of Machine Learning (ML) pipelines at scale, based around the industry’s first Feature Store for ML. You can easily progress from data exploration and model development in Python, using Jupyter notebooks and conda, to running production-quality end-to-end ML pipelines, without having to learn how to manage a Kubernetes cluster. Hopsworks can ingest data from the data sources you use, whether they are in the cloud, on-premises, in IoT networks, or from your Industry 4.0 solution. Deploy on-premises on your own hardware or at your preferred cloud provider. Hopsworks provides the same user experience in the cloud or in the most secure of air-gapped deployments. Learn how to set up customized alerts in Hopsworks for the different events that are triggered as part of the ingestion pipeline.
    Starting Price: $1 per month
  • 10
    PHEMI Health DataLab

    PHEMI Systems

    The PHEMI Trustworthy Health DataLab is a unique, cloud-based, integrated big data management system that allows healthcare organizations to enhance innovation and generate value from healthcare data by simplifying the ingestion and de-identification of data, with NSA/military-grade governance, privacy, and security built in. Conventional products simply lock down data; PHEMI goes further, solving privacy and security challenges and addressing the urgent need to secure, govern, curate, and control access to privacy-sensitive personal health information (PHI). This improves data sharing and collaboration inside and outside of an enterprise, without compromising the privacy of sensitive information or increasing administrative burden. PHEMI Trustworthy Health DataLab can scale to any size of organization, is easy to deploy and manage, connects to hundreds of data sources, and integrates with popular data science and business analysis tools.
  • 11
    GeoDB

    GeoDB

    Less than 10% of a 260bn big data market is being exploited, due to inefficient processes and the dominance of intermediaries. Our mission is to democratize the big data market and open the door to the 90% of the data-sharing market that remains unexploited. A decentralized system designed to build a data oracle network based on an open protocol for interaction between participants and a sustainable economy. A multifunctional DApp and crypto wallet lets users earn rewards for the data they generate and use various DeFi tools in a user-friendly UX. The GeoDB marketplace allows data buyers around the world to purchase user-generated data from applications connected to GeoDB. Data sources are participants who generate data that is uploaded through our proprietary and third-party partner apps. Validators mediate the transfer of data and verify the contracts in a decentralized, efficient process using blockchain technology.
  • 12
    Protegrity

    Protegrity

    Our platform allows businesses to use data, including its application in advanced analytics, machine learning, and AI, to do great things without worrying about putting customers, employees, or intellectual property at risk. The Protegrity Data Protection Platform doesn't just secure data; it simultaneously classifies and discovers data while protecting it. You can't protect what you don't know you have. Our platform first classifies data, allowing users to categorize each type of data, such as data that can remain in the public domain. With those classifications established, the platform then leverages machine learning algorithms to discover that type of data. Classification and discovery find the data that needs to be protected. Whether encrypting, tokenizing, or applying privacy methods, the platform secures the data behind the many operational systems that drive the day-to-day functions of business, as well as the analytical systems behind decision-making.
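    Tokenization, one of the protection methods mentioned above, can be sketched with a toy vault that swaps sensitive values for random tokens while keeping a reverse mapping for authorized detokenization. This is an illustrative sketch in Python, not Protegrity's implementation:

```python
import secrets

class ToyTokenVault:
    """Toy token vault: swaps sensitive values for random tokens and keeps
    the mapping so authorized callers can detokenize.

    Illustrative sketch only, not Protegrity's implementation.
    """

    def __init__(self):
        self._forward = {}  # sensitive value -> token
        self._reverse = {}  # token -> sensitive value

    def tokenize(self, value):
        # Reuse the same token for a repeated value, so joins on
        # tokenized columns still work in downstream analytics.
        if value not in self._forward:
            token = "tok_" + secrets.token_hex(8)
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token):
        return self._reverse[token]

vault = ToyTokenVault()
token = vault.tokenize("4111-1111-1111-1111")
print(token.startswith("tok_"))  # True
print(vault.detokenize(token))   # 4111-1111-1111-1111
```

    The key property is that the token carries no information about the original value, yet analytics that only need equality (counts, group-bys, joins) can run on tokenized data without ever seeing the sensitive values.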
  • 13
    GigaSpaces

    GigaSpaces

    Smart DIH is an operational data hub that powers real-time modern applications. It unleashes the power of customers’ data by transforming data silos into assets, turning organizations into data-driven enterprises. Smart DIH consolidates data from multiple heterogeneous systems into a highly performant data layer. Low-code tools empower data professionals to deliver data microservices in hours, shortening development cycles and ensuring data consistency across all digital channels. XAP Skyline is a cloud-native, in-memory data grid (IMDG) and developer framework designed for mission-critical, cloud-native apps. XAP Skyline delivers maximal throughput, microsecond latency, and scale, while maintaining transactional consistency. It provides extreme performance, significantly reducing data access time, which is crucial for real-time decisioning and transactional applications. XAP Skyline is used in financial services, retail, and other industries where speed and scalability are critical.
  • 14
    Varada

    Varada

    Varada’s dynamic and adaptive big data indexing solution enables teams to balance performance and cost with zero data-ops. Varada’s unique big data indexing technology serves as a smart acceleration layer on your data lake, which remains the single source of truth, and runs in the customer’s cloud environment (VPC). Varada enables data teams to democratize data by operationalizing the entire data lake while ensuring interactive performance, without the need to move data, model it, or manually optimize. Our secret sauce is our ability to automatically and dynamically index relevant data, at the structure and granularity of the source. Varada enables any query to meet continuously evolving performance and concurrency requirements for users and analytics API calls, while keeping costs predictable and under control. The platform seamlessly chooses which queries to accelerate and which data to index, and elastically adjusts the cluster to meet demand and optimize cost and performance.
  • 15
    Sadas Engine
    Sadas Engine is the fastest columnar Database Management System, both in the cloud and on-premises. Turn data into information with a columnar DBMS able to perform up to 100 times faster than transactional DBMSs and to carry out searches on huge quantities of data over periods even longer than 10 years. Every day we work to ensure impeccable service and appropriate solutions to enhance the activities of your specific business. SADAS srl, a company of the AS Group, is dedicated to the development of Business Intelligence solutions, data analysis applications, and DWH tools, relying on cutting-edge technology. The company operates in many sectors: banking, insurance, leasing, commercial, media and telecommunications, and the public sector. It offers innovative software solutions for daily management needs and decision-making processes, in any sector.
  • 16
    E-MapReduce

    Alibaba

    EMR is an all-in-one enterprise-ready big data platform that provides cluster, job, and data management services based on open-source ecosystems such as Hadoop, Spark, Kafka, Flink, and Storm. Alibaba Cloud Elastic MapReduce (EMR) is a big data processing solution that runs on the Alibaba Cloud platform. EMR is built on Alibaba Cloud ECS instances and is based on open-source Apache Hadoop and Apache Spark. EMR allows you to use Hadoop and Spark ecosystem components, such as Apache Hive, Apache Kafka, Flink, Druid, and TensorFlow, to analyze and process data. You can use EMR to process data stored on different Alibaba Cloud data storage services, such as Object Storage Service (OSS), Log Service (SLS), and Relational Database Service (RDS). You can quickly create clusters without the need to configure hardware and software. All maintenance operations are completed through its web interface.
  • 17
    BryteFlow

    BryteFlow

    BryteFlow builds the most efficient automated environments for analytics ever. It converts Amazon S3 into an awesome analytics platform by leveraging the AWS ecosystem intelligently to deliver data at lightning speeds. It complements AWS Lake Formation and automates the Modern Data Architecture, providing performance and productivity. You can completely automate data ingestion with BryteFlow Ingest’s simple point-and-click interface, while BryteFlow XL Ingest is great for the initial full ingest of very large datasets. No coding is needed! With BryteFlow Blend you can merge data from varied sources such as Oracle, SQL Server, Salesforce, and SAP, and transform it to make it ready for analytics and machine learning. BryteFlow TruData reconciles the data at the destination with the source, continually or at a frequency you select. If data is missing or incomplete you get an alert so you can fix the issue easily.
  • 18
    CENX Service Assurance
    CENX Service Assurance allows you to see service topology, inventory, fault, and performance from all your disparate systems, correlated into a single view. With this insight, operators can optimize hybrid communications networks and achieve closed-loop automation. Operators can thus deliver next-generation services quickly, launch new business models, and cost-effectively support new technologies such as Network Functions Virtualization (NFV), Software-Defined Networking (SDN), Self-Organizing Networks (SON), 5G, and IoT. CENX Service Assurance fundamentally changes the way service providers work with their networks by providing actionable visibility of the entire network in near real time, generating closed-loop triggers, and performing inventory and fault correlation augmented with TCA (Threshold Crossing Alert) analysis on performance data from disparate systems. It opens up a whole new set of leading-edge enterprise use cases in the 5G domain.
  • 19
    EntelliFusion

    Teksouth

    Teksouth’s EntelliFusion is a fully managed, end-to-end solution. Rather than piecing together several different platforms for data prep, data warehousing, and governance, then deploying a great deal of IT resources to figure out how to make it all work, EntelliFusion's architecture provides a one-stop shop for outfitting an organization's data infrastructure. With EntelliFusion, data silos become centralized in a single platform for cross-functional KPIs, creating holistic and powerful insights. EntelliFusion’s “military-born” technology has proven successful against the strenuous demands of the USA’s top echelon of military operations, where it was massively scaled across the DOD for over twenty years. EntelliFusion is built on the latest Microsoft technologies and frameworks, which allows it to be continually enhanced and innovated. It is data agnostic, infinitely scalable, and guarantees accuracy and performance to promote end-user tool adoption.
  • 20
    Qubole

    Qubole

    Qubole is a simple, open, and secure Data Lake Platform for machine learning, streaming, and ad-hoc analytics. Our platform provides end-to-end services that reduce the time and effort required to run Data pipelines, Streaming Analytics, and Machine Learning workloads on any cloud. No other platform offers the openness and data workload flexibility of Qubole while lowering cloud data lake costs by over 50 percent. Qubole delivers faster access to petabytes of secure, reliable and trusted datasets of structured and unstructured data for Analytics and Machine Learning. Users conduct ETL, analytics, and AI/ML workloads efficiently in end-to-end fashion across best-of-breed open source engines, multiple formats, libraries, and languages adapted to data volume, variety, SLAs and organizational policies.
  • 21
    Bizintel360

    Bizdata

    An AI-powered self-service advanced analytics platform. Connect data sources and derive visualizations without any programming. A cloud-native advanced analytics platform that provides high-quality data supply and intelligent real-time analysis across the enterprise without any code. Connect different data sources of different formats, identify root-cause problems, and reduce the cycle time from source to target. Analytics without programming knowledge, with real-time data refresh on the go. Connect a data source of any format, stream data in real time or at a defined frequency into the data lake, and visualize it in advanced, interactive, search-engine-based dashboards. Descriptive, predictive, and prescriptive analytics in a single platform, with the power of a search engine and advanced visualization. No traditional technology is required to see data in various visualization formats. Roll up, slice, and dice data with various mathematical computations right inside Bizintel360 visualization.
  • 22
    Teradata Vantage
    As data volumes grow faster than ever, businesses struggle to get answers. Teradata Vantage™ solves this problem. Vantage uses 100 percent of available data to uncover real-time business intelligence at scale, powering the new era of Pervasive Data Intelligence. See all data from across the entire organization in one place, whenever it's needed, with preferred languages and tools. Start small and elastically scale compute or storage in areas that impact modern architecture. Vantage unifies analytics, data lakes, and data warehouses, all in the cloud, to enable business intelligence. As the importance of business intelligence increases, frustration stems from key challenges that arise when using existing data analytics platforms: the lack of proper tools and the supportive environment needed to achieve quality results; organizations that do not authorize or provide access to the necessary tools; and difficult data preparation.
  • 23
    Atlan

    Atlan

    The modern data workspace. Make all your data assets, from data tables to BI reports, instantly discoverable. Our powerful search algorithms, combined with an easy browsing experience, make finding the right asset a breeze. Atlan auto-generates data quality profiles which make detecting bad data easy. From automatic variable-type detection and frequency distribution to missing-value and outlier detection, we’ve got you covered. Atlan takes the pain out of governing and managing your data ecosystem. Atlan’s bots parse through SQL query history to auto-construct data lineage and auto-detect PII data, allowing you to create dynamic access policies and best-in-class governance. Even non-technical users can directly query across multiple data lakes, warehouses, and DBs using our Excel-like query builder. Native integrations with tools like Tableau and Jupyter make data collaboration come alive.
  • 24
    Semantix Data Platform (SDP)
    A big data platform that generates intelligence and efficiency for your business, with features that simplify the data journey. Create algorithms, artificial intelligence, machine learning, and more for your business. With SDP you unify the entire data-driven journey of your business from end to end, centralizing information and creating data-driven intelligence. Ingestion, engineering, science, and data visualization in a single journey. Robust, operations-ready, technology-agnostic architecture that facilitates data governance. A simple and intuitive marketplace interface with ready-made algorithms and extensibility via APIs. The only big data platform to centralize and unify your entire business data journey.
  • 25
    BIRD Analytics

    Lightning Insights

    BIRD Analytics is a blazingly fast, high-performance, full-stack data management and analytics platform that generates insights using agile BI and AI/ML models. It covers all aspects, from data ingestion, transformation, wrangling, modeling, and storage to analyzing data in real time, even at petabyte scale. BIRD provides self-service capabilities with Google-style search and powerful chatbot integration. We’ve compiled our resources to provide the answers you seek; from industry use cases to blog articles, learn more about how BIRD addresses big data pain points. Now that you’ve discovered the value of BIRD, schedule a demo to see the platform in action and uncover how it can transform your data. Utilize AI/ML technologies for greater agility and responsiveness in decision-making, cost reduction, and improving customer experiences.
  • 26
    Apache Druid
    Apache Druid is an open-source distributed data store. Druid’s core design combines ideas from data warehouses, time-series databases, and search systems to create a high-performance real-time analytics database for a broad range of use cases. Druid merges key characteristics of each of these three systems into its ingestion layer, storage format, querying layer, and core architecture. Druid stores and compresses each column individually, and only needs to read the columns required for a particular query, which supports fast scans, rankings, and groupBys. Druid creates inverted indexes for string values for fast search and filter. Out-of-the-box connectors are available for Apache Kafka, HDFS, AWS S3, stream processors, and more. Druid intelligently partitions data based on time, so time-based queries are significantly faster than in traditional databases. Scale up or down by just adding or removing servers, and Druid rebalances automatically. Its fault-tolerant architecture routes around server failures.
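    Druid is commonly queried over HTTP with SQL via its `POST /druid/v2/sql` endpoint. As a minimal sketch, assuming a Druid router or broker listening on `localhost:8888` and a datasource named `wikipedia` (both placeholders), a query request can be built with only the standard library:

```python
import json
from urllib import request

def build_druid_sql_request(sql, broker="http://localhost:8888"):
    """Build an HTTP request for Druid's SQL endpoint (POST /druid/v2/sql)."""
    payload = json.dumps({"query": sql}).encode("utf-8")
    return request.Request(
        f"{broker}/druid/v2/sql",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A time-bounded aggregation: Druid's time-based partitioning makes
# filters on __time like this one particularly fast.
req = build_druid_sql_request(
    "SELECT channel, COUNT(*) AS edits FROM wikipedia "
    "WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR "
    "GROUP BY channel ORDER BY edits DESC LIMIT 5"
)
print(req.full_url)  # http://localhost:8888/druid/v2/sql
# response = request.urlopen(req)  # uncomment against a running Druid cluster
```

    The actual network call is left commented out since it requires a running cluster; the point is that Druid exposes plain SQL over HTTP, so no driver is needed beyond an HTTP client.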
  • 27
    Paxata

    Paxata

    Paxata is a visually dynamic, intuitive solution that enables business analysts to rapidly ingest, profile, and curate multiple raw datasets into consumable information in a self-service manner, greatly accelerating the development of actionable business insights. In addition to empowering business analysts and SMEs, Paxata also provides a rich set of workload automation and embeddable data preparation capabilities to operationalize and deliver data preparation as a service within other applications. The Paxata Adaptive Information Platform (AIP) unifies data integration, data quality, semantic enrichment, and reuse and collaboration, and also provides comprehensive data governance and audit capabilities with self-documenting data lineage. The Paxata AIP utilizes a native multi-tenant elastic cloud architecture and is the only modern information platform that is currently deployed as a multi-cloud hybrid information fabric.
  • 28
    Decision Moments
    Mindtree Decision Moments is the first data analytics platform to apply continuous learning algorithms to large data pools. Using this innovative sense-and-respond system, companies can uncover compelling insights that improve over time and create more value from their digital transformation. Decision Moments is an agile and customizable data intelligence platform that simplifies technological complexity by easily adapting to fit the requirements of your organization’s existing data analytics investment. And it’s also flexible enough to modify in response to changes in the market, technologies or business needs. To gain the full value and cost savings of a data analytics platform, Decision Moments is powered by Microsoft Azure services, including the Cortana Intelligence Suite, in a cloud-native solution. Mindtree’s Decision Moments provides your key decision makers with the platform they need to make sense of large amounts of data from multiple sources.
  • 29
    Mosaic

    Mosaic.tech

    Mosaic is the first Strategic Finance Platform for agile planning, real-time reporting, and better decision-making. Easily consolidating insights across ERP, CRM, HRIS, and Billing systems, Mosaic empowers teams to work together in a unified, cross-functional platform that acts as a simple source of truth for the entire organization. Mosaic was founded in 2019 by three finance leaders frustrated by the slow speed, high complexity, and inefficiencies of existing tools in the market. Knowing the office of the CFO needed an overhaul, they set out to build a platform that would address the technical challenges modern-day finance and business teams face. Today, Mosaic is leveraged by some of the fastest-growing companies, helping them align, collaborate and plan for the future.
  • 30
    Keen

    Keen.io

    Keen is a fully managed event streaming platform. Built on trusted Apache Kafka, we make it easier than ever for you to collect massive volumes of event data with our real-time data pipeline. Use Keen’s powerful REST API and SDKs to collect event data from anything connected to the internet. Our platform allows you to store your data securely, decreasing your operational and delivery risk. With storage infrastructure powered by Apache Cassandra, data is secured in transit via HTTPS and TLS, then stored with multi-layer AES encryption. Once data is securely stored, use our Access Keys to present data in arbitrary ways without having to re-architect your security or data model. Or take advantage of Role-Based Access Control (RBAC), allowing for completely customizable permission tiers down to specific data points or queries.
    Starting Price: $149 per month
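    Recording an event through the REST API mentioned above looks roughly like the following sketch; the endpoint shape follows Keen's documented API, and the project ID, write key, and collection name are placeholders:

```python
import json
from urllib import request

def build_keen_event_request(project_id, write_key, collection, event):
    """Build an HTTP request to record a single event via Keen's REST API.

    Endpoint shape per Keen's docs; credentials here are placeholders.
    """
    url = f"https://api.keen.io/3.0/projects/{project_id}/events/{collection}"
    payload = json.dumps(event).encode("utf-8")
    return request.Request(
        url,
        data=payload,
        headers={"Authorization": write_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_keen_event_request(
    "PROJECT_ID", "WRITE_KEY", "purchases",
    {"item": "t-shirt", "price": 19.99},
)
print(req.full_url)
# request.urlopen(req)  # sends the event; requires real credentials
```

    Any JSON-serializable dictionary can be an event, which is what lets Keen collect data "from anything connected to the internet" with nothing more than an HTTP client.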
  • 31
    Indyco

    Indyco

    Start your top-down analysis from an aggregated view of a sample data platform, moving your mouse over the area you want to explore and finding out how it is connected to other company information. From redesigning the business model of a supply chain to Enterprise Data Platform practices in a banking company, here are some real business cases with Indyco as a data modeling tool. This process helped enhance the data culture within a leading company in Italy’s food and agriculture industry and paved the way for self-service reporting. Business users started interacting with the conceptual model projected on the wall, working with IT in a co-design session on their data platform. See how a bank set up Data Platform design best practices by adopting Indyco, then applying conceptual modeling, automatic documentation, and a business glossary.
  • 32
    Databricks Data Intelligence Platform
    The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data. The winners in every industry will be data and AI companies. From ETL to data warehousing to generative AI, Databricks helps you simplify and accelerate your data and AI goals. Databricks combines generative AI with the unification benefits of a lakehouse to power a Data Intelligence Engine that understands the unique semantics of your data. This allows the Databricks Platform to automatically optimize performance and manage infrastructure in ways unique to your business. The Data Intelligence Engine understands your organization’s language, so search and discovery of new data is as easy as asking a question like you would to a coworker.
  • 33
    DataWorks

    Alibaba Cloud

    DataWorks is a big data platform product launched by Alibaba Cloud. It provides one-stop big data development, data permission management, offline job scheduling, and other features. DataWorks works straight out of the box, without the need to worry about complex underlying cluster setup, operations, and management. You can drag and drop nodes to create a workflow, and you can edit and debug your code online and invite other developers to join you. It supports data integration, MaxCompute SQL, MaxCompute MR, machine learning, and shell tasks. It supports task monitoring and sends alerts when errors occur, to avoid service interruptions. It runs millions of tasks concurrently and supports hourly, daily, weekly, and monthly schedules. DataWorks is the best platform for building big data warehouses, providing comprehensive data warehousing services and a full solution for data aggregation, data processing, data governance, and data services.
  • 34
Jethro

Jethro

Jethro

Data-driven decision-making has unleashed a surge of business data and a rise in user demand to analyze it. This trend drives IT departments to migrate off expensive Enterprise Data Warehouses (EDWs) toward cost-effective Big Data platforms like Hadoop or AWS, which come with a Total Cost of Ownership (TCO) roughly 10 times lower. These new platforms are not ideal for interactive BI applications, however, as they fail to match the high performance and user concurrency of legacy EDWs. For this exact reason, we developed Jethro. Customers use Jethro for interactive BI on Big Data. Jethro is a transparent middle tier that requires no changes to existing apps or data; it is self-driving, with no maintenance required. Jethro is compatible with BI tools like Tableau, Qlik, and MicroStrategy and is data-source agnostic. Jethro delivers on the demands of business users, allowing thousands of concurrent users to run complex queries over billions of records.
  • 35
    Gravwell

    Gravwell

    Gravwell

Gravwell is an all-you-can-ingest data fusion analytics platform that enables complete context and root-cause analytics for security and business data. Gravwell was founded to bring the benefits of usable machine data to all customers: large or small, text or binary, security or operational. When experienced hackers and big data experts team up, you get an analytics platform capable of things never seen before. Gravwell enables security analytics that go well beyond log data, into industrial processes, vehicle fleets, IT infrastructure, or everything combined. Need to hunt down a suspected access breach? Gravwell can correlate building access logs and run facial-recognition machine learning against camera data to isolate multiple subjects entering a facility with a single badge-in. We exist to provide analytics capabilities to people who need more than just text log searching, and who need it sooner rather than later at a price they can afford.
  • 36
    ByPath

    ByPath

    ByPath

ByPath is a B2B sales intelligence solution built on Big Data analysis. Receive daily business alerts and the key information you need to target your prospecting effort and map your current business. Available on the web and as a mobile application, ByPath is a solution designed by salespeople for salespeople that helps you improve your performance at each step of the sales cycle. ByPath automatically reconstructs corporate organization charts, familiarizing you with targeted accounts, identifying influential contacts and key decision-makers, and so establishing the best points of contact. ByPath delivers key information about contacts, including their career record, business email, and phone number, as well as the most promising lead opportunities, press articles in which they are quoted, and direct links to their social network accounts.
  • 37
    IRI CoSort

    IRI CoSort

    IRI, The CoSort Company

What is CoSort? IRI CoSort® is a fast, affordable, and easy-to-use sort/merge/report utility and a full-featured data transformation and preparation package. The world's first sort product off the mainframe, CoSort continues to deliver maximum price-performance and functional versatility for manipulating and blending big data sources. CoSort also powers the IRI Voracity data management platform and many third-party tools. What does CoSort do? CoSort runs multi-threaded sort/merge jobs and many other high-volume (big data) manipulations, separately or in combination, and can cleanse, mask, convert, and report at the same time. Self-documenting 4GL scripts, supported in Eclipse™, help you speed up or replace legacy sort, ETL, and BI tools; COBOL and SQL programs; and Hadoop, Perl, Python, and other batch jobs. Use CoSort to sort, join, aggregate, and load 2-20x faster than data wrangling and BI tools, 10x faster than SQL transforms, and 6x faster than most ETL tools.
    Starting Price: From $4K USD perpetual use
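To illustrate the kind of multi-source sort/merge work a utility like CoSort automates (a generic Python stdlib sketch, not CoSort's actual engine or 4GL scripting), several pre-sorted inputs can be merged into one ordered stream with `heapq.merge`:

```python
import heapq

# Three pre-sorted input sources (stand-ins for sorted files or feeds).
source_a = [1, 4, 9]
source_b = [2, 3, 10]
source_c = [5, 6, 7, 8]

# heapq.merge lazily merges already-sorted iterables into one sorted stream,
# holding only one element per source at a time -- the core idea behind the
# merge phase of an external (big data) sort.
merged = list(heapq.merge(source_a, source_b, source_c))
print(merged)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Because the merge is lazy, the same pattern scales to inputs far larger than memory when the sources are file iterators rather than in-memory lists.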
  • 38
    Intelligent Artifacts

    Intelligent Artifacts

    Intelligent Artifacts

A new category of AI. Most current AI solutions are engineered through a statistical, purely mathematical lens. We took a different approach. Drawing on discoveries in information theory, the team at Intelligent Artifacts has built a new category of AI: a true AGI that eliminates the shortcomings of current machine intelligence. Our framework keeps the data and application layers separate from the intelligence layer, allowing it to learn in real time and enabling it to explain predictions down to their root cause. A true AGI demands a truly integrated platform. With Intelligent Artifacts, you model information, not data: predictions and decisions are real-time and transparent, and they can be deployed across various domains without rewriting code. And by combining specialized AI consultants with our dynamic platform, you get a customized solution that rapidly delivers deep insights and better outcomes from your data.
  • 39
    Lentiq

    Lentiq

    Lentiq

    Lentiq is a collaborative data lake as a service environment that’s built to enable small teams to do big things. Quickly run data science, machine learning and data analysis at scale in the cloud of your choice. With Lentiq, your teams can ingest data in real time and then process, clean and share it. From there, Lentiq makes it possible to build, train and share models internally. Simply put, data teams can collaborate with Lentiq and innovate with no restrictions. Data lakes are storage and processing environments, which provide ML, ETL, schema-on-read querying capabilities and so much more. Are you working on some data science magic? You definitely need a data lake. In the Post-Hadoop era, the big, centralized data lake is a thing of the past. With Lentiq, we use data pools, which are multi-cloud, interconnected mini-data lakes. They work together to give you a stable, secure and fast data science environment.
  • 40
    Amazon EMR

    Amazon EMR

    Amazon

Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open-source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. With EMR you can run petabyte-scale analysis at less than half the cost of traditional on-premises solutions and over 3x faster than standard Apache Spark. For short-running jobs, you can spin clusters up and down and pay per second for the instances used. For long-running workloads, you can create highly available clusters that automatically scale to meet demand. If you have existing on-premises deployments of open-source tools such as Apache Spark and Apache Hive, you can also run EMR clusters on AWS Outposts. Analyze data using open-source ML frameworks such as Apache Spark MLlib, TensorFlow, and Apache MXNet. Connect to Amazon SageMaker Studio for large-scale model training, analysis, and reporting.
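A minimal sketch of launching the kind of transient (spin-up/spin-down) cluster described above, using the AWS SDK for Python. The cluster name, instance types, and log bucket are illustrative assumptions, and the actual API call is commented out because it requires AWS credentials and incurs cost:

```python
# boto3 is the AWS SDK for Python; EMR clusters are created with run_job_flow.
# import boto3  # uncomment when running against a real AWS account

# Illustrative parameters for a short-lived Spark cluster that terminates
# itself when its steps finish (pay-per-second billing for the instances used).
job_flow = {
    "Name": "example-transient-spark",          # hypothetical cluster name
    "ReleaseLabel": "emr-6.15.0",               # an EMR release bundling Spark
    "Applications": [{"Name": "Spark"}],
    "Instances": {
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,   # terminate after the last step
    },
    "LogUri": "s3://my-bucket/emr-logs/",       # hypothetical log bucket
    "JobFlowRole": "EMR_EC2_DefaultRole",
    "ServiceRole": "EMR_DefaultRole",
}

# client = boto3.client("emr", region_name="us-east-1")
# response = client.run_job_flow(**job_flow)
# print(response["JobFlowId"])
```

Setting `KeepJobFlowAliveWhenNoSteps` to `False` is what makes the cluster transient; a long-running, auto-scaling cluster would set it to `True` and attach a managed scaling policy instead.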
  • 41
    Cogniteev

    Cogniteev

    Cogniteev

We provide an easy-to-use Data Access Automation Platform for producing customized data sets and derivative apps, such as search engines and data dashboards, that make data intelligible and actionable. Our solutions enable businesses to access the information they need, the way they need it, in order to optimize performance and achieve their business goals. Our powerful crawlers and connectors, driven by your business rules, mine the websites, cloud services, and internal systems of your choice for the information and data you need. The resulting data can also be fed back into your internal data systems to enhance how they are used.
  • 42
    Tamr

    Tamr

    Tamr

Tamr’s next-generation data mastering platform integrates machine learning with human feedback to break down data silos and continuously clean and deliver accurate data across your business. Tamr works with leading organizations around the world to solve their toughest data challenges. Tackle problems like duplicate records and errors to create a complete view of your data, from customers to products to suppliers. Feed clean data to analytics tools and operational systems with 80% less effort than traditional approaches. From Customer 360 to reference data management, Tamr helps financial firms stay data-driven and accelerate business outcomes. Tamr helps the public sector meet mission requirements sooner by reducing manual workflows for data entity resolution.
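To make the duplicate-record problem concrete, here is a toy stdlib sketch that flags near-duplicate customer names with simple string similarity for human review. This is a generic illustration of candidate generation in entity resolution, not Tamr's machine-learning approach; the records and threshold are invented:

```python
from difflib import SequenceMatcher

records = ["Acme Corp.", "ACME Corporation", "Globex Inc", "Acme Corp"]

def similar(a: str, b: str, threshold: float = 0.65) -> bool:
    """Flag two strings as likely duplicates by normalized similarity ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Compare every pair and collect likely duplicates for human review --
# the "human feedback" side of a mastering workflow.
candidates = [
    (records[i], records[j])
    for i in range(len(records))
    for j in range(i + 1, len(records))
    if similar(records[i], records[j])
]
print(candidates)  # the three Acme variants pair up; Globex matches nothing
```

Production mastering replaces the quadratic pairwise loop with blocking and learned similarity models, but the pipeline shape — generate candidate pairs, score them, route uncertain ones to humans — is the same.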
  • 43
    Apache Arrow

    Apache Arrow

    The Apache Software Foundation

Apache Arrow defines a language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware like CPUs and GPUs. The Arrow memory format also supports zero-copy reads for lightning-fast data access without serialization overhead. Arrow's libraries implement the format and provide building blocks for a range of use cases, including high-performance analytics. Many popular projects use Arrow to ship columnar data efficiently or as the basis for analytic engines. Apache Arrow is software created by and for the developer community. We are dedicated to open, kind communication and consensus decision-making. Our committers come from a range of organizations and backgrounds, and we welcome all to participate with us.
  • 44
    DataMax

    DataMax

    Digiterre

DataMax is an enterprise-ready platform that takes the most complex elements of real-time data management and makes them simple to develop, deploy, and operate at scale, enabling business change to be delivered at greater velocity. DataMax combines a unique architecture, process, and set of technologies that rapidly moves an organisation from disparate data and reporting sources to a single view of its data, giving it the insight it needs to run the business more effectively. The system combines technologies in a unique way to deliver enterprise-strength data management; the approach has been proven to scale and is cloud-deployable. DataMax covers both time-series and non-time-series data to produce analytics platforms that deliver a step change in the quality of the data, analysis, and reporting available to market analytics teams and, ultimately, to traders.
  • 45
    Tengu

    Tengu

    Tengu

TENGU is a DataOps orchestration platform that serves as a central workspace for data profiles of all levels. It provides data integration, extraction, transformation, and loading, all within its graph-view UI, in which you can intuitively monitor your data environment. Using the platform, business, analytics, and data teams need fewer meetings and service tickets to collect data and can start right away with the data relevant to furthering the company. The platform offers a unique graph view in which every element is automatically generated, with all available information based on metadata, while letting you perform all necessary actions from the same workspace. Enhance collaboration and efficiency with the ability to quickly add and share comments, documentation, tags, and groups. Thanks to its many automations, low-to-no-code functionality, and built-in assistant, the platform enables anyone to get straight to the data with self-service.
  • 46
    OpenText Magellan
Machine Learning and Predictive Analytics Platform. Augment data-driven decision-making and accelerate business with advanced artificial intelligence in a pre-built machine learning and big data analytics platform. OpenText Magellan uses AI technologies to provide predictive analytics in easy-to-consume, flexible data visualizations that maximize the value of business intelligence. Artificial intelligence software eliminates the need for manual big data processing by presenting valuable business insights in a way that is accessible and tied to the organization's most critical objectives. By augmenting business processes with a curated mix of capabilities, including predictive modeling, data discovery tools, data mining techniques, IoT data analytics, and more, organizations can use their data to improve decision-making based on real business intelligence and analytics.
  • 47
    Robin.io

    Robin.io

    Robin.io

ROBIN is the industry’s first hyper-converged Kubernetes platform for big data, databases, and AI/ML. The platform provides a self-service app-store experience for deploying any application anywhere: it runs on-premises in your private data center or in public-cloud (AWS, Azure, GCP) environments. Hyper-converged Kubernetes is a software-defined application orchestration framework that combines containerized storage, networking, compute (Kubernetes), and the application management layer into a single system. Our approach extends Kubernetes to data-intensive applications such as Hortonworks, Cloudera, the Elastic stack, RDBMS and NoSQL databases, and AI/ML apps. It facilitates simpler and faster roll-out of critical enterprise IT and LoB initiatives, such as containerization, cloud migration, cost consolidation, and productivity improvement, and it solves the fundamental challenges of running big data and databases in Kubernetes.
  • 48
    UQube

    UQube

    Upper Quadrant

    Through a familiar spreadsheet interface, field reps, payer marketers, partners, brand marketers, pricing and reimbursement professionals, and those working in managed markets can enter information in a permission-based application that rolls up to headquarters. Data can be disseminated through UQ subscription reporting or other third-party reporting tools. With a few clicks, generate the reports you need. Prioritize KPIs, determine what’s important, and flow information into multiple reporting environments. Secure sensitive data with user-specific permissions in both the collection process and dissemination process. Fill workflow gaps that exist between off-the-shelf spreadsheets and enterprise-wide solutions. Interconnect, harmonize, and synchronize data from one system to another.
  • 49
    Rinalogy Search
Almost any search query applied to Big Data returns a very large number of results, often far too many to review in practice. Every user has specific needs, and ranking results on a user query and general data statistics alone does not produce useful results. eDiscovery, healthcare, financial services, crime investigation, consulting, academia, and other fields need to find accurate information quickly. Rinalogy Search is a next-generation search tool that uses machine learning to learn interactively from each user, returning personalized results based on the user's feedback in real time. Rinalogy Search returns a relevancy score for each document in the results of every query. It can be deployed in a client's IT infrastructure, close to your data and behind your firewall. Rinalogy also lets users assign weights to search concepts to define their relative importance, which helps surface the results you are looking for.
    Starting Price: $50 per month
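As a toy illustration of concept weighting (a generic Python sketch with invented concepts and weights; Rinalogy's actual scoring model is not described here), a document's relevancy score can be computed as a weighted sum over the concepts a user cares about:

```python
# Hypothetical concept weights assigned by the user (higher = more important).
weights = {"merger": 3.0, "lawsuit": 1.5, "budget": 0.5}

def relevancy(document: str, weights: dict) -> float:
    """Score a document by summing the weights of each concept it mentions."""
    words = document.lower().split()
    return sum(w * words.count(term) for term, w in weights.items())

docs = [
    "quarterly budget review",
    "merger talks stall amid lawsuit",
    "merger merger merger",
]

# Rank documents by weighted relevancy, highest score first.
ranked = sorted(docs, key=lambda d: relevancy(d, weights), reverse=True)
print(ranked[0])  # the document mentioning "merger" most often ranks first
```

In an interactive system, the feedback loop would then adjust these weights from user judgments on each batch of results rather than leaving them fixed.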
  • 50
    Seerene

    Seerene

    Seerene

Seerene’s Digital Engineering Platform is a software analytics and process-mining technology that analyzes and visualizes your company's software development processes. It reveals weaknesses and turns your organization into a well-oiled machine, delivering software efficiently, cost-effectively, quickly, and with the highest quality. Seerene provides decision-makers with the information needed to actively drive their organization toward 360° software excellence. Reveal code that frequently contains defects and kills developer productivity. Reveal lighthouse teams and transfer their best-practice processes across the entire workforce. Reveal defect risks in release candidates with a holistic X-ray of code, development hotspots, and tests. Reveal features with a mismatch between invested developer time and created user value. Reveal code that is never executed by end users and produces unnecessary maintenance costs.