Alternatives to Apache Parquet

Compare Apache Parquet alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Apache Parquet in 2024. Compare features, ratings, user reviews, and pricing from Apache Parquet competitors to make an informed decision for your business.

  • 1
    Google Cloud BigQuery
    BigQuery is a serverless, multicloud data warehouse that simplifies the process of working with all types of data so you can focus on getting valuable business insights quickly. At the core of Google’s data cloud, BigQuery allows you to simplify data integration, cost-effectively and securely scale analytics, share rich data experiences with built-in business intelligence, and train and deploy ML models with a simple SQL interface, helping to make your organization’s operations more data-driven.
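    For illustration, a minimal sketch of the SQL interface described above, using the google-cloud-bigquery Python client; the project ID is a placeholder and the query runs against a public dataset:

    ```python
    # Minimal sketch: run standard SQL through the BigQuery client library.
    # Assumes google-cloud-bigquery is installed and default credentials are
    # configured; "my-project" is a placeholder project ID.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")

    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 10
    """
    for row in client.query(query).result():
        print(row["name"], row["total"])
    ```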
  • 2
    StarTree
    StarTree Cloud is a fully managed real-time analytics platform designed for OLAP at massive speed and scale for user-facing applications. Powered by Apache Pinot, StarTree Cloud provides enterprise-grade reliability and advanced capabilities such as tiered storage, scalable upserts, plus additional indexes and connectors. It integrates seamlessly with transactional databases and event streaming platforms, ingesting data at millions of events per second and indexing it for lightning-fast query responses. StarTree Cloud is available on your favorite public cloud or for private SaaS deployment.
    • Gain critical real-time insights to run your business
    • Seamlessly integrate data streaming and batch data
    • High throughput and low latency at petabyte scale
    • Fully managed cloud service
    • Tiered storage to optimize cloud performance & spend
    • Fully secure & enterprise-ready
  • 3
    Amazon Redshift
    More customers pick Amazon Redshift than any other cloud data warehouse. Redshift powers analytical workloads for Fortune 500 companies, startups, and everything in between. Companies like Lyft have grown with Redshift from startups to multi-billion dollar enterprises. No other data warehouse makes it as easy to gain new insights from all your data. With Redshift you can query petabytes of structured and semi-structured data across your data warehouse, operational database, and your data lake using standard SQL. Redshift lets you easily save the results of your queries back to your S3 data lake using open formats like Apache Parquet to further analyze from other analytics services like Amazon EMR, Amazon Athena, and Amazon SageMaker. Redshift is the world’s fastest cloud data warehouse and gets faster every year. For performance-intensive workloads you can use the new RA3 instances to get up to 3x the performance of any cloud data warehouse.
    Starting Price: $0.25 per hour
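    For illustration, a hedged sketch of the UNLOAD-to-Parquet flow described above, using the psycopg2 driver; the cluster endpoint, credentials, bucket, and IAM role are placeholders:

    ```python
    # Hedged sketch: persist Redshift query results to S3 as Parquet with
    # UNLOAD. Endpoint, credentials, bucket, and IAM role are placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="dev", user="awsuser", password="...",
    )
    with conn, conn.cursor() as cur:
        cur.execute("""
            UNLOAD ('SELECT event_date, revenue FROM sales')
            TO 's3://my-bucket/sales/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnload'
            FORMAT AS PARQUET;
        """)
    conn.close()
    ```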
  • 4
    Apache Iceberg
    Apache Software Foundation
    Iceberg is a high-performance format for huge analytic tables. Iceberg brings the reliability and simplicity of SQL tables to big data, while making it possible for engines like Spark, Trino, Flink, Presto, Hive and Impala to safely work with the same tables, at the same time. Iceberg supports flexible SQL commands to merge new data, update existing rows, and perform targeted deletes. Iceberg can eagerly rewrite data files for read performance, or it can use delete deltas for faster updates. Iceberg handles the tedious and error-prone task of producing partition values for rows in a table and skips unnecessary partitions and files automatically. No extra filters are needed for fast queries, and the table layout can be updated as data or queries change.
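    A sketch of the row-level SQL commands described above, from PySpark, assuming a Spark session already configured with an Iceberg catalog named demo (catalog, table, and column names are illustrative):

    ```python
    # Sketch: Iceberg row-level MERGE from Spark SQL. Assumes a session with
    # an Iceberg catalog named "demo"; table and columns are illustrative.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("iceberg-merge").getOrCreate()

    # Incoming rows to reconcile with the target table.
    updates = spark.createDataFrame(
        [(1, "click"), (2, "view")], ["event_id", "kind"])
    updates.createOrReplaceTempView("updates")

    # Update matching rows, insert the rest; Iceberg handles partitioning.
    spark.sql("""
        MERGE INTO demo.db.events AS t
        USING updates AS u
        ON t.event_id = u.event_id
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)
    ```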
  • 5
    DuckDB
    DuckDB is an in-process relational database management system (RDBMS) well suited to processing and storing tabular datasets, e.g. from CSV or Parquet files, and to transferring large result sets to the client; it is not aimed at large client/server installations for centralized enterprise data warehousing, nor at writing to a single database from multiple concurrent processes. An RDBMS is a system for managing data stored in relations, where a relation is essentially the mathematical term for a table. Each table is a named collection of rows. Each row of a given table has the same set of named columns, and each column is of a specific data type. Tables themselves are stored inside schemas, and a collection of schemas constitutes the entire database that you can access.
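    A minimal sketch of that workflow: DuckDB queries a Parquet file in place with SQL, no import step required (the file name and columns are placeholders):

    ```python
    # Minimal sketch: query a Parquet file in place with DuckDB's SQL.
    # "sales.parquet" and its columns are placeholders.
    import duckdb

    result = duckdb.sql("""
        SELECT category, AVG(price) AS avg_price
        FROM 'sales.parquet'
        GROUP BY category
        ORDER BY avg_price DESC
    """)
    result.show()  # or result.df() for a pandas DataFrame
    ```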
  • 6
    Delta Lake
    Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Data lakes typically have multiple data pipelines reading and writing data concurrently, and data engineers have to go through a tedious process to ensure data integrity, due to the lack of transactions. Delta Lake brings ACID transactions to your data lakes. It provides serializability, the strongest isolation level. Learn more at Diving into Delta Lake: Unpacking the Transaction Log. In big data, even the metadata itself can be "big data". Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease. Delta Lake provides snapshots of data, enabling developers to access and revert to earlier versions of data for audits, rollbacks or to reproduce experiments.
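    A hedged sketch of the ACID write and snapshot ("time travel") features described above, assuming PySpark with the delta-spark package installed; the path is a placeholder:

    ```python
    # Hedged sketch: ACID write plus time travel with delta-spark.
    # The session configs follow the Delta docs; /tmp/events is a placeholder.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("delta-demo")
        .config("spark.sql.extensions",
                "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    spark.range(100).write.format("delta").mode("overwrite").save("/tmp/events")

    # Read the first snapshot back for an audit or rollback.
    v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events")
    v0.show()
    ```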
  • 7
    Apache HBase
    The Apache Software Foundation
    Use Apache HBase™ when you need random, real-time read/write access to your Big Data. This project's goal is the hosting of very large tables (billions of rows × millions of columns) atop clusters of commodity hardware. Automatic failover support between RegionServers. Easy-to-use Java API for client access. Thrift gateway and a RESTful web service that supports XML, Protobuf, and binary data encoding options. Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX.
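    A sketch of random read/write access through the Thrift gateway mentioned above, using the third-party happybase Python client; the host, table, and column family are placeholders, and the HBase Thrift server is assumed to be running:

    ```python
    # Sketch: random read/write via the Thrift gateway with happybase.
    # Host and table are placeholders; the HBase Thrift server must be running.
    import happybase

    connection = happybase.Connection("hbase-thrift-host")
    table = connection.table("metrics")

    # Cells are addressed as column-family:qualifier; values are raw bytes.
    table.put(b"row-1", {b"cf:temp": b"21.5"})
    print(table.row(b"row-1"))
    connection.close()
    ```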
  • 8
    Apache Kudu
    The Apache Software Foundation
    A Kudu cluster stores tables that look just like tables you're used to from relational (SQL) databases. A table can be as simple as a binary key and value, or as complex as a few hundred different strongly-typed attributes. Just like SQL, every table has a primary key made up of one or more columns. This might be a single column like a unique user identifier, or a compound key such as a (host, metric, timestamp) tuple for a machine time-series database. Rows can be efficiently read, updated, or deleted by their primary key. Kudu's simple data model makes it a breeze to port legacy applications or build new ones, with no need to worry about how to encode your data into binary blobs or make sense of a huge database full of hard-to-interpret JSON. Tables are self-describing, so you can use standard tools like SQL engines or Spark to analyze your data. Kudu's APIs are designed to be easy to use.
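    A hedged sketch of the (host, metric, timestamp) compound-key example above, using the kudu-python client; the master address and table name are placeholders:

    ```python
    # Hedged sketch: a strongly-typed table with a compound primary key,
    # using kudu-python. Master address and table name are placeholders.
    import kudu
    from kudu.client import Partitioning

    client = kudu.connect(host="kudu-master", port=7051)

    builder = kudu.schema_builder()
    builder.add_column("host").type(kudu.string).nullable(False)
    builder.add_column("metric").type(kudu.string).nullable(False)
    builder.add_column("timestamp").type(kudu.unixtime_micros).nullable(False)
    builder.add_column("value").type(kudu.double)
    builder.set_primary_keys(["host", "metric", "timestamp"])
    schema = builder.build()

    partitioning = Partitioning().add_hash_partitions(
        column_names=["host"], num_buckets=3)
    client.create_table("metrics", schema, partitioning)
    ```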
  • 9
    Apache Druid
    Apache Druid is an open source distributed data store. Druid’s core design combines ideas from data warehouses, timeseries databases, and search systems to create a high-performance real-time analytics database for a broad range of use cases. Druid merges key characteristics of each of these three systems into its ingestion layer, storage format, querying layer, and core architecture. Druid stores and compresses each column individually, and only needs to read the ones needed for a particular query, which supports fast scans, rankings, and groupBys. Druid creates inverted indexes for string values for fast search and filter. Out-of-the-box connectors for Apache Kafka, HDFS, AWS S3, stream processors, and more. Druid intelligently partitions data based on time, so time-based queries are significantly faster than in traditional databases. Scale up or down by just adding or removing servers, and Druid automatically rebalances. Fault-tolerant architecture routes around server failures.
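    A sketch of a time-filtered query against Druid's SQL endpoint (/druid/v2/sql); the router host and the "wikipedia" datasource are placeholders:

    ```python
    # Sketch: a time-filtered aggregation against Druid's SQL endpoint.
    # The router host and "wikipedia" datasource are placeholders.
    import requests

    resp = requests.post(
        "http://druid-router:8888/druid/v2/sql",
        json={"query": """
            SELECT channel, COUNT(*) AS edits
            FROM wikipedia
            WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
            GROUP BY channel
            ORDER BY edits DESC
            LIMIT 5
        """},
    )
    print(resp.json())
    ```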
  • 10
    qikkDB
    qikkDB is a GPU-accelerated columnar database, delivering stellar performance for complex polygon operations and big data analytics. When you count your data in billions and want to see real-time results, you need qikkDB. We support the Windows and Linux operating systems. We use Google Test as the testing framework; there are hundreds of unit tests and tens of integration tests in the project. For development on Windows, Microsoft Visual Studio 2019 is recommended, and the dependencies are CUDA 10.2 or newer, CMake 3.15 or newer, vcpkg, and Boost. For development on Linux, the dependencies are CUDA 10.2 or newer, CMake 3.15 or newer, and Boost. This project is licensed under the Apache License, Version 2.0. You can use an installation script or dockerfile to install qikkDB.
  • 11
    DataStax
    The open, multi-cloud stack for modern data apps. Built on open-source Apache Cassandra™. Global scale and 100% uptime without vendor lock-in. Deploy on multi-cloud, on-prem, open-source, and Kubernetes. Elastic and pay-as-you-go for improved TCO. Start building faster with Stargate APIs for NoSQL, real-time, reactive, JSON, REST, and GraphQL. Skip the complexity of multiple OSS projects and APIs that don’t scale. Ideal for commerce, mobile, AI/ML, IoT, microservices, social, gaming, and richly interactive applications that must scale up and down with demand. Get building modern data applications with Astra, a database-as-a-service powered by Apache Cassandra™. Use REST, GraphQL, and JSON with your favorite full-stack framework. Richly interactive apps that are elastic and viral-ready from day one. A pay-as-you-go Apache Cassandra DBaaS that scales effortlessly and affordably.
  • 12
    Rockset
    Real-Time Analytics on Raw Data. Live ingest from S3, Kafka, DynamoDB & more. Explore raw data as SQL tables. Build amazing data-driven applications & live dashboards in minutes. Rockset is a serverless search and analytics engine that powers real-time apps and live dashboards. Operate directly on raw data, including JSON, XML, CSV, Parquet, XLSX or PDF. Plug data from real-time streams, data lakes, databases, and data warehouses into Rockset. Ingest real-time data without building pipelines. Rockset continuously syncs new data as it lands in your data sources without the need for a fixed schema. Use familiar SQL, including joins, filters, and aggregations. It’s blazing fast, as Rockset automatically indexes all fields in your data. Serve fast queries that power the apps, microservices, live dashboards, and data science notebooks you build. Scale without worrying about servers, shards, or pagers.
  • 13
    Sadas Engine
    Sadas Engine is the fastest columnar database management system, both in the cloud and on-premise. Turn data into information with a columnar DBMS able to perform 100 times faster than transactional DBMSs and to carry out searches on huge quantities of data over a period even longer than 10 years. Every day we work to ensure impeccable service and appropriate solutions to enhance the activities of your specific business. SADAS srl, a company of the AS Group, is dedicated to the development of Business Intelligence solutions, data analysis applications and DWH tools, relying on cutting-edge technology. The company operates in many sectors: banking, insurance, leasing, commercial, media and telecommunications, and the public sector. Innovative software solutions for daily management needs and decision-making processes, in any sector.
  • 14
    Google Cloud Bigtable
    Google Cloud Bigtable is a fully managed, scalable NoSQL database service for large analytical and operational workloads. Fast and performant: Use Cloud Bigtable as the storage engine that grows with you from your first gigabyte to petabyte-scale for low-latency applications as well as high-throughput data processing and analytics. Seamless scaling and replication: Start with a single node per cluster, and seamlessly scale to hundreds of nodes dynamically supporting peak demand. Replication also adds high availability and workload isolation for live serving apps. Simple and integrated: Fully managed service that integrates easily with big data tools like Hadoop, Dataflow, and Dataproc. Plus, support for the open source HBase API standard makes it easy for development teams to get started.
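    A hedged sketch of a single-row write and read with the google-cloud-bigtable client; the project, instance, table, and "stats" column family are placeholders, and the table is assumed to already exist:

    ```python
    # Hedged sketch: one write and one read with google-cloud-bigtable.
    # Project, instance, table, and the "stats" column family are
    # placeholders; the table is assumed to already exist.
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("my-instance").table("metrics")

    row = table.direct_row(b"device#1234")
    row.set_cell("stats", b"temp", b"21.5")
    row.commit()

    cell = table.read_row(b"device#1234").cells["stats"][b"temp"][0]
    print(cell.value)
    ```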
  • 15
    InfiniDB
    Database of Databases
    InfiniDB is a column-store DBMS optimized for OLAP workloads. It has a distributed architecture to support massively parallel processing (MPP). It uses MySQL as its front-end, such that users familiar with MySQL can quickly migrate to InfiniDB; because of this, users can connect to InfiniDB using any MySQL connector. InfiniDB applies MVCC for concurrency control. It uses the term System Change Number (SCN) to indicate a version of the system. In its Block Resolution Manager (BRM), it utilizes three structures, the version buffer, the version substitution structure, and the version buffer block manager, to manage multiple versions. InfiniDB applies deadlock detection to resolve conflicts. InfiniDB uses MySQL as its front-end and supports all MySQL syntax, including foreign keys. InfiniDB is a columnar DBMS. For each column, InfiniDB applies range partitioning and stores the minimum and maximum value of each partition in a small structure called the extent map.
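    Since InfiniDB exposes a MySQL front-end, any MySQL connector can reach it; a sketch with mysql-connector-python, where the host, credentials, and "orders" table are placeholders:

    ```python
    # Sketch: InfiniDB speaks the MySQL protocol, so any MySQL connector
    # works. Host, credentials, and the "orders" table are placeholders.
    import mysql.connector

    conn = mysql.connector.connect(
        host="infinidb-host", port=3306,
        user="user", password="...", database="analytics")
    cur = conn.cursor()
    cur.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
    for region, total in cur:
        print(region, total)
    conn.close()
    ```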
  • 16
    ClickHouse
    ClickHouse is a fast open-source OLAP database management system. It is column-oriented and allows you to generate analytical reports using SQL queries in real time. ClickHouse's performance exceeds that of comparable column-oriented database management systems currently available on the market. It processes hundreds of millions to more than a billion rows, and tens of gigabytes of data, per single server per second. ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, counting only used columns). In a distributed setup, reads are automatically balanced among healthy replicas to avoid increasing latency. ClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which avoids single points of failure.
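    A sketch of a query over ClickHouse's HTTP interface (port 8123 by default); the host and "events" table are placeholders, and dedicated drivers such as clickhouse-connect expose the same capability through a richer API:

    ```python
    # Sketch: ClickHouse's HTTP interface accepts SQL directly (default
    # port 8123). Host and the "events" table are placeholders.
    import requests

    query = """
        SELECT toStartOfHour(ts) AS hour, count() AS hits
        FROM events
        GROUP BY hour
        ORDER BY hour
        FORMAT TSV
    """
    resp = requests.post("http://clickhouse-host:8123/", data=query)
    print(resp.text)
    ```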
  • 17
    Hypertable
    Hypertable delivers scalable database capacity at maximum performance to speed up your big data application and reduce your hardware footprint. Hypertable delivers maximum efficiency and superior performance over the competition, which translates into major cost savings. A proven scalable design that powers hundreds of Google services. All the benefits of open source, with a strong and thriving community. C++ implementation for optimum performance. 24/7/365 support for your business-critical big data application. Unparalleled access to Hypertable expertise from the employer of all core Hypertable developers. Hypertable was designed for the express purpose of solving the scalability problem, a problem that is not handled well by a traditional RDBMS. Hypertable is based on a design developed by Google to meet their scalability requirements and solves the scale problem better than any of the other NoSQL solutions out there.
  • 18
    Querona
    YouNeedIT
    We make BI & Big Data analytics work easier and faster. Our goal is to empower business users and make always-busy business people and heavily loaded BI specialists less dependent on each other when solving data-driven business problems. If you have ever experienced a lack of the data you needed, time-consuming report generation, or a long queue to your BI expert, consider Querona. Querona uses a built-in Big Data engine to handle growing data volumes. Repeatable queries can be cached or calculated in advance. Optimization needs less effort, as Querona automatically suggests query improvements. Querona empowers business analysts and data scientists by putting self-service in their hands. They can easily discover and prototype data models, add new data sources, experiment with query optimization and dig into raw data. Less IT is needed. Now users can get live data no matter where it is stored. If databases are too busy to be queried live, Querona will cache the data.
  • 19
    Greenplum
    Greenplum Database
    Greenplum Database® is an advanced, fully featured, open source data warehouse. It provides powerful and rapid analytics on petabyte-scale data volumes. Uniquely geared toward big data analytics, Greenplum Database is powered by the world’s most advanced cost-based query optimizer, delivering high analytical query performance on large data volumes. The Greenplum Database® project is released under the Apache 2 license. We want to thank all our current community contributors and are interested in all new potential contributions. For the Greenplum Database community, no contribution is too small; we encourage all types of contributions. An open-source massively parallel data platform for analytics, machine learning and AI. Rapidly create and deploy models for complex applications in cybersecurity, predictive maintenance, risk management, fraud detection, and many other areas. Experience the fully featured, integrated, open source analytics platform.
  • 20
    kdb+
    Kx Systems
    A high-performance cross-platform historical time-series columnar database featuring:
    - An in-memory compute engine
    - A real-time streaming processor
    - An expressive query and programming language called q
  • 21
    Snowflake
    Your cloud data platform. Secure and easy access to any data with infinite scalability. Get all the insights from all your data by all your users, with the instant and near-infinite performance, concurrency and scale your organization requires. Seamlessly share and consume shared data to collaborate across your organization, and beyond, to solve your toughest business problems in real time. Boost the productivity of your data professionals and shorten your time to value in order to deliver modern and integrated data solutions swiftly from anywhere in your organization. Whether you’re moving data into Snowflake or extracting insight out of Snowflake, our technology partners and system integrators will help you deploy Snowflake for your success.
    Starting Price: $40.00 per month
  • 22
    CrateDB
    The enterprise database for time series, documents, and vectors. Store any type of data and combine the simplicity of SQL with the scalability of NoSQL. CrateDB is an open source distributed database running queries in milliseconds, whatever the complexity, volume and velocity of data.
  • 23
    Vertica
    OpenText
    The Unified Analytics Warehouse. The highest-performing analytics and machine learning at extreme scale. As the criteria for data warehousing continue to evolve, tech research analysts are seeing new leaders in the drive for game-changing big data analytics. Vertica powers data-driven enterprises so they can get the most out of their analytics initiatives with advanced time-series and geospatial analytics, in-database machine learning, data lake integration, user-defined extensions, cloud-optimized architecture, and more. Our Under the Hood webcast series lets you dive deep into Vertica features, delivered by Vertica engineers and technical experts, to find out what makes it the fastest and most scalable advanced analytical database on the market. From ride-sharing apps and smart agriculture to predictive maintenance and customer analytics, Vertica supports the world’s leading data-driven disruptors in their pursuit of industry and business transformation.
  • 24
    Apache Pinot
    Apache Software Foundation
    Pinot is designed to answer OLAP queries with low latency on immutable data. Pluggable indexing technologies: sorted index, bitmap index, inverted index. Joins are currently not supported, but this can be overcome by using Trino or PrestoDB for querying. An SQL-like language supports selection, aggregation, filtering, group by, order by, and distinct queries on data. A deployment consists of both offline and real-time tables; use a real-time table only to cover segments for which offline data may not be available yet. Detect the right anomalies by customizing the anomaly detection flow and notification flow.
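    A hedged sketch of querying a Pinot broker through the pinotdb DB-API client; the broker host and "events" table are placeholders:

    ```python
    # Hedged sketch: query a Pinot broker via the pinotdb DB-API client.
    # Broker host and the "events" table are placeholders.
    from pinotdb import connect

    conn = connect(host="pinot-broker", port=8099,
                   path="/query/sql", scheme="http")
    cur = conn.cursor()
    cur.execute("""
        SELECT country, COUNT(*) AS cnt
        FROM events
        GROUP BY country
        ORDER BY cnt DESC
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)
    ```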
  • 25
    Azure Table Storage
    Use Azure Table storage to store petabytes of semi-structured data and keep costs down. Unlike many data stores—on-premises or cloud-based—Table storage lets you scale up without having to manually shard your dataset. Availability also isn’t a concern: using geo-redundant storage, stored data is replicated three times within a region—and an additional three times in another region, hundreds of miles away. Table storage is excellent for flexible datasets—web app user data, address books, device information, and other metadata—and lets you build cloud applications without locking down the data model to particular schemas. Because different rows in the same table can have a different structure—for example, order information in one row, and customer information in another—you can evolve your application and table schema without taking it offline. Table storage embraces a strong consistency model.
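    A sketch of the schemaless-rows idea described above, using the azure-data-tables SDK; the connection string and table name are placeholders, and note that the two entities share one table while carrying different properties:

    ```python
    # Sketch: two entities with different properties sharing one table,
    # via azure-data-tables. Connection string and table are placeholders.
    from azure.data.tables import TableServiceClient

    service = TableServiceClient.from_connection_string("<connection-string>")
    table = service.create_table_if_not_exists("appdata")

    table.create_entity({"PartitionKey": "orders", "RowKey": "1",
                         "Item": "widget", "Quantity": 3})
    table.create_entity({"PartitionKey": "customers", "RowKey": "1",
                         "Name": "Ada", "City": "London"})

    for entity in table.query_entities("PartitionKey eq 'orders'"):
        print(entity)
    ```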
  • 26
    MariaDB
    MariaDB Platform is a complete enterprise open source database solution. It has the versatility to support transactional, analytical and hybrid workloads as well as relational, JSON and hybrid data models. And it has the scalability to grow from standalone databases and data warehouses to fully distributed SQL for executing millions of transactions per second and performing interactive, ad hoc analytics on billions of rows. MariaDB can be deployed on prem on commodity hardware, is available on all major public clouds and through MariaDB SkySQL as a fully managed cloud database. To learn more, visit mariadb.com.
  • 27
    Apache Cassandra
    Apache Software Foundation
    The Apache Cassandra database is the right choice when you need scalability and high availability without compromising performance. Linear scalability and proven fault-tolerance on commodity hardware or cloud infrastructure make it the perfect platform for mission-critical data. Cassandra's support for replicating across multiple datacenters is best-in-class, providing lower latency for your users and the peace of mind of knowing that you can survive regional outages.
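    A minimal sketch of basic CQL access with the DataStax cassandra-driver; the contact point, keyspace, and table are placeholders:

    ```python
    # Minimal sketch with the DataStax cassandra-driver; contact point,
    # keyspace, and table are placeholders.
    import uuid

    from cassandra.cluster import Cluster

    cluster = Cluster(["cassandra-host"])
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS demo WITH replication =
        {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS demo.users
        (user_id uuid PRIMARY KEY, name text)
    """)

    session.execute("INSERT INTO demo.users (user_id, name) VALUES (%s, %s)",
                    (uuid.uuid4(), "Ada"))
    for row in session.execute("SELECT name FROM demo.users"):
        print(row.name)
    cluster.shutdown()
    ```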
  • 28
    MonetDB
    Choose from a wide range of SQL features to realise your applications, from pure analytics to hybrid transactional/analytical processing. When you're curious about what's in your data, when you want to work efficiently, when your deadline is closing: MonetDB returns query results in mere seconds or even less. When you want to (re)use your own code or need specialised functions, use the hooks to add your own user-defined functions in SQL, Python, R or C/C++. Join us and expand the MonetDB community spread over 130+ countries with students, teachers, researchers, start-ups, small businesses and multinational enterprises. Join the leading database in analytical jobs and surf the innovation! Don’t lose time with complex installation; use MonetDB’s easy setup to get your DBMS up and running quickly.
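    A hedged sketch of connecting through the pymonetdb DB-API driver; the credentials and database name are placeholders:

    ```python
    # Hedged sketch: connect to MonetDB via the pymonetdb DB-API driver;
    # credentials and database name are placeholders.
    import pymonetdb

    conn = pymonetdb.connect(username="monetdb", password="monetdb",
                             hostname="localhost", database="demo")
    cur = conn.cursor()
    cur.execute("SELECT name, schema_id FROM sys.tables LIMIT 5")
    for row in cur.fetchall():
        print(row)
    conn.close()
    ```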
  • 29
    Upsolver
    Upsolver makes it incredibly simple to build a governed data lake and to manage, integrate and prepare streaming data for analysis. Define pipelines using only SQL on auto-generated schema-on-read. Easy visual IDE to accelerate building pipelines. Add upserts and deletes to data lake tables. Blend streaming and large-scale batch data. Automated schema evolution and reprocessing from previous state. Automatic orchestration of pipelines (no DAGs). Fully-managed execution at scale. Strong consistency guarantee over object storage. Near-zero maintenance overhead for analytics-ready data. Built-in hygiene for data lake tables including columnar formats, partitioning, compaction and vacuuming. 100,000 events per second (billions daily) at low cost. Continuous lock-free compaction to avoid the “small files” problem. Parquet-based tables for fast queries.
  • 30
    Rons Data Stream
    Rons Place Software
    Rons Data Stream is a Windows application designed to clean, or update, multiple data sources within seconds, whatever the size of the files, through the use of Cleaners. "Cleaners" are made up of a list of operations that are selected from a broad list of Column, Row and Cell processing rules. They can be built, saved and applied to as many data sources as required, and re-used with as many Jobs as needed. The Preview window displays both the original data and a preview of the processed data, so the result of each rule is clear and comprehensible. "Jobs" contain all the detail needed for batch processing, allowing hundreds of data files to be processed in one go and making cleaning a whole directory an easy task. Rons Data Stream handles tabular text formats (CSV, HTML, XML files and tokenized formats), SQL and Parquet, from loading to converting. It can work individually or hand in hand with Rons Data Edit, adding power to both applications.
  • 31
    ParadeDB
    ParadeDB brings column-oriented storage and vectorized query execution to Postgres tables. Users can choose between row and column-oriented storage at table creation time. Column-oriented tables are stored as Parquet files and are managed by Delta Lake. Search by keyword with BM25 scoring, configurable tokenizers, and multi-language support. Search by semantic meaning with support for sparse and dense vectors. Surface results with higher accuracy by combining the strengths of full text and similarity search. ParadeDB is ACID-compliant with concurrency controls across all transactions. ParadeDB integrates with the Postgres ecosystem, including clients, extensions, and libraries.
  • 32
    IBM Cloud SQL Query
    Serverless, interactive querying for analyzing data in IBM Cloud Object Storage. Query your data directly where it is stored; there is no ETL, no databases, and no infrastructure to manage. IBM Cloud SQL Query uses Apache Spark, an open-source, fast, extensible, in-memory data processing engine optimized for low latency and ad hoc analysis of data. No ETL or schema definition is needed to enable SQL queries. Analyze data where it sits in IBM Cloud Object Storage using our query editor and REST API. Run as many queries as you need; with pay-per-query pricing, you pay only for the data scan. Compress or partition data to drive savings and performance. IBM Cloud SQL Query is highly available and executes queries using compute resources across multiple facilities. IBM Cloud SQL Query supports a variety of data formats such as CSV, JSON and Parquet, and allows for standard ANSI SQL.
    Starting Price: $5.00/Terabyte-Month
  • 33
    Gzip
    GNU Operating System
    GNU Gzip is a popular data compression program originally written by Jean-loup Gailly for the GNU project. Mark Adler wrote the decompression part. We developed this program as a replacement for compress because of the Unisys and IBM patents covering the LZW algorithm used by compress. These patents made it impossible for us to use compress, and we needed a replacement. The superior compression ratio of gzip is just a bonus. Stable source releases are available on the main GNU download server (HTTPS, HTTP, FTP) and its mirrors; please use a mirror if possible. gzip reduces the size of the named files using Lempel-Ziv coding (LZ77). Whenever possible, each file is replaced by one with the extension ‘.gz’, while keeping the same ownership modes, access, and modification times. (The default extension is ‘z’ for MSDOS, OS/2 FAT, and Atari.) If no files are specified, the standard input is compressed to the standard output.
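    The same LZ77/DEFLATE scheme is also exposed by Python's standard-library gzip module; a sketch mirroring `gzip file` and `gzip -d file.gz` ("report.txt" is a placeholder file):

    ```python
    # Compress and decompress with Python's standard-library gzip module;
    # "report.txt" is a placeholder file name.
    import gzip
    import shutil

    # Compress (like `gzip report.txt`, but keeping the original).
    with open("report.txt", "rb") as src, \
            gzip.open("report.txt.gz", "wb") as dst:
        shutil.copyfileobj(src, dst)

    # Decompress (like `gzip -d report.txt.gz`).
    with gzip.open("report.txt.gz", "rb") as src:
        data = src.read()
    ```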
  • 34
    IRI DarkShield
    IRI, The CoSort Company
    IRI DarkShield is a powerful data masking tool that can (simultaneously) find and anonymize Personally Identifiable Information (PII) "hidden" in semi-structured and unstructured files and database columns/collections. DarkShield jobs are configured, logged, and run from IRI Workbench or a RESTful RPC (web services) API to encrypt, redact, blur, etc., the PII it finds in:
    * NoSQL & RDBs
    * PDFs
    * Parquet
    * JSON, XML & CSV
    * Excel & Word
    * BMP, DICOM, GIF, JPG & TIFF
    DarkShield is one of 3 data masking products in the IRI Data Protector Suite, and comes with IRI Voracity data management platform subscriptions. DarkShield bridges the gap between structured and unstructured data masking, allowing users to secure data in a consistent manner across disparate silos and formats by using the same masking functions as FieldShield and CellShield EE. DarkShield handles data in RDBs and flat files as well, though FieldShield offers more capabilities for those sources.
  • 35
    IRI Data Protector Suite
    IRI, The CoSort Company
    The IRI Data Protector suite contains multiple data masking products which can be licensed standalone or in a discounted bundle to profile, classify, search, mask, and audit PII and other sensitive information in structured, semi-structured, and unstructured data sources. Apply their many masking functions consistently for referential integrity:
    IRI FieldShield® Structured Data Masking: FieldShield classifies, finds, de-identifies, risk-scores, and audits PII in databases, flat files, JSON, etc.
    IRI DarkShield® Semi- & Unstructured Data Masking: DarkShield classifies, finds, and deletes PII in text, PDF, Parquet, C/BLOBs, MS documents, logs, NoSQL DBs, images, and faces.
    IRI CellShield® Excel® Data Masking: CellShield finds, reports on, masks, and audits changes to PII in Excel columns and values LAN-wide or in the cloud.
    IRI Data Masking as a Service: IRI DMaaS engineers in the US and abroad do the work of classifying, finding, masking, and risk-scoring PII for you.
  • 36
    Row Zero
    Row Zero is the best spreadsheet for big data. Row Zero matches the experience of traditional spreadsheets but can handle 1+ billion rows, process data much faster, and connect live to your data warehouse and other data sources. Row Zero spreadsheets are powerful enough to pull entire database tables into a spreadsheet, letting non-technical users build live pivot tables, graphs, models, and metrics on data from your data warehouse. Row Zero also offers advanced security features and is cloud-based, empowering organizations to eliminate ungoverned CSV exports and locally stored spreadsheets from their organization. With Row Zero, you can easily open, edit, and share multi-GB files (CSV, Parquet, TXT, etc.). Row Zero has all of the spreadsheet features you know and love, but was built for big data. If you know how to use Excel or Google Sheets, you can get started with ease.
    Starting Price: $8/month/user
  • 37
    Optimage
    Automatically compress images, achieving the highest compression ratio at consistent image quality. Optimage is a simple yet powerful image optimization tool that implements many best practices for using images on the web and mobile. It is the first tool to achieve visually lossless compression in a comprehensive set of third-party tests and the new state of the art in image compression. It can resize and convert common image and video formats, and keep the best quality required for professional photography. It is designed to make automatic image optimization accessible and inclusive to everyone. Thousands of people have been successfully using Optimage to optimize their images. Optimage uses novel perceptual metrics, improved encoders, and advanced image reduction and data compression algorithms to reduce image size by up to 90% without losing visual quality.
    Starting Price: $15 per month
  • 38
    Raijin
    RAIJINDB
    In order to deal with sparse data, the Raijin Database uses a flat JSON representation for the data records. The Raijin Database supports SQL as its primary query language while lifting some of SQL's limitations. Data compression not only saves disk space but provides a performance boost with modern CPUs. Most NoSQL solutions are inefficient at, or totally lack support for, analytical queries. Raijin DB supports group by and aggregations using standard SQL syntax. Vectorized execution and cache-friendly algorithms allow large amounts of data to be operated on. Backed by optimized SIMD instructions (SSE2/AVX2) and a modern compressed hybrid columnar storage layer, it ensures that your CPUs are not wasting cycles. This gives unparalleled data-crunching capability, an order of magnitude faster than other solutions written in higher-level or even interpreted languages, which are inefficient at processing large amounts of data.
  • 39
    SAS Data Loader for Hadoop
    Load your data into or out of Hadoop and data lakes. Prep it so it's ready for reports, visualizations or advanced analytics – all inside the data lakes. And do it all yourself, quickly and easily. Makes it easy to access, transform and manage data stored in Hadoop or data lakes with a web-based interface that reduces training requirements. Built from the ground up to manage big data on Hadoop or in data lakes; not repurposed from existing IT-focused tools. Lets you group multiple directives to run simultaneously or one after the other. Schedule and automate directives using the exposed Public API. Enables you to share and secure directives. Call them from SAS Data Integration Studio, uniting technical and nontechnical user activities. Includes built-in directives – casing, gender and pattern analysis, field extraction, match-merge and cluster-survive. Profiling runs in-parallel on the Hadoop cluster for better performance.
  • 40
    QuasarDB
    Quasar's brain is QuasarDB, a high-performance, distributed, column-oriented time-series database management system designed from the ground up to deliver real-time performance on petascale use cases. Up to 20X less disk usage. QuasarDB ingestion and compression capabilities are unmatched. Up to 10,000X faster feature extraction. QuasarDB can extract features in real time from raw data, thanks to the combination of a built-in map/reduce query engine, an aggregation engine that leverages SIMD from modern CPUs, and stochastic indexes that use virtually no disk space. The most cost-effective time-series solution, thanks to its ultra-efficient resource usage, the capability to leverage object storage (S3), unique compression technology, and a fair pricing model. Quasar runs everywhere, from 32-bit ARM devices to high-end Intel servers, from edge computing to the cloud or on-premises.
  • 41
    Hackolade
    Hackolade is the pioneer for data modeling of NoSQL and multi-model databases, providing a comprehensive suite of data modeling tools for various NoSQL databases and APIs. Hackolade is the only data modeling tool for MongoDB, Neo4j, Cassandra, ArangoDB, BigQuery, Couchbase, Cosmos DB, Databricks, DocumentDB, DynamoDB, Elasticsearch, EventBridge Schema Registry, Glue Data Catalog, HBase, Hive, Firebase/Firestore, JanusGraph, MariaDB, MarkLogic, MySQL, Oracle, PostgreSQL, Redshift, ScyllaDB, Snowflake, SQL Server, Synapse, TinkerPop, YugabyteDB, etc. It also applies its visual design to Avro, JSON Schema, Parquet, Protobuf, Swagger and OpenAPI, and is rapidly adding new targets for its physical data modeling engine.
    Starting Price: €100 per month
  • 42
    E-MapReduce
    EMR is an all-in-one enterprise-ready big data platform that provides cluster, job, and data management services based on open-source ecosystems, such as Hadoop, Spark, Kafka, Flink, and Storm. Alibaba Cloud Elastic MapReduce (EMR) is a big data processing solution that runs on the Alibaba Cloud platform. EMR is built on Alibaba Cloud ECS instances and is based on open-source Apache Hadoop and Apache Spark. EMR allows you to use the Hadoop and Spark ecosystem components, such as Apache Hive, Apache Kafka, Flink, Druid, and TensorFlow, to analyze and process data. You can use EMR to process data stored on different Alibaba Cloud data storage services, such as Object Storage Service (OSS), Log Service (SLS), and Relational Database Service (RDS). You can quickly create clusters without the need to configure hardware and software. All maintenance operations are completed through its web interface.
  • 43
    MasterCheck
    NUGEN Audio
    MasterCheck is the complete optimization solution for today’s delivery services, a plug-in providing the tools to make sure your music reaches the listener as intended. Streaming apps, download stores and podcasts all use data compression, loudness normalization or both. These processes can affect your track in undesirable ways: your loud, punchy mix could end up quiet and flat, or suffer clipping and distortion. MasterCheck reveals these problems ahead of time, and enables you to deliver masters perfectly tuned for specific playout systems. MasterCheck demonstrates the effects of loudness normalization so you can find the sweet spot between perceived loudness and dynamics, and allows you to hear artefacts introduced by the encoding process ahead of time. You can quickly find the point where these processes will start to negatively impact the music, putting you back in control.
    Starting Price: $199 one-time payment
  • 44
    TDR Kotelnikov
    Tokyo Dawn Records
    TDR Kotelnikov is a wideband dynamics processor combining high fidelity dynamic range control with deep musical flexibility. As a descendant of the venerable TDR Feedback Compressor product family, Kotelnikov has directly inherited several unique features such as a proven control scheme, individual release control for peak and RMS content, an intuitive user interface, and powerful, state of the art, high-precision algorithms. With a sonic signature best described as “stealthy”, Kotelnikov has the ability to manipulate the dynamic range by dramatic amounts, while carefully preserving the original tone, timbre and punch of a musical signal. As such, it is perfectly suited to stereo bus compression as well as other critical applications. The “Gentleman’s Edition” (GE) adds several improvements to the standard edition’s feature set, the most obvious being intuitive access to various equal-loudness workflows, the ability to specify ratio in a frequency-dependent manner, and more.
    Starting Price: €50 one-time payment
  • 45
    Vega-Altair
    The Vega-Altair open-source project is not affiliated with Altair Engineering, Inc. With Vega-Altair, you can spend more time understanding your data and its meaning. Altair’s API is simple, friendly, and consistent, and built on top of the powerful Vega-Lite visualization grammar. This elegant simplicity produces beautiful and effective visualizations with a minimal amount of code. The key idea is that you are declaring links between data columns and visual encoding channels, such as the x-axis, y-axis, color, etc. The rest of the plot details are handled automatically. Building on this declarative plotting idea, a surprising range of plots, from simple to sophisticated, can be created using a relatively concise grammar.
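    A minimal example of that declarative idea: columns are mapped to the x, y, and color encoding channels, and Altair handles the remaining plot details (the data frame is illustrative):

    ```python
    # Minimal example: declare links between columns and the x, y, and
    # color channels; Altair and Vega-Lite handle the remaining details.
    import altair as alt
    import pandas as pd

    df = pd.DataFrame({
        "x": [1, 2, 3, 4, 5],
        "y": [5, 3, 6, 7, 2],
        "category": ["a", "b", "a", "b", "a"],
    })

    chart = alt.Chart(df).mark_point().encode(x="x", y="y", color="category")
    chart.save("chart.html")
    ```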
  • 46
    Apache Ranger
    The Apache Software Foundation
    Apache Ranger™ is a framework to enable, monitor and manage comprehensive data security across the Hadoop platform. The vision with Ranger is to provide comprehensive security across the Apache Hadoop ecosystem. With the advent of Apache YARN, the Hadoop platform can now support a true data lake architecture. Enterprises can potentially run multiple workloads in a multi-tenant environment. Data security within Hadoop needs to evolve to support multiple use cases for data access, while also providing a framework for central administration of security policies and monitoring of user access. Centralized security administration to manage all security-related tasks in a central UI or using REST APIs. Fine-grained authorization to perform a specific action and/or operation with a Hadoop component/tool, managed through a central administration tool. Standardized authorization method across all Hadoop components. Enhanced support for different authorization methods, such as role-based access control.
  • 47
    PixelChain
    Today most NFTs and CryptoArtworks share the same problem: the images are stored off-chain. If the project where the data is stored dies, the graphic information of the artwork will most probably be lost. Storing all the artwork information and metadata on the chain solves this problem. Now you can create and store art 100% on chain that will live forever. Whenever a PixelChain gets minted, our innovative smart contract encodes all the image data, compresses it, and sends it to the blockchain, where it gets stored together with its name and author information. Later on, this data is always accessible directly from the blockchain and can be decompressed and decoded by our open source decoder to rebuild the original image that was created by the artist. This is our MVP solution for storing art 100% on-chain. We will apply the same principle to store other kinds of artwork like music and voxels.
  • 48
    Compressor
    A simple interface and intuitive controls make Compressor the perfect companion for custom encoding with Final Cut Pro. A sleek interface matches Final Cut Pro and makes it simple to navigate compression projects. Browse encoding settings in the left sidebar, and open the inspector to quickly configure advanced audio and video properties. Your batch appears in the center, directly below a large viewer that lets you view and navigate your file. You can view High Dynamic Range footage on any recent Mac that displays an extended range of brightness, and see the video right in the viewer before starting a batch export. Or step up to the Pro Display XDR and view your video in stunning HDR, the way it was meant to be seen. Make your content even more accessible by embedding audio descriptions when encoding a variety of video file formats including MOV, MP4, M4V and MXF.
    Starting Price: $49.99 one-time payment
  • 49
    Apache Sentry
    Apache Software Foundation
    Apache Sentry™ is a system for enforcing fine-grained, role-based authorization to data and metadata stored on a Hadoop cluster. Apache Sentry successfully graduated from the Incubator in March 2016 and is now a Top-Level Apache project. Apache Sentry is a granular, role-based authorization module for Hadoop. Sentry provides the ability to control and enforce precise levels of privileges on data for authenticated users and applications on a Hadoop cluster. Sentry currently works out of the box with Apache Hive, Hive Metastore/HCatalog, Apache Solr, Impala and HDFS (limited to Hive table data). Sentry is designed to be a pluggable authorization engine for Hadoop components. It allows you to define authorization rules to validate a user or application’s access requests for Hadoop resources. Sentry is highly modular and can support authorization for a wide variety of data models in Hadoop.
  • 50
    yarl
    Python Software Foundation
    All URL parts (scheme, user, password, host, port, path, query, and fragment) are accessible via properties. All URL manipulations produce a new URL object. Strings passed to the constructor and modification methods are automatically encoded, giving a canonical representation as a result. Regular properties are percent-decoded; use the raw_ versions to get encoded strings. A human-readable representation of the URL is available as .human_repr(). PyPI contains binary wheels for Linux, Windows and macOS. If you want to install yarl on another operating system (like Alpine Linux, which is not manylinux-compliant because of the missing glibc and therefore cannot be used with our wheels), the tarball will be used to compile the library from the source code. It requires a C compiler and Python headers installed. Please note that the pure-Python (uncompiled) version is much slower. However, PyPy always uses a pure-Python implementation and, as such, is unaffected.
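    A short demonstration of the property access and immutable manipulations described above:

    ```python
    # Demonstration of property access and immutable manipulation in yarl.
    from yarl import URL

    url = URL("https://user:pass@example.com:8443/path/to?q=1#frag")
    print(url.scheme, url.host, url.port, url.path)

    # Manipulations return a new URL; the original is unchanged.
    new = url.with_path("/other") / "sub"
    print(new)               # https://user:pass@example.com:8443/other/sub
    print(url.human_repr())  # human-readable form
    ```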