Best Data Engineering Tools in New Zealand - Page 2

Compare the Top Data Engineering Tools in New Zealand as of October 2024 - Page 2

  • 1
    Arcadia Data

    Arcadia Data provides the first visual analytics and BI platform native to Hadoop and cloud (big data) that delivers the scale, performance, and agility business users need for both real-time and historical insights. Its flagship product, Arcadia Enterprise, was built from inception for big data platforms such as Apache Hadoop, Apache Spark, Apache Kafka, and Apache Solr, in the cloud and/or on-premises. Using artificial intelligence (AI) and machine learning (ML), Arcadia Enterprise streamlines the self-service analytics process with search-based BI and visualization recommendations. It enables real-time, high-definition insights in use cases like data lakes, cybersecurity, connected IoT devices, and customer intelligence. Arcadia Enterprise is deployed by some of the world’s leading brands, including Procter & Gamble, Citibank, Nokia, Royal Bank of Canada, Kaiser Permanente, HPE, and Neustar.
  • 2
    TIBCO Data Science

    TIBCO Software

    Democratize, collaborate on, and operationalize machine learning across your organization. Data science is a team sport. Data scientists, citizen data scientists, data engineers, business users, and developers need flexible and extensible tools that promote collaboration, automation, and reuse of analytic workflows. But algorithms are only one piece of the advanced analytics puzzle. To deliver predictive insights, companies need to increase focus on the deployment, management, and monitoring of analytic models. Smart businesses rely on platforms that support the end-to-end analytics lifecycle while providing enterprise security and governance. TIBCO® Data Science software helps organizations innovate and solve complex problems faster, ensuring predictive findings quickly turn into optimal outcomes. It also lets organizations expand data science deployments across the organization by providing flexible authoring and deployment capabilities.
  • 3
    Iterative

    AI teams face challenges that require new technologies; we build these technologies. Existing data warehouses and data lakes do not fit unstructured datasets like text, images, and videos. AI goes hand in hand with software development, so our tools are built with data scientists, ML engineers, and data engineers in mind. Don’t reinvent the wheel: we offer a fast and cost-efficient path to production. Your data is always stored by you, and your models are trained on your machines. Studio is an extension of GitHub, GitLab, or Bitbucket. Sign up for the online SaaS version or contact us for an on-premises installation.
  • 4
    Bodo.ai

    Bodo’s powerful compute engine and parallel computing approach provide efficient execution and effective scalability, even for 10,000+ cores and petabytes of data. Bodo enables faster development and easier maintenance for data science, data engineering, and ML workloads using standard Python APIs like Pandas. Avoid frequent failures with bare-metal native code execution, and catch errors before they appear in production with end-to-end compilation. Experiment faster with large datasets on your laptop, with the simplicity that only Python can provide, and write production-ready code without the hassle of refactoring to scale on large infrastructure.
  • 5
    SiaSearch

    We want ML engineers to worry less about data engineering and focus on what they love: building better models in less time. SiaSearch is a data management platform that automatically extracts frame-level, contextual metadata and uses it for fast data exploration, selection, and evaluation. Our product is a powerful framework that makes it 10x easier and faster for developers to explore, understand, and share visual data at scale. Automatically create custom interval attributes using pre-trained extractors or any other model. Visualize data and analyze model performance using custom attributes combined with all common KPIs. Use custom attributes to query, find rare edge cases, and curate new training data across your whole data lake. Easily save, edit, version, comment on, and share frames, sequences, or objects with colleagues or third parties. Automating these tasks with metadata can more than double engineering productivity and remove the bottleneck to building industrial AI.
  • 6
    Datakin

    Instantly reveal the order hidden within your complex data world, and always know exactly where to look for answers. Datakin automatically traces data lineage, showing your entire data ecosystem in a rich visual graph. It clearly illustrates the upstream and downstream relationships for each dataset. The Duration tab summarizes a job’s performance in a Gantt-style chart along with its upstream dependencies, making it easy to find bottlenecks. When you need to pinpoint the exact moment of a breaking change, the Compare tab shows how your jobs and datasets have changed between runs. Sometimes jobs that run successfully produce bad output. The Quality tab surfaces critical data quality metrics, showing how they change over time so anomalies become obvious. Datakin helps you find the root cause of issues quickly – and prevent new ones from occurring.
    Starting Price: $2 per month
  • 7
    Feast

    Tecton

    Make your offline data available for real-time predictions without having to build custom pipelines. Ensure data consistency between offline training and online inference, eliminating train-serve skew. Standardize data engineering workflows under one consistent framework. Teams use Feast as the foundation of their internal ML platforms. Feast doesn’t require the deployment and management of dedicated infrastructure; instead, it reuses existing infrastructure and spins up new resources when needed. Feast is a good fit if you are not looking for a managed solution and are willing to manage and maintain your own implementation, you have engineers able to support the implementation and management of Feast, you want to run pipelines that transform raw data into features in a separate system and integrate with it, or you have unique requirements and want to build on top of an open source solution.
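    The train-serve skew that Feast eliminates comes from computing features one way for training and another way at inference time. A minimal stdlib sketch of the underlying idea (illustrative only, not Feast's actual API) is to define each feature once and reuse it on both paths:

```python
# Illustrative sketch of avoiding train-serve skew: one canonical feature
# definition shared by the offline (training) and online (serving) paths.
# Function and variable names here are hypothetical, not Feast's API.

def avg_order_value(orders: list[float]) -> float:
    """The single, canonical definition of this feature."""
    return sum(orders) / len(orders) if orders else 0.0

# Offline path: build a training row from historical data.
historical_orders = [120.0, 80.0, 100.0]
training_feature = avg_order_value(historical_orders)

# Online path: the SAME function computes the feature at request time,
# so training and serving logic cannot drift apart.
live_orders = [120.0, 80.0, 100.0]
serving_feature = avg_order_value(live_orders)

assert training_feature == serving_feature  # no train-serve skew
print(training_feature)  # 100.0
```

Feast formalizes this pattern by registering feature definitions centrally and materializing them to both an offline store and an online store.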
  • 8
    datuum.ai
    AI-powered data integration tool that helps streamline the process of customer data onboarding. It allows for easy and fast automated data integration from various sources without coding, reducing preparation time to just a few minutes. With Datuum, organizations can efficiently extract, ingest, transform, migrate, and establish a single source of truth for their data, while integrating it into their existing data storage. Datuum is a no-code product and can reduce up to 80% of the time spent on data-related tasks, freeing up time for organizations to focus on generating insights and improving the customer experience. With over 40 years of experience in data management and operations, we at Datuum have incorporated our expertise into the core of our product, addressing the key challenges faced by data engineers and managers and ensuring that the platform is user-friendly, even for non-technical specialists.
  • 9
    Numbers Station

    Accelerating insights and eliminating barriers for data analysts. Intelligent data stack automation: get insights from your data 10x faster with AI. Pioneered at the Stanford AI Lab and now available to your enterprise, intelligence for the modern data stack has arrived. Use natural language to get value from your messy, complex, and siloed data in minutes. Describe your desired output and immediately generate code for execution. Get customizable automation of complex data tasks that are specific to your organization and not captured by templated solutions. Empower anyone to securely automate data-intensive workflows on the modern data stack, and free data engineers from an endless backlog of requests. Arrive at insights in minutes, not months, with a system uniquely designed and tuned for your organization’s needs. Built on dbt and integrated with upstream and downstream tools such as Snowflake, Databricks, Redshift, and BigQuery, with more coming.
  • 10
    Kestra

    Kestra is an open-source, event-driven orchestrator that simplifies data operations and improves collaboration between engineers and business users. By bringing Infrastructure as Code best practices to data pipelines, Kestra allows you to build reliable workflows and manage them with confidence. Thanks to the declarative YAML interface for defining orchestration logic, everyone who benefits from analytics can participate in the data pipeline creation process. The UI automatically adjusts the YAML definition any time you make changes to a workflow from the UI or via an API call. Therefore, the orchestration logic is defined declaratively in code, even if some workflow components are modified in other ways.
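    To make the declarative YAML interface concrete, a minimal Kestra flow might look like the sketch below. Treat it as illustrative: the flow identifiers are invented, and task type names vary between Kestra versions.

```yaml
# Hypothetical minimal flow: a single task that logs a message.
id: hello-world
namespace: company.team

tasks:
  - id: say-hello
    type: io.kestra.core.tasks.log.Log
    message: Hello from a declarative Kestra flow
```

Because the whole workflow lives in a definition like this, it can be reviewed, versioned, and edited either in code or through the UI, which keeps the two in sync.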
  • 11
    AtScale

    AtScale helps accelerate and simplify business intelligence, resulting in faster time-to-insight, better business decisions, and more ROI on your cloud analytics investment. Eliminate repetitive data engineering tasks like curating, maintaining, and delivering data for analysis. Define business definitions in one location to ensure consistent KPI reporting across BI tools. Accelerate time to insight from data while efficiently managing cloud compute costs. Leverage existing data security policies for data analytics no matter where data resides. AtScale’s Insights workbooks and models let you perform cloud OLAP multidimensional analysis on data sets from multiple providers, with no data prep or data engineering required. We provide built-in, easy-to-use dimensions and measures to help you quickly derive insights you can use for business decisions.
  • 12
    Presto

    Presto Foundation

    Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes. For data engineers who struggle with managing multiple query languages and interfaces to siloed databases and storage, Presto is the fast and reliable engine that provides one simple ANSI SQL interface for all your data analytics and your open lakehouse. Different engines for different workloads mean you will have to re-platform down the road; with Presto, you get one familiar ANSI SQL language and one engine for your data analytics, so you don't need to graduate to another lakehouse engine. Presto can be used for interactive and batch workloads, on small and large amounts of data, and scales from a few users to thousands. By giving you one simple ANSI SQL interface to data in various siloed systems, Presto helps you join your data ecosystem together.
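    The portability claim rests on Presto speaking standard ANSI SQL. The sketch below shows the kind of plain aggregate query that runs unchanged across ANSI-compliant engines; SQLite merely stands in here as a runnable engine, since against Presto you would submit the same query text through its own client or connector.

```python
# Standard ANSI SQL aggregate query, executed here against SQLite purely
# for illustration -- the SQL itself is engine-agnostic.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES (1, 'nz', 120.0), (2, 'nz', 80.0), (3, 'au', 50.0);
""")

# Aggregate-and-group query, portable across ANSI SQL engines.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('au', 50.0), ('nz', 200.0)]
```

With Presto the same statement could target Hive tables, relational databases, or object storage through the appropriate connector, which is the point of the "one interface" pitch.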
  • 13
    Informatica Data Engineering Streaming
    AI-powered Informatica Data Engineering Streaming enables data engineers to ingest, process, and analyze real-time streaming data for actionable insights. An advanced serverless deployment option with an integrated metering dashboard cuts admin overhead. Rapidly build intelligent data pipelines with CLAIRE®-powered automation, including automatic change data capture (CDC). Ingest thousands of databases, millions of files, and streaming events. Efficiently ingest databases, files, and streaming data for real-time data replication and streaming analytics. Find and inventory all data assets throughout your organization, and intelligently discover and prepare trusted data for advanced analytics and AI/ML projects.
  • 14
    Mosaic AIOps

    Larsen & Toubro Infotech

    LTI’s Mosaic is a converged platform that offers data engineering, advanced analytics, knowledge-led automation, IoT connectivity, and an improved solution experience to its users. Mosaic enables organizations to undertake quantum leaps in business transformation and brings an insights-driven approach to decision-making. It helps deliver pioneering analytics solutions at the intersection of the physical and digital worlds, acting as a catalyst for enterprise ML and AI adoption with model management, training at scale, AI DevOps, MLOps, and multi-tenancy. LTI’s Mosaic AI is a cognitive AI platform designed to provide its users with an intuitive experience in building, training, deploying, and managing AI models at enterprise scale. It brings together the best AI frameworks and templates to provide a platform where users enjoy a seamless, personalized “build-to-run” transition on their AI workflows.
  • 15
    IBM Databand
    Monitor your data health and pipeline performance. Gain unified visibility for pipelines running on cloud-native tools like Apache Airflow, Apache Spark, Snowflake, BigQuery, and Kubernetes, with an observability platform purpose-built for data engineers. Data engineering is only getting more challenging as demands from business stakeholders grow, and Databand can help you catch up. More pipelines mean more complexity: data engineers are working with more complex infrastructure than ever and pushing higher speeds of release, so it’s harder to understand why a process has failed, why it’s running late, and how changes affect the quality of data outputs. Data consumers are frustrated with inconsistent results, model performance, and delays in data delivery. Not knowing exactly what data is being delivered, or precisely where failures are coming from, leads to a persistent lack of trust, while pipeline logs, errors, and data quality metrics are captured and stored in independent, isolated systems.
  • 16
    Molecula

    Molecula is an enterprise feature store that simplifies, accelerates, and controls big data access to power machine-scale analytics and AI. Continuously extracting features, reducing the dimensionality of data at the source, and routing real-time feature changes into a central store enables millisecond queries, computation, and feature re-use across formats and locations without copying or moving raw data. The Molecula feature store provides data engineers, data scientists, and application developers a single access point to graduate from reporting and explaining with human-scale data to predicting and prescribing real-time business outcomes with all data. Enterprises spend a lot of money preparing, aggregating, and making numerous copies of their data for every project before they can make decisions with it. Molecula brings an entirely new paradigm for continuous, real-time data analysis to be used for all your mission-critical applications.
  • 17
    Delta Lake

    Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Data lakes typically have multiple data pipelines reading and writing data concurrently, and due to the lack of transactions, data engineers have to go through a tedious process to ensure data integrity. Delta Lake brings ACID transactions to your data lakes, providing serializability, the strongest isolation level. Learn more at Diving into Delta Lake: Unpacking the Transaction Log. In big data, even the metadata itself can be "big data"; Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle it, so it can manage petabyte-scale tables with billions of partitions and files with ease. Delta Lake also provides snapshots of data, enabling developers to access and revert to earlier versions for audits, rollbacks, or to reproduce experiments.
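    The snapshot capability described above rests on a simple idea: every commit produces a new immutable version of the table, so earlier states remain addressable. The toy sketch below illustrates that idea in plain Python; it is a conceptual model, not Delta Lake's actual transaction-log format or API.

```python
# Toy model of snapshot-based versioning (the idea behind "time travel"):
# each commit appends an immutable version, so any past state can be read
# back for audits, rollbacks, or reproducing experiments.

class VersionedTable:
    def __init__(self):
        self._versions = [[]]  # version 0: the empty table

    def commit(self, rows):
        # Snapshot = previous state plus the newly appended rows.
        self._versions.append(self._versions[-1] + list(rows))

    def as_of(self, version):
        """Read the table exactly as it was at a given version."""
        return list(self._versions[version])

    @property
    def latest(self):
        return len(self._versions) - 1

t = VersionedTable()
t.commit([{"id": 1}])     # creates version 1
t.commit([{"id": 2}])     # creates version 2
print(t.as_of(1))         # table as of version 1: [{'id': 1}]
print(t.as_of(t.latest))  # current table: [{'id': 1}, {'id': 2}]
```

In Delta Lake itself, the equivalent reads are expressed through Spark options such as a version or timestamp on the read, backed by the transaction log rather than in-memory copies.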
  • 18
    Sentrana

    Whether your data is trapped in silos or you’re generating data at the edge, Sentrana gives you the flexibility to create AI and data engineering pipelines wherever your data is. And you can share your AI, Data, and Pipelines with anyone anywhere. With Sentrana, you can achieve newfound agility to effortlessly move between compute environments, while all your data and your work replicates automatically to wherever you want. Sentrana provides a large inventory of building blocks from which you can stitch together custom AI and Data Engineering pipelines. Rapidly assemble and test many different pipelines to create the AI you need. Turn your data into AI with near-zero effort and cost. Since Sentrana is an open platform, newer cutting-edge AI building blocks that are emerging every day are put right at your fingertips. Sentrana turns the Pipelines and AI models you create into re-executable building blocks that anyone on your team can hook into their own pipelines.
  • 19
    Intergraph Smart Laser Data Engineer
    Learn how CloudWorx for Intergraph Smart 3D connects to the point cloud and enables users to make the hybrid between the existing plant and newly modeled parts. Intergraph Smart® Laser Data Engineer provides state-of-the-art point cloud rendering performance in CloudWorx for Intergraph Smart 3D users via the JetStream point cloud engine. With its instant loading and persistent full rendering of the point cloud during user actions – regardless of the size of the dataset – Smart Laser Data Engineer delivers ultimate fidelity to the user. JetStream’s centralized data storage and administrative architecture – while serving the high-performance point cloud to users – also provides an easy-to-use project environment, making data distribution, user access control, backups and other IT functions easy and effective, saving time and money.
  • 20
    Knoldus

    World's largest team of Functional Programming and Fast Data engineers focused on creating customized high-performance solutions. We move from "thought" to "thing" via rapid prototyping and proof of concept. Activate an ecosystem to deliver at scale with CI/CD to support your requirements. Understanding the strategic intent and stakeholder needs to develop a shared vision. Deploy MVP to launch the product in the most efficient & expedient manner possible. Continuous improvements and enhancements to support new requirements. Building great products and providing unmatched engineering services would not be possible without the knowledge and extensive usage of the latest tools and technology. We help you to capitalize on opportunities, respond to competitive threats, and scale successful investments by reducing organizational friction from your company’s structures, processes, and culture. Knoldus helps clients identify and capture the most value and meaningful insights from data.
  • 21
    Foghub

    Simplified IT/OT Integration, Data Engineering & Real-Time Edge Intelligence. Easy to use, cross-platform, open architecture, edge computing for industrial time-series data. Foghub offers the Critical-Path to IT/OT convergence, connecting Operations (Sensors, Devices, and Systems) with Business (People, Processes, and Applications), enabling automated data acquisition, data engineering, transformations, advanced analytics and ML. Handle large variety, volume, and velocity of industrial data with out-of-the-box support for all data types, most popular industrial network protocols, OT/lab systems, and databases. Easily automate the collection of data about your production runs, batches, parts, cycle-times, process parameters, asset condition, performance, health, utilities, consumables as well as operators and their performance. Designed for scale, Foghub offers a comprehensive set of capabilities to handle large volumes and velocity of data.
  • 22
    Datactics

    Profile, cleanse, match, and deduplicate data in a drag-and-drop rules studio. A low-code UI means no programming skill is required, putting power in the hands of subject matter experts. Add AI and machine learning to your existing data management processes to reduce manual effort and increase accuracy, with full transparency on machine-led decisions via human-in-the-loop review. Offering award-winning data quality and matching capabilities across multiple industries, our self-service solutions are rapidly configured within weeks, with specialist assistance available from Datactics data engineers. With Datactics you can easily measure data against regulatory and industry standards, fix breaches in bulk, and push results into reporting tools, with full visibility and an audit trail for Chief Risk Officers. Augment data matching into Legal Entity Masters for Client Lifecycle Management.
  • 23
    witboost

    Agile Lab

    witboost is a modular, scalable, fast, and efficient data management system that helps your company truly become data-driven while reducing time-to-market, IT expenditures, and overheads. witboost comprises a series of modules: building blocks that can work as standalone solutions to address and solve a single need or problem, or be combined to create the perfect data management ecosystem for your company. Each module improves a specific data engineering function, and combining them guarantees a blazingly fast and smooth implementation, dramatically reducing time-to-market, time-to-value, and consequently the TCO of your data engineering infrastructure. Smart cities need digital twins to predict needs and avoid unforeseen problems, gathering data from thousands of sources and managing ever more complex telematics.
  • 24
    DataSentics

    Making data science and machine learning have a real impact on organizations. We are an AI product studio: a group of 100 experienced data scientists and data engineers who combine experience from the agile world of digital start-ups with that of major international corporations. We don’t end with nice slides and dashboards; the result that counts is an automated data solution running in production, integrated inside a real process. We employ data scientists and data engineers, not report clickers, and we have a strong focus on productionizing data science solutions in the cloud with high standards of CI and automation. We are building the greatest concentration of smart, creative data scientists and engineers by being the most exciting and fulfilling place for them to work in Central Europe, giving them the freedom to use our critical mass of expertise to find and iterate on the most promising data-driven opportunities, both for our clients and for our own products.
  • 25
    Switchboard

    Aggregate disparate data at scale, reliably and accurately, to make better business decisions with Switchboard, a data engineering automation platform driven by business teams. Uncover timely insights and accurate forecasts. No more outdated manual reports and error-prone pivot tables that don’t scale. Directly pull and reconfigure data sources in the right formats in a no-code environment. Reduce your dependency on the engineering team. Automatic monitoring and backfilling make API outages, bad schemas, and missing data a thing of the past. Not a dumb API, but an ecosystem of pre-built connectors that are easily and quickly adapted to actively transform raw data into a strategic asset. Our team of experts has worked in data teams at Google and Facebook. We’ve automated those best practices to elevate your data game. A data engineering automation platform with authoring and workflow processes proven to scale with terabytes of data.
  • 26
    Sifflet

    Automatically cover thousands of tables with ML-based anomaly detection and 50+ custom metrics. Comprehensive data and metadata monitoring. Exhaustive mapping of all dependencies between assets, from ingestion to BI. Enhanced productivity and collaboration between data engineers and data consumers. Sifflet seamlessly integrates into your data sources and preferred tools and can run on AWS, Google Cloud Platform, and Microsoft Azure. Keep an eye on the health of your data and alert the team when quality criteria aren’t met. Set up in a few clicks the fundamental coverage of all your tables. Configure the frequency of runs, their criticality, and even customized notifications at the same time. Leverage ML-based rules to detect any anomaly in your data. No need for an initial configuration. A unique model for each rule learns from historical data and from user feedback. Complement the automated rules with a library of 50+ templates that can be applied to any asset.
  • 27
    Aggua

    Aggua is a data-fabric-augmented AI platform that gives data and business teams access to their data, creating trust and delivering practical data insights for more holistic, data-centric decision-making. Instead of wondering what is going on underneath the hood of your organization's data stack, become informed immediately with a few clicks. Get access to data cost insights, data lineage, and documentation without taking time out of your data engineer's workday. Instead of spending hours tracing what a data type change will break in your pipelines, tables, and infrastructure, automated lineage lets your data architects and engineers spend less time manually combing through logs and DAGs and more time actually making the changes to infrastructure.
  • 28
    Vaex

    At Vaex.io we aim to democratize big data and make it available to anyone, on any machine, at any scale. Cut development time by 80%: your prototype is your solution. Create automatic pipelines for any model and empower your data scientists. Turn any laptop into a big data powerhouse, with no clusters and no engineers required. We provide reliable and fast data-driven solutions; with our state-of-the-art technology we build and deploy machine learning models faster than anyone on the market. Turn your data scientists into big data engineers: we provide comprehensive training for your employees, enabling you to take full advantage of our technology. Vaex combines memory mapping, a sophisticated expression system, and fast out-of-core algorithms to efficiently visualize and explore big datasets, and to build machine learning models on a single machine.
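    The memory mapping Vaex relies on is an operating-system feature, so its effect can be demonstrated with the standard library alone. The sketch below (illustrative, not Vaex's API) maps a file and reads a small slice without loading the whole file into memory, which is what lets a laptop work on datasets larger than RAM.

```python
# Demonstrate the mechanism behind out-of-core dataframes: mmap exposes a
# file's bytes on demand, so slicing touches only the pages actually read.
import mmap
import os
import tempfile

# Stand-in for a large on-disk column of data.
path = os.path.join(tempfile.mkdtemp(), "big.bin")
with open(path, "wb") as f:
    f.write(b"x" * 1_000_000)

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        # Only the pages covering this slice are paged in, not the full file.
        chunk = mm[500_000:500_010]
print(chunk)  # b'xxxxxxxxxx'
```

Vaex builds on this by memory-mapping columnar file formats, so opening a multi-gigabyte file is near-instant and computations stream over it out-of-core.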
  • 29
    Kodex

    Privacy engineering is an emerging field that intersects with data engineering, information security, software development, and privacy law. Its goal is to ensure that personal data is stored and processed in a legally compliant way that respects and protects, in the best possible way, the privacy of the individuals the data belongs to. Security engineering is, on the one hand, a prerequisite for privacy engineering and, on the other, an independent discipline that aims to guarantee the secure processing and storage of sensitive data in general. If your organization processes data that is sensitive or personal (or both), you need privacy and security engineering; this is especially true if you do your own data engineering or data science.
  • 30
    Archon Data Store

    Platform 3 Solutions

    Archon Data Store™ is a powerful and secure open-source based archive lakehouse platform designed to store, manage, and provide insights from massive volumes of data. With its compliance features and minimal footprint, it enables large-scale search, processing, and analysis of structured, unstructured, & semi-structured data across your organization. Archon Data Store combines the best features of data warehouses and data lakes into a single, simplified platform. This unified approach eliminates data silos, streamlining data engineering, analytics, data science, and machine learning workflows. Through metadata centralization, optimized data storage, and distributed computing, Archon Data Store maintains data integrity. Its common approach to data management, security, and governance helps you operate more efficiently and innovate faster. Archon Data Store provides a single platform for archiving and analyzing all your organization's data while delivering operational efficiencies.