Big Data Tools for Windows

  • 1
    Vaex

    Out-of-Core hybrid Apache Arrow/NumPy DataFrame for Python

    Vaex is a high-performance Python library for lazy, out-of-core DataFrames (similar to pandas), used to visualize and explore big tabular datasets. It computes statistics such as mean, sum, count, and standard deviation on an N-dimensional grid at more than a billion (10^9) rows per second. Visualization is done with histograms, density plots, and 3D volume rendering, allowing interactive exploration of big data. Vaex uses memory mapping, a zero-memory-copy policy, and lazy computations for best performance (no memory wasted). The project also offers data science solutions: insights, dashboards, machine learning, and deployment, starting at 100 GB. Cut development time by 80%: your prototype is your solution, and you can create automatic pipelines for any model.
    Downloads: 0 This Week
    Last Update:
    See Project
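The out-of-core idea Vaex is built on (streaming a dataset instead of loading it into RAM) can be sketched in plain Python. This is a generic illustration of the concept, not Vaex's actual API:

```python
# Streaming (out-of-core) statistics: compute count, mean, and standard
# deviation of a numeric CSV column in one pass, reading the file lazily
# so memory use stays constant regardless of file size.
# Generic sketch of the idea only; this is not Vaex's API.
import csv
import math

def streaming_stats(path, column):
    n, total, total_sq = 0, 0.0, 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):   # rows are read lazily, one at a time
            x = float(row[column])
            n += 1
            total += x
            total_sq += x * x
    mean = total / n
    # population standard deviation recovered from the running sums
    std = math.sqrt(total_sq / n - mean * mean)
    return n, mean, std
```

Vaex generalizes this pattern with memory-mapped columnar storage and lazy expression graphs, which is how the same one-pass aggregation reaches billions of rows per second.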
  • 2
    ankus

    Data Mining and Machine Learning Algorithms based on MapReduce

    ankus is a web-based big data mining project and tool: a library of MapReduce-based data mining and machine learning algorithms, a Hadoop-based distributed big data system, and a web-based GUI for ease of use. The ankus project consists of three open source components and is dual-licensed under community and commercial licenses; the community license is GPLv3, although some algorithms in the Core project are not under an OSS license. Demonstration site: http://www.openankus.org:18080. Official website and e-mail: www.openankus.org, ankus@openankus.org. Video list: http://bit.ly/ankus_video. Community: http://www.facebook.com/groups/openankus (Korean group), http://www.facebook.com/openankus (English group), http://bit.ly/ankus_forum (Google Groups user forum).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    apache spark data pipeline osDQ

    osDQ sub-project dedicated to creating Apache Spark based data pipelines using JSON

    This is an offshoot of the open source data quality (osDQ) project, https://sourceforge.net/projects/dataquality/. This sub-project creates an Apache Spark based data pipeline in which a JSON metadata file drives the data processing, data pipeline, data quality, data preparation, and data modeling features for big data. It uses the Java API of Apache Spark and can also run in local mode. Example JSON files are available at https://github.com/arrahtech/osdq-spark. To run, unzip the archive and launch the runner:
    Windows: java -cp .\lib\*;osdq-spark-0.0.1.jar org.arrah.framework.spark.run.TransformRunner -c .\example\samplerun.json
    Mac/UNIX: java -cp ./lib/*:./osdq-spark-0.0.1.jar org.arrah.framework.spark.run.TransformRunner -c ./example/samplerun.json
    On Windows you also need a Hadoop distribution unzipped on a local drive with HADOOP_HOME set, and winutils.exe copied into HADOOP_HOME\bin.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4

    deshang

    Software to support deshang research

    The Deshang research project focuses on collecting students' behavior data and using big data technologies to analyze the factors that may drive behavioral change, in order to build guidance strategies for parents and teachers. This SourceForge project provides the web interface and backend analysis functionality for Deshang. The software stack is WAMP (Windows + Apache + MySQL + PHP) with phpMyAdmin (a web-based MySQL admin console) included, WordPress (3.8.1, Chinese version), Sphinx as the search engine, and the libMMSeg Chinese dictionary for Sphinx.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    geometry-api-java

    The Esri Geometry API for Java enables developers to write apps

    The Esri Geometry API for Java can be used to enable spatial data processing in third-party data-processing solutions. Developers of custom MapReduce-based applications for Hadoop can use this API for spatial processing of data in the Hadoop system. The API is also used by the Hive UDFs and can be used by developers building geometry functions for third-party applications such as Cassandra, HBase, Storm, and many other Java-based "big data" applications.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    giServer

    giServer, the easy-to-use and extensible batch and integration server

    The giServer is an easy-to-use integration server for process automation and the event-driven or scheduled execution of batch jobs. Instead of complex XML configuration files, it includes an elaborate GUI for batch job management. Some possible usage scenarios are: automatic processing of incoming data files, big data applications, process automation, data mining/aggregation applications, automatic reporting, and the processing and analysis of database records.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    gravitino

    Unified metadata lake for data & AI assets.

    Apache Gravitino is a high-performance, geo-distributed, and federated metadata lake. It manages metadata directly in different sources, types, and regions, providing users with unified metadata access for data and AI assets.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8

    iCubing

    Several OLAP algorithms, data structures and HPC OLAP versions

    OLAP technology is very useful for decision makers and for data mining tools on big data. In this direction, the iCubing project implements several multidimensional data cube approaches for cube indexing, querying, updating, and mining. It also supports several cube types, i.e. alphanumeric cubes, text cubes for unstructured data, and geo cubes with geo types, dimensions, measures, and hierarchies; the OLAP area thus remains a hard challenge more than 20 years after the seminal paper by Jim Gray et al. in 1997. Our team has more than 15 years of experience in developing OLAP kernels.
    Downloads: 0 This Week
    Last Update:
    See Project
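The data cube at the heart of such OLAP engines (the CUBE operator of Gray et al.) aggregates a measure over every subset of the dimensions. A minimal generic sketch of that operator, not iCubing's implementation:

```python
# Minimal data-cube sketch: sum a measure over every subset of the
# dimension columns (the CUBE operator from Gray et al.).
# Generic illustration only; not iCubing's code.
from itertools import combinations

def cube(rows, dims, measure):
    """rows: list of dicts; returns {dimension subset: {group key: sum}}."""
    result = {}
    for r in range(len(dims) + 1):
        for subset in combinations(dims, r):
            groups = {}
            for row in rows:
                key = tuple(row[d] for d in subset)
                groups[key] = groups.get(key, 0) + row[measure]
            result[subset] = groups
    return result

sales = [
    {"year": 2023, "region": "EU", "amount": 10},
    {"year": 2023, "region": "US", "amount": 20},
    {"year": 2024, "region": "EU", "amount": 5},
]
c = cube(sales, ["year", "region"], "amount")
print(c[()][()])                # grand total (empty subset): 35
print(c[("region",)][("EU",)])  # per-region roll-up: 15
```

Real OLAP kernels add indexing, incremental updates, and compressed storage on top of exactly this group-by lattice.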
  • 9

    iOVFDT

    iOVFDT algorithm of incremental decision tree

    How to extract meaningful information from big data is a popular open problem. Decision trees, which offer a high degree of knowledge interpretability, are favored in many real-world applications. However, noisy values commonly exist in high-speed data streams, e.g. real-time online data feeds that are prone to interference. When processing big data, it is hard to do pre-processing and sampling in full batches. To address this trade-off, we propose a new decision tree called the incrementally optimized very fast decision tree (iOVFDT). It inherits the use of the Hoeffding bound from the VFDT algorithm for the node-splitting check, and adds four optional functional-tree-leaf strategies that improve classification accuracy. In addition, a multi-objective incremental optimization mechanism seeks a balance among accuracy, model size and learning speed...
    Downloads: 0 This Week
    Last Update:
    See Project
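The Hoeffding bound that drives node splitting in VFDT-family trees has a simple closed form. The sketch below is generic (not the iOVFDT code); R is the range of the split heuristic, delta the allowed error probability, and n the number of examples seen at the leaf:

```python
# Hoeffding bound as used for node splitting in VFDT-style trees:
# with probability 1 - delta, the true mean of a random variable with
# range R lies within eps of the mean observed over n samples.
import math

def hoeffding_bound(R, delta, n):
    return math.sqrt((R * R * math.log(1.0 / delta)) / (2.0 * n))

# A leaf splits when the observed heuristic gap between the best and
# second-best attribute exceeds the bound:
def should_split(best_gain, second_gain, R, delta, n):
    return (best_gain - second_gain) > hoeffding_bound(R, delta, n)
```

For example, with R = 1, delta = 1e-7, and n = 1000 examples, the bound is roughly 0.09, so the best attribute must beat the runner-up by that margin before the leaf splits.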
  • 10
    json4sapnw

    Another JSON extension for SAP ABAP

    This is an SAP add-on for handling JSON data within SAP ABAP programs. It comes in the customer exchange namespace /CEX/ and has to be installed as an SAP transport request. The add-on provides object-oriented JSON methods for processing deeply structured JSON data; both building JSON from SAP data objects and parsing JSON back into SAP data objects are supported. See the wiki for examples. Thanks to the SAP community and especially to Rüdiger Plantiko for the basic work (http://ruediger-plantiko.blogspot.de/2010/12/ein-json-parser-in-abap.html). Enjoy! Latest changes: JSON HTTP client; HTTP auth for Basic, SAP Basic+SSO, and WSSE; bug fixes for big and negative integers; arrays with has_next/next; objects with a robust set_text method; an OpenWeatherMap.org example (see files/example).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11

    paralline

    Big Data tool

    Paralline executes a Python function (or lambda) or a script over each line of huge text files in parallel processes and aggregates the results into a list.
    Downloads: 0 This Week
    Last Update:
    See Project
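The pattern Paralline describes, mapping a function over the lines of a large file in worker processes, can be approximated with the standard library. This is a generic sketch; Paralline's own interface may differ:

```python
# Apply a function to every line of a (potentially huge) text file in
# parallel worker processes and gather the results into a list, in order.
# Generic stdlib sketch; not Paralline's actual API.
from concurrent.futures import ProcessPoolExecutor

def line_length(line):
    """Example per-line function; any picklable function works."""
    return len(line.rstrip("\n"))

def map_lines(path, func, workers=4):
    with open(path) as f, ProcessPoolExecutor(max_workers=workers) as pool:
        # the file object is consumed lazily; chunksize batches lines per task
        return list(pool.map(func, f, chunksize=1024))
```

Because `pool.map` preserves input order, the result list lines up with the file's lines even though the work is spread across processes.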
  • 12

    rows-column-extracter

    Easily extract the rows/columns while handling your big data

    It is a Windows executable, tested on a 64-bit Windows 10 machine. Feel free to use it at your discretion. Download and double-click to install; provide an installation path or keep the default of C:\Program Files. Once installed, go to the installation directory and double-click the extracter.exe application. Input the fields as follows (remember that the CSV array is 0-based, so column/row number 1 refers to the second column/row):
    Column Gap - gap between two consecutive rows in a column to be printed; default is 1.
    Column Number - the column number to be extracted; default is 1.
    Row Gap - gap between two consecutive columns in a row to be printed; default is 1.
    Row Number - the row number to be extracted; default is 1.
    Browse to locate your CSV file; the tool works with the CSV format only. The output is stored in a newly created folder in the installation path, containing indexed and non-indexed row and column files.
    Downloads: 0 This Week
    Last Update:
    See Project
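The underlying extraction is simple to express in code. A stdlib sketch of the concept (this is not the tool's source, and the parameter names are only illustrative):

```python
# Extract a single 0-indexed column from a CSV file, keeping only every
# gap-th row (the "Column Gap" idea). Generic sketch of the extraction
# concept; not this tool's implementation.
import csv

def extract_column(path, col, gap=1):
    with open(path, newline="") as f:
        return [row[col] for i, row in enumerate(csv.reader(f)) if i % gap == 0]
```

Because indexing is 0-based, `col=1` selects the second column of the file.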
  • 13
    This is a Big Data project
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14

    wzd

    Powerful storage server, designed for big data storage systems

    wZD is a server written in Go that uses a modified version of the BoltDB database as a backend for saving and distributing any number of small and large files and NoSQL keys/values, stored in compact form inside micro Bolt databases (archives). Files and values are distributed across the BoltDB databases according to the number of directories or subdirectories and the general directory structure. Using wZD can permanently solve the problem of a large number of files on any POSIX-compatible file system, including clustered ones. Outwardly it works like a regular WebDAV server.
    Downloads: 0 This Week
    Last Update:
    See Project