The goal of the project is to deliver simple, useful tools for creating charts. Functionality is provided by modules/plugins (OSGi technology), so you can create your own plugins or customize the Analysis application by changing the plugins installed in it. Thanks to Java technology, the application is portable to most operating systems.
On the Files page you can find the application and patches for it.
The Energy Sensing and Monitoring Infrastructure of the GAMES project
This is part of the GAMES project, a set of innovative methodologies and open-source ICT tools for designing and managing energy efficiency in IT service centres. ESMI is the monitoring and sensing infrastructure that allows fine-grained measurement and provides both an event-based and a real-time stream of data about energy consumption at the key points of the overall system. It is completed by a set of Nagios plugins and an assessment tool that implements several Green Performance Indicators and integrates data mining with the self-adaptive controllers of the entire framework.
Enrich and query corpora in the TEI-XML vocabulary. CorpusReader manages very large corpora and corpora containing milestone annotation. It provides tools for enriching corpora with the output of linguistic parsers, and for extracting quantitative information.
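A typical first step in enriching a TEI-XML corpus is pulling out the word tokens before handing them to a linguistic parser. This is a minimal illustration with the JDK's built-in StAX parser, not CorpusReader's actual API; the class and method names here are hypothetical.

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: collect the word tokens (<w> elements) from a
// small TEI-XML fragment using streaming StAX parsing, which also scales
// to very large documents because nothing is held in memory but the tokens.
public class TeiTokens {
    public static List<String> words(String tei) {
        List<String> tokens = new ArrayList<>();
        try {
            XMLStreamReader r = XMLInputFactory.newFactory()
                    .createXMLStreamReader(new StringReader(tei));
            boolean inWord = false;
            while (r.hasNext()) {
                int ev = r.next();
                if (ev == XMLStreamConstants.START_ELEMENT && r.getLocalName().equals("w")) {
                    inWord = true;                      // entering a token element
                } else if (ev == XMLStreamConstants.CHARACTERS && inWord) {
                    tokens.add(r.getText());            // text inside <w>…</w>
                } else if (ev == XMLStreamConstants.END_ELEMENT && r.getLocalName().equals("w")) {
                    inWord = false;
                }
            }
        } catch (XMLStreamException e) {
            throw new RuntimeException(e);
        }
        return tokens;
    }

    public static void main(String[] args) {
        String tei = "<s><w>Hello</w> <w>world</w></s>";
        System.out.println(words(tei)); // [Hello, world]
    }
}
```

The streaming approach matters for the "very large corpora" use case: a DOM parse would load the whole document, while StAX visits one event at a time.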
The NITE XML Toolkit supports the creation, analysis, and browsing of annotated multimodal, text, or spoken language corpora, and represents both timing and rich linguistic structure. It contains libraries for developers and some end-user tools.
Cyberinfrastructure Shell (CIShell) is an open source, community-driven framework/application for the integration and utilization of datasets, algorithms, tools, and computing resources. Algorithms can be integrated using most programming languages.
The SPASE Model project is a collection of tools for working with structured data-model information. The tools can convert the relational version of the data model into various expressions, including XSD, XMI, and PDF documentation.
This project is a compilation of tools and libraries for text analytics tasks, mainly in Java. They range from simple wrappers to sophisticated mining tools that can improve the productivity of researchers and engineers.
The DimReduction project provides an open-source, multi-platform (Java) graphical environment for bioinformatics problems, supporting many feature selection algorithms, pattern recognition techniques, criterion functions, and graphic visualization tools.
Web-as-corpus tools in Java.
* Simple Crawler (and also integration with Nutch and Heritrix)
* HTML cleaner to remove boilerplate markup
* Language recognition
* Corpus builder
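The cleaning step above can be illustrated with a toy boilerplate remover. This is not the project's actual cleaner (which is far more sophisticated) but a sketch of the idea: drop invisible `<script>`/`<style>` blocks, strip the remaining markup, and keep only the visible text.

```java
// Toy sketch of HTML boilerplate removal, not the project's implementation:
// remove script/style blocks, then all tags, then collapse whitespace.
public class HtmlClean {
    public static String clean(String html) {
        // (?is) = case-insensitive, dot matches newlines; \1 matches the same tag name
        String noScripts = html.replaceAll("(?is)<(script|style)[^>]*>.*?</\\1>", " ");
        String noTags = noScripts.replaceAll("(?s)<[^>]+>", " ");
        return noTags.replaceAll("\\s+", " ").trim();
    }

    public static void main(String[] args) {
        String page = "<html><head><style>p{color:red}</style></head>"
                    + "<body><p>Corpus text.</p><script>x=1;</script></body></html>";
        System.out.println(clean(page)); // Corpus text.
    }
}
```

Real-world cleaners also use structural cues (link density, block length) to drop navigation menus and footers, which pure tag-stripping cannot detect.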
The main objective of the ONE project is to enrich Digital Ecosystems with a decentralised negotiation environment and enabling tools that allow organisations to create contract agreements for supplying integrated services as a virtual organisation.
Nevada is a tool for interactive visualization of dynamic networks. Unlike other dynamic network visualization tools, it focuses on visualizations that preserve the user's mental map. Import of Pajek files is supported.
Scan, the Semantic Content ANnotator, is a semantic pipeline that helps connect information extraction tools to semantic databases. UIMA-based, it allows easy plugin writing for information extraction, ontology control, and storage in RDF repositories.
T-Rex (Trainable Relation Extraction) is a highly configurable, machine-learning-based framework for information extraction from text, which includes tools for document classification, entity extraction, and relation extraction.
LT4eL (Language Technology for e-Learning) develops a framework of multilingual language technology tools and semantic web techniques for improving the retrieval and the metadata annotation of learning material.
The aim of the project is to provide an open-source collection of algorithms for spectroscopy: data handling and processing, modeling, and artificial intelligence tools.
KNeTS (Knowledge Elicitation Tools) is a survey tool for creating multi-agent models based on local knowledge, using pattern analysis to identify rules that are iteratively validated with the informant. The final output is a knowledge-based multi-agent model.
The Databionics ESOM Tools support many data mining tasks using Emergent Self-Organizing Maps. Visualization, clustering, and classification of high-dimensional data using databionics principles can be performed interactively or automatically.
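At the core of any self-organizing map is a simple training step: find the unit whose weight vector best matches the input, then pull it and its grid neighbours toward the input. The sketch below shows one such step on a 1-D grid; it is a generic SOM illustration under simplifying assumptions (a fixed two-level neighbourhood kernel), not the Databionics implementation, which uses much larger "emergent" maps.

```java
// Generic sketch of one Self-Organizing Map training step (1-D grid):
// find the best-matching unit (BMU), then move it and its direct
// neighbours toward the input vector.
public class SomStep {
    public static void train(double[][] units, double[] x, double lr) {
        // 1. Find the BMU by squared Euclidean distance.
        int bmu = 0;
        double best = Double.MAX_VALUE;
        for (int i = 0; i < units.length; i++) {
            double d = 0;
            for (int j = 0; j < x.length; j++) {
                double diff = units[i][j] - x[j];
                d += diff * diff;
            }
            if (d < best) { best = d; bmu = i; }
        }
        // 2. Update the BMU and its direct grid neighbours.
        for (int i = Math.max(0, bmu - 1); i <= Math.min(units.length - 1, bmu + 1); i++) {
            double h = (i == bmu) ? 1.0 : 0.5;   // simplified neighbourhood kernel
            for (int j = 0; j < x.length; j++) {
                units[i][j] += lr * h * (x[j] - units[i][j]);
            }
        }
    }
}
```

Repeating this step over many inputs, while shrinking the learning rate and neighbourhood, is what makes nearby grid units converge to similar regions of the data, so that clusters become visible on the map.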
hypKNOWsys aims at developing a Java-based workbench for knowledge discovery and knowledge management. Currently, hypKNOWsys has released two intermediate tools: DIAsDEM Workbench (text mining for semantic tagging) and WUMprep (Web mining pre-processing)
The Microdata Management Toolkit is a collection of tools for documenting, disseminating, and preserving survey and census microdata. The project is sponsored by the International Household Survey Network with financial support from the World Bank.
The Genomic Diversity and Phenotype Data Model (GDPDM) captures molecular and phenotypic diversity data. MySQL databases are used to implement the schema. This project develops software tools (written in Java, Perl, etc.) associated with this model.
A set of Ant filters that can be used to gather statistics from files or resources, mainly for log file analysis. They allow you to:
* count inputs
* count occurrences of each input
* calculate the average, max, and min of float values in the input
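The statistics those filters gather can be sketched in plain Java, outside of an Ant build. This is an illustration of the computations (occurrence counting and float summaries), not the project's filter classes; the names here are hypothetical.

```java
import java.util.DoubleSummaryStatistics;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of the filters' statistics on a list of input lines:
// count occurrences of each distinct line, and summarise float values.
public class LogStats {
    // Occurrences of each distinct input line.
    public static Map<String, Long> occurrences(List<String> lines) {
        return lines.stream()
                .collect(Collectors.groupingBy(l -> l, Collectors.counting()));
    }

    // Count, average, max, and min of lines parsed as floats.
    public static DoubleSummaryStatistics floatStats(List<String> lines) {
        return lines.stream()
                .mapToDouble(Double::parseDouble)
                .summaryStatistics();
    }

    public static void main(String[] args) {
        System.out.println(occurrences(List.of("GET", "POST", "GET")));
        DoubleSummaryStatistics s = floatStats(List.of("1.5", "2.5", "4.0"));
        System.out.println(s.getMin() + " .. " + s.getMax()); // 1.5 .. 4.0
    }
}
```

In the Ant setting the same logic would run inside a filter chain, seeing the file one line at a time rather than as an in-memory list.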