A collection of library code and tools for application execution profiling and performance testing. It lets you create stopwatches to time selected portions of your code, measure differences (often to sub-millisecond accuracy) between clocks on different machines, log application events in .csv format for subsequent analysis, and generate CPU-loading logs in the same .csv format.
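The stopwatch API itself is not shown in the description, so the following is only a minimal std::chrono sketch of the idea; the Stopwatch class and its method names are hypothetical, not the library's own.

    // Minimal stopwatch sketch (illustrative only; not this library's API).
    #include <chrono>
    #include <cstdio>

    class Stopwatch {
        using Clock = std::chrono::steady_clock;

    public:
        void start() { begin_ = Clock::now(); }
        void stop()  { end_ = Clock::now(); }

        // Elapsed time in milliseconds, with sub-millisecond resolution.
        double elapsed_ms() const {
            return std::chrono::duration<double, std::milli>(end_ - begin_).count();
        }

    private:
        Clock::time_point begin_{};
        Clock::time_point end_{};
    };

    int main() {
        Stopwatch sw;
        sw.start();
        volatile long sum = 0;
        for (long i = 0; i < 1000000; ++i) sum = sum + i;  // section being timed
        sw.stop();
        std::printf("timed section: %.3f ms\n", sw.elapsed_ms());
        return 0;
    }

The elapsed value could then be written out as one column of a .csv event log for later analysis, in the spirit described above.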
Note: this tool has moved to GitHub and this repository is no longer maintained.
See https://github.com/turdusmerula/ftrace
GScopeLog is a tool for instrumenting C++ code through GCC. Its main purpose is to trace the entry and exit points of functions. A status file can be generated to give an overview of function calls and timing information with minimal impact on performance.
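The description says only that instrumentation happens through GCC; the standard mechanism GCC offers for this is -finstrument-functions, which calls user-supplied enter/exit hooks around every compiled function. The sketch below shows those hooks as a general illustration; assuming GScopeLog relies on exactly this mechanism is an inference, and the logging here is deliberately minimal.

    // Generic entry/exit hooks for GCC's -finstrument-functions: a sketch of
    // the kind of mechanism an entry/exit tracer can build on, not
    // GScopeLog's actual implementation.
    // Build: g++ -finstrument-functions app.cpp tracer.cpp -o app
    #include <cstdio>

    extern "C" {

    // Invoked on entry to every instrumented function.
    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void *this_fn, void *call_site) {
        std::fprintf(stderr, "enter %p (called from %p)\n", this_fn, call_site);
    }

    // Invoked on exit from every instrumented function.
    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void *this_fn, void *call_site) {
        std::fprintf(stderr, "exit  %p (called from %p)\n", this_fn, call_site);
    }

    } // extern "C"

The logged addresses can be mapped back to function names offline (for example with addr2line), which keeps the runtime cost of the hooks themselves small.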
Alchemist GCC/LLVM plugin for code analysis and tuning
News: since 2015 all related development continues within the Collective Knowledge Framework: http://github.com/ctuning/ck/wiki
The Alchemist plugin is a collection of plugins for GCC/LLVM for external and fine-grained code analysis and tuning. It is intended to extract program properties for machine-learning-based optimization (see MILEPOST GCC); optimize programs at a fine-grained level (such as unrolling, tiling, and prefetching); tune the default optimization heuristics; gradually decompose programs and detect performance or other anomalies; and generate benchmarks that are particularly useful for training ML-based compilers. ...
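For context, the sketch below is a minimal, generic GCC plugin skeleton of the kind such a collection of plugins hooks into; it is not Alchemist's code, and the empty callback only marks where an analysis or tuning pass would go.

    // Minimal generic GCC plugin skeleton (not Alchemist's code): registers a
    // callback that fires when compilation finishes, where a real plugin
    // would inspect the IR and export program properties for ML-based tuning.
    // Build: g++ -shared -fPIC -fno-rtti \
    //        -I"$(gcc -print-file-name=plugin)/include" skeleton.cpp -o skeleton.so
    // Use:   gcc -fplugin=./skeleton.so -c some_file.c
    #include "gcc-plugin.h"
    #include "plugin-version.h"

    // GCC only loads plugins that declare GPL compatibility.
    int plugin_is_GPL_compatible;

    // Callback for the PLUGIN_FINISH event (end of compilation).
    static void on_finish(void *gcc_data, void *user_data) {
        // Placeholder: a real analysis/tuning plugin would do its work here.
        (void) gcc_data;
        (void) user_data;
    }

    int plugin_init(struct plugin_name_args *plugin_info,
                    struct plugin_gcc_version *version) {
        // Reject GCC versions the plugin was not built against.
        if (!plugin_default_version_check(version, &gcc_version))
            return 1;

        register_callback(plugin_info->base_name, PLUGIN_FINISH, on_finish, NULL);
        return 0;
    }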