Scriptable database and system performance benchmark
sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. It is most frequently used for database benchmarks, but can also be used to create arbitrarily complex workloads that do not involve a database server. Extensive statistics about rates and latency are available, including latency percentiles and histograms. Overhead stays low even with thousands of concurrent threads; sysbench is capable of generating and tracking hundreds of millions of events per second. New benchmarks can be...
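As a rough sketch of typical usage (option names here follow sysbench 1.0; older releases spell some of them differently, e.g. --num-threads instead of --threads):

    # CPU benchmark: 8 threads, prime computation as the workload
    sysbench cpu --cpu-max-prime=20000 --threads=8 run

    # Database benchmark against a prepared MySQL schema
    sysbench oltp_read_write --db-driver=mysql --mysql-db=sbtest --tables=10 --table-size=100000 prepare
    sysbench oltp_read_write --db-driver=mysql --mysql-db=sbtest --threads=16 --time=60 --report-interval=10 run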
A collection of library code and tools for application execution profiling and performance testing. You can create stopwatches to time select portions of your code. You can measure differences (often to sub-millisecond accuracy) between clocks on different machines. You can log application events in a .csv format for subsequent analysis. You can also generate CPU loading logs in a .csv format.
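As an illustration of the stopwatch idea, here is a minimal standard-C++ sketch; it is not this library's actual API, whose class names are not given above:

    #include <chrono>
    #include <cstdio>

    // Hypothetical stopwatch: timestamps a section of code and reports milliseconds.
    struct Stopwatch {
        std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
        double elapsed_ms() const {
            return std::chrono::duration<double, std::milli>(
                std::chrono::steady_clock::now() - start).count();
        }
    };

    int main() {
        Stopwatch sw;
        long sum = 0;
        for (long i = 0; i < 10000000; ++i) sum += i;   // section under test
        std::printf("event,elapsed_ms,result\n");        // simple .csv-style log
        std::printf("loop,%f,%ld\n", sw.elapsed_ms(), sum);
    }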
BEMAP (BEnchMarks for Automatic Parallelizer) is a benchmark suite used to measure the performance of an automatic parallelizer.
All OpenCL benchmarks covered in this project are developed step by step alongside hand-tuning.
The execution time of each tuning step is measured in detail, with a comprehensive user interface and help option.
The exact implementation in native code (C++) is also provided in each project folder for reference.
By using these benchmarks, one may analyze:
1. How to...
A framework that compares the performance of different pieces of code. Available in four languages: C#/.NET (SharpKinoko), Java (JKinoko), C (CKinoko), and C++ (CppKinoko).
Note: this tool has moved to GitHub; this repository is no longer maintained.
See https://github.com/turdusmerula/ftrace
GScopeLog is a tool for instrumenting C++ code through GCC. Its main purpose is to trace the entry and exit points of functions. A status file may be generated to give an overview of function calls and timing information, with minimal performance impact.
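The usual GCC mechanism behind this kind of entry/exit tracing is -finstrument-functions; a minimal sketch of the hooks it calls (not GScopeLog's actual output format) is:

    #include <cstdio>

    // GCC calls these hooks around every instrumented function when the code is
    // built with -finstrument-functions. They are marked no_instrument_function
    // so they are not instrumented themselves.
    extern "C" {
    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void* fn, void* call_site) {
        std::fprintf(stderr, "enter %p (called from %p)\n", fn, call_site);
    }
    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void* fn, void* call_site) {
        std::fprintf(stderr, "exit  %p (called from %p)\n", fn, call_site);
    }
    }

    static int work(int x) { return x * x; }

    int main() { return work(3) == 9 ? 0 : 1; }

    // Build: g++ -finstrument-functions trace_example.cpp -o trace_example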
Alchemist GCC/LLVM plugin for code analysis and tuning
News: since 2015, all related development continues within the Collective Knowledge Framework: http://github.com/ctuning/ck/wiki
The Alchemist plugin is a collection of GCC/LLVM plugins for external, fine-grain code analysis and tuning. It is intended to extract program properties for machine-learning-based optimization (see MILEPOST GCC); optimize programs at a fine-grain level (such as unrolling, tiling, and prefetching); tune default optimization heuristics; gradually decompose...
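To make "fine-grain" transformations such as tiling concrete (this is a generic illustration, not Alchemist output), compare a naive loop nest with a cache-blocked one:

    #include <vector>

    // Naive matrix transpose: strided writes cause poor cache behavior for large n.
    void transpose_naive(const std::vector<float>& a, std::vector<float>& b, int n) {
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                b[j * n + i] = a[i * n + j];
    }

    // Tiled (cache-blocked) transpose: the same work, restructured so each tile
    // fits in cache; this is the kind of fine-grain loop transformation described above.
    void transpose_tiled(const std::vector<float>& a, std::vector<float>& b, int n, int tile = 64) {
        for (int ii = 0; ii < n; ii += tile)
            for (int jj = 0; jj < n; jj += tile)
                for (int i = ii; i < ii + tile && i < n; ++i)
                    for (int j = jj; j < jj + tile && j < n; ++j)
                        b[j * n + i] = a[i * n + j];
    }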