From dev environments to AI training, choose preset or custom VMs with 1–96 vCPUs and industry-leading 99.95% uptime SLA.
Compute Engine delivers high-performance virtual machines for web apps, databases, containers, and AI workloads. Choose from general-purpose, compute-optimized, or GPU/TPU-accelerated machine types—or build custom VMs to match your exact specs. With live migration and automatic failover, your workloads stay online. New customers get $300 in free credits.
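To compare the preset machine types programmatically, here is a rough sketch using the google-cloud-compute Python client to list the machine types offered in a zone; the project ID and zone below are placeholder values, not anything tied to a real environment.

    # Sketch: list Compute Engine machine types in a zone to compare vCPU/memory presets.
    # Requires: pip install google-cloud-compute, plus application-default credentials.
    # "my-project" and "us-central1-a" are placeholders.
    from google.cloud import compute_v1

    def list_machine_types(project: str = "my-project", zone: str = "us-central1-a") -> None:
        client = compute_v1.MachineTypesClient()
        for mt in client.list(project=project, zone=zone):
            print(f"{mt.name}: {mt.guest_cpus} vCPUs, {mt.memory_mb} MB RAM")

    if __name__ == "__main__":
        list_machine_types()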
Try Compute Engine
Build on Google Cloud with $300 in Free Credit
New to Google Cloud? Get $300 in free credit to explore Compute Engine, BigQuery, Cloud Run, Vertex AI, and 150+ other products.
Start your next project with $300 in free Google Cloud credit. Spin up VMs, run containers, query exabytes in BigQuery, or build AI apps with Vertex AI and Gemini. Once your credits are used, keep building with 20+ products that offer free monthly usage, including Compute Engine, Cloud Storage, GKE, and Cloud Run functions. Sign up to start building right away.
MassiveJava is a Java-based environment for parallel programming. Lithium can execute the parallel application on a cluster or a network of workstations, so a user can program a parallel application without having to take into account problems like scheduling.
The Herdtools are a set of user-level cluster management and control utilities with a consistent command line interface. These include things like parallel file copy, remote execution, sudo execution, job kill, and more. We utilize Rob Brown's "procstatd" daemon.
The BCR flavor of Cooperative Data Sharing (CDS) is a scalable, portable, flexible C-based API and daemon for initiating processes/threads and communicating between them on uniprocessor and multiprocessor (e.g. distributed, SMP, and parallel) platforms.
Provides a common portal platform that joins Internet Service Provider users into a community, both offering personalization through "portlets" as the building blocks of .NET-style applications and using a Parallel Portal Engine to handle such demanding usage.
Go from idea to deployed AI app without managing infrastructure. Vertex AI offers one platform for the entire AI development lifecycle.
Ship AI apps and features faster with Vertex AI—your end-to-end AI platform. Access Gemini 3 and 200+ foundation models, fine-tune for your needs, and deploy with enterprise-grade MLOps. Build chatbots, agents, or custom models. New customers get $300 in free credit.
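As a minimal sketch of calling a foundation model through the Vertex AI Python SDK (not official quickstart code), the snippet below initializes the SDK and sends a prompt to a Gemini model; the project ID, location, model name, and prompt are all illustrative placeholders.

    # Sketch: generate text with a Gemini model via the Vertex AI SDK.
    # Requires: pip install google-cloud-aiplatform, plus application-default credentials.
    # Project, location, and model name are illustrative; use values your project has access to.
    import vertexai
    from vertexai.generative_models import GenerativeModel

    vertexai.init(project="my-project", location="us-central1")

    model = GenerativeModel("gemini-2.0-flash")  # illustrative model name
    response = model.generate_content("Summarize what Vertex AI is in one sentence.")
    print(response.text)

The same SDK also exposes tuning, chat, and deployment APIs, so the call above can grow into a full application without switching platforms.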
The Message Passing Interface (MPI) standard is a library specification for message passing on parallel computers. This project develops an extension to MPI in C++ such that STL objects can be transferred just as easily as fundamental data types.
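The project itself targets C++, but the convenience it aims for (sending containers as easily as scalars) can be illustrated with the mpi4py Python bindings, which already let arbitrary picklable objects travel over MPI; the filename in the run comment is hypothetical.

    # Illustration only: mpi4py sends a Python list as easily as an int, the same
    # convenience the C++/STL extension described above provides for STL containers.
    # Run with: mpiexec -n 2 python send_vector.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        data = [3.14, 2.71, 1.41]          # a container, sent like a fundamental type
        comm.send(data, dest=1, tag=0)
    elif rank == 1:
        data = comm.recv(source=0, tag=0)
        print("rank 1 received:", data)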
QUAFF is a C++ library for designing and running parallel code on any MPI-aware system by providing an easy-to-use yet efficient API based on algorithmic skeletons.
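QUAFF's own skeleton API is not reproduced here; as a rough illustration of what a skeleton such as "map" abstracts away, the sketch below hand-writes the scatter/compute/gather boilerplate with mpi4py that a skeleton library wraps behind a single call. The function, data, and filename are illustrative.

    # Hand-rolled "map" skeleton over MPI ranks: the root scatters chunks of work,
    # every rank applies the same function, and the root gathers the results.
    # Skeleton libraries like QUAFF hide this boilerplate behind one call.
    # Run with: mpiexec -n 4 python map_skeleton.py
    from mpi4py import MPI

    def square(x):
        return x * x

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    chunks = None
    if rank == 0:
        data = list(range(16))
        # round-robin split of the input into one chunk per rank
        chunks = [data[i::size] for i in range(size)]

    local = comm.scatter(chunks, root=0)          # distribute work
    partial = [square(x) for x in local]          # apply the mapped function
    gathered = comm.gather(partial, root=0)       # collect partial results

    if rank == 0:
        # results follow the round-robin split order, not the original input order
        print("mapped results:", [y for part in gathered for y in part])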
GAP (Grid Agents Platform) is a toolkit for modeling and simulating mobile agents in Grid environments. GAP is an abstraction over GridSim, a Grid simulation toolkit for resource modelling and application scheduling for parallel and distributed computing.
Host and run your applications without the need to manage infrastructure. Scales up from zero and back down to zero automatically.
Cloud Run is the fastest way to deploy containerized apps. Push your code in Go, Python, Node.js, Java, or any language and Cloud Run builds and deploys it automatically. Get fast autoscaling, pay only when your code runs, and skip the infrastructure headaches. Two million requests free per month. And new customers get $300 in free credit.
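As a minimal sketch (not official quickstart code), this is the shape of a containerized service Cloud Run expects: an HTTP server that listens on the port given in the PORT environment variable. Flask, the route, and the message are illustrative choices.

    # Minimal HTTP service suitable for Cloud Run: listen on the PORT env var
    # (Cloud Run injects it; 8080 is the conventional default) and answer requests.
    # Requires: pip install flask
    import os
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello from Cloud Run!"

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

Package it in a container (or let Cloud Run build from source) and deploy it with gcloud run deploy; autoscaling and pay-per-use billing then apply as described above.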