Implementations and code to accompany DeepMind publications
A library for graph deep learning research
Paddle Quantum
Generate photo-realistic textures based on source images
PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech)
Libraries for optimizing AI models, inference speed, and GPU usage
Implementation of model parallel autoregressive transformers on GPUs
Code release for ConvNeXt V2 model
CPT: A Pre-Trained Unbalanced Transformer
A collection of practical tips can be found at the bottom of this page
AI-powered offline background remover
Large dataset of coding contests designed for AI and ML model training
Code for a multi-agent particle environment used in a paper
Source code accompanying the book Data Science on the GCP
Automating Host Exploitation with AI
Your gateway to GPT writing
AI discovers faster, more efficient algorithms for matrix multiplication
Large-scale pretraining for dialogue
An interpretable and efficient predictor using pre-trained models
Implementation of BEVFormer, a camera-only framework for bird's-eye-view perception
StudioGAN is a PyTorch library providing implementations of representative GANs
Experiments and code from Google Brain’s Tokyo research workshop
Experiment tracking, ML developer tools
ReSkin Sensor Interfacing Library
Learning to Act by Watching Unlabeled Online Videos