BytePS
A high-performance, generic framework for distributed DNN training
...We show our experiment on BERT-large training, based on the GluonNLP toolkit, with mixed-precision training. Each machine has 8 Tesla V100 GPUs (32GB memory) with NVLink enabled, and we use a batch size of 64 per GPU. Machines are interconnected with a 100 Gbps RDMA network. This is the same hardware setup you can get on AWS.
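For reference, below is a minimal sketch of what a single BytePS worker process looks like in MXNet/Gluon under a setup like this (mixed precision, batch size 64 per GPU). The tiny `Dense` model and random data are placeholders, not the actual GluonNLP BERT-large training script; the BytePS calls (`init`, `local_rank`, `DistributedTrainer`, `broadcast_parameters`) follow the framework's Horovod-style MXNet API.

```python
# Minimal sketch of a BytePS data-parallel worker in MXNet/Gluon.
# Placeholder model/data stand in for the GluonNLP BERT-large script.
import mxnet as mx
from mxnet import gluon, autograd
import byteps.mxnet as bps

bps.init()                                   # one BytePS process per GPU
ctx = mx.gpu(bps.local_rank())               # pin this process to its GPU

net = gluon.nn.Dense(2)                      # placeholder for BERT-large
net.initialize(ctx=ctx)
net.cast('float16')                          # mixed precision: fp16 compute

# DistributedTrainer replaces gluon.Trainer; gradients are synchronized
# via BytePS push-pull instead of all-reduce.
trainer = bps.DistributedTrainer(
    net.collect_params(), 'adam',
    {'learning_rate': 1e-4, 'multi_precision': True})  # fp32 master weights

# Ensure all workers start from identical parameters.
bps.broadcast_parameters(net.collect_params(), root_rank=0)

batch_size = 64                              # per GPU, as in the setup above
x = mx.nd.random.uniform(shape=(batch_size, 16), ctx=ctx, dtype='float16')
y = mx.nd.random.uniform(shape=(batch_size, 2), ctx=ctx, dtype='float16')
loss_fn = gluon.loss.L2Loss()

with autograd.record():
    loss = loss_fn(net(x), y)
loss.backward()
trainer.step(batch_size)                     # gradient sync happens here
```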