Open-source pre-training implementation of Google's LaMDA research paper in PyTorch. The totally not sentient AI. This repository covers the 2B-parameter configuration of the pre-training architecture, as that is likely what most can afford to train. For background, see Google's 2022 blog post detailing LaMDA, as well as their earlier 2021 post introducing the model.
Features
- Open-source pre-training implementation of Google's LaMDA research paper in PyTorch
- T5 Relative Positional Bias in Attention
- Gated GELU activation in the feed-forward layer
- GPT-like Decoder Only architecture
- Autoregressive with Top-k sampling
- SentencePiece byte-pair encoding (BPE) tokenizer
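The T5-style relative positional bias listed above adds a learned scalar per attention head for each relative offset between query and key positions, directly onto the attention logits. The sketch below is a simplified illustration, not the repository's code: the real T5 scheme buckets large distances log-spaced, whereas this version simply clamps them, and all class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class RelativePositionBias(nn.Module):
    """Simplified T5-style relative position bias (illustrative sketch).

    Learns one scalar per (head, clamped relative offset) and returns a
    tensor broadcastable over attention scores of shape
    (batch, heads, seq, seq). T5 proper uses log-spaced distance buckets;
    here distances are clamped to +/- max_distance for simplicity.
    """

    def __init__(self, num_heads, max_distance=128):
        super().__init__()
        self.max_distance = max_distance
        # One embedding row per possible clamped relative offset.
        self.bias = nn.Embedding(2 * max_distance + 1, num_heads)

    def forward(self, seq_len):
        pos = torch.arange(seq_len)
        rel = pos[None, :] - pos[:, None]  # (seq, seq) relative offsets
        rel = rel.clamp(-self.max_distance, self.max_distance) + self.max_distance
        # (seq, seq, heads) -> (1, heads, seq, seq)
        return self.bias(rel).permute(2, 0, 1).unsqueeze(0)

bias = RelativePositionBias(num_heads=8)(16)
```

Because the bias depends only on relative offsets, it is shared across batch elements and can be added to the raw `q @ k.T` scores before the softmax.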
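The gated GELU feed-forward layer (GEGLU) replaces the usual single up-projection with two parallel projections, one of which gates the other through a GELU. A minimal sketch, assuming a bias-free formulation; the class and dimension names are illustrative, not taken from the repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GEGLUFeedForward(nn.Module):
    """Gated GELU feed-forward block (hypothetical sketch).

    Projects to twice the hidden width, splits into a value stream and a
    gate stream, multiplies the value by GELU(gate), then projects back.
    """

    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.proj_in = nn.Linear(dim, hidden_dim * 2, bias=False)
        self.proj_out = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x):
        value, gate = self.proj_in(x).chunk(2, dim=-1)
        return self.proj_out(value * F.gelu(gate))

ff = GEGLUFeedForward(dim=64, hidden_dim=256)
out = ff(torch.randn(2, 10, 64))  # shape preserved: (2, 10, 64)
```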
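Autoregressive generation with top-k sampling keeps only the k highest-probability logits at each step, renormalizes them, and samples the next token from that truncated distribution. A self-contained sketch of the technique; the function name, default k, and vocabulary size are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def top_k_sample(logits, k=10, temperature=1.0):
    """Sample next-token ids from the top-k logits (illustrative sketch).

    logits: (batch, vocab) raw scores for the next token.
    Returns: (batch, 1) sampled token ids.
    """
    logits = logits / temperature
    topk_vals, topk_idx = logits.topk(k, dim=-1)   # keep k best per row
    probs = F.softmax(topk_vals, dim=-1)           # renormalize over the k
    choice = torch.multinomial(probs, num_samples=1)
    return topk_idx.gather(-1, choice)             # map back to vocab ids

logits = torch.randn(4, 32000)       # batch of 4, hypothetical 32k vocab
next_tokens = top_k_sample(logits)   # shape (4, 1)
```

In a full generation loop this step is repeated, appending each sampled token to the context before the next forward pass.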
License
MIT License