A simple but complete full-attention transformer with a set of promising experimental features from various papers.

Persistent memory (from "Augmenting Self-attention with Persistent Memory"): proposes adding learned memory key/values that are attended to alongside the keys and values computed from the input. The authors were able to remove the feedforward layers altogether and attain performance similar to the original transformer. I have found that keeping the feedforwards and adding the memory key/values leads to even better performance.

Memory tokens: proposes adding learned tokens, akin to CLS tokens, that are passed through the attention layers alongside the input tokens.

L2-normalized embeddings: you can also use the l2-normalized embeddings proposed as part of fixnorm ("Transformers Without Tears"). I have found that they improve convergence when paired with a small embedding initialization (proposed by BlinkDL). The small initialization is taken care of automatically as long as l2norm_embed is set to True.
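A minimal sketch of how these options combine, using the keyword arguments described in the x-transformers README (attn_num_mem_kv, num_memory_tokens, l2norm_embed); the specific sizes chosen here are arbitrary examples, not recommendations.

```python
import torch
from x_transformers import TransformerWrapper, Decoder

model = TransformerWrapper(
    num_tokens = 20000,        # vocabulary size
    max_seq_len = 1024,
    num_memory_tokens = 20,    # learned memory tokens passed alongside the input tokens
    l2norm_embed = True,       # l2-normalized embeddings, with small init handled for you
    attn_layers = Decoder(
        dim = 512,
        depth = 6,
        heads = 8,
        attn_num_mem_kv = 16   # learned memory key/values added to each attention layer
    )
)

x = torch.randint(0, 20000, (1, 1024))
logits = model(x)              # (1, 1024, 20000)
```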

Features

  • Decoder-only (GPT-like)
  • Encoder-only (BERT-like)
  • State of the art image classification
  • Augmenting Self-attention with Persistent Memory
  • Transformers Without Tears
  • Root Mean Square Layer Normalization
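The first two entries above correspond to the two basic ways of instantiating the library. A minimal sketch, assuming the TransformerWrapper, Decoder, and Encoder classes as documented in the project README:

```python
import torch
from x_transformers import TransformerWrapper, Decoder, Encoder

# Decoder-only (GPT-like): causal self-attention over a token sequence
gpt_like = TransformerWrapper(
    num_tokens = 20000,
    max_seq_len = 1024,
    attn_layers = Decoder(dim = 512, depth = 6, heads = 8)
)

# Encoder-only (BERT-like): bidirectional self-attention with an optional padding mask
bert_like = TransformerWrapper(
    num_tokens = 20000,
    max_seq_len = 1024,
    attn_layers = Encoder(dim = 512, depth = 6, heads = 8)
)

tokens = torch.randint(0, 20000, (1, 1024))
mask = torch.ones(1, 1024).bool()          # True at positions that should be attended to
logits = bert_like(tokens, mask = mask)    # (1, 1024, 20000)
```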

Categories

Machine Learning

License

MIT License

Additional Project Details

Programming Language

Python

Related Categories

Python Machine Learning Software

Registered

2022-08-11