Reference implementation of the Transformer architecture optimized for the Apple Neural Engine (ANE)
ANE Transformers is a reference PyTorch implementation of Transformer components optimized for the Apple Neural Engine (ANE) on devices with A14 or newer chips and on Macs with M1 or newer chips. It demonstrates how to structure attention and related layers so that, when deployed to the ANE, they achieve substantial speedups and lower peak memory consumption compared to baseline implementations. The repository targets practitioners who want to keep familiar PyTorch modeling while preparing models for Core ML/ANE execution paths.
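To give a flavor of the kind of restructuring involved, below is a minimal sketch of one ANE-friendly pattern from Apple's deployment guidance: keeping activations in a channels-first (batch, channels, 1, seq_len) layout and expressing per-token linear projections as 1x1 convolutions. The module and parameter names here are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn

class ANEFriendlyProjection(nn.Module):
    """Illustrative sketch: a per-token linear projection expressed as a
    1x1 Conv2d over (B, C, 1, S) tensors, a layout that maps well to the
    Neural Engine. Names are hypothetical, not the repo's API."""

    def __init__(self, embed_dim: int, out_dim: int):
        super().__init__()
        # A 1x1 conv over (B, C, 1, S) is mathematically equivalent to
        # applying nn.Linear independently at every sequence position.
        self.proj = nn.Conv2d(embed_dim, out_dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, embed_dim, 1, seq_len)
        return self.proj(x)

# Usage: a batch of one sequence with 16 tokens and 512-dim embeddings,
# stored channels-first as (B, C, 1, S) rather than the usual (B, S, C).
x = torch.randn(1, 512, 1, 16)
y = ANEFriendlyProjection(512, 512)(x)  # -> (1, 512, 1, 16)
```

The reference modules in the repository apply this idea (and related ones, such as ANE-friendly attention and normalization layers) so that models remain ordinary PyTorch code while converting cleanly to Core ML for ANE execution.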