MUSE is a framework for learning multilingual word embeddings that live in a shared space, enabling bilingual lexicon induction, cross-lingual retrieval, and zero-shot transfer. It supports both supervised alignment with seed dictionaries and unsupervised alignment that starts without parallel data, using adversarial initialization followed by Procrustes refinement. The code can align pre-trained monolingual embeddings (such as fastText) across dozens of languages and provides standardized evaluation scripts and dictionaries.

By mapping languages into a common vector space, MUSE makes it straightforward to build cross-lingual applications for languages where resources are scarce. The training and evaluation pipeline is lightweight and fast, so experimenting with different languages or initialization strategies is cheap. Beyond dictionary induction, the learned embeddings are often used as building blocks for downstream tasks such as classification, retrieval, and machine translation.
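To make the Procrustes refinement concrete: given source and target vectors for known dictionary pairs, the best orthogonal map has a closed-form SVD solution. The sketch below illustrates that math with NumPy on synthetic data; the matrices `X` and `Y` and the setup are illustrative, not MUSE's actual code.

```python
# A minimal sketch of the orthogonal Procrustes step, assuming X and Y are
# (n, d) matrices of source/target word vectors for n seed dictionary pairs.
import numpy as np

def procrustes(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Return the orthogonal map W minimizing ||X @ W - Y||_F."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
d = 300                                      # e.g. fastText dimensionality
X = rng.standard_normal((5000, d))           # toy source-language vectors
true_W, _ = np.linalg.qr(rng.standard_normal((d, d)))  # a random rotation
Y = X @ true_W                               # synthetic "target" vectors
W = procrustes(X, Y)
print(np.allclose(X @ W, Y, atol=1e-6))      # True: the rotation is recovered
```

Constraining the map to be orthogonal preserves distances and angles within the source space, which is why the refinement is stable even when the seed dictionary is noisy.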
Features
- Supervised and unsupervised cross-lingual embedding alignment
- Adversarial initialization with Procrustes refinement
- Ready-made evaluation scripts and bilingual dictionaries
- Support for dozens of languages and fast experimentation
- Works with common monolingual embeddings like fastText
- Outputs reusable aligned spaces for retrieval and transfer (see the retrieval sketch after this list)
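Once the spaces are aligned, lexicon induction reduces to nearest-neighbor search across languages. The sketch below uses plain cosine similarity for brevity; MUSE's own evaluation favors the CSLS criterion to mitigate hubness. All names here (`nearest_translations`, the toy arrays) are illustrative, not MUSE's API.

```python
# A hedged sketch of bilingual lexicon induction over aligned embedding
# spaces: retrieve the k nearest target words for each source word by
# cosine similarity. Names and data are hypothetical examples.
import numpy as np

def nearest_translations(src_emb, tgt_emb, tgt_words, k=1):
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src @ tgt.T                       # cosine similarity matrix
    top = np.argsort(-sims, axis=1)[:, :k]   # k best target indices per row
    return [[tgt_words[j] for j in row] for row in top]

# Toy demo: two "source" vectors that coincide with known target vectors.
rng = np.random.default_rng(1)
tgt_emb = rng.standard_normal((3, 8))
src_emb = tgt_emb[[2, 0]]
print(nearest_translations(src_emb, tgt_emb, ["cat", "dog", "house"]))
# [['house'], ['cat']]
```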