| Name | Modified | Size | Downloads / Week |
|---|---|---|---|
| README.md | 2020-03-30 | 1.8 kB | |
| Text Summarization Models.tar.gz | 2020-03-30 | 586.3 kB | |
| Text Summarization Models.zip | 2020-03-30 | 736.0 kB | |
| Totals: 3 Items | | 1.3 MB | 0 |
Text Summarization
In this release, we support both abstractive and extractive text summarization.
New Model: UniLM
UniLM is a state-of-the-art model developed by Microsoft Research Asia (MSRA). The model is pre-trained on a large unlabeled natural language corpus (English Wikipedia and BookCorpus) and can be fine-tuned on different types of labeled data for various NLP tasks, such as text classification and abstractive summarization.
Supported Models
- unilm-large-cased
- unilm-base-cased
For more information about UniLM, please refer to the following:
- Paper: Unified Language Model Pre-training for Natural Language Understanding and Generation
- GitHub: https://github.com/microsoft/unilm
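The core idea behind UniLM's sequence-to-sequence mode is a self-attention mask that lets a single Transformer act as both encoder and decoder: source tokens attend to each other bidirectionally, while target tokens attend to the full source and only to their own left context. Below is a minimal, self-contained sketch of that mask in plain PyTorch; it illustrates the idea and is not code from the UniLM repository.

```python
import torch

def seq2seq_attention_mask(src_len: int, tgt_len: int) -> torch.Tensor:
    """Build a (src_len + tgt_len) square 0/1 mask where entry (i, j) = 1
    means position i may attend to position j."""
    total = src_len + tgt_len
    mask = torch.zeros(total, total, dtype=torch.long)
    # Every position may attend to the full source: this gives the source
    # block bidirectional attention and lets target tokens see the source.
    mask[:, :src_len] = 1
    # Target positions additionally attend causally within the target
    # (lower-triangular block), as in a standard decoder.
    mask[src_len:, src_len:] = torch.tril(
        torch.ones(tgt_len, tgt_len, dtype=torch.long)
    )
    return mask

# For a 3-token source and 2-token target, the last two rows show the
# causal pattern over the target while every row sees the full source.
print(seq2seq_attention_mask(3, 2))
```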
Thanks to the UniLM team (Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon) for their great work and for supporting this integration.
New Model: BERTSum
BERTSum is an encoder architecture designed for text summarization. It can be used together with different decoders to support both extractive and abstractive summarization.
Supported Models
- bert-base-uncased (extractive and abstractive)
- distilbert-base-uncased (extractive)
Papers:
- Fine-tune BERT for Extractive Summarization
- Text Summarization with Pretrained Encoders

GitHub: https://github.com/nlpyang/PreSumm
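To make the extractive setup concrete, here is a minimal sketch of the BERTSum scoring idea, assuming the Hugging Face transformers package: a [CLS] token is inserted before each sentence, and the hidden state at each [CLS] position serves as a sentence embedding that a classifier scores for extraction. The linear scorer below is untrained and stands in for the inter-sentence layers of the full model; treat this as an illustration, not the repository's implementation.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
# Untrained stand-in for BERTSum's inter-sentence classifier; the full model
# also uses interval segment embeddings, which are omitted here.
scorer = torch.nn.Linear(encoder.config.hidden_size, 1)

sentences = ["The cat sat on the mat.", "It was a sunny day.", "Cats enjoy warmth."]
# Prefix each sentence with [CLS] and end it with [SEP], as in the paper.
text = "".join(f"[CLS] {s} [SEP]" for s in sentences)
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False)

with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden_size)

# Each [CLS] hidden state represents the sentence that follows it.
cls_positions = (inputs["input_ids"][0] == tokenizer.cls_token_id).nonzero(as_tuple=True)[0]
sentence_vecs = hidden[0, cls_positions]    # one vector per sentence
scores = scorer(sentence_vecs).squeeze(-1)  # higher = more likely to be extracted
print(scores)
```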
Thanks to the original authors Yang Liu and Mirella Lapata for their great contribution.
All model implementations support distributed training and multi-GPU inference. For abstractive summarization, we also support mixed-precision training and inference.
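As a note on mixed precision, the following sketch shows one common way to run half-precision inference, using PyTorch's native AMP (torch.cuda.amp); the repository may rely on a different mechanism (e.g. NVIDIA Apex), and the bert-base-uncased encoder below is just a stand-in for the summarization models above.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
# For simple multi-GPU inference one could additionally wrap the model:
# model = torch.nn.DataParallel(model)

inputs = tokenizer("An example document to encode.", return_tensors="pt").to(device)
# autocast runs eligible ops in float16 on the GPU; it is disabled on CPU.
with torch.no_grad(), torch.cuda.amp.autocast(enabled=(device == "cuda")):
    hidden = model(**inputs).last_hidden_state
print(hidden.dtype)  # torch.float16 under autocast on GPU, torch.float32 on CPU
```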