Summarization, translation, sentiment analysis, text generation and more at blazing speed using a T5 version implemented in ONNX. This package is still in the alpha stage, so some functionality, such as beam search, is still in development. The simplest way to get started with generation is to use the default pre-trained version of T5 on ONNX included in the package. Note that the first time you call get_encoder_decoder_tokenizer, the models are downloaded, which might take a minute or two. Other tasks only require changing the prefix in your prompt, for instance for summarization.
Features
- Run any of the T5 trained tasks in a line (translation, summarization, sentiment analysis, completion, generation)
- Export your own T5 models to ONNX easily
- Utility functions to generate what you need quickly
- Up to 4X speedup compared to PyTorch execution for smaller contexts
- ONNX-T5 is available on PyPI
- Summarization, translation, Q&A, text generation and more
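Because T5 is a text-to-text model, the tasks listed above are all selected by a plain-text prefix on the prompt rather than by separate model heads. The sketch below illustrates this convention with a few prefixes from the original T5 setup; the helper function and prefix table are illustrative, not part of the package's API.

```python
# Illustrative only: task prefixes follow the original T5 conventions,
# and build_prompt is a hypothetical helper, not an onnxt5 API.
TASK_PREFIXES = {
    "translation_en_fr": "translate English to French: ",
    "summarization": "summarize: ",
    "sentiment": "sst2 sentence: ",
}

def build_prompt(task: str, text: str) -> str:
    """Prepend the task-selecting prefix to the input text."""
    return TASK_PREFIXES[task] + text

print(build_prompt("summarization", "The quick brown fox jumps over the lazy dog."))
```

The resulting string is what you would pass to the generation pipeline; the model infers the task entirely from the prefix.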