bert4keras is our light reimplementation of BERT for Keras: a cleaner, lighter version of BERT, compatible with both keras and tf.keras. It is the author's Keras re-implementation of the Transformer model family, committed to combining Transformers with Keras in code that is as clean as possible. The original intention of the project is to make modification and customization convenient, so it may be updated frequently. A quick loading sketch follows; the full feature list is below.
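As a quick taste, here is a minimal sketch of loading pre-trained BERT weights and extracting features for fine-tuning. The module paths (`bert4keras.models`, `bert4keras.tokenizers`) and the checkpoint file names are assumptions based on recent bert4keras releases; adjust them to your installed version and local files.

```python
import numpy as np

from bert4keras.models import build_transformer_model
from bert4keras.tokenizers import Tokenizer

# Placeholder paths for a downloaded Google BERT checkpoint (assumption).
config_path = 'chinese_L-12_H-768_A-12/bert_config.json'
checkpoint_path = 'chinese_L-12_H-768_A-12/bert_model.ckpt'
dict_path = 'chinese_L-12_H-768_A-12/vocab.txt'

# Build the tokenizer and the model, loading the pre-trained weights.
tokenizer = Tokenizer(dict_path, do_lower_case=True)
model = build_transformer_model(config_path, checkpoint_path)  # BERT by default

# Encode a sentence and extract the final hidden states.
token_ids, segment_ids = tokenizer.encode('language model')
features = model.predict([np.array([token_ids]), np.array([segment_ids])])
print(features.shape)  # (1, sequence_length, hidden_size)
```

From here, the usual pattern is to add a task head (e.g. a Dense layer on the `[CLS]` position) on top of `model.output` and fine-tune end to end.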
Features
- Compatible with keras and tf.keras
- Rich examples
- Load the pre-trained weights of BERT/RoBERTa/ALBERT for fine-tuning
- Implement the attention masks required by language models and seq2seq (see the sketch after this list)
- Pre-training code from scratch (supports TPU and multi-GPU; see pretraining)
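For the language-model and seq2seq masks, here is a minimal sketch assuming the `application` argument of `build_transformer_model` found in recent bert4keras releases:

```python
from bert4keras.models import build_transformer_model

# Same placeholder checkpoint paths as in the sketch above (assumption).
config_path = 'chinese_L-12_H-768_A-12/bert_config.json'
checkpoint_path = 'chinese_L-12_H-768_A-12/bert_model.ckpt'

# application='unilm' applies the UniLM-style seq2seq attention mask:
# segment-one tokens attend bidirectionally, while segment-two tokens
# attend only to segment one and to earlier segment-two tokens.
# application='lm' would apply a plain causal (left-to-right) mask instead.
model = build_transformer_model(
    config_path,
    checkpoint_path,
    application='unilm',
)
```

This mask is what lets a single BERT encoder be fine-tuned as a seq2seq model: the source segment is read bidirectionally and the target segment is generated left to right.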
License
Apache License V2.0