Generating adversarial examples for NLP models. TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.
Features
- Understand NLP models better by running different adversarial attacks on them and examining the output
- Research and develop different NLP adversarial attacks using the TextAttack framework and library of components
- Augment your dataset to increase model generalization and robustness downstream
- Train NLP models using just a single command (all downloads included!)
- Full documentation available
- Requires Python 3.6 or later
- Pre-trained models for testing attacks and evaluating constraints
- Visualization options like Weights & Biases and Visdom
- AttackedText, a utility class for strings that includes tools for tokenizing and editing text
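The attack, augmentation, and training features above are all exposed through the `textattack` command-line tool. A minimal sketch of each, assuming the package is installed (e.g. via `pip install textattack`); exact flag names can vary between versions, so treat these as illustrative and check the documentation for your release:

```shell
# Run the TextFooler attack recipe against a pre-trained BERT model
# (model shorthand assumes TextAttack's model zoo) on 10 examples:
textattack attack --recipe textfooler --model bert-base-uncased-mr --num-examples 10

# Augment a CSV dataset by swapping a fraction of words per example
# (file and column names here are placeholders):
textattack augment --input-csv examples.csv --text-column text \
    --recipe embedding --pct-words-to-swap 0.1 \
    --transformations-per-example 2 --output-csv augmented.csv

# Fine-tune a model with a single command; datasets and model weights
# are downloaded automatically:
textattack train --model bert-base-uncased --dataset glue^sst2
```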
Categories
Machine Learning
License
MIT License