NLG-Eval is a toolkit for evaluating the quality of natural language generation (NLG) outputs using multiple automated metrics such as BLEU, METEOR, and ROUGE.
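The metrics it wraps are reference-based overlap scores: a hypothesis sentence is compared against one or more human references. As a rough illustration of what such a metric computes (a minimal toy sketch, not NLG-Eval's implementation), here is a sentence-level BLEU-1:

```python
from collections import Counter
import math

def bleu1(hypothesis: str, reference: str) -> float:
    """Toy sentence-level BLEU-1: clipped unigram precision times a
    brevity penalty. Full BLEU also averages 2- to 4-gram precisions."""
    hyp, ref = hypothesis.lower().split(), reference.lower().split()
    if not hyp:
        return 0.0
    ref_counts = Counter(ref)
    # Clip each hypothesis token's count at its count in the reference,
    # so repeating a correct word cannot inflate the score.
    overlap = sum(min(c, ref_counts[tok]) for tok, c in Counter(hyp).items())
    precision = overlap / len(hyp)
    # Penalize hypotheses that are shorter than the reference.
    brevity = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return brevity * precision

print(bleu1("the cat sat on the mat", "the cat is on the mat"))  # ~0.83
```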
Features
- Implements multiple NLG evaluation metrics
- Supports sentence-level and corpus-level evaluation (both illustrated in the sketch after this list)
- Works with machine translation, summarization, and chatbot output
- Provides command-line and Python API access
- Allows custom metric integration (also shown in the sketch after this list)
- Optimized for large-scale NLG benchmarking
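The following sketch shows how sentence-level scoring, corpus-level scoring, and custom metric integration can fit together. It is a hypothetical illustration: the `Evaluator` class, `register_metric` hook, and method names are assumptions made for this example, not NLG-Eval's documented API.

```python
from statistics import mean
from typing import Callable, Dict, List

# Hypothetical pluggable evaluator in the spirit of the feature list
# above; all names here are illustrative, not NLG-Eval's actual API.
class Evaluator:
    def __init__(self) -> None:
        self.metrics: Dict[str, Callable[[str, str], float]] = {}

    def register_metric(self, name: str, fn: Callable[[str, str], float]) -> None:
        """Custom metric integration: any (hypothesis, reference) -> float."""
        self.metrics[name] = fn

    def score_sentence(self, hyp: str, ref: str) -> Dict[str, float]:
        """Sentence-level evaluation: one score per metric for a single pair."""
        return {name: fn(hyp, ref) for name, fn in self.metrics.items()}

    def score_corpus(self, hyps: List[str], refs: List[str]) -> Dict[str, float]:
        """Corpus-level evaluation, here as the mean of sentence scores.
        (True corpus BLEU pools n-gram counts instead of averaging.)"""
        return {name: mean(fn(h, r) for h, r in zip(hyps, refs))
                for name, fn in self.metrics.items()}

# Example: plug in a trivial word-overlap (Jaccard) metric.
def jaccard(hyp: str, ref: str) -> float:
    h, r = set(hyp.split()), set(ref.split())
    return len(h & r) / max(len(h | r), 1)

ev = Evaluator()
ev.register_metric("jaccard", jaccard)
print(ev.score_sentence("the cat sat", "the cat slept"))        # {'jaccard': 0.5}
print(ev.score_corpus(["the cat sat"], ["the cat slept"]))      # {'jaccard': 0.5}
```

Registering metrics as plain callables keeps built-in and user-defined metrics interchangeable, so the same corpus loop can drive both.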
Categories
Natural Language Processing (NLP)
License
MIT License