NLG-Eval is a toolkit for evaluating the quality of natural language generation (NLG) outputs using multiple automated metrics such as BLEU, METEOR, and ROUGE.
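To make the metrics concrete, here is a minimal, self-contained sketch of a sentence-level BLEU computation (modified n-gram precision plus a brevity penalty). This is an illustration of the metric itself, not NLG-Eval's internal implementation; the function name and smoothing constant are assumptions for the example.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of all n-grams in the token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU with uniform weights over 1..max_n grams.

    reference, hypothesis: lists of tokens. Returns a float in [0, 1].
    A tiny epsilon smooths zero n-gram counts to avoid log(0).
    """
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hypothesis, n)
        ref_ngrams = ngrams(reference, n)
        # Clipped overlap: each hypothesis n-gram counts at most as
        # often as it appears in the reference.
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    # Geometric mean of the n-gram precisions.
    log_mean = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: penalize hypotheses shorter than the reference.
    if len(hypothesis) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / len(hypothesis))
    return bp * math.exp(log_mean)
```

An identical hypothesis and reference score 1.0; a partial overlap scores somewhere in between, driven down by the geometric mean and the brevity penalty.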
Features
- Implements multiple NLG evaluation metrics
- Supports sentence-level and corpus-level evaluations
- Works with machine translation, summarization, and chatbot output
- Provides command-line and Python API access
- Allows custom metric integration
- Optimized for large-scale NLG benchmarking
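The source does not document NLG-Eval's actual extension API, but the custom-metric and corpus-level features above can be sketched with a hypothetical interface: a metric is any callable over paired token lists, and corpus-level evaluation averages it across sentence pairs. All names below (`token_f1`, `evaluate_corpus`) are assumptions for illustration.

```python
from typing import Callable, Dict, List

# Hypothetical contract: a metric maps (reference, hypothesis) tokens to a score.
Metric = Callable[[List[str], List[str]], float]

def token_f1(reference: List[str], hypothesis: List[str]) -> float:
    """Example custom metric: unigram F1 overlap of token sets."""
    ref, hyp = set(reference), set(hypothesis)
    if not ref or not hyp:
        return 0.0
    overlap = len(ref & hyp)
    if overlap == 0:
        return 0.0
    precision = overlap / len(hyp)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def evaluate_corpus(metrics: Dict[str, Metric],
                    references: List[List[str]],
                    hypotheses: List[List[str]]) -> Dict[str, float]:
    """Corpus-level score: mean of each sentence-level metric over all pairs."""
    return {
        name: sum(fn(r, h) for r, h in zip(references, hypotheses)) / len(references)
        for name, fn in metrics.items()
    }
```

Registering another metric is then just adding an entry to the `metrics` dict, which is the spirit of the "custom metric integration" feature.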
Categories
Natural Language Processing (NLP)
License
MIT License