NLG-Eval is a toolkit for evaluating the quality of natural language generation (NLG) outputs using multiple automated metrics such as BLEU, METEOR, and ROUGE.
Features
- Implements multiple NLG evaluation metrics
- Supports sentence-level and corpus-level evaluations
- Works with machine translation, summarization, and chatbot output
- Provides command-line and Python API access
- Allows custom metric integration
- Optimized for large-scale NLG benchmarking
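To illustrate the kind of metric the toolkit computes, below is a minimal, self-contained sketch of sentence-level BLEU (modified n-gram precision with a brevity penalty). This is an illustrative reimplementation for explanation only, not NLG-Eval's own code; the function name and smoothing constant are assumptions of this sketch.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, hypothesis, max_n=4):
    """Illustrative sentence-level BLEU: geometric mean of clipped
    n-gram precisions (n = 1..max_n) times a brevity penalty.
    Not the toolkit's implementation."""
    ref, hyp = reference.split(), hypothesis.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(ref, n))
        hyp_counts = Counter(ngrams(hyp, n))
        # Clip each hypothesis n-gram count by its count in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Tiny floor avoids log(0) when a higher-order n-gram never matches
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    # Brevity penalty discourages hypotheses shorter than the reference
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

An identical hypothesis and reference score 1.0, while partial overlap yields a score between 0 and 1; corpus-level BLEU, as supported by the toolkit, aggregates the clipped counts over all sentence pairs before taking the geometric mean rather than averaging per-sentence scores.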
Categories
Natural Language Processing (NLP)
License
MIT License