NLG-Eval is a toolkit for evaluating the quality of natural language generation (NLG) outputs using multiple automated metrics such as BLEU, METEOR, and ROUGE.

Features

  • Implements multiple NLG evaluation metrics
  • Supports sentence-level and corpus-level evaluation
  • Works with machine translation, summarization, and chatbot output
  • Provides command-line and Python API access
  • Allows custom metric integration
  • Optimized for large-scale NLG benchmarking
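To make the sentence-level evaluation above concrete, here is a minimal sketch of a BLEU-style score (modified n-gram precision with a brevity penalty). This is illustrative code built from the standard BLEU formula, not NLG-Eval's actual API; the function names and the add-one smoothing choice are assumptions for the example.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Count all n-grams of length n in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU sketch: geometric mean of smoothed n-gram
    precisions (n = 1..max_n) times a brevity penalty.
    Hypothetical helper, not NLG-Eval's implementation."""
    ref, hyp = reference.split(), hypothesis.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hyp, n)
        ref_ngrams = ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        # Add-one smoothing so one empty n-gram order doesn't zero the score.
        precisions.append((overlap + 1) / (total + 1))
    # Brevity penalty discourages overly short hypotheses.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(round(sentence_bleu("the cat sat on the mat",
                          "the cat sat on the mat"), 3))  # identical -> 1.0
```

A real toolkit would additionally handle multiple references per hypothesis and corpus-level aggregation, which is why corpus-level BLEU is listed as a separate feature above.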


License

MIT License




Additional Project Details

Operating Systems

Linux, Mac, Windows

Programming Language

Python

Related Categories

Python Natural Language Processing (NLP) Tool

Registered

2025-01-23