NLG-Eval is a toolkit for evaluating the quality of natural language generation (NLG) outputs using multiple automated metrics such as BLEU, METEOR, and ROUGE.
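The page itself does not include usage examples, so as a rough illustration of the kind of sentence-level metric the toolkit computes, here is BLEU scored with NLTK's reference implementation rather than NLG-Eval's own API (which is not documented on this page):

    # Illustrative only: this uses NLTK, not NLG-Eval, to show what a
    # sentence-level BLEU computation looks like.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = ["the", "cat", "sat", "on", "the", "mat"]   # gold tokens
    hypothesis = ["the", "cat", "is", "on", "the", "mat"]   # model output tokens

    # sentence_bleu takes a list of references; smoothing avoids a zero
    # score when some higher-order n-gram has no match.
    score = sentence_bleu([reference], hypothesis,
                          smoothing_function=SmoothingFunction().method1)
    print(f"Sentence-level BLEU: {score:.3f}")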

Features

  • Implements multiple NLG evaluation metrics
  • Supports sentence-level and corpus-level evaluations
  • Works with machine translation, summarization, and chatbot output
  • Provides command-line and Python API access (see the sketch after this list)
  • Allows custom metric integration
  • Optimized for large-scale NLG benchmarking
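Since the Python API is not documented on this page, the following is only a minimal sketch of what corpus-level scoring through such an API could look like. The nlgeval module, NLGEval class, and compute_metrics method are assumed names for illustration, not confirmed by this page:

    # Hypothetical sketch: the module, class, and method names below are
    # assumptions for illustration and are not confirmed by this page.
    from nlgeval import NLGEval  # assumed package name

    evaluator = NLGEval()

    # One hypothesis per example; each example may have several references.
    references = [
        ["the cat sat on the mat", "a cat was sitting on the mat"],
        ["it is raining today"],
    ]
    hypotheses = [
        "the cat is on the mat",
        "today it rains",
    ]

    # Assumed behavior: returns a dict of corpus-level metric scores,
    # e.g. {"BLEU": ..., "METEOR": ..., "ROUGE_L": ...}.
    scores = evaluator.compute_metrics(references, hypotheses)
    print(scores)

A command-line entry point for the same task would presumably take paths to hypothesis and reference files; the exact flags are likewise not documented here.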

License

MIT License

Additional Project Details

Operating Systems

Linux, Mac, Windows

Programming Language

Python

Related Categories

Python Natural Language Processing (NLP) Tool

Registered

2025-01-23