Hallucination Leaderboard is an open research project that tracks and compares how often large language models (LLMs) produce hallucinated or inaccurate information when generating summaries. The project provides a standardized benchmark that evaluates models with a dedicated detection system known as the Hallucination Evaluation Model. Each model is tested on document summarization tasks to measure how often its responses introduce information that is not supported by the source document. The results are published as a leaderboard, allowing researchers and developers to compare models on reliability and factual consistency. By focusing on hallucination rate rather than traditional metrics such as accuracy or fluency, the benchmark highlights a key aspect of AI system safety and trustworthiness. The leaderboard is updated regularly as new models are released and evaluation methods evolve.
Features
- Benchmark that measures hallucination frequency in language model outputs
- Evaluation framework based on document summarization tasks
- Leaderboard comparing hallucination rates across multiple LLMs
- Automated scoring using a dedicated hallucination evaluation model
- Public dataset and evaluation pipeline for reproducible testing
- Regular updates tracking performance of newly released models
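To make the evaluation flow concrete, here is a minimal sketch of a leaderboard-style scoring loop. The real project uses a trained hallucination evaluation model; the `consistency_score` function below is a deliberately naive token-overlap stand-in (a hypothetical proxy, not the actual detector), used only to illustrate how per-summary scores aggregate into a model's hallucination rate.

```python
def consistency_score(source: str, summary: str) -> float:
    """Toy proxy: fraction of summary words that also appear in the source.
    A real evaluator would be a trained model, not word overlap."""
    source_words = set(source.lower().split())
    summary_words = summary.lower().split()
    if not summary_words:
        return 1.0
    supported = sum(1 for w in summary_words if w in source_words)
    return supported / len(summary_words)


def hallucination_rate(pairs, threshold=0.5):
    """Share of (source, summary) pairs whose consistency score
    falls below the threshold, i.e. summaries flagged as hallucinated."""
    flagged = sum(
        1 for src, summ in pairs if consistency_score(src, summ) < threshold
    )
    return flagged / len(pairs)


# Two illustrative document/summary pairs: one faithful, one fabricated.
docs = [
    ("The cat sat on the mat.", "The cat sat on the mat."),
    ("The cat sat on the mat.", "The dog flew to the moon."),
]
print(f"hallucination rate: {hallucination_rate(docs):.2f}")  # → 0.50
```

A model's leaderboard position would then simply be its hallucination rate over the full benchmark corpus, with lower rates ranking higher.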