LongBench is a comprehensive benchmark designed to evaluate the ability of large language models to understand and reason over long textual contexts. Traditional benchmarks typically involve relatively short inputs, which does not reflect many real-world applications such as analyzing large documents or entire code repositories. LongBench addresses this gap with datasets that require models to process and reason over long input sequences across multiple tasks.

The benchmark covers several task categories, including single-document question answering, multi-document reasoning, summarization, long dialogue understanding, and code analysis. It supports bilingual evaluation in English and Chinese to assess multilingual capabilities over extended contexts. Newer versions of the benchmark introduce even longer contexts, ranging from thousands to millions of tokens, enabling researchers to test the limits of modern long-context models.
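As an illustration of how an individual task can be consumed in practice, the minimal sketch below loads one task split with the Hugging Face `datasets` library. The hub path `THUDM/LongBench`, the config name `narrativeqa`, and the field names are assumptions about the public release and may differ across versions of the benchmark.

```python
# Minimal sketch: loading and inspecting one LongBench task with the Hugging
# Face `datasets` library. The hub path "THUDM/LongBench" and the config name
# "narrativeqa" are assumptions about the public release; other tasks use
# their own config names.
from datasets import load_dataset

data = load_dataset("THUDM/LongBench", "narrativeqa", split="test")

example = data[0]
print(example.keys())           # inspect the standardized fields
print(len(example["context"]))  # size of the long document to reason over
```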
Features
- Benchmark for evaluating long-context reasoning in large language models
- Multitask datasets covering QA, summarization, dialogue, and code analysis
- Support for bilingual evaluation in English and Chinese
- Context lengths ranging from thousands to millions of tokens
- Standardized dataset format enabling automated evaluation (see the sketch after this list)
- Tasks designed to simulate real-world long-document reasoning scenarios
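The standardized format is what makes automated evaluation straightforward: every example pairs a long context with a task query and reference answers. The sketch below shows one possible evaluation loop under that assumption; the field names (`context`, `input`, `answers`) are taken to mirror the released data format, and `model_generate` / `score_prediction` are hypothetical placeholders for an actual model call and a task-appropriate metric.

```python
# Minimal sketch of an automated evaluation loop over the standardized format.
# Field names (context, input, answers) are assumptions about the released
# schema; model_generate and score_prediction are hypothetical stand-ins for
# a real long-context model call and a real task metric (e.g. F1 or ROUGE).
from datasets import load_dataset

def model_generate(prompt: str) -> str:
    # Placeholder: call your long-context model here.
    return ""

def score_prediction(prediction: str, references: list[str]) -> float:
    # Placeholder: plug in the task-appropriate metric.
    return 0.0

data = load_dataset("THUDM/LongBench", "narrativeqa", split="test")

scores = []
for example in data:
    # Each example pairs a long context with a task-specific query.
    prompt = f"{example['context']}\n\nQuestion: {example['input']}\nAnswer:"
    prediction = model_generate(prompt)
    scores.append(score_prediction(prediction, example["answers"]))

print(f"Average score: {sum(scores) / max(len(scores), 1):.4f}")
```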