Reference implementations of MLPerf™ training benchmarks
Agentic, Reasoning, and Coding (ARC) foundation models
A Heterogeneous Benchmark for Information Retrieval
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Code for running inference and finetuning with the SAM 3 model
MTEB: Massive Text Embedding Benchmark (usage sketch after this list)
Code for the paper "Evaluating Large Language Models Trained on Code" (usage sketch after this list)
LongBench v2 and LongBench (ACL '25 & '24)
A.S.E (AICGSecEval) is a repository-level security evaluation benchmark for AI-generated code
Benchmarking synthetic data generation methods
Meta Agents Research Environments is a comprehensive platform for evaluating AI agents
Visual Causal Flow
Leaderboard Comparing LLM Performance at Producing Hallucinations
CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
Geometric deep learning extension library for PyTorch (usage sketch after this list)
Designed for text embedding and ranking tasks
Large-Scale Agentic RL for High-Performance CUDA Kernel Generation
General plug-and-play inference library for Recursive Language Models
Collection of reference environments for offline reinforcement learning
Python-based research interface for blackbox optimization
Simulation framework for accelerating research
Benchmark LLMs by having them fight in Street Fighter 3
Capable of understanding text, audio, vision, and video
Provider-agnostic, open-source evaluation infrastructure for language models
Collection of robotics environments
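
For the MTEB entry above, a minimal usage sketch following the pattern in the mteb package's documentation; the encoder and task names here are arbitrary examples, not recommendations:

```python
import mteb
from sentence_transformers import SentenceTransformer

# Any SentenceTransformer-compatible encoder can be evaluated;
# all-MiniLM-L6-v2 is just a small, widely available example.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Select one benchmark task by name, then run the evaluation.
tasks = mteb.get_tasks(tasks=["Banking77Classification"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```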
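For the HumanEval entry ("Evaluating Large Language Models Trained on Code"), a sketch of the sample-then-score loop from the human-eval README; generate_one_completion is a stand-in for whatever model call is under test:

```python
from human_eval.data import read_problems, write_jsonl

problems = read_problems()

def generate_one_completion(prompt):
    # Stand-in: replace with a call to the model being evaluated.
    return "    pass\n"

# One completion per task; pass@k needs multiple samples per task.
samples = [
    dict(task_id=task_id,
         completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)

# Score with the CLI bundled in the repo:
#   evaluate_functional_correctness samples.jsonl
```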
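And for the PyTorch Geometric entry, a self-contained sketch of its core Data structure and a single graph-convolution layer; the graph, feature values, and layer sizes are toy examples:

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 3 nodes with one scalar feature each, and 4 directed
# edges stored as a [2, num_edges] index tensor (source row, target row).
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
x = torch.tensor([[-1.0], [0.0], [1.0]])
data = Data(x=x, edge_index=edge_index)

# One graph convolution mapping 1 input feature to 8 output features.
conv = GCNConv(in_channels=1, out_channels=8)
out = conv(data.x, data.edge_index)  # -> tensor of shape [3, 8]
```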