Reference implementations of MLPerf™ training benchmarks
Agentic, Reasoning, and Coding (ARC) foundation models
A Heterogeneous Benchmark for Information Retrieval
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Code for the paper "Evaluating Large Language Models Trained on Code"
Code for running inference and finetuning with the SAM 3 model
MTEB: Massive Text Embedding Benchmark
LongBench v2 and LongBench (ACL '25 & '24)
A.S.E (AICGSecEval) is a repository-level AI-generated code security evaluation benchmark
Benchmarking synthetic data generation methods
Meta Agents Research Environments is a comprehensive platform for evaluating AI agents
Visual Causal Flow
Leaderboard Comparing LLM Performance at Producing Hallucinations
CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
Geometric deep learning extension library for PyTorch
Collections of robotics environments
Designed for text embedding and ranking tasks
Large-Scale Agentic RL for High-Performance CUDA Kernel Generation
Collection of reference environments for offline reinforcement learning
Benchmark LLMs by fighting in Street Fighter 3
Provider-agnostic, open-source evaluation infrastructure
Advanced language and coding AI model
Capable of understanding text, audio, vision, video
A Python toolbox for scalable outlier detection
Simulation framework for accelerating research