LLM-Colosseum is an experimental benchmarking framework that evaluates large language models through gameplay rather than traditional text-based benchmarks. The system places models inside the classic fighting game Street Fighter III, where they must interpret the current game state and choose which actions to perform during combat, testing reasoning, situational awareness, and decision-making in real time. Instead of learning purely from reward signals like a reinforcement-learning agent, each model reads contextual information about the match and generates strategic actions from it. Performance is measured with a competitive ranking system that assigns each model an Elo rating based on its results in matches against other models.
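
In practice this amounts to a loop of observe, prompt, and act: the game state is rendered as text, the model replies, and its reply is parsed into a legal move. The sketch below is a minimal illustration under assumed names; the state fields, the move list, and the "llm" callable are hypothetical placeholders rather than the project's actual interface.

    # Illustrative observe -> prompt -> act loop, as described above.
    # All names here (MOVES, state fields, the llm callable) are
    # hypothetical placeholders, not the project's real API.
    from typing import Callable, Dict

    MOVES = ["Move Closer", "Move Away", "Low Punch", "High Kick", "Block"]

    def build_prompt(state: Dict) -> str:
        """Render the current game state as textual context for the model."""
        return (
            f"You are playing Street Fighter III. Your health: {state['own_health']}. "
            f"Opponent health: {state['opp_health']}. Distance: {state['distance']}.\n"
            f"Choose exactly one move from: {', '.join(MOVES)}."
        )

    def choose_action(llm: Callable[[str], str], state: Dict) -> str:
        """Ask the model for a move; fall back to Block if the reply
        names nothing in the allowed move set."""
        reply = llm(build_prompt(state))
        for move in MOVES:
            if move.lower() in reply.lower():
                return move
        return "Block"  # conservative fallback for unparseable replies

    # Example with a trivial stand-in model:
    print(choose_action(lambda prompt: "I will use Low Punch.",
                        {"own_health": 120, "opp_health": 90, "distance": 34}))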

Features

  • Benchmark framework using a real-time game environment for evaluation
  • Street Fighter III gameplay used to test decision-making abilities of language models
  • Competitive Elo ranking system based on model performance in matches (see the rating-update sketch after this list)
  • Context-aware action selection rather than traditional reward-based reinforcement learning
  • Experimental platform for studying reasoning and strategic behavior in LLMs
  • Jupyter-based environment for running model competitions and evaluations
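
The Elo mechanism behind the ranking is standard: each model carries a numeric rating, and after every match the winner gains points in proportion to how unexpected the result was. Below is a minimal sketch of the conventional update rule; the K-factor of 32 is an assumption, as the project's actual rating parameters are not stated here.

    def expected_score(rating_a: float, rating_b: float) -> float:
        """Probability that A beats B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

    def update_elo(rating_a: float, rating_b: float,
                   score_a: float, k: float = 32.0):
        """Return both players' updated ratings after a match.
        score_a is 1.0 for a win by A, 0.5 for a draw, 0.0 for a loss."""
        e_a = expected_score(rating_a, rating_b)
        new_a = rating_a + k * (score_a - e_a)
        new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
        return new_a, new_b

    # Example: a 1500-rated model beats a 1600-rated one and gains ~20 points.
    print(update_elo(1500, 1600, 1.0))  # -> approx. (1520.5, 1579.5)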

License

MIT License

Additional Project Details

Programming Language: Python

Related Categories: Python Large Language Models (LLM)

Registered: 2026-03-07