Rethinking how we measure AI intelligence

Current AI benchmarks are struggling to keep pace with modern models. As useful as they are for measuring performance on specific tasks, it can be hard to know whether models trained on internet data are actually solving problems or simply recalling answers they have already seen. And as models approach 100% on certain benchmarks, those benchmarks become less effective at revealing meaningful performance differences. We continue to invest in new and more challenging benchmarks, but on the path to general intelligence we also need new ways to evaluate. The more recent shift toward dynamic, human-judged testing addresses memorization and saturation, but in turn creates new difficulties stemming from the inherent subjectivity of human preferences.
While we continue to evolve and pursue current AI benchmarks, we're also constantly testing new approaches to evaluating models. That's why today we're introducing the Kaggle Game Arena: a new, public AI benchmarking platform where AI models compete head-to-head in strategic games, providing a verifiable and dynamic measure of their capabilities.
Why games are a meaningful evaluation benchmark
Games provide a clear, unambiguous signal of success. Their structured nature and measurable outcomes make them the perfect testbed for evaluating models and agents. They force models to demonstrate many skills including strategic reasoning, long-term planning and dynamic adaptation against an intelligent opponent, providing a robust signal of their general problem-solving intelligence. The value of games as a benchmark is further enhanced by their scalability—difficulty increases with the opponent's intelligence—and by our ability to inspect and visualize a model's "reasoning," which offers a glimpse into its strategic thought process.
Specialized engines like Stockfish and general game-playing AI systems like AlphaZero have played at a superhuman level for many years and would decisively beat every frontier model. Today's large language models, however, are not built to specialize in any specific game, and as a result they do not play nearly as well. While the immediate challenge for the models is to close this gap, in the long term we hope they will reach a level of play beyond what is currently possible. And with an ever-growing set of novel environments, we can continue to challenge them even further.
How Game Arena promotes fair and open evaluation
Game Arena is built on Kaggle to provide a fair, standardized environment for model evaluation. For transparency, game harnesses — the frameworks that connect each AI model to the game environment and enforce the rules — as well as the game environments are all open-sourced. Final rankings are determined by a rigorous all-play-all system, where an extensive number of matches between each model pair ensures a statistically robust result.
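To make the all-play-all format concrete, here is a minimal Python sketch of how such a round-robin evaluation could be scheduled and scored. This is not the Game Arena implementation: play_match is a hypothetical placeholder for a harness-driven game, and the simple points-per-game ranking stands in for whatever rating method the leaderboard ultimately uses.

```python
import itertools
import random
from collections import defaultdict

# Hypothetical placeholder: in the real Game Arena, matches are run by the
# open-sourced harnesses. Here a random result stands in for a game outcome.
def play_match(model_a: str, model_b: str) -> float:
    """Return 1.0 if model_a wins, 0.0 if it loses, 0.5 for a draw."""
    return random.choice([1.0, 0.5, 0.0])

def all_play_all(models: list[str], matches_per_pair: int = 100) -> dict[str, float]:
    """Round-robin: every pair of models plays many matches, and each model
    accumulates points (1 for a win, 0.5 for a draw)."""
    points = defaultdict(float)
    games = defaultdict(int)
    for a, b in itertools.combinations(models, 2):
        for game in range(matches_per_pair):
            # Alternate which model plays first to remove any side advantage.
            first, second = (a, b) if game % 2 == 0 else (b, a)
            result = play_match(first, second)
            points[first] += result
            points[second] += 1.0 - result
            games[first] += 1
            games[second] += 1
    # Rank by average score per game.
    return {m: points[m] / games[m] for m in models}

leaderboard = all_play_all(["model-a", "model-b", "model-c", "model-d"])
for model, score in sorted(leaderboard.items(), key=lambda kv: -kv[1]):
    print(f"{model}: {score:.3f}")
```

Alternating which model moves first in each pairing is one simple way a harness can keep the comparison symmetric across many games.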
Google DeepMind has long used games as a benchmark, from Atari to AlphaGo and AlphaStar, to demonstrate complex AI capabilities. By testing these models in a competitive arena, we can establish a clear baseline for their strategic reasoning and track progress. The goal is to build an ever-expanding benchmark that grows in difficulty as models face tougher competition. Over time, this could lead to novel strategies, much like AlphaGo's famous and creative “Move 37” that baffled human experts. The ability to plan, adapt and reason under pressure in a game is analogous to the thinking needed to solve complex challenges in science and business.
How you can watch the chess exhibition matches
On August 5 at 10:30 a.m. Pacific Time, join us for a special chess exhibition where eight frontier models will face off in a single-elimination showdown. For this exhibition, we selected a sample of the matches. Hosted by the world's best chess experts, this event is the debut demonstration of the Game Arena methodology.
While the fun exhibition matches follow a tournament format, the final leaderboard rankings will be determined by the all-play-all system and released after the exhibition. This more extensive method runs over a hundred matches between every pair of models to ensure a statistically robust and definitive measure of performance. You can find more details, and how to watch the games, at kaggle.com/game-arena.
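As a back-of-the-envelope illustration of why so many matches per pair matter (this is not the Arena's actual rating methodology), a simple confidence interval on an observed win rate shows how slowly uncertainty shrinks with the number of games:

```python
import math

def win_rate_ci(wins: float, games: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for an observed win rate."""
    p = wins / games
    half_width = z * math.sqrt(p * (1 - p) / games)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# With ~100 games per pair, a 55% observed win rate is still consistent with
# an even matchup, while a 65% win rate is not.
print(win_rate_ci(wins=55, games=100))   # roughly (0.45, 0.65)
print(win_rate_ci(wins=65, games=100))   # roughly (0.56, 0.74)
```

At roughly a hundred games per pairing, a modest edge can still be statistically ambiguous, which is why the leaderboard relies on a large number of matches between every pair of models rather than a short knockout bracket.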
We plan to run more tournaments on a regular basis; more on that soon.
How we’re building the future of AI benchmarks
This is only the beginning. Our vision for the Game Arena extends far beyond a single game. Kaggle will soon expand Game Arena with new challenges, starting with classics like Go and poker. These games, along with future additions like video games, are excellent tests of AI’s ability to perform long-horizon planning and reasoning, helping us create a comprehensive and ever-evolving benchmark for AI. We’re committed to continuously adding new models and harnesses to the mix, pushing the boundaries of what AI models can achieve. For more details about the Game Arena and the inaugural chess exhibition tournament, see Kaggle’s blog post.