Seeking to challenge artificial intelligence systems that now breeze through benchmark tests, a team of technology experts has issued a global call for the toughest questions they can find, in order to determine when expert-level AI has truly arrived. The project, called “Humanity’s Last Exam,” is a collaboration between the Center for AI Safety (CAIS) and the startup Scale AI. The organizers intend for the exam to remain relevant even as AI capabilities keep advancing.
The call for tough questions follows the unveiling of OpenAI o1, a new model from the maker of ChatGPT that has excelled at popular reasoning benchmarks. Dan Hendrycks, executive director of CAIS, co-authored two 2021 papers proposing tests of AI systems on topics such as US history and competition-level math. AI systems once struggled to answer the questions on those tests; now they ace them, rendering such common benchmarks far less meaningful.
Even so, AI still performs poorly on some lesser-used tests, including those involving plan formulation and visual pattern-recognition puzzles. Some researchers argue that planning and abstract reasoning are better measures of intelligence, and Hendrycks wants “Humanity’s Last Exam” to emphasize abstract reasoning. In addition, some of the exam’s questions will be kept private, so that AI systems cannot answer correctly simply by having memorized material from common benchmarks.
The exam will consist of at least 1,000 crowd-sourced questions that are hard for non-experts to answer. Submissions are due November 1 and will undergo peer review; winning submissions will earn co-authorship and up to $5,000 in prizes sponsored by Scale AI. By setting a harder bar for expert-level models, the organizers hope to measure AI’s rapid progress more accurately. One restriction: questions about weapons will be excluded out of safety concerns.
In short, AI’s rapid gains on traditional benchmarks have made harder assessments necessary to gauge expert-level capability. And because some evaluations, such as visual puzzles, do not map neatly onto language models, the organizers see a rigorous exam built around abstract reasoning as the best way to set a new standard for judging when AI systems have truly reached expert-level performance.