News

AI Experts Launch 'Humanity's Last Exam' to Test Limits

A global initiative dubbed “Humanity’s Last Exam” has been launched by the Center for AI Safety (CAIS) and the startup Scale AI to evaluate the true intelligence of advanced AI systems. The project aims to develop new, challenging AI benchmarks that go beyond popular tests, which current models now handle with ease. As AI continues to evolve, experts are increasingly concerned that existing benchmarks fail to adequately assess AI’s reasoning and decision-making abilities.

This effort comes shortly after OpenAI previewed its new model, known as OpenAI o1, which reportedly outperformed existing models on most established reasoning benchmarks. Current AI models, such as Anthropic's Claude, have shown remarkable improvement over the past year, scoring nearly 89% on an undergraduate-level test, up from 77% just a year earlier. These results suggest that while AI is advancing rapidly, current benchmarks may no longer be sufficient to measure its growing capabilities.

The goal of Humanity’s Last Exam is to identify more complex tasks that push AI models beyond their comfort zones. Unlike conventional exams, the initiative will focus on abstract reasoning and complex problem-solving, areas where AI systems currently struggle. The exam will consist of at least 1,000 difficult questions, crowd-sourced from experts and non-experts alike, and is expected to be ready by November 1. Winning submissions will undergo peer review, and contributors will be rewarded with co-authorship and up to $5,000 in prizes, sponsored by Scale AI.

AI experts, including Dan Hendrycks of CAIS, who co-authored widely used 2021 papers on AI testing, argue that new benchmarks are urgently needed to track the rapid advancements in AI. Hendrycks explains that many of today’s AI systems have been trained on data that includes answers from popular tests, rendering those benchmarks less effective as measures of true intelligence. Some of the questions in Humanity’s Last Exam will be kept private to ensure that AI models cannot simply memorize the answers.

One key aspect of the exam will be testing AI’s ability to reason and plan, areas that some researchers believe are better indicators of true intelligence than simple fact recall or pattern recognition. The exam will avoid certain topics, such as weapons, deemed too risky for AI to study.

This project is seen as an important step in the ongoing development of AI technologies, as it aims to keep pace with the rapid improvements in AI models. As these systems become more advanced, understanding their capabilities and limitations is crucial, particularly in critical areas like decision-making and reasoning. Humanity’s Last Exam seeks to ensure that AI’s progress is measured against the most challenging benchmarks, providing a clearer picture of when AI has truly reached expert-level intelligence.

Want to stay updated on the latest in AI advancements? Follow us for more tech news and insights on the cutting-edge developments shaping the future of artificial intelligence.