Recitation over Reasoning: How Cutting-Edge Language Models Can Fail on Elementary School-Level Reasoning Problems?
Posted by ninjasaid13@reddit | LocalLLaMA | View on Reddit | 4 comments
Abstract
The rapid escalation of LLM benchmark difficulty in recent years, from elementary school-level to frontier problems, has created the impression among researchers that we are only inches away from surpassing human intelligence. But does the LLMs' remarkable reasoning ability truly reflect intelligence by human standards, or are they simply reciting solutions witnessed during training at Internet scale? To study this problem, we propose RoR-Bench, a novel multi-modal benchmark for detecting LLMs' recitation behavior on simple reasoning problems whose conditions have been subtly shifted, and conduct an empirical analysis on it. Surprisingly, we found that existing cutting-edge LLMs unanimously exhibit extremely severe recitation behavior: by changing a single phrase in the conditions, top models such as OpenAI-o1 and DeepSeek-R1 can suffer a 60% performance loss on elementary school-level arithmetic and reasoning problems. Such findings are a wake-up call for the LLM community, compelling us to re-evaluate the true intelligence level of cutting-edge LLMs.
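A minimal sketch of the kind of perturbed-condition evaluation the abstract describes. The problem pair, expected answers, and `ask_model` stub below are illustrative assumptions, not actual RoR-Bench data or a real model API:

```python
# Sketch: measure the accuracy drop when one condition in a familiar
# problem is subtly changed. ask_model is a hypothetical stand-in for
# a real LLM call; here it simulates a model that recites the
# memorized answer to the classic version of the riddle.

def ask_model(question: str) -> str:
    # A "reciting" model pattern-matches the familiar setup and
    # returns the classic answer regardless of the actual conditions.
    return "$0.05"

problem_pairs = [
    {
        # Classic version: the memorized answer is correct.
        "original": ("A pen and a pencil cost $1.10 in total. The pen "
                     "costs $1.00 more than the pencil. "
                     "How much does the pencil cost?"),
        "original_answer": "$0.05",
        # Perturbed version: one phrase changes, and so does the answer
        # (pencil + (pencil + $1.10) = $1.10 implies the pencil is free).
        "perturbed": ("A pen and a pencil cost $1.10 in total. The pen "
                      "costs $1.10 more than the pencil. "
                      "How much does the pencil cost?"),
        "perturbed_answer": "$0.00",
    },
]

def accuracy(question_key: str, answer_key: str) -> float:
    """Fraction of problems where the model's answer matches the key."""
    correct = sum(
        ask_model(pair[question_key]).strip() == pair[answer_key]
        for pair in problem_pairs
    )
    return correct / len(problem_pairs)

drop = (accuracy("original", "original_answer")
        - accuracy("perturbed", "perturbed_answer"))
print(f"performance drop under perturbation: {drop:.0%}")
```

A purely reciting model scores perfectly on the original phrasing and fails the perturbed one, so the drop is 100%; the paper reports drops of roughly this kind (around 60%) for frontier models.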
nomorebuttsplz@reddit
Having glanced at the paper, it looks to me like they are basically just injecting a "Misguided Attention"-style word trick into the problem. These are tricks that people can also often fail to detect, and we've long known LLMs struggle with them as well.
The two example problems seem frankly pretty stupid and poorly worded, but maybe they read better in Chinese (?).
Overall I'm not impressed, and it seems we've reached the point where we're continually reaching to find things LLMs are bad at -- such as the recent results on the math test aimed at proofs. OK, perhaps they do poorly because they haven't been trained on finding proofs, only on producing correct answers? That's a fine area for future development, but it has nothing to do with "recitation over reasoning" or similar arguments like "it's not real emergence." At this point they're so boring, and obviously wrong, at least to me.
TedHoliday@reddit
Let me know when an LLM actually writes even a single paper that is novel and impactful in a field.
tim_Andromeda@reddit
This is very good, similar to a recent study by Apple delineating the difference between reasoning and reciting. The lead-off example given in the paper is very telling.
Formal_Drop526@reddit
There's also this paper: [2503.21934v1] Proof or Bluff? Evaluating LLMs on 2025 USA Math Olympiad