The Impact of Reasoning Step Length on Large Language Models

This article examines how the length of reasoning steps in Chain-of-Thought (CoT) prompts affects the reasoning abilities of large language models (LLMs). The study finds that longer reasoning steps significantly improve LLMs' problem-solving performance, even when no new information is added. Surprisingly, even incorrect rationales in the CoT prompts lead to better outcomes as long as they are sufficiently long, indicating that the number of reasoning steps matters more than their factual accuracy. As the Xitter thread discussing the paper and its implications suggests, AI is weird.
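
To make the manipulated variable concrete, here is a minimal sketch contrasting a compact CoT rationale with an artificially lengthened one that restates the same steps without adding new information. The example question, prompt wording, and helper function are illustrative assumptions, not taken from the paper.

```python
# Contrast a compact Chain-of-Thought rationale with a lengthened one that
# restates the same reasoning in more steps, without adding new information.
# All prompt text below is illustrative, not drawn from the paper itself.

QUESTION = (
    "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?"
)

COMPACT_COT = (
    "Q: " + QUESTION + "\n"
    "A: 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.\n"
)

# Same information, stretched across more reasoning steps.
EXPANDED_COT = (
    "Q: " + QUESTION + "\n"
    "A: Roger starts with 5 tennis balls.\n"
    "He buys 2 cans, and each can holds 3 balls.\n"
    "So the cans contain 2 * 3 = 6 balls in total.\n"
    "Adding the new balls to the original ones gives 5 + 6 = 11.\n"
    "Therefore, the answer is 11.\n"
)

def build_prompt(demonstration: str, new_question: str) -> str:
    """Prepend a worked demonstration to a new question, few-shot style."""
    return demonstration + "\nQ: " + new_question + "\nA:"

if __name__ == "__main__":
    new_q = "A baker has 4 trays of 6 muffins and sells 10. How many are left?"
    # The study's comparison: identical content, different step counts.
    print(build_prompt(COMPACT_COT, new_q))
    print("---")
    print(build_prompt(EXPANDED_COT, new_q))
```

Feeding the two variants to the same model and comparing accuracy on a benchmark like GSM8K is the kind of comparison the study reports; the expanded-step variant is the one found to help.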

Visit Original Article →