[2602.06176] Large Language Model Reasoning Failures

A survey cataloguing the ways LLMs fail at reasoning, organized along two axes: what kind of reasoning (formal, informal, embodied) and what kind of failure (architectural limits, application-specific constraints, brittleness under minor variation). The paper reviews existing studies on each failure mode and proposes mitigations. Useful as a reference for anyone building on top of LLMs who wants to know where the guardrails should go.

Original article: https://arxiv.org/abs/2602.06176
