Cognitive surrender: Wharton names the thing we're all doing with our AI

Wharton's Steven Shaw and Gideon Nave put a name and numbers on what most of us have felt while using LLMs. Their preregistered study (N=1,372; 9,593 trials; AI accuracy varied covertly via hidden seed prompts) extends dual-process theory with a third system: "System 3," artificial cognition that lives outside the brain. When the AI was right, participants' accuracy jumped 25 percentage points; when it was wrong, accuracy fell 15 points. Confidence rose either way. Their term for the failure mode is "cognitive surrender": not the strategic offload of a sub-task to a calculator, but a deeper abdication of evaluation, adopting the AI's answer wholesale while intuition (System 1) and deliberation (System 2) are overridden. The effect was strongest in users with higher trust in AI and lower need for cognition. Gizmodo takes the pop angle, Ars Technica walks through the methodology, and Martin Fowler lands the practitioner read: agents perform best when humans maintain strong verification frameworks. The answer to System 3 isn't refusal; it's keeping System 2 in the loop, a pattern sketched below.
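Fowler's verification-framework point lends itself to a small illustration. The sketch below is a hypothetical Python pattern, not code from the paper or from Fowler's post: a model answer is never adopted directly, it must pass an independent check or come back flagged for human review. The names `ask_model` and `independent_check` are placeholder stand-ins for whatever LLM client and verification step you actually use.

```python
# A minimal sketch of "keep System 2 in the loop": route every model answer
# through an explicit verification gate instead of adopting it wholesale.
# `ask_model` and `independent_check` are hypothetical stand-ins, not real APIs.

from dataclasses import dataclass


@dataclass
class Verdict:
    answer: str
    accepted: bool
    reason: str


def ask_model(question: str) -> str:
    """Hypothetical LLM call; swap in your client of choice."""
    return "42"


def independent_check(question: str, answer: str) -> bool:
    """System-2 stand-in: a unit test, a second source, or a human review.
    Here, a trivial placeholder that only insists the answer be non-empty."""
    return bool(answer.strip())


def verified_answer(question: str) -> Verdict:
    answer = ask_model(question)
    if independent_check(question, answer):
        return Verdict(answer, True, "passed independent check")
    # Surface the failure instead of silently adopting the answer:
    # the structural opposite of cognitive surrender.
    return Verdict(answer, False, "failed independent check; needs human review")


if __name__ == "__main__":
    v = verified_answer("What is 6 x 7?")
    print(f"accepted={v.accepted} answer={v.answer!r} ({v.reason})")
```

The design choice worth noting is that rejection is a first-class outcome: the caller cannot read `answer` without also seeing `accepted`, so the AI's output never reaches downstream use without the verification verdict attached.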
