Epistemological Fault Lines Between Human and Artificial Intelligence 2025

https://drive.google.com/file/d/1T2ClLvj3qwCFUzA4li8WCCJQDdX9022U/view?usp=sharing
ASSESSMENT
Issue #481
28 Dec 2025
The money shot in this paper is this visual, which directly compares the reasoning scenarios of humans vs. LLMs. The argument supports the Fei-Fei Li / Yann LeCun / Gary Marcus camp: LLMs are not the route to AGI because they are dependent on text, which is itself a symbolic description of the world, not the world as it is experienced. From the recruiting POV, it also supports my provocation that the rehabilitation of 'gut feel' hiring will be an inevitable consequence of the rise of LLM-driven AIs: we're going to need an ideological basis to veto the recommendation, and what else could it be other than 'I just don't like that guy'?