On LLMs and Human Intelligence

Someone recently tweeted about being in awe of LLMs while sensing they're missing something "ineffable yet essential" about general intelligence. I think maybe the fact that it's ineffable is a hint: that there is no such thing as general intelligence, only a whole slew of context-dependent, dynamic, embodied processes that emerge from continuous interaction between specific systems and their environments.

What irks us about LLMs is that they're unreliable in a very inhuman way. When they fail, they violate our expectations of how minds work. We expect intelligence to come bundled with self-awareness, doubt, and the ability to know what you don't know. LLMs show us intelligence decoupled from understanding, and that just doesn't feel right.

So I think it's a category error to measure LLMs against human intelligence, and asking "when will LLMs achieve human-level intelligence?" is the wrong question. Maybe the more interesting one is: what novel forms of cognition are these systems showing us?