On LLMs and Human Intelligence (2)
Thinking more about this.
LLMs interpolate; they don't compose. They can't combine ideas into genuinely new structures because they have no mechanism for compositional reasoning, only proximity matching. Real composition might require consequences that accrue to a continuous self, whereas an LLM rebuilds its state from scratch at every token. Humans, by contrast, can compose from sparse data.
Show a kid one lever and they'll start using sticks to flip things. LLMs need many examples to build a statistical manifold around "lever-ness." The difference is composing from principles extracted from single experiences versus interpolating across massive experience-spaces: different kinds of cognition, not different points on the same scale.