Code Smells and Cameras
Most programmers will tell you about code smells. That ick you get from looking at messy, overly complex code. You can't always articulate why it's wrong, but you know something's off.
This intuition isn't unique to programming. Mathematicians talk about "elegant" proofs. Physicists get suspicious when equations get too complicated—simplicity is treated as weak evidence of truth. The whole lineage from Occam's razor to Kolmogorov complexity points at the same thing: compression and understanding are intimately related. If you truly grok a problem, you can express its solution minimally. Excess complexity signals excess confusion.
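A toy illustration of that compression-understanding link, using an invented pair of functions: both compute the sum 1 + 2 + … + n, but only the second reflects insight into the problem's structure.

```python
def sum_first_n_loop(n):
    # Brute force: walk every number and accumulate.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_first_n_closed(n):
    # Gauss's closed form: the insight compresses the loop away.
    return n * (n + 1) // 2

assert sum_first_n_loop(100) == sum_first_n_closed(100) == 5050
```

Same behavior, radically different length. The shorter version is shorter precisely because its author understood why the answer comes out the way it does.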
Which brings me to LLMs.
No doubt they've changed how I work. But they still generate code that smells. Not broken, just... bloated. Defensive boilerplate. Unnecessary abstractions. Cargo-culted idioms. Patterns that look like code because they were trained on code that looks like that.
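A concrete (and invented) example of the smell. Both functions below do the same job, but the first reads like the defensive, ceremony-heavy code LLMs often produce:

```python
def get_active_user_names_bloated(users):
    # Defensive checks and ceremony the caller never asked for.
    result = []
    if users is not None:
        for user in users:
            if user is not None:
                if user.get("active") is True:
                    name = user.get("name")
                    if name is not None:
                        result.append(name)
    return result

def get_active_user_names(users):
    # The same logic, expressed at the level of actual intent.
    return [u["name"] for u in users if u.get("active")]

users = [{"name": "Ada", "active": True}, {"name": "Bob", "active": False}]
assert get_active_user_names_bloated(users) == get_active_user_names(users) == ["Ada"]
```

Nothing in the first version is wrong, exactly. It's just answering questions nobody asked, which is the textual signature of not knowing which questions matter.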
Venkatesh Rao has this idea that LLMs are cameras, not engines. We think they're generating something, but they're actually photographing the statistical structure of everything we've already written.

