The teacher lies sometimes
Worth acknowledging, if one must go on and on about language models: they still constantly make up the most ridiculous bullshit.
Looking for books about a semi-obscure subject, I gamely asked Claude for recs and promptly received a list of six, none of which exist. Yet the authors of three of them were real —
It’s difficult to call that experience a success, yet it clearly wasn’t a failure, either.
A similar story, with code: recently I was trying to render some images using Blender, and I wanted to do so entirely from the command line —
Claude got me started. Not without inventing many dozens of nonexistent functions; not without endlessly jumbling versions of the API; but with enough sense and structure to help me understand the Blender Way. Now, the raw API docs make sense, and I’m off to the races.
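For the curious, the headless workflow looks roughly like this. A sketch, not Claude's output, and not a full script: the flags come from Blender's own command-line reference (`-b` for background mode, `-o` for the output prefix, `-f` to render one frame), and the helper function here is just an illustration, assuming Blender is on your PATH.

```python
import subprocess

def blender_render_cmd(blend_file: str, output_prefix: str, frame: int) -> list[str]:
    # Blender parses its arguments in order, so -o must precede -f.
    # "//" in the output prefix means "relative to the .blend file".
    return ["blender", "-b", blend_file, "-o", output_prefix, "-f", str(frame)]

cmd = blender_render_cmd("scene.blend", "//render_", 1)
# subprocess.run(cmd, check=True)  # uncomment if Blender is installed
```

From there, the real power is passing `-P your_script.py` to drive the scene with Blender's bundled Python API — which is where the invented functions and jumbled API versions come in.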
Language models have been framed as insurgent competitors to search, but presently the experiences are pretty similar, not in form but in requirement. In both cases, successful use demands confident navigation and quick triage. Woe unto the Google user who clicks the first search result, and woe unto the Claude user who believes it.
It’s dizzying for a machine to be so powerful, yet so clearly unsuitable for any kind of decision-making with actual consequences. “Yes, this is the most complex and broadly capable computer program ever deployed. No, you can only ask it about silly stuff that doesn’t really matter.”
I have to say, if it were me, I would be too embarrassed to release a product that so confidently produces so much bullshit. I suppose I’m glad it’s not me, though, because there’s plenty of value to be snatched from the jaws of confabulation.