This is a post from Robin Sloan’s lab blog & notebook. You can visit the blog’s homepage, or learn more about me.

Knowledge and memory

September 6, 2025

The other day, I asked Claude how to do something using a particular Ruby library, and it hallucinated three nonexistent methods in a row. We can ask “why do language models do this?” but/and we can also ask, “why doesn’t Robin do this?”

I think it’s because I don’t only know things: I remember learning them. My knowledge is sedimentary, and I can “feel” the position and solidity of different facts and ideas in that mass. I can feel, too, the airy disconnect of a guess.

If you’d challenged me to simply guess the methods I was looking for, I would have typed exactly what Claude hallucinated. Same goes for most Ruby programmers. So, why didn’t I guess, and then find myself sincerely surprised (as Claude surely was) when the methods didn’t exist? Well, checking my memory, I found no record of ever learning them in the first place.

Not that I can connect every Ruby method I know to the precise time and place of its memorization — but there is some tag, some tether, some … something. It’s a wild sort of proprioception.

I’ll remind you that biologists do not, in the year 2025, know memory’s physical substrate in the brain! Plenty of hypotheses — no agreement. Is there any more central mystery in human biology, maybe even human existence?

Language models don’t have memory at all, because they don’t have experiences that compound and inform each other. Don’t the model weights encode a vast storehouse of memory? No — those are closer to DNA, an inheritance. The model weights are awesome the way an embryo’s development is awesome, rather than the way Steph Curry’s three-pointers are awesome.

Many engineers have pinned their hopes on the context window as a kind of memory, a place where “experiences” might accrue, leave traces. There’s certainly some utility there … but the analogy is waking up in a hotel room and finding a scratchpad full of notes that you don’t remember making. (Language models might, after all, be in hell.) You probably go ahead and trust the notes … but the disorientation of that scenario should be clear. The movie Memento is not the chronicle of a very stable guy’s very normal day.

The solid, structured memory that we use to understand what we know and don’t know — when and when not to guess — requires time, and probably also a sort of causal web, episodes and experiences all linked together. Maybe that’s a way of saying it requires life, being alive, operating in the world. I don’t believe hallucination will go away — indeed I think it will continue to be a huge problem — until a new kind of AI model goes out into the world and, in some real sense, lives in it.
