This is a post from Robin Sloan’s lab blog & notebook.

Talkie and Claude (no, the other one)

April 27, 2026

There have been experiments involving language models trained on vintage text before, but they lingered mostly in the realm of the gimmick; Talkie is notable both for its size — the largest such model so far, 13B parameters trained on 260B tokens written before 1930 — and for the depth of the questions its creators are asking.

We know a smart human from the 1930s, yanked a hundred years into the future, could learn to program computers using, e.g., Python without any problem.

Can an LLM yanked a hundred years into the future do this, too?

Talkie reveals that no, it can’t, not really — though the tests in the launch post only scratch the surface of what might be attempted and explored. And of course a bigger Talkie, maybe GPT-3-class, would have different capabilities — if indeed it’s even possible to train one. (One senses the authors here have already rummaged beneath the couch cushions for pre-1930 tokens … )

I’m presently reading a terrific biography of Claude Shannon. In the late 1930s, his MIT master’s thesis — “the most important master’s thesis ever” — established a direct mapping between electric circuits and Boolean logic. This connection was both very simple and totally radical; at the time, Boolean logic wasn’t considered particularly practical — in fact, it wasn’t considered much at all. In a stroke, Shannon’s insight opened up a new field, basically the same one that all this LLM research is unfolding in today.
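The heart of Shannon’s mapping can be sketched in a few lines of Python — my own toy illustration, not anything from the post or the thesis: treat a closed switch as True and an open one as False, and switches wired in series behave exactly like Boolean AND, while switches wired in parallel behave like OR.

```python
def series(a: bool, b: bool) -> bool:
    """Two switches in series conduct only if both are closed -> Boolean AND."""
    return a and b

def parallel(a: bool, b: bool) -> bool:
    """Two switches in parallel conduct if either is closed -> Boolean OR."""
    return a or b

# Exhaustively check the correspondence against the Boolean truth tables.
for a in (False, True):
    for b in (False, True):
        assert series(a, b) == (a and b)
        assert parallel(a, b) == (a or b)
```

Trivial to state now, which is rather the point: the correspondence is obvious only after someone has made it.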

If you could coax Talkie, or a future version that’s larger and more capable, into making Claude Shannon’s connection — without simply giving it away, of course — it would provide evidence that modern LLMs might be able to make connections of that power at the real frontier of knowledge today.

Conversely, if no amount of coaxing or even coaching could get Big Talkie anywhere near a robust approximation of Shannon’s thesis … it would raise questions about this whole game plan.

I asked Talkie:

Does it seem to you like there might be any correspondence between electric circuits and the logic of George Boole?

Talkie replied:

I think not. The fundamental propositions in logic are independent of all electrical considerations, and they do not admit of any illustrations drawn from electricity.

This isn’t a fair test, of course; Talkie hasn’t yet been trained to run in dogged loops, to roam through vast fields of if/then, but wait, actually … There’s plenty of investigation that remains to be done here.

Demis Hassabis is fond of saying that a test for truly powerful AI would be to train a Talkie-like LLM with a knowledge cutoff of 1911, then challenge it to formulate general relativity, as Einstein did in 1915.

I agree that this would be impressive, but/and I also wonder if it’s too challenging. Science would benefit from Einsteins on demand, sure … but it would also benefit from simpler insights: the kind of “what if X is also Y” mapping that Claude Shannon provided. Those feel to me much more plausibly in the wheelhouse of LLMs than Einstein-level cosmic restructurings. (I feel sort of bad calling Shannon’s century-defining insight “simpler” but … I also sort of think he would agree … )

That’s not to say I find even those simple insights, at this moment, particularly plausible … you read about Shannon and you learn there was more than language in play here. This was a guy deeply enmeshed in the physical world. For him, the circuits weren’t imaginary; they were real, and they were a tangled mess.

Yet it does not seem, in principle, IMPOSSIBLE for some future Talkie to go crawling through circuit diagrams, through crusty neglected Boole, and discover the same simple, incandescent, epochal translation that Shannon did. It’s very interesting to think about.

Anyway, this is all to say, Talkie is a triumph, hugely provocative, potentially very productive. Bravo!
