The ultimate litmus test
I like how plainly Jack Clark puts this:
[F]or the AI revolution to truly pay out, it needs to change science: AI has already massively changed and accelerated the work of computer programmers, but I think for AI to have a large effect in the world we need to apply it to science —
the ultimate litmus test for the success of AI as a technology will be if it can either make research breakthroughs itself or provably massively accelerate scientists in their ability to make breakthroughs. FutureHouse is building software to help us see if this is the case.
Of course, I like how plainly Jack puts most things. As a journalist-turned-AI-practitioner, he has never quite been able to put the documentary impulse behind him, and his weekly newsletter is THE essential AI industry read.
For my part, I continue to believe that supercharged AI science is not particularly likely but also not impossible —
(Here, by the way, is a Borgesian challenge to AI science: imagine a legitimately supersmart AI assistant that can, in fact, propose a perfect, paradigm-busting experiment. However, it can also propose 100,000 other experiments that are a waste of time … and there’s no way of knowing ahead of time which is which. A signature of AI intelligence is the cheerful willingness to offer another alternative, and another, and another. Where and how might stubborn specificity, even obsession, be introduced into these systems? What would it mean for an AI system to form a deep and compelling conviction? Is it even possible?)
(The Borgesian part comes from imagining a pile of 100,000 proposed physics experiments —