10 Comments
normality:

You do something that's pretty uncommon in reviews of any kind IME. You consistently present authors' claims in a concise, organized way before engaging, and the critiques logically tie back to your summary of the claims. Hopefully people will take notice! This is the way!

Houston Wood:

One key difference is that Panickssery is confident (can we say overconfident?) that there will not be a fast takeoff, that there will be no intelligence explosion, while Y & S think it is possible and would be catastrophic. All of the major frontier labs are raising money based on the claim that a fast takeoff is possible. So it seems to me that Panickssery needs to explain better why, in the event of a fast takeoff, we need not worry.

My takeaway from reading the book and now this clearly written critique is that the costs of losing the gamble that there will not be a fast takeoff are too high. Even if the plane is only going to crash 1 out of 1,000 times, if the crash means widespread disruption to modernity, it is not worth the trip.
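(A minimal expected-value sketch of this gamble, treating the 1-in-1,000 figure above as a purely illustrative probability p, the trip's benefit as B, and the crash's cost as C; the labels p, B, and C are assumed here for illustration:)

$$
\mathbb{E}[\text{net}] = B - p \cdot C = B - \frac{C}{1000}
$$

On this framing the trip is a losing bet whenever C > 1000B, and if C is "widespread disruption to modernity," almost no plausible B clears that bar.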

Why not just slow down? It's not as if the general public is demanding that AI companies rush forward--quite the contrary. Opinion polls indicate a desire for caution. The call to rush is mostly from investors and tech workers holding stocks and stock options in these companies.

Nina Panickssery:

> One key difference

I agree this is *one* key difference in our perspectives, but another key difference is their certainty about how an extremely capable AI will behave. The fact that being very capable generally involves being good at pursuing various goals does not imply that a super-duper capable system will necessarily have its own coherent unified real-world goal that it relentlessly pursues. And every attempt to justify this seems to me like handwaving at unrigorous arguments or making enough assumptions that the point is near-circular.

> All of the major frontier labs are raising money based on the claim that a fast takeoff is possible

Insofar as this is true, it's based on a different definition of "fast": not that one AI will suddenly be capable of world takeover (either immediately or via self-improvement that takes ~months rather than ~years) when all previous models were completely incapable of this.

> Why not just slow down? It's not as if the general public is demanding that AI companies rush forward--quite the contrary. Opinion polls indicate a desire for caution. The call to rush is mostly from investors and tech workers holding stocks and stock options in these companies.

I also believe in caution. I am not arguing against cautious development, safety measures, or regulation in general here. I am disputing the book's thesis that we're all going to die unless we completely stop AI development very soon. And of course there are other risks from extremely powerful AI, besides it deciding to take over and kill us all, that the book doesn't go into and that should also make us cautious.

Houston Wood:

Thanks for your thoughtful reply. I do accept your points that 1) fast takeoff is NOT the same as an AI's relentless pursuit of its own goals, as Y & S say, and that 2) "fast" can mean a few years, not just bam-boom!

So I want to believe you--as a layperson I rely on experts. But then there is this from just this week: there's a 25% chance that the future of AI will go "really, really badly," Anthropic CEO Dario Amodei said at the Axios AI+ DC Summit on Wednesday.

Is this just Amodei once again casting Anthropic as the "responsible" AI company? As an outsider, it makes me think I should take him seriously and that we should all be pushing to pause, right now, until that 25% can be reduced to far below 1%.

Y & S's book was intended to draw attention to the need to pause, right? Not to lay out arguments as would be done at LessWrong.

Am glad you support "cautious development, safety measures, and regulation in general." Maybe Y is just a little bit over the top for a change :)

Nina Panickssery:

> So I want to believe you--as a layperson I rely on experts

I'm no more of an expert than the many ML professionals and top researchers who are much doomier than I am (see https://en.wikipedia.org/wiki/P(doom)). Though I think many of these people have a different risk model from Y&S (and possibly a different conception of doom).

> Y & S's book was intended to draw attention to the need to pause, right? Not to lay out arguments as would be done at LessWrong.

I think they are also trying to explain their true beliefs to a layperson audience (which is good). And from what I can tell, it is a reasonable translation of their LessWrong-style writings into layperson-friendly language. There are some missing details (presumably in their online supplementary material) but the core of their arguments is there. Their prior, less accessible, writings are no less doomy (e.g. https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities).

Houston Wood:

Doom sells! Imagine the workshopping of that title and what would attract the most attention! I don't begrudge them that salesmanship--if I were convinced I needed to save the world, I'd hire marketers too.

I've had my own little idée fixe that brought me to start my Substack, but 1) I don't believe in it deeply enough myself to dedicate my life to it, and 2) I really, really hate marketing and self-promotion! I wrote books I wouldn't cross the street to promote--it was the writing I enjoyed, not the talking about the finished product. A character flaw, no doubt.

Thanks again for engaging with me. You've tempered my views--which are the basis of my next post.

Howard Hansen:

One man’s opinion about another man’s opinion.

Reader:

Thanks for this!

nyanonymous:

Regarding the conflation of "theoretically possible" and "likely to happen," cold fusion and perpetual motion machines were both once considered theoretically possible, and people tried very hard to make them happen.

KayStoner:

This is the kind of stuff that actually reassures me. The fact that the doomsayers and the Prophets of Shoddy Thinking have such incredibly lacking thought processes just tells me that there's a chance that the future belongs to those of us who actually know how to use our imagination. It never ceases to amaze me just how dense these supposed technocratic overlords are.

Meanwhile, a whole lot of really smart, caring, intuitive folks are actually engaging with AI in ways that tell me the future has a chance of looking very, very different from what the anointed ones are proclaiming.
