Discussion about this post

normality:

You do something that's pretty uncommon in reviews of any kind IME. You consistently present authors' claims in a concise, organized way before engaging, and the critiques logically tie back to your summary of the claims. Hopefully people will take notice! This is the way!

Houston Wood:

One key difference is that Panickssery is confident (can we say overconfident?) that there will be no fast takeoff and no intelligence explosion, while Y & S think one is possible and would be catastrophic. All of the major frontier labs are raising money based on the claim that a fast takeoff is possible. So it seems to me that Panickssery needs to explain better why, in the event of a fast takeoff, we need not worry.

My takeaway from reading the book and now this clearly written critique is that the cost of losing the gamble that there will be no fast takeoff is too high. Even if the plane is only going to crash 1 time out of 1,000, if the crash means widespread disruption to modernity, it is not worth the trip.

Why not just slow down? It's not as if the general public is demanding that AI companies rush forward; quite the contrary, opinion polls indicate a desire for caution. The call to rush comes mostly from investors and from tech workers holding stock and stock options in these companies.

