{"id":509,"date":"2024-08-17T11:00:00","date_gmt":"2024-08-17T11:00:00","guid":{"rendered":"http:\/\/washnow.me\/?p=509"},"modified":"2024-08-23T14:34:46","modified_gmt":"2024-08-23T14:34:46","slug":"how-ais-booms-and-busts-are-a-distraction","status":"publish","type":"post","link":"http:\/\/washnow.me\/index.php\/2024\/08\/17\/how-ais-booms-and-busts-are-a-distraction\/","title":{"rendered":"How AI\u2019s booms and busts are a distraction"},"content":{"rendered":"
[Image: A photo illustration of GPT-4o is seen on May 14, 2024. | CG/VCG via Getty Images]

What does it mean for AI safety if this whole AI thing is a bit of a bust?

“Is this all hype and no substance?” is a question more people have been asking lately about generative AI, pointing out that there have been delays in model releases, that commercial applications have been slow to emerge, that the success of open source models makes it harder to make money off proprietary ones, and that this whole thing costs a whole lot of money.

I think many of the people calling “AI bust” don’t have a strong grip on the full picture. Some of them are people who have been insisting all along that there’s nothing to generative AI as a technology, a view that’s badly out of step with AI’s many very real users and uses.


And I think some people have a frankly silly view of how fast commercialization should happen. Even for an incredibly valuable and promising technology that will ultimately be transformative, it takes time between when it’s invented and when someone first delivers an extremely popular consumer product based on it. (Electricity, for example, took decades between invention and truly widespread adoption.) “The killer app for generative AI hasn’t been invented yet” seems true, but that’s not a good reason to assure everyone that it won’t be invented any time soon, either.

But I think there’s a sober “case for a bust” that doesn’t rely on misunderstanding or underestimating the technology. It seems plausible that the next round of ultra-expensive models will still fall short of solving the difficult problems that would make them worth their billion-dollar training runs. If that happens, we’re likely to settle in for a period of less excitement: more iterating and improving on existing products, fewer bombshell new releases, and less obsessive coverage.

If that happens, it’ll also likely have a huge effect on attitudes toward AI safety, even though in principle the case for AI safety doesn’t depend on the AI hype of the last few years.

The fundamental case for AI safety is one I’ve been writing about since long before ChatGPT and the recent AI frenzy. The simple version is this: there’s no reason to think that AI models that can reason as well as humans, and much faster, are impossible, and we know they would be enormously commercially valuable if developed. We also know it would be very dangerous to develop and release powerful systems that can act independently in the world without oversight and supervision that we don’t actually know how to provide.

Many of the technologists working on large language models believe that systems powerful enough to take these safety concerns from theory to reality are right around the corner. They might be right, but they also might be wrong. The take I sympathize with the most is engineer Alex Irpan’s: “There’s a low chance the current paradigm [just building bigger language models] gets all the way there. The chance is still higher than I’m comfortable with.”

It’s probably true that the next generation of large language models won’t be powerful enough to be dangerous. But many of the people working on those models believe it will be, and given the enormous consequences of uncontrolled, powerful AI, the chance isn’t so small that it can be trivially dismissed, making some oversight warranted.

How AI safety and AI hype ended up intertwined

In practice, if the next generation of large language models isn’t much better than what we currently have, I expect that AI will still transform our world, just more slowly. A lot of ill-conceived AI startups will go out of business and a lot of investors will lose money, but people will continue to improve our models at a fairly rapid pace, making them cheaper and ironing out their most annoying deficiencies.

Even generative AI’s most vociferous skeptics, like Gary Marcus, tend to tell me that superintelligence is possible; they just expect it to require a new technological paradigm, some way of combining the power of large language models with some other approach that counters their deficiencies.

While Marcus identifies as an AI skeptic, it’s often hard to find significant differences between his views and those of someone like Ajeya Cotra, who thinks that powerful intelligent systems may be language-model-powered in the sense that a car is engine-powered, but will have lots of additional processes and systems to transform their outputs into something reliable and usable.

The people I know who worry about AI safety often hope that this is the route things will go. It would mean a little bit more time to better understand the systems we’re creating, time to see the consequences of using them before they become incomprehensibly powerful. AI safety is a suite of hard problems, but not unsolvable ones. Given some time, maybe we’ll solve them all.

But my sense of the public conversation around AI is that many people believe “AI safety” is a specific worldview, one that is inextricable from the AI fever of the last few years. “AI safety,” as they understand it, is the claim that superintelligent systems are going to be here in the next few years, the view espoused in Leopold Aschenbrenner’s “Situational Awareness” and reasonably common among AI researchers at top companies.

If we don’t get superintelligence in the next few years, I expect to hear a lot of “it turns out we didn’t need AI safety.”

Keep your eyes on the big picture

If you’re an investor in today’s AI startups, it deeply matters whether GPT-5 is going to be delayed six months or whether OpenAI is going to raise its next round at a diminished valuation.

If you’re a policymaker or a concerned citizen, though, I think you ought to keep a bit more distance than that, and separate the question of whether current investors’ bets will pay off from the question of where we’re headed as a society.

Whether or not GPT-5 is a powerful intelligent system, a powerful intelligent system would be commercially valuable, and there are thousands of people working from many different angles to build one. We should think about how we’ll approach such systems and ensure they’re developed safely.

If one company loudly declares it’s going to build a powerful, dangerous system and fails, the takeaway shouldn’t be “I guess we don’t have anything to worry about.” It should be “I’m glad we have a bit more time to figure out the best policy response.”

As long as people are trying to build extremely powerful systems, safety will matter, and the world can’t afford either to get blinded by the hype or to be reactively dismissive as a result of it.
