
Friday, December 1, 2023

The big takeaways from the chaos at OpenAI

On risk, value, leverage, and safety.
O'Reilly Next:Economy Newsletter

Modified image by Hitchster on Flickr

OpenAI undoubtedly changed the industry when it released ChatGPT, and in doing so brought questions of AI safety to the fore. But a year later, tensions within the company over wider goals and objectives erupted, plunging it into chaos. To many watching the drama unfold, the events echoed other high-profile firings (particularly Steve Jobs’s from Apple, though Altman was reinstated after only a few days rather than over a decade), but as I explained to Laurent Belsie in the Christian Science Monitor, “This is an early skirmish in a war for the future”—pitting techno-optimism against AI “doomerism” and AI safety against profit. Now that things have settled down, here are some of the perspectives we enjoyed on the situation at OpenAI and what it can teach us about the future of the industry. Let us know if there are others that you found helpful.

(Note that many of these articles were written in the midst of the fast-moving episode. While some of the speculation may have proved incorrect, the wider themes are still worthy of attention.)

It exposed the rift between AI doomers and accelerationists

Altman’s firing laid bare the tensions between those eager to push AI innovation forward at all costs and those concerned about what they deem AI’s existential risk. And as Henry Farrell, author and professor of international affairs at Johns Hopkins, contends, arguments from AI doomers have a real theological bent. Farrell considers the connection between AI development and the "rationalism" of Eliezer Yudkowsky, a movement that theorizes the risks of an all-powerful artificial general intelligence. (Rationalism is also the philosophy underpinning the increasingly controversial Effective Altruism movement.) As Farrell points out, “All this would be sociologically fascinating, but of little real world consequence, if it hadn’t profoundly influenced the founders of the organizations pushing AI forward.” Of course, as Farrell notes, most people working in AI are not fervent doomers (or accelerationists for that matter), finding balance somewhere in the space between safety and profit. And while rationalism may not be the strongest philosophy to guide AI, neither is the main impulse driving acceleration:

The OpenAI saga is a fight between God and Money; between a quite peculiar quasi-religious movement, and a quite ordinary desire to make cold hard cash. You should probably be putting your bets on Money prevailing in whatever strange arrangement of forces is happening as Altman is beamed up into the Microsoft mothership. But we might not be all that better off in this particular case if the forces of God were to prevail, and the rationalists who toppled Altman were to win a surprising victory. They want to slow down AI, which is good, but for all sorts of weird reasons, which are unlikely to provide good solutions for the actual problems that AI generates. The important questions about AI are the ones that neither God nor Mammon has particularly good answers for.

And of course, it was also about money

In his initial take, Bloomberg’s Matt Levine focused on OpenAI’s corporate structure (which holds that the board of directors “controls” the for-profit LLC). Levine offered his own “annotated” version, superimposing “MONEY” over Microsoft’s “minority owner” stake in the LLC, asking, “Is control of OpenAI indicated by the word ‘controls,’ or by the word ‘MONEY’?” While on paper OpenAI is structured as a nonprofit answering to “humanity” rather than “investors,” as Levine argues, the actions of the past two weeks prove that in the end, money always wins. (More on that below.) Most interestingly, he guesses that in firing Altman, the board was trying to prevent OpenAI from becoming just another Big Tech company:

The boardroom coup at OpenAI really might have been, at least in part, about the board’s literal fears of AI apocalypse. But those fears are also, absolutely, a metaphor for Silicon Valley capitalism. The board looked at OpenAI and saw a CEO who was too focused on market share and profitability and expansion, and decided to stop him.

Microsoft came out on top

The big winner in all this—besides Altman himself—was OpenAI minority shareholder Microsoft. It first appeared that Altman, OpenAI president Greg Brockman, and a huge number of OpenAI employees would move to a new org within Microsoft, which, as Stratechery’s Ben Thompson points out, would mean that “Microsoft [would have] just acquired OpenAI for $0 and zero risk of an antitrust lawsuit.” Of course, that’s not exactly how it played out, but Microsoft still came out ahead (and is likely in an even better position, since it won’t have to deal with the challenges that would come with ownership). As Thompson explains, OpenAI’s objectives as a nonprofit were always in conflict with the demands of investors like Microsoft—and with the economic reality of training AI models. OpenAI needed money to fund the “massive amounts of compute” required to build its products, and Microsoft was happy to step in. “In other words,” Thompson argues, “while [OpenAI’s] board may have had the charter of a non-profit, and an admirable willingness to act on and stick to their convictions, they ultimately had no leverage because they weren’t a for-profit company with the capital to be truly independent.” As Thompson wrote on Nov. 20, “What is clear is that Altman and Microsoft are in the driver seat of AI.” That’s still true. And with Altman back as CEO, Microsoft now has even more leverage. In a post on Monday, Thompson guessed “that Microsoft will make noise about pushing for a board seat, but ultimately use that as negotiating leverage to push for even more extensive IP rights to OpenAI’s code and weights.” (For now, Altman has lost his own seat on the board.) Another big winner: other AI companies. As Thompson points out in the same post, companies that depended solely on OpenAI’s products are doubtless looking for alternatives to protect their investments.

The big takeaway

In his column in the Los Angeles Times, Brian Merchant spells out the real lesson to be learned from the drama at OpenAI: “Concerns about ‘AI safety’ are going to be steamrolled by the tech giants itching to tap in to a new revenue stream every time.” Regardless of whether or not the board’s initial decision to fire Altman was flawed, as Merchant attests, the events since have amounted to “a further consolidation of power of one of the biggest tech companies and less accountability for the product than ever.” 

We don’t know exactly what precipitated Altman’s firing. Among other things, rumors have pointed to a breakthrough model that may be able to solve math problems. (Casey Newton has reported that “the board never received any formal communication about [this model].”) Such a model in itself would only be a stepping stone to AGI—or maybe it’s just hype. But since AI companies aren’t yet required to disclose key information, we’re not likely to find out unless that model is released as a product. As Merchant explains above, the saga at OpenAI shows that we can’t simply trust companies to build safe and responsible AI. The pace of AI innovation isn’t going to slow in the near term, and organizations won’t place safety over profit of their own accord. We need to regulate them, mandate disclosures about their models and products, and make the results public. As I’ve noted before,

OpenAI’s own AI safety and responsibility guidelines cite . . . goals [like data privacy and ownership, bias and fairness, transparency, accountability, and standards], but in addition call out what many people consider the central, most general question: how do we align AI-based decisions with human values?

But as I pointed out then, and as recent events demonstrate, it matters whose human values we elevate. O’Reilly’s Mike Loukides summed this up nicely in a recent post:

Our fears of AI are really fears of ourselves, fears that it will act as badly as humans have repeatedly acted. . . . We really don’t have a chance to solve the AI problem if we can’t solve the human problem. And if we can’t solve the latter, the former is probably irrelevant.

—Tim O’Reilly and Peyton Joyce
