Friday, July 19, 2024

Determining the best business model for AI

AI aggregators, architectures of participation, or snake oil?
O'Reilly Next:Economy Newsletter

"Debunking AI snake oil reveals AI's true value." T. Merry, "W.E. Gladstone as a quack doctor," 1889
(modified with Adobe Photoshop).*

Apple's business model for AI

At last month's WWDC event, Apple announced a host of new features it dubbed "Apple Intelligence." (Spoiler: Apple Intelligence is just AI.) But the company isn't just spinning up an LLM of its own. It will also integrate tools from partners like OpenAI and Google. As Ben Evans explained, Apple's strategy—unlike those of its Big Tech peers—suggests that the company believes "that generative AI is a technology, not a product." Stratechery's Ben Thompson argued (in a post written in advance of WWDC) that the idea that "AI is a complement to Apple's business, not disruptive," is in keeping with Apple's product-focused business model. And that's because in the short term, AI stands to make hardware (like the iPhone) more important. In an update to his post following WWDC, Thompson pointed out another advantage for Apple: trust.

Apple is addressing a space that is very useful, that only they can address, and which also happens to be "safe" in terms of reputation risk. Honestly, it almost seems unfair—or, to put it another way, it speaks to what a massive advantage there is for a trusted platform. Apple gets to solve real problems in meaningful ways with low risk, and that's exactly what they are doing. . . .
. . .Apple is positioning itself as an AI Aggregator: the company owns users and, by extension, generative AI demand by virtue of owning its platforms, and it is deepening its moat through Apple Intelligence, which only Apple can do; that demand is then being brought to bear on suppliers who probably have to eat the costs of getting privileged access to Apple's userbase.

+ Wharton professor Ethan Mollick also weighed in on Apple Intelligence, using it as a jumping-off point to examine four models of how businesses are experimenting with AI. "What is worth paying attention to," he explains, "is how all the AI giants are trying many different approaches to see what works."

+ From The Atlantic: "The iPhone Is Now an AI Trojan Horse."

How to fix AI's original sin

Apple's decision to integrate third-party frontier models into its platform may also insulate the company from some of the thorniest problems AI companies are facing, largely around how they source their training data. (Then again, maybe not.) Big Tech companies are scraping the web to build datasets to train their models—Microsoft AI CEO and DeepMind cofounder Mustafa Suleyman recently claimed that any content on the open web is "freeware," available for anyone to use as they see fit. And as Forbes and WIRED reported, AI search company Perplexity has been ignoring sites' robots.txt files limiting access to web crawlers, essentially "scraping websites without permission" and surfacing that information in its responses to users (a sketch of the robots.txt check that well-behaved crawlers perform follows below). Is it copyright infringement in spirit if not in law—another instance of what The Daily host Michael Barbaro has called "AI's original sin"? A better question, as I explain in "How to Fix 'AI's Original Sin,'" may be "How do we create a virtuous circle of ongoing value creation, an ecosystem in which everyone benefits?" The good news is that

not only is the problem [of copyright violation] solvable but. . .solving it can create a new golden age for both AI model providers and copyright-based businesses. What's missing is the right architecture for the AI ecosystem, and the right business model.

Correcting our course means reversing the trend toward monopolization and building an "architecture of participation"—creating "a world of AI that works much like the World Wide Web or open source systems such as Linux." And this world is "far more likely to emerge from cooperating AI services built with smaller, distributed models."
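For readers who haven't looked at the mechanics: robots.txt is a plain-text file at a site's root that tells crawlers which paths they may fetch, and honoring it is voluntary, which is why a crawler can simply ignore it, as Perplexity reportedly did. Here is a minimal sketch of the check a well-behaved crawler performs, using Python's standard urllib.robotparser (the bot name and URLs are hypothetical):

    from urllib import robotparser

    # A polite crawler fetches and parses the site's robots.txt first...
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # ...and requests a page only if the rules allow its user agent to.
    # Skipping this check is what "scraping without permission" means here.
    if rp.can_fetch("HypotheticalBot", "https://example.com/articles/some-post"):
        print("allowed to crawl")
    else:
        print("disallowed by robots.txt; a polite crawler skips this page")

Nothing in the protocol enforces the answer; compliance rests entirely on the crawler operator's good faith.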

+ An architecture of participation only works if people are incentivized to take part. But that won't happen unless AI companies start respecting both creators' work and the decisions they make about sharing it. My company, O'Reilly, is proud to be one of the first to attribute the sources referenced by our AI tool—and to use that attribution data to pay royalties to the creators of that content. Our users know that the information they're receiving is trustworthy, and our experts are paid for that use. It's a model other companies could learn from. Find out how we've accomplished it here.

+ Platformer's Casey Newton offered an interesting response to my article in the context of the controversy about Perplexity.

+ From Engadget: "Artists Criticize Apple’s Lack of Transparency Around Apple Intelligence Data."

Debunking "AI snake oil"

To tackle the increasingly entrenched monopolization of AI, we must scrutinize the promises AI companies are making and distinguish what's actually possible from self-interested fantasy. AI Snake Oil is an invaluable newsletter (and upcoming book) by Princeton's Arvind Narayanan and Sayash Kapoor that takes as its mission "to dispel hype, remove misconceptions, and clarify the limits of AI" through insightful posts on key topics like copyright, AI safety, machine-learning-based science, and AI agents. Narayanan and Kapoor aren't pessimistic about AI—they're realistic. And that puts them at odds with the hype machine that's pumping "dubious uses of AI" while warning of existential doom. But debunking AI snake oil reveals what's truly valuable about this technology, as Narayanan and Kapoor explain in the introduction to their project:

AI is being used to make impactful decisions about us every day, so broken AI can and does wreck lives and careers. Of course, not all AI is snake oil—far from it—so the ability to distinguish genuine progress from hype is critical for all of us.

+ Will AI deliver on its promise? Goldman Sachs Global Macro Research's latest report on AI, Gen AI: Too Much Spend, Too Little Benefit?, considers the question from both sides. And Ed Zitron discusses some of the more unfavorable findings at Where's Your Ed At?

+ AI companies use agreed-upon benchmarks to measure the effectiveness of their tools. But the issue with AI benchmarks, says The Markup's Jon Keegan, is that they "don't [actually] tell you much, if anything, about an AI product." It's a problem that will continue until a consensus is reached on a set of ethical benchmarks: as Arvind Narayanan explains, even if the quality of a benchmark is suspect, "once a benchmark becomes widely used, it tends to be hard to switch away from it, simply because people want to see comparisons of a new model with previous models."

+ More from Ben Evans: "The AI Summer"

Introducing the SSRC’s AI Disclosures Project

I’m happy to announce the launch of the Social Science Research Council’s AI Disclosures Project, aimed at "addressing the potentially dangerous consequences for society's safety and equity that might arise from how AI is commercialized." Ilan Strauss, my colleague and a senior research associate at UCL's Institute for Innovation and Public Purpose, and I will be leading the project with the goal of creating a "systematic disclosure and auditing framework that can become the basis for a set of 'Generally Accepted AI Management Principles.'" You can learn more about the project here, and be sure to follow our work on X and our newsletter, Asimov's Addendum (coming soon).

GenAI is "particularly, exceedingly, marvellously ill-suited" for scientific research

Here's a marvelous X thread from deep learning researcher Jeremy Howard about why LLMs aren't likely to make scientific breakthroughs, at least as currently designed. But it broadens into a wonderful meditation on the transgression of expectations and creative breakthroughs. Anyone who cares about the future of human-AI culture (and not just AI in scientific research) ought to read it.


—Tim O’Reilly and Peyton Joyce
