
Friday, March 1, 2024

New possibilities or more of the same?

IP lawsuits, the risks of summarization, and more.
O'Reilly Next:Economy Newsletter
Illustration of a large bubble with the letters AI inside, hovering over a large city

Image generated using Microsoft Image Creator and Photoshop

A wild exploration of new possibilities or just more of the same?

Last week, I sat down with Project Syndicate to talk AI. We touched on whether the AI bubble would be a "productive" one, what could limit AI's potential, why we need a market structure for AI that benefits creators and encourages them to continue creating content, and much more. (If you think about the Roman Empire more than the average person, you'll also enjoy the discussion on what the fall of Rome tells us about our present.) Will we actually realize the benefits AI promises? At the moment, the jury's still out. As I noted, new technologies like AI

giv[e] us a chance for a do-over—an opportunity to build a better world. Yet, again and again, we mostly fail to take it. As J. Bradford DeLong put it in his latest book, rather than leveraging technology to take us straight to a better future, we are "slouching towards utopia."
As the subtitle of WTF? (What's the Future and Why It's Up to Us) indicates, technology is not deterministic. We can use it for good or ill. The problem is that our society is not an open range; a lot of roads and rails have already been laid, and we tend to follow them, even though they often lead in the wrong direction.

But as I told Project Syndicate, we have a chance to blaze a new trail—if we confront the challenges we're facing head-on. Give it a read and let me know what you think.

+ From The Verge: "The frenzied hype around AI kept expectations high. The earnings calls disappointed. 2024 is going to be a year of reckoning."

+ Late last year in Locus, Cory Doctorow explained, "Tech bubbles come in two varieties: The ones that leave something behind, and the ones that leave nothing behind. Sometimes, it can be hard to guess what kind of bubble you're living through until it pops and you find out the hard way." So, he asks, "What kind of bubble is AI?"

+ Last week Bloomberg's John Authers reflected on NVIDIA's massive growth and its recent earnings report. "This isn't yet an extreme bubble to match some from history," he argued. But "it could yet turn into one, particularly if the Fed cuts rates without slowing the economy."

The OpenAI endgame

As you probably know, the New York Times is suing OpenAI for copyright infringement over the latter's use of NYT content to train its models. We've talked a little in Next Economy about the complexities of applying the "fair use" doctrine to AI tools, and it's an important issue to hash out in court. But as O'Reilly's Mike Loukides points out, even a settlement will generate some pretty serious consequences. Mike guesses that the NYT will settle with OpenAI for a large sum of money and in effect "set a de facto price on training data," which AI companies will then be obligated to pay to all content publishers and creators. As a result, only the richest companies will be able to build AI models. (I get into this a little bit too in my conversation with Project Syndicate above.) As he notes, this would have a chilling effect on open source development in particular:

What will AI be in the future? Will there be a proliferation of models? Will AI users, both corporate and individuals, be able to build tools that serve them? Or will we be stuck with a small number of AI models running in the cloud and being billed by the transaction, where we never really understand what the model is doing or what its capabilities are? That's what the endgame to the legal battle between OpenAI and the Times is all about.

+ ICYMI: Mike and I previously discussed how retrieval-augmented generation could help AI companies track the provenance of data used in responses and pay creators accordingly.
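The idea of tying retrieval to payment can be made concrete with a toy sketch. Everything below is illustrative, not anyone's actual implementation: a real system would use an LLM and vector embeddings rather than word overlap, and the corpus, sources, and function names are all hypothetical. The point is simply that when generation is grounded in retrieved documents, the system knows which sources it drew on and can credit them.

```python
# Toy sketch of retrieval-augmented generation with provenance tracking.
# All names and data are hypothetical; real systems use embeddings and an LLM.
from collections import Counter

CORPUS = [
    {"source": "nytimes.com", "text": "OpenAI faces a copyright lawsuit over training data"},
    {"source": "arstechnica.com", "text": "fair use doctrine and AI model training"},
    {"source": "example-blog.net", "text": "recipes for sourdough bread"},
]

def score(query, doc):
    """Crude relevance measure: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc["text"].lower().split()))

def retrieve(query, k=2):
    """Return the top-k documents with nonzero relevance to the query."""
    ranked = sorted(CORPUS, key=lambda d: score(query, d), reverse=True)
    return [d for d in ranked[:k] if score(query, d) > 0]

def answer_with_provenance(query):
    """Assemble the grounding context and record which sources contributed."""
    docs = retrieve(query)
    context = " ".join(d["text"] for d in docs)
    sources = Counter(d["source"] for d in docs)  # who gets credited (and paid)
    return {"context": context, "provenance": dict(sources)}

result = answer_with_provenance("copyright lawsuit over AI training data")
print(result["provenance"])
```

Because retrieval happens at query time, the provenance record reflects the sources actually used in each response, which is what would let a payment scheme compensate creators per use rather than via a one-time training-data settlement.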

+ From the Financial Times: Here's Rana Foroohar's take on the lawsuit (and others like it).

+ From Ars Technica: "Why the New York Times Might Win Its Copyright Lawsuit Against OpenAI"

Who's asking for AI search?

I mentioned above that new technologies can help build a better world—if we can break free from the ruts we've been locked in. AI search is a useful case study: Is it an innovative use of new technology or simply a stale rehash? More to the point, as Ryan Broderick asks in Fast Company, "Does anyone even want an AI search engine?" It's not just that AI search is prone to hallucinations (even though it is). As Broderick explains, it's that the key objective of AI search—to summarize web pages for users who then have no need to actually visit those sites—risks destabilizing the infrastructure of the web that those same AI tools rely on to produce value (for both users and investors):

Why even bother making new websites if no one's going to see them? ... To even entertain the idea of building AI-powered search engines means, in some sense, that you are comfortable with eventually being the reason those creators no longer exist. It is an undeniably apocalyptic project, but not just for the web as we know it, but also your own product. Unless you plan on subsidizing an entire internet's worth of constantly new content with the revenue from your AI chatbot, the information it's spitting out will get worse as people stop contributing to the network. Which is something that's already starting to happen.

The political economy of AI

Here's a long read from Johns Hopkins professor Henry Farrell that neatly ties together all these threads. Farrell emphasizes that to get to the core of the intellectual property controversy, we must first understand that AI is a "cultural technology" (a term coined by Alison Gopnik to describe "new techniques for passing on information from one group of people to another"). And as he notes, cultural technologies—books, internet search, etc.—can be used for good but can also just as easily cause harm. (I said much the same to Project Syndicate.) This tension between benefit and injury is animating the debates around fraught issues like copyright and summarization, and it's something that must be resolved if we're to realize the true value of AI, as Farrell's post and all the others above make abundantly clear. Farrell's entire article is well worth a read, but we'll close by sharing a bit that should resonate with Next Economy readers:

If these technologies are valuable, so too is the human generated knowledge that they summarize. In a world that is increasingly more complex, we are likely to need all the tools for managing complexity that we can get. But tools like LLMs are likely to be valuable precisely to the extent that they provide an interface that condenses, remixes, and provides access to high quality human knowledge. They may condense and make visible connections across this body of knowledge that would otherwise be hard to see. But they don't and can't provide a miraculous solution to the garbage-in, garbage-out problem. If they are trained on crap—whether that be lousy human generated information, or lousy synthetic data—they will produce crap.
This suggests that LLMs should not be viewed as a substitute for high quality human generated knowledge. They should instead be viewed as an obligate complement to such knowledge—a means of making it more useful, which doesn't have much independent worth without it. And that is important for our collective choices over intellectual property systems. If you want LLMs to have long term value, you need to have an accompanying social system in which humans keep on producing the knowledge, the art and the information that makes them valuable. Intellectual property systems without incentives for the production of valuable human knowledge will render LLMs increasingly worthless over time.

—Tim O’Reilly and Peyton Joyce

 
