
Friday, September 13, 2024

Skepticism isn’t the same as doomerism

AI risk, art, and making products that people actually want.
O'Reilly Next:Economy Newsletter

"Should we be skeptical of AI art?" Generated with Adobe Firefly.

"What the AI debate is really about"

A few weeks ago, Matthew Yglesias shared his frustration with "the tendency to pit 'doomers' against AI 'optimists,'" noting that the two sides of the AI debate are much more closely aligned in most cases than such antipodal terminology suggests:

Outside of a handful of internet personalities, relatively few of the people raising safety concerns about AI development are actually saying humanity is doomed. . . . Powerful technologies with massive upside often also have significant downside, with the power to doom us if, and only if, we plow ahead in a totally heedless way. The argument is that we shouldn't do that.
And on the other side, most of the people in the non-doom camp aren't really optimists deep down. . . . [M]any of those who are more optimistic are still soft AI skeptics. They fundamentally agree with the bubble-callers and tech industry haters that this is just the latest thing in the Silicon Valley hype cycle. They're just more optimistically disposed to the hype cycle.

Yglesias holds that it all actually comes down to how skeptical you are about the possibility that someone will build a superintelligent AI in the near future. Superintelligence would clearly be revolutionary. However, as he points out with regard to the pace of AI advances, "just because a line has been going up and to the right over the past four years doesn't mean it will continue to do so." Meanwhile, the focus on AI progress has stymied the development of useful AI products from current models. (More on this below.) As Yglesias reasons, a world that never creates a superintelligence will still mean "a lot of people will make a lot of money"; in that world, "a lot of normal regulatory issues will [also] be very relevant." And if you do believe that a superintelligence is imminent, there are quite a few reasons to be worried beyond "AI will doom us all."

+ In Project Syndicate, Daron Acemoglu argues that "the current safety debate [focused as it is on superintelligence] not only (unhelpfully) anthropomorphizes AI; it also leads us to focus on the wrong targets. Since any technology can be used for good or bad, what ultimately matters is who controls it, what their objectives are, and what kind of regulations they are subjected to."

+ In my latest post on Asimov's Addendum, I ask, "What do we not see [in the current consideration of AI risk]?" For one, "we do not see much attention paid to the many ways in which harm may be caused by the owners—either the developer/deployers or the third party customers—of AI in pursuit of their business objectives." And we may be losing our best chance to address this harm through regulation.

+ Research at the University of Washington "found that ChatGPT consistently ranked resumes with disability-related honors and credentials . . . lower than the same resumes without those honors and credentials."

+ From Ars Technica: "LLMs Have a Strong Bias Against Use of African American English."

Making AI products people want

Following its most recent earnings report (which, compared to wildly inflated expectations, was less than stellar), NVIDIA's stock fell 9.5%, "wip[ing] out $278.9 billion in the biggest loss of value ever for a US stock." And other tech stocks soon plummeted as well—a warning, as Bloomberg observed, "that AI's promise to rewire global economies [is] far from being realized, making it hard to justify lofty valuations." In their AI Snake Oil newsletter, Arvind Narayanan and Sayash Kapoor argue that what AI companies have forgotten is to "make something people want." In this insightful post, they outline "five limitations of LLMs that developers need to tackle in order to make compelling AI-based consumer products." None are likely to shock you, but together they're a good reminder that developing a truly valuable AI product will take more than the latest and greatest model.

+ Here's O'Reilly's Mike Loukides on the AI blues. He asks, "Is AI getting worse? Or are we getting more picky?"

Is AI art "art"? (Or is this the wrong question entirely?)

In a widely shared article in The New Yorker, sci-fi author Ted Chiang maintains that, in most cases, generative AI "isn't going to make art." But Chiang isn't tendering an anti-technology polemic so much as proposing a humanistic vision of art as an act of communication. Chiang generalizes this perspective into a definition of art as "something that results from making a lot of choices"—a process that, in his view, generative AI curtails:

When you are writing fiction, you are—consciously or unconsciously—making a choice about almost every word you type; to oversimplify, we can imagine that a ten-thousand-word short story requires something on the order of ten thousand choices. When you give a generative-A.I. program a prompt, you are making very few choices; if you supply a hundred-word prompt, you have made on the order of a hundred choices.

The piece kicked up a hornet's nest, as AI boosters fought with anti-AI artists over the philosophical nature of art, technology's place as a creative tool, and the role training data plays in creating artistic output from generative AI—big questions that could never be resolved by a single article. (It's also important to point out that Chiang allows that an artist could use GenAI tools to create art in a way that still involves making thousands of choices.) In his Programmable Mutter newsletter, Henry Farrell suggests that whether or not AI output counts as art isn't the critical question. After all, he argues, "even if AI makes art, it may be bad for culture":

The outputs of LLMs and other Large Models are, on the whole, blander and less interesting than human created art. As Alison Gopnik argues, they are very strong on imitation, but not on innovation. Even if you think that AI, or much simpler algorithms for that matter, can be used to generate art, you can still worry that the currently popular versions are going to make culture duller and more disconnected. . . .
. . . These tools can still be useful (I myself use them plenty!) but there are real grounded reasons for general caution, whether you are a humanist like Ted, or whether you're concerned for other reasons with maintaining variety and exploration in the arts.

+ Max Read considered Chiang's article alongside the recent drama that erupted over National Novel Writing Month's position statement on AI. Here's his take on claims that artists' consternation about AI amounts to gatekeeping: "Increased avenues for participation are likely to drive down the price of labor. . . . The only tools left to writers, who (with a few narrow exceptions) have no legal way to control and negotiate the supply and pricing of their work, are indirect forms of social protectionism: snobbery, taste, and 'gatekeeping.'"

+ Bloomberg's Tyler Cowen ventures that "in the not-too-distant future, what kind of culture the world produces [with AI] could depend on the price of electricity."

+ More from Henry Farrell: "There's a Killer App for Large Language Models."

—Tim O’Reilly and Peyton Joyce

 
