Friday, July 21, 2023

When tech goes bad

How to balance the dangers with the promise.
O'Reilly Next:Economy Newsletter
[Image by jwoodruff: an AI robot reaching out through a monitor to touch a man working at a computer]

We need a national registry of large AI models. Here's what it could look like.

In "It's Time to Create a National Registry for Large AI Models," Gillian Hadfield, Mariano-Florentino (Tino) Cuéllar, and Tim O'Reilly argue that

serious queries about the future of AI make one thing perfectly clear: the public and government leaders lack sufficient visibility to know how to judge this moment in history, and who might be responsible for the benefits and risks that generative AI will bring.

They propose a national registry for large AI models—not unlike those used in industries such as securities, nuclear power, or labs that handle dangerous materials—that would give the government insight into who is developing models and what risks they pose. Because the technology is moving so fast, they say, the threshold should be set at or slightly above the capabilities of OpenAI's GPT‑4, and the registry could be designed to protect intellectual property rights.

In the US, the registry could be operated by the Department of Commerce (which handles technology standards and export controls), the Department of Energy (which monitors AI safety and advises on technology safety issues relevant to federal procurement), or the Department of Homeland Security (responsible for critical infrastructure). Even in the absence of legislation, existing federal laws governing export controls, sensitive information, and related matters could provide sufficient authority to create such a registry. While this wouldn't end the threat of bad actors deploying AI for nefarious purposes, "a registration requirement can help reveal the companies and individuals motivated to evade such threshold requirements that may therefore merit further scrutiny from law enforcement."

In fact, the authors say, it's not democratically legitimate for visibility to be exclusively in the purview of the companies building the models.

Decisions about hugely consequential technologies—how fast they roll out, how much they disrupt economies and societies, what is considered a good tradeoff between benefit and harm, what kinds of tests should be required prior to deployment—should not be solely under corporate governance, under the exclusive control of even well-intentioned business executives who are legally obligated to act only in the interests of their shareholders. Precisely because society will benefit from further innovation and development of large language models and similar technologies, regulation should start with basic registration schemes to enable visibility into the development of these technologies and to ensure that prudently designed policies can be carefully targeted.

The inside story of how Congress failed to rein in big tech

The authors of the previous article may be wise to contemplate ways a registry could be implemented without legislation. In this Washington Post op-ed, Steven Pearlstein explains how, even with bipartisan support, Congress failed to act on a handful of bills meant to end the anticompetitive business practices of large tech companies like Apple, Amazon, Facebook, and Google. One possible reason, Pearlstein points out, is that "the four companies spent an estimated $250 million to kill the various bills. In financial terms, that represented about 1/10 of 1 percent of their combined annual profits. On a political scale, it was an overwhelming show of force."

+ ICYMI: "Let's Open Big Tech's Financial Black Box."

Google's new search tool could eat the internet alive

In the Atlantic (gated), Justin Pot explains how Google's new search tool could effectively reduce the internet to a handful of portals:

Instead of sending you off to other corners of the web, more search results appear within Google. Sort of like ChatGPT, it pulls information from various websites, rewords it, and puts that text on top of your search results—pushing down any links you see. In the process, it stifles traffic to the rest of the internet, lessening the very incentive to post online. With AI, Google Search might eventually set off a doom loop for the web as we know it.

You can also read it on Microsoft Start (ungated).

"AI that feels alive"

If you're concerned that social media is bad for teens, here's something that will give you nightmares. Character.AI's users have created over 14 million chatbots that are capable of mimicking companionship, emotional support, and even romantic love. As Jon Victor writes in this Information article (gated):

As the platform grew, it became clear that [Noam] Shazeer and [Daniel] De Freitas had inadvertently tapped into a new type of social media—one in which the only human involved is the one behind the keyboard. Character.AI users have now sent more than 15 billion messages to their AI companions, according to the company. Active users on average spend 2 hours on the platform per day.
The result is a powerful feedback loop that can lead some people, in particular younger users, to form intimate connections with their chosen bots.…The company won't say how old its users are on average, but the CEO said its user base skews toward "young adults." The platform is open to anyone 13 and up, or 16 and up in the EU. "You can certainly imagine that children are vulnerable in all kinds of ways," [Raymond Mar, a psychologist at York University] said, "including having more difficulty separating reality from fiction."

One danger is that the AI is so realistic (and autonomous) that users forget it's an AI and act on the misinformation it provides—whether bad relationship advice, bad medical advice, or conspiracy theories. This isn't a theoretical concern: one man has already committed suicide after allegedly being goaded to do so by an AI. And misinformation generated by AI chatbots can be particularly hard to moderate. But perhaps the most concerning aspect is that while these AI companions can provide solace, lonely adults or insecure teens may find that the ease of communicating with the "perfect" friends or romantic partners they create as chatbots replaces the need for real (and less perfect) human contact. This may be just the beginning of our struggles to balance the good intentions of AI with its less positive unintended consequences.
