
Friday, June 30, 2023

Three things to know about how the US Congress might regulate AI


The Technocrat

By Tate Ryan-Mosley • 06.30.2023
 

Hello and welcome to The Technocrat! 

If your social circles are anything like mine, chances are AI will come up around the barbecue or swimming pool, or while you're imbibing a refreshment (or several) over this holiday weekend. There's a lot to talk about. Recently, US lawmakers have been busy doing exactly that: chatting about AI. And some of that chat might even start to translate into action. 

Last week, Senate majority leader Chuck Schumer (a Democrat from New York) announced his grand strategy for AI policymaking at a speech in Washington, DC, ushering in what might be a new era for US tech policy. He outlined some key principles for AI regulation and argued that Congress ought to introduce new laws quickly.

Schumer's plan is a culmination of many other, smaller policy actions. On June 14, Senators Josh Hawley (a Republican from Missouri) and Richard Blumenthal (a Democrat from Connecticut) introduced a bill that would exclude generative AI from Section 230 (the law that shields online platforms from liability for the content their users create). Last Thursday, the House science committee hosted a handful of AI companies to ask questions about the technology and the various risks and benefits it poses. House Democrats Ted Lieu and Anna Eshoo, with Republican Ken Buck, proposed a National AI Commission to manage AI policy, and a bipartisan group of senators suggested creating a federal office to encourage, among other things, competition with China.

Though this flurry of activity is noteworthy, US lawmakers are not actually starting from scratch on AI policy. "You're seeing a bunch of offices develop individual takes on specific parts of AI policy, mostly that fall within some attachment to their preexisting issues," says Alex Engler, a fellow at the Brookings Institution. Individual agencies like the FTC, the Department of Commerce, and the US Copyright Office have been quick to respond to the craze of the last six months, issuing policy statements, guidelines, and warnings about generative AI in particular. 

Of course, we never really know whether talk will turn into action when it comes to Congress. Still, US lawmakers' thinking about AI reflects some emerging principles. Here are three key themes in all this chatter that will help you understand where US AI legislation could be going. 

  • The US is home to Silicon Valley and prides itself on protecting innovation. Many of the biggest AI companies are American, and Congress isn't going to let you, or the EU, forget that! Schumer called innovation the "north star" of US AI strategy, meaning regulators will probably be calling on tech CEOs to ask how they'd like to be regulated. It's going to be interesting watching the tech lobby at work here. Some of this language arose in response to the latest regulations from the European Union, which some tech companies and critics say will stifle innovation.

  • Technology, and AI in particular, ought to be aligned with "democratic values." We're hearing this from top officials like Schumer and President Biden. The subtext here is the narrative that US AI companies are different from Chinese AI companies. (New guidelines in China mandate that outputs of generative AI must reflect "communist values.") The US is going to try to package its AI regulation in a way that maintains the existing advantage over the Chinese tech industry, while also ramping up its production and control of the chips that power AI systems and continuing its escalating trade war. 

  • One big question: what happens to Section 230? Section 230 is a 1990s US internet law that shields tech companies from being sued over the content on their platforms. But should tech companies have that same 'get out of jail free' pass for AI-generated content? Denying them that shield would require tech companies to identify and label AI-made text and images, which is a massive undertaking. Given that the Supreme Court recently declined to rule on Section 230, the debate has likely been pushed back down to Congress. Whenever legislators decide if and how the law should be reformed, it could have a huge impact on the AI landscape. 

So where is this going? Well, nowhere in the short term, as politicians skip off for their summer break. But starting this fall, Schumer plans to kick off invite-only discussion groups in Congress to look at particular parts of AI. 

In the meantime, Engler says we might hear some discussions about the banning of certain applications of AI, like sentiment analysis or facial recognition, echoing parts of the EU regulation. Lawmakers could also try to revive existing proposals for comprehensive tech legislation—for example, the Algorithmic Accountability Act.

For now, all eyes are on Schumer's big swing. "The idea is to come up with something so comprehensive and do it so fast. I expect there will be a pretty dramatic amount of attention," says Engler. 

What else I'm reading

  • Everyone is talking about "Bidenomics," meaning the current president's specific brand of economic policy. Tech is at the core of Bidenomics, with billions upon billions of dollars being poured into the industry in the US. For a glimpse of what that means on the ground, it's well worth reading this story from the Atlantic about a new semiconductor factory coming to Syracuse. 

  • AI detection tools try to identify whether text or imagery online was made by AI or by a human. But there's a problem: they don't work very well. Journalists at the New York Times messed around with various tools and ranked them according to their performance. What they found makes for sobering reading. 

  • Google's ad business is having a tough week. New research published by the Wall Street Journal found that around 80% of Google ad placements appear to break the company's own policies, which Google disputes.


What I learned this week

We may be more likely to believe disinformation generated by AI, according to new research covered by my colleague Rhiannon Williams. Researchers from the University of Zurich found that people were 3% less likely to identify inaccurate tweets created by AI than those written by humans.

It's only one study, but if it's backed up by further research, it's a worrying finding. As Rhiannon writes, "The generative AI boom puts powerful, accessible AI tools in the hands of everyone, including bad actors. Models like GPT-3 can generate incorrect text that appears convincing, which could be used to generate false narratives quickly and cheaply for conspiracy theorists and disinformation campaigns."
 

Thanks for reading and have a great week!

As always,

Tate


MIT Technology Review · 196 Broadway, 3rd fl, · Cambridge, MA 02139 · USA

Copyright © 2023 MIT Technology Review, All rights reserved.


Must Read: This month’s top technology stories

MIT Technology Review

Month in Review

 

June 2023

Our June recap is here, featuring our readers' favorite stories of the month. Check out the most-read articles below, covering topics across artificial intelligence, biotechnology, and tech policy.


Welcome to the new surreal. How AI-generated video is changing film.

by Will Douglas Heaven

Exclusive: Watch the world premiere of the AI-generated short film The Frost.

Watch the film


The people paid to train AI are outsourcing their work… to AI

by Rhiannon Williams

It's a practice that could introduce further errors into already error-prone models.

Read more →


Google DeepMind's game-playing AI just found another way to make code faster

by Will Douglas Heaven

The AI-generated algorithms are already being used by millions of developers.

Read more →


Police got called to an overcrowded presentation on "rejuvenation" technology

by Antonio Regalado

Juan Carlos Izpisua Belmonte's presentation on anti-aging technology drew a dangerously large crowd at a stem-cell conference in Boston.

Read more →


Junk websites filled with AI-generated text are pulling in money from programmatic ads

by Tate Ryan-Mosley

More than 140 brands are advertising on low-quality content farm sites—and the problem is growing fast.

Read more →
