
Friday, November 3, 2023

Preliminary thoughts on Biden’s executive order on AI

Transparency, disclosures, AI competence, and more.
O'Reilly Next:Economy Newsletter
Face of the Great Seal of the United States (East Room replica), Richard Nixon Presidential Library and Museum. Cropped image by Tim Evanson on Flickr.

Biden's executive order on AI is both a good start and a missed opportunity

On Monday, President Biden issued his eagerly anticipated executive order on AI, a sweeping directive that puts the challenges of AI squarely on the agenda for numerous federal agencies. The order takes on issues across the board, directing support for privacy-preserving technology, addressing algorithmic discrimination, developing better practices for AI in the workplace, ensuring that the AI industry remains open and competitive, issuing new rules requiring companies building foundation models to disclose testing data to the government, and much more. (Read the 100-page order in full here.)

My immediate response based on the White House's initial announcement was that it's a great piece of work and a good opening move on the way to AI regulations that protect users while still encouraging innovation. I was heartened by the emphasis in the EO on hiring AI talent and otherwise increasing the capability of government agencies to both regulate AI and deploy it for their own use. I was even more heartened that the order appeared to encourage disclosure to the federal government of risks uncovered by "red teaming" during model development. I have long thought we need a more robust disclosure regime as the first step toward regulation, because you can't regulate what you don't understand. But determining a given model's weaknesses through red teaming alone won't give the full picture of its wider risks and impacts. To make informed decisions, we need to know what data the model was trained on, how the deployed model is being managed, and more besides. And the disclosures must be ongoing.

I was dismayed, then, to read the full executive order and discover that even the red-teaming requirement doesn't apply to any existing models, or even to those under current development, but only to those trained using far more processing power than any model available today. In short, the Biden administration seems to have drunk the Kool-Aid that we should be more worried about possible extreme future risks of artificial general intelligence than about the very present problems we are experiencing today. It isn't clear to me whether this focus on existential risk is a genuine belief or a cynical ploy to reduce competition from open source AI models and to redirect regulation away from current harms, but it seems misguided. Unfortunately, I suspect we'll see more of the same at this week's UK AI Safety Summit.

+ From the Verdict: "How Involved Should Big Tech Be in Regulating AI?"

+ At the UK AI Safety Summit, US Secretary of Commerce Gina Raimondo announced that "the United States will launch a U.S. AI Safety Institute to evaluate known and emerging risks of what is called 'frontier' artificial intelligence models."

Government needs AI competence where the work gets done

In her own preliminary response to Biden's executive order, Jennifer Pahlka (who helped found the United States Digital Service) points out that one thing it didn't do was establish a US AI Service. But as she argues, "That's the right call. What government needs is digital competence (AI and otherwise) where the work gets done—embedded in the core operations of agencies." And now is the time to finally do the "boring, obvious, and long overdue" work of fixing the civil service system so that agencies can easily hire the expertise they need.

And here's why that digital competence is so important. In another recent article, Pahlka explores the Pentagon's GAMECHANGER program, which uses AI to help users make sense of the "15,000 policy and budget documents governing how the Pentagon and the services operate." While the program has been derided as the natural outcome of bloated bureaucracy run amok, Pahlka emphasizes the "ingenuity and dedication" of the teams building solutions like GAMECHANGER to better understand the policies they're governed by. After all, they're not the ones responsible for creating the bureaucratic mess—they just have to work within it. Cleaning things up will take an act of Congress... but, as Pahlka quips, maybe AI could help with that too:

Perhaps Congress can be inspired by GAMECHANGER to change its own game. Instead of AI to simply interpret millions of pages of documents, our lawmakers could use AI to boil those millions of pages down, suggest dramatically streamlined versions, and repeal the clutter en masse.

(Disclosure: Jen Pahlka is married to O'Reilly founder and CEO Tim O'Reilly.)

Is artificial general intelligence already here?

Blaise Agüera y Arcas, VP and fellow at Google Research, and Peter Norvig, Distinguished Education Fellow at the Stanford Institute for Human-Centered AI, think so. In a recent article in Noema, they argue that, despite the flaws of so-called "frontier models," "the most important parts of [AGI] have already been achieved by the current generation of advanced AI large language models such as ChatGPT, Bard, LLaMA and Claude." Not everyone is as convinced, and the argument comes down to heady philosophical questions on the nature of intelligence itself. But Agüera y Arcas and Norvig make a very detailed case in favor of their position. In any event, it's hard to argue with their calls for better tests and metrics and a focus on questions like "Who benefits [from AI]?"; "Who is harmed?"; and "How can we maximize benefits and minimize harms?"

Joint Statement on AI Safety and Openness

Mozilla has coordinated the development of a statement on the importance of openness to AI safety. I've signed it. If you agree, you should too:

We are at a critical juncture in AI governance. To mitigate current and future harms from AI systems, we need to embrace openness, transparency, and broad access. This needs to be a global priority.
Yes, openly available models come with risks and vulnerabilities—AI models can be abused by malicious actors or deployed by ill-equipped developers. However, we have seen time and time again that the same holds true for proprietary technologies—and that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.
Further, history shows us that quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation. Open models can inform an open debate and improve policy making. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there.

Transparency is a real problem for current models

Speaking of transparency, Stanford's Center for Research on Foundation Models (CRFM) has created the Foundation Model Transparency Index, which scores model developers against 100 indicators. You can read the paper introducing the index in full, or get straight to the key findings. The tl;dr: "The top-scoring model scores only 54 out of 100. No major foundation model developer is close to providing adequate transparency, revealing a fundamental lack of transparency in the AI industry." Perhaps unsurprisingly, open source models like Meta's Llama 2 and Hugging Face's BLOOMZ had some of the highest scores. Still, there's lots of room for improvement all around.

+ IEEE Spectrum also explored the paper, offering a little more context from the CRFM.

—Tim O’Reilly and Peyton Joyce
