AI risks don't exist in a vacuum

Last week I took an optimistic look at how government programs like the expanded Child Tax Credit can enrich the Next Economy by assigning greater value to public goods like caregiving. But these programs must be implemented judiciously and scrutinized to ensure they actually meet their aims without causing adverse effects in the process—something the ongoing UK Post Office scandal makes abundantly clear. There's no better example of a government program gone wrong.

If you haven't been following along, the BBC has a good explainer:

More than 900 sub-postmasters and postmistresses were prosecuted for stealing money because of incorrect information provided by a computer system called Horizon. The Post Office itself brought many of the cases to court, and between 1999 and 2015, it prosecuted 700 people - an average of one person a week. Another 283 cases were brought by other bodies, including the Crown Prosecution Service. Many of those convicted went to prison for false accounting and theft. Many were financially ruined.

These sub-postmasters and sub-postmistresses were innocent of any misconduct; they were simply the victims of buggy software from government vendor Fujitsu. As CNN reports, Horizon "regularly showed that money—often many thousands of pounds—had gone missing from Post Office accounts. In many cases, it was simply wrong." And as Ars Technica's Jon Brodkin points out (citing testimony from Fujitsu's Paul Patterson), these flaws "were known 'from the start.'" But despite the clear failings of the technology, according to the BBC, "To date only 93 convictions have been overturned." It took an ITV dramatization, Mr Bates vs the Post Office, to galvanize public opinion and elicit a satisfactory response from the government.

The scandal spotlights the importance of good governance, particularly for government programs, and underscores why, as I've maintained, "governance is not a 'once and done' exercise." But it's also a cautionary tale about the dangers of trusting automated systems (including frontier AI systems!) without verifying their output. As the Institute for Government notes in its "Six Lessons Government Should Learn from the Post Office Scandal":

We should not have blind faith in automated systems. We need transparency about how they operate, better understanding in government and the legal system about how they work (or may not), and proper routes for redress and accountability. As both the government and the opposition talk about AI's potential to revolutionise public services and administration, these considerations will become even more critical.

I've said much the same. AI risks don't exist in a vacuum. They are embedded in a human and institutional context that can either moderate or magnify them. A narrow technical view of AI risk misses the lesson of the sorcerer's apprentice, who doesn't truly understand the spells being cast and isn't prepared when the outcomes are unexpected.

+ Here's former UK chief digital officer Mike Bracken on the scandal in the Financial Times: "No More 'Big IT'—The Failed 90s Model Has Ruined Too Many Lives."

+ From TechCrunch: "Fujitsu, Facing Heat over UK Post Office Scandal, Continues to Rake in Billions from Government Deals."

+ Unfortunately, this story is all too common. Here's a similar account from IEEE Spectrum: "Michigan's MiDAS Unemployment System—Algorithm Alchemy Created Lead, Not Gold."
Jennifer Pahlka on harnessing AI to improve government services

Jennifer Pahlka, former US deputy chief technology officer and a senior fellow at the Federation of American Scientists and the Niskanen Center, recently testified before the US Senate's Committee on Homeland Security and Governmental Affairs on how the government might use AI "to improve services and customer experience." In her thoughtful remarks, Pahlka explored the limitations of the current government procurement process (with a focus on where it breaks down, particularly for digital services) and explained why AI won't necessarily be a panacea for these entrenched problems:

How the US government chooses to respond to the changes AI brings is indeed critical, especially in its use to improve government services and customer experience. If the change is going to be for the better (and we can't afford otherwise) it will not be primarily because of how much or how little we constrain AI's use. . . . The difference will come down to how much or how little capacity and competency we have to deploy these technologies thoughtfully.

What we need, Pahlka contends, are "people in agencies" who can determine when prudence is needed (and when an overabundance of caution risks holding back a project)—and "who have the authority to act accordingly." But enticing the workers the government desperately needs will mean overhauling hiring processes and human resources to clear out the obstacles prospective talent often encounters. And keeping them around will require agencies to eliminate the burdens these workers face in getting services out the door. As Pahlka concludes, "The goal, therefore, must be Congressional action that reduces the risk aversion of the bureaucracy. . . . Getting government agencies the people they need, focused on the right work, and reducing the burden on each of them, can be profoundly transformational."

Read her testimony in full at the Niskanen Center or watch it here.

+ Pahlka dives much deeper into ways we can improve government services in her recent book Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better.

+ Staying with the topic at hand: In the Hill, Stanford University's Daniel Ho and University of Michigan's Nicholas Bagley warn that "the White House's new rules on artificial intelligence, unless clarified, could degrade the quality of government operations as basic—and uncontroversial—as delivering the mail." Ho and Bagley worry that the rules proposed in President Biden's executive order on AI would overly burden public services by "[tying] hundreds of agencies up in red tape for no obvious reason." As they argue, "There are many benign and valuable uses of technology. Agencies need the freedom to experiment with those uses without getting snarled in bureaucracy."

+ An example of how these rules can be misapplied comes from the comments on one of Pahlka's posts on this same topic. "Just this summer," a health IT expert wrote, "a policy maker who runs a data analysis program said that certain programs that submitted data to their agency could not use 'algorithms.'" Guidance signaling caution around the use of AI had been interpreted to cover even basic mathematical analysis.
The rules were subsequently clarified, she shared, but the incident shows how easily government agencies can misinterpret policy and guidance, especially when the topic is poorly understood by the general public, and how that misunderstanding leads to overly broad restrictions.

(Disclosure: Jennifer Pahlka is married to O'Reilly founder and CEO Tim O'Reilly.)

Monopoly, open source, and a public AI option?

As government services begin to incorporate AI (and keeping in mind the risks of choosing the wrong vendor), it's worth asking what options technologists have for deployment. The Open Markets Institute's Max von Thun argues that "monopoly power is the elephant in the room in the AI debate," warning that "the market for foundation models trends towards consolidation" among Big Tech companies. While this perspective discounts the current surge in open source AI models, von Thun is spot-on in his insistence that we consider market dominance when regulating AI:

Current regulatory efforts to prevent misuse of AI, while important, fail to acknowledge the central role market structure plays in determining how, when, and in whose interests a technology is rolled out. Trying to shape and limit AI's use without addressing the monopoly power behind it is at best likely to be ineffective, and at worst will entrench the power of incumbents by regulating smaller players out of existence.

Legal scholars Ganesh Sitaraman of Vanderbilt University and Tejas Narechania of UC Berkeley take such monopolistic practices as their starting point in a recent op-ed in Politico, arguing for wider regulation in the AI sector and, more intriguingly, pushing for "a public option for the cloud computing infrastructure that is critical to any AI development" that "could democratize the technology, making AI development more accessible to a range of competitors and researchers." Sitaraman and Narechania aren't calling for a public foundation model so much as for public access to the hardware—the processors, chips, and servers—that makes AI technology possible (and that Big Tech companies are stockpiling as they pursue an AI arms race). As they note in an interview, "If you think that open-source models are going to solve the problems of concentration across the AI stack, that's just not the case."

+ Think a public option sounds far-fetched? The National Artificial Intelligence Research Resource Pilot Program kicked off last November with the goal of creating "a shared national research infrastructure that will connect U.S. researchers to responsible and trustworthy AI resources, as well as the needed computational, data, software, training and educational resources to fuel AI research and discovery."

+ National Artificial Intelligence Research and Development Strategic Plan 2023 Update

+ From the Verge: "FTC Investigating Microsoft, Amazon, and Google Investments into OpenAI and Anthropic"

—Tim O'Reilly and Peyton Joyce