Will antitrust still work?

Labor strikes may have gotten more coverage, but Big Tech monopolies were also center stage in September—in his anti-monopoly newsletter BIG, Matt Stoller called it "the biggest month in antitrust in 50 years."

First up, U.S. et al. v. Google kicked off September 10. The trial will determine whether Google's agreements with partners like Apple helped it create and maintain a monopoly on internet search. (Stoller and Yosef Weitzman are following it in detail at Big Tech on Trial.) As Vanderbilt law professor Rebecca Allensworth explained in the New York Times, "Ultimately, the Google trial will test whether antitrust laws written in 1890 to break up sugar, steel and railroad monopolies can still work in today's economy."

Then, last week, the FTC and 17 states sued Amazon, arguing that the way the company runs its third-party marketplace hurts rivals, sellers, and consumers alike.

As Next:Economy readers know, I've been keeping an eye on these issues for a long time. Back in 2019, I wrote an article on the limitations of traditional approaches to antitrust when regulating digital platforms and marketplaces. Two of my case studies: Google and Amazon. It's worth reading (or reading again), as it highlights some of the issues behind Google's ongoing trial and the recent Amazon lawsuit. I also offered some advice to regulators at the time that seems to align with current thinking at the FTC—a good sign of progress in the past four years. As I noted then, instead of focusing solely on consumer harm, regulators should be looking to measure whether companies like Amazon or Google are continuing to provide opportunity for their ecosystem of suppliers, or whether they're increasing their own returns at the expense of that ecosystem.
Rather than just asking whether consumers benefit in the short term from the companies' actions, regulators should be looking at the long-term health of the marketplace of suppliers—they are the real source of that consumer benefit, not the platforms alone. Have Amazon, Apple, or Google earned their profits, or are they coming from monopolistic rents?

+ "Amazon's Antitrust Paradox," FTC chair Lina Khan's 2017 article on why an antitrust framework centered on "consumer welfare" is inadequate for regulating companies in our modern economy.

Who pays when the algorithmic rent is due?

As I pointed out back in 2019, digital platforms often exert their market power in the form of rents—value they're able to extract because they control a limited resource. Rents aren't inherently bad, but they become so when companies extract excess value because of scarcity or market power rather than deriving value from the innovation they produce. Today, the design and manipulation of algorithmic systems in search, ecommerce, and finance have become a key means of capturing these rents. But to remedy the situation, regulators first need to understand what's happening. As I noted then, "Data is the currency of these companies. It should also be the currency of those looking to regulate them."

That's the premise behind Regulating Big Tech Through Digital Disclosures, the recent policy brief for the UCL Institute for Innovation and Public Purpose I cowrote with Mariana Mazzucato and Ilan Strauss. (You may remember it from our June 16 newsletter.) It's also one of the conclusions of a trio of forthcoming working papers from members of the "algorithmic rents" project at UCL funded by Omidyar: me, Ilan, Mariana, and Rufus Rock.
The first is an overview of the theory behind what we're calling "Algorithmic Attention Rents"; the second is a deep dive into the application of our ideas to advertising in the Amazon Marketplace; and the third describes our empirical research project to measure algorithmic attention rents at Amazon.

If you're in Europe or you're an early riser, you can catch me, Mariana, Ilan, and Rufus live at the IIPP's research showcase on algorithmic rents, October 12, 10:30am–12:30pm BST. (That's 5:30am–7:30am ET.) Register for free here. I'll also be giving an in-person-only talk, "AI and the Attention Economy: What Tech Companies Need to Disclose," at the Minderoo Centre for Technology and Democracy at Cambridge University at 5pm BST on October 10. If you're in the area, I'd love to see you there.

Can small AI startups succeed?

As governments around the world attempt to rein in Big Tech, it's worth examining the roadblocks impeding a more competitive marketplace. In the Washington Post, Gerrit De Vynck shows how access to the computing resources—and money—required to train AI models may curb competition. Smaller startups entering the market, like public benefit corp Anthropic, must make deals with larger partners such as Microsoft, Google, and Amazon, which own the cloud infrastructure (and the funding) they need to take their products to market. "Instead of breaking Big Tech's decade-long dominance of the internet economy," De Vynck writes, "the AI boom so far appears to be playing into its hands." But as he points out, the FTC "is watching [the industry] closely for signs of anti-competitive behavior."

+ "Will Open Source AI Shift Power from 'Big Tech'? It Depends."

+ Training an AI model takes a toll on the environment too, but there are ways to make AI greener.

Is it copyright infringement or fair use?

One reason generative AI models are so resource-intensive is the huge datasets used to train them.
And these datasets, assembled from scraped public data, have become a point of contention. The Authors Guild recently sued OpenAI, accusing the company of copyright infringement because its tools can create derivative works based on copyright-protected books that were part of their training data. (Authors have also sued Meta, while visual artists are suing the companies behind the image-generating tools Stable Diffusion and Midjourney.) At issue is whether AI-generated works constitute copyright infringement or fall under fair use. Speaking to Andrew Albanese at Publishers Weekly, Cornell law professor James Grimmelmann outlines the contours of the case, particularly as it relates to a similar 2016 case against Google Books that was decided in Google's favor. Regardless of the outcome of the authors' lawsuit, the struggle over AI training data looks to be a long one.

+ The Atlantic's Alex Reisner analyzed the Books3 dataset used to train a number of AI models and created a tool so you can explore it yourself. Among the titles, you'll find classics from William Shakespeare, lots of Stephen King, and more than a few O'Reilly books.

+ OpenAI is allowing visual artists to "opt out" of inclusion in its training data. All they have to do is fill out a form...and upload every single image they want excluded from the dataset. As Kali Hays notes in Business Insider, "the opt-out process is so onerous that it almost seems like it was designed not to work." And as Matteo Wong writes in the Atlantic, "it may not make a difference" anyway.

+ "The Battle Over Books3 Could Change AI Forever."

+ "Getty Images Promises Its New AI Contains No Copyrighted Art."

—Tim O'Reilly and Peyton Joyce