Algorithmic attention rents and digital platform market power

If you've been following Next Economy, you know that I've long been concerned with the ways digital platforms exert their market power. You may also remember that last month I teased a trio of working papers focused on what my fellow UCL Institute for Innovation and Public Purpose researchers and I call "algorithmic attention rents": how Big Tech companies use their algorithms and designs to unfairly allocate user attention in ways that are not in the best interests of either their users or their ecosystem.

The first paper in that series, Algorithmic Attention Rents: A Theory of Digital Platform Market Power, is out now. In it, my coauthors, UCL IIPP senior research associate Ilan Strauss and UCL IIPP director and professor Mariana Mazzucato, and I outline how economic rents are extracted by digital aggregator platforms and offer some policy recommendations to mitigate them, setting the stage for the rest of the series. I recommend reading it in full, but if you're short on time, here's what you need to know.

A theory of attention rents

At the moment, we think about the market power of digital platforms largely in terms of data—particularly the ways that data can be used to manipulate users (what Shoshana Zuboff calls "surveillance capitalism"). But as my coauthors and I argue, "It may be more productive to understand platform market power and to regulate its possible abuses by measuring the ways that internet platforms control and monetize the attention of their users." Here's why.

Data can certainly be used toward bad ends, but it's also a valuable (and essential) component of our information-saturated world. We depend on powerful algorithms, working over the data internet services have collected, to help us make sense of the overwhelming amount of information we encounter. And when they work, these algorithms can be transformative, saving us time, money, and effort through reliable search results, accurate recommendations, and more.

But our attention is a finite resource—there's only so much to go around. That isn't a problem in competitive marketplaces that encourage digital platforms to efficiently offer users the best, most reliable information. However, as we show, when companies come to dominate a sector (think Google for search, Amazon for shopping, and Facebook for social media), they often begin to "extract additional time and attention from [their] users, and economic rents from [their] supplier marketplace or advertisers, by controlling that flow of attention." They might do so "by providing lower-quality results or by charging a higher price than what the attention may be worth to those buying it, by forcing ecosystem participants to pay for visibility, or by trying to monopolise vertical product or service markets." But whatever the strategy, it leads to worse outcomes all around, affecting consumers, content creators, sellers, and advertisers alike—something Cory Doctorow has called "enshittification." (More on that below.)

+ We touch on these examples in our paper, but you can read the monopoly cases brought against Amazon and Google to see just how attention rents are levied in practice.

So what's to be done?

While platforms that have shifted their focus from innovation to rent-seeking "feel" worse to use, we need to quantify platform abuses in order to devise effective regulation. We could do so in a number of ways: comparing "organic" rankings to paid rankings in search results to see to what extent platform algorithms are preferring bad information they've been paid to show over the best results they've promised to users; examining the value ads offer users; comparing a dominant platform's results with those of less popular platforms; scrutinizing the quality of information on a given platform; and assessing whether ads have increased unreasonably.

But—as I've pointed out—regulators need to require that the companies themselves disclose the methods by which they manage user attention for their own profit. After all, you can't regulate what you don't understand. My coauthors and I recommend that companies be required to share metrics disaggregated by product, device type, and location "quarterly, with more detail annually, as part of the existing financial disclosures required of public companies." As for the metrics themselves, we have a few suggestions, like ad load, click-through rates for organic results and ads, and total fees, including advertising, levied on sellers in ecommerce marketplaces. It would be best, though, if regulators could base required disclosures on the metrics "that are actually used by the platforms themselves to manage search, social media, e-commerce, and other algorithmic relevancy and recommendation engines."
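To make the measurement idea concrete, here's a minimal sketch of how two such metrics might be computed. The data, function names, and exact formulas below are illustrative assumptions of ours, not definitions taken from the paper:

```python
# Illustrative sketch only: hypothetical data and metric definitions,
# not the paper's prescribed methodology.

def ad_load(ad_impressions: int, total_impressions: int) -> float:
    """Share of impressions that are ads rather than organic results."""
    return ad_impressions / total_impressions

def overlap_at_k(organic: list[str], displayed: list[str], k: int) -> float:
    """Fraction of the top-k organic (relevance-ranked) results that survive
    into the ranking actually shown to users. A value of 1.0 means attention
    follows relevance; lower values suggest paid placement is displacing
    the best results."""
    return len(set(organic[:k]) & set(displayed[:k])) / k

# Hypothetical query where two paid listings push organic results
# out of the top five slots.
organic_ranking = ["a", "b", "c", "d", "e"]
displayed_ranking = ["ad1", "a", "ad2", "b", "c"]

print(f"ad load: {ad_load(2, 5):.0%}")        # 40%
print(f"overlap@5: {overlap_at_k(organic_ranking, displayed_ranking, 5):.0%}")  # 60%
```

A regulator could track numbers like these over time and across platforms; a rising ad load paired with a falling overlap score would be one signal that attention is being diverted from relevance to rent extraction.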
Such enhanced disclosures—in particular, those that align with platforms' own operating metrics—are the key to effective regulation that "will allow investors, the public, regulators, and the platforms themselves to better understand and operate truly free markets."

At the end of the paper, we look ahead to how the same ideas may apply in the age of AI. What will we wish, in a few years, that we had known about these platforms before they had gained dominant market power?

Enshittification by another name

In a post from earlier this year, author and activist Cory Doctorow describes "enshittification" as the process by which platforms die: "First, [platforms] are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die."

In his latest post, Doctorow takes Algorithmic Attention Rents as a jumping-off point to reflect more broadly on enshittification. He agrees that attention rents are an especially harmful aspect of the enshittification of digital services. But he's equally interested in the solutions we offer in our paper, especially with regard to how they supplement his thinking about an "end-to-end principle for online services," which "decrees that the role of an intermediary should be to deliver data from willing senders to willing receivers as quickly and reliably as possible." ("One interesting wrinkle" to this line of thinking, Doctorow points out, is that "it places a whole host of conduct within the regulatory remit of the FTC.")

As Doctorow is quick to emphasize, intermediaries, much like rents themselves, aren't inherently malicious. They only become so "when they aren't disciplined by competitors, by regulators, or by their own users' ability to block their bad conduct," giving them free rein to seek out rents instead of value.

+ Here's tech analyst Benedict Evans on why he's leaving Twitter. It's no surprise that algorithmic changes boosting questionable posts by paid accounts and monetizing trolls played a role.
Transparency is just as important for AI

In her commentary on Algorithmic Attention Rents, the Financial Times' Rana Foroohar turns the spotlight on AI, picking up on our demand for enhanced disclosures and setting it against President Biden's recent executive order on AI. Just as I did, Foroohar found the executive order promising but insufficient, arguing that "we need a bit less focus on Terminator-style worst-case scenarios for AI, and much more specific economic data disclosure to curb the new technology in the here and now, before it has already gained too much power."

Foroohar's conclusion mirrors our own in Algorithmic Attention Rents. AI models, such as the large language models powering generative AI tools like ChatGPT, aren't yet at a stage where they can impose attention rents. Still, like today's dominant digital platforms, these models "depend on users accepting the increasing penetration of algorithmic authority," so it's only a matter of time before they too begin to wield their market power to extract monopoly rents. That's why establishing disclosure requirements is so critical:

Greater public visibility into the operation of these platforms can, in conjunction with more informed policy making, lead to better behaviour on the part of those who own and manage these systems, more balanced ecosystems of value creation, and the optimal use of knowledge in society.

+ Truly understanding the behavior of digital platforms will require the disclosure of their own operating metrics. But while we're waiting for those regulations to be instituted, organizations are taking it upon themselves to scrutinize transparency. Last week, I mentioned the Foundation Model Transparency Index from Stanford's Institute for Human-Centered Artificial Intelligence, which recently examined 10 major foundation model companies—and found them lacking. And the Data Nutrition Project has created a standard dataset "nutrition label" highlighting a dataset's intended use, structure, risks, and more.

+ Data transparency has become a huge quandary for foundation models. The Atlantic's reporting on Books3, a database of pirated books widely used to train AI models (including Meta's Llama), incited a wave of lawsuits from authors. Visual artists are also suing the image generators Stability AI and Midjourney, alleging that their works were used without their permission. And now the FTC has weighed in, "warn[ing] that AI development has enabled potential copyright infringement and consumer deception." Issues of fair use will be decided in court, but it's clear that the secrecy with which model training takes place hinders creators' ability to make the case for infringement.

Watch our showcase on algorithmic attention rents

One last thing: A few weeks ago, I joined Mariana, Ilan, Cecilia Rikap, and Rufus Rock at a UCL IIPP event showcasing our research on algorithmic attention rents. It was an insightful two hours that you can now watch on YouTube.

—Tim O'Reilly and Peyton Joyce