
Friday, December 8, 2023

Exploiting the system for undeserved gains

Attention rents, algorithmic discrimination, probabilistic billionaires + more.
O'Reilly
Next:Economy
Newsletter
[Image: paper money, extreme close-up of the president's eyes. Modified image by Kevin Dooley on Flickr]

Read our new papers on attention rents

Our latest research on attention rents is out, with two new papers that add context to our analysis of how Amazon abuses its market power. In Amazon's Algorithmic Rents: The Economics of Information on Amazon, Ilan Strauss, Mariana Mazzucato, and I explore why the current antitrust paradigm, focused as it is on competition between platforms, is an inadequate tool for governing platforms like Amazon that act as gatekeepers between their end users and an ecosystem of product or service providers. Users depend on the fairness of the "algorithms exploiting the highly uncertain and informationally abundant decision-making environment" that match them with suppliers' best products. When the platform begins to place its own interests ahead of those of its users and suppliers, this can be detected by studying its algorithmic rankings. Leading antitrust scholar Herbert Hovenkamp called it "a superb and I think very important paper on information costs, search algorithms, abundant but imperfect information, and the role of New Institutional Economics"—high praise indeed.

And in Behind the Clicks: Can Amazon Allocate User Attention as It Pleases?, Rufus Rock, Ilan, Mariana, and I use 2023 Amazon Marketplace search result data to measure Amazon's power to extract economic rents through the algorithmic arrangement of its search results. Our research shows that "the level of information and the level of competition are more closely intertwined online than in non-digital markets, and that it is through the deterioration of information quality that Amazon increasingly uses its market power to extract pecuniary rents from its ecosystem of firms" (Amazon's Algorithmic Rents). We need a new regulatory framework, but putting one in place will require technology companies to disclose to regulators "how exactly attention is monetized on their platforms" along with "their internally-utilized operating metrics" (Behind the Clicks). And this framework should give precedence to the harms these rents do to users over any "benefits to consumers or competition."

+ Here’s a recent blog post covering our series from former Amazon engineer Greg Linden, who invented the company’s original item-to-item collaborative filter—at the time, a novel method to drive Amazon recommendations. Linden thinks through the repercussions of attention rents, asking: “When will consumers have had enough? Do consumers only continue using Amazon with all the ads until they realize the quality has changed? When does brand and reputation damage accumulate to the point that consumers start trusting Amazon less, shopping at Amazon less, and expending the effort of trying alternatives?”

+ ICYMI: Algorithmic Attention Rents: A Theory of Digital Platform Market Power—the overview paper on the theory, with explanations of how it applies to Google Search and social media as well as to marketplaces like Amazon.

Algorithm wage discrimination

Digital platforms—particularly those involved in gig work, including Amazon—also deploy algorithms to set wages and hours. And as UC Irvine law professor Veena Dubal explains in the Columbia Law Review, “These technologies are fundamentally altering the experience of labor and undermining economic stability and job mobility.” Dubal uses a “multi-year, first-of-its-kind ethnographic study of organizing on-demand workers” to demonstrate the harm algorithmic wage discrimination inflicts on workers and argues that, due to “the constantly changing nature of machine learning technologies and the asymmetrical power dynamics of the digitalized workplace,” the best way to mitigate these harms is to ban the practice completely. Dubal’s research fits neatly with our own above, and our conclusions are aligned: disclosures of key information are essential to dismantling these abuses of power—although as Dubal notes, “Transparency and legibility alone do not address the harms caused by algorithmic wage discrimination because they seek to understand, not directly impede, the source of these social harms.” Or as Hamilton Nolan summed it up in his How Things Work newsletter:

We need to recognize that there is no path to worker empowerment and fair treatment that involves merely poking around in the black box, twisting the dials and hoping that the algorithm improves. If we cannot monitor the formula that determines wages, and we cannot control the formula that determines wages, then we must find another way to protect workers.

+ More from Veena Dubal in Jacobin: “Digital ‘Platforms’ Are Just a New Way for Corporations to Exploit Workers.”

Algorithms designed to deny care (and juice profits)

When algorithms extract value in the healthcare industry, they wreak more than financial damage (although they do that too). Ars Technica’s Beth Mole reports on a lawsuit alleging that “UnitedHealth uses [an] AI model with 90% error rate to deny care.” The system, nH Predict, was developed by UnitedHealth subsidiary NaviHealth (since rebranded in light of the controversy) to “estimat[e] how much post-acute care a patient on a Medicare Advantage Plan will need after an acute injury, illness, or event, like a fall or a stroke”—but it doesn’t take into account any individual patient’s specific requirements. In practice, this has meant frequent denials of care to which patients are ostensibly entitled under their plans. Mole lays the blame squarely on a business that places profits over the people it serves: drawing from an investigation undertaken by healthcare-focused newsroom STAT, she points out that “the dubious estimates nH Predict spits out seem to be a feature, not a bug.” But the US government is working to ensure that AI use in the healthcare industry is better regulated. As KFF Health News reports, “The federal government will try to even the playing field next year, when the Centers for Medicare & Medicaid Services begins restricting how Medicare Advantage plans use predictive technology tools to make some coverage decisions.”

+ From Politico: "AI Is Driving Google's Health Care Business. Washington Doesn't Know What to Do About It."

+ Op-ed from Dr. Amol S. Navathe in the New York Times: “Why Are Nonprofit Hospitals Focused More on Dollars Than Patients?”

+ From the Lever: “Why Drugs Are Disappearing from Your Insurance Coverage”

On probabilistic billionaires and the grifting economy

WeWork’s Adam Neumann has been back in the news, and Matt Levine has been tracking his exploits over at Money Stuff. Levine characterizes Neumann as the epitome of what he calls a “probabilistic billionaire”:

You create some company that does a thing, or hopes to, and even before it makes any money you say, well, this company has a 1% chance of being worth $1 trillion, so its expected value, today, is $10 billion. Anyone can say that—you can say “I have a 1% chance of discovering teleportation, which is probably a $1 trillion business, so I’m worth $10 billion today”—but that won’t get you on anyone’s list of billionaires.
On the other hand if you convince someone else that you have a 1% chance at doing a trillion-dollar thing, and that person has a lot of money, and she gives you $1 billion for a 10% stake in your company, then you really are a billionaire. Because you have a billion dollars in the bank. . . .
Of course eventually you will either succeed (with 1% probability) or fail (99%) at creating a trillion-dollar company. If time goes by and you do not discover teleportation and you get bored and go back to your day job, then (let us assume) your probability of making $1 trillion goes to zero, and you are no longer a billionaire. Unless you already converted some of your stake to cash.

Which explains how Neumann remains worth $1.7 billion (according to Bloomberg) even as WeWork has crumbled. And as Levine details, Neumann “is so good at this” that he also got a sweetheart $430 million loan from SoftBank that “he has the choice to pay back with either (1) actual money or (2) shares of a bankrupt company.” Seems like a tough choice—although to be fair, Levine argues that despite the framing, this “loan” should actually be understood as “SoftBank [buying] $430 million of stock from Neumann when they kicked him out.”
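Levine's expected-value arithmetic can be sketched in a few lines of Python, using the hypothetical numbers from his teleportation example (not real financial data):

```python
# Levine's "probabilistic billionaire" arithmetic, with the hypothetical
# figures from his example: a 1% shot at a $1 trillion business.

def expected_value(probability: float, payoff: float) -> float:
    """Expected value of a long-shot bet: probability times payoff."""
    return probability * payoff

# On paper, a 1% chance at $1 trillion is "worth" $10 billion today.
paper_value = expected_value(0.01, 1_000_000_000_000)
print(f"Expected value of the company: ${paper_value:,.0f}")

# It becomes real money only when an investor buys in at that valuation:
# $1 billion in cash for a 10% stake implies a $10 billion valuation.
stake_sold = 0.10
cash_raised = 1_000_000_000
implied_valuation = cash_raised / stake_sold
print(f"Implied valuation: ${implied_valuation:,.0f}")

# If the venture later fails (the 99% case), the remaining equity goes
# to zero -- but any stake already converted to cash stays in the bank.
```

The asymmetry in that last comment is the whole trick: the paper valuation can evaporate, but cash extracted along the way does not.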

+ Grifts all the way down? In a more recent newsletter, Levine covered a proposal by Cole Capital to acquire WeWork that looks an awful lot like market manipulation. (More updates here, with a new diagnosis from Levine: “market manipulation, but incompetent.”)

+ From Aeon: “Finance Fraud Is Not a Deviation from an Essentially Rational System but a Window onto the Reality-Distortion of Markets.”

The fight to end junk fees

President Biden is taking action to ban “junk fees”—those “hidden, surprise fees that companies sneak onto customer bills”—but businesses including “airlines, auto dealers, banks, credit card companies, cable giants, property owners and ticket sellers” are fighting tooth and nail for the right to keep charging them. The reason, as the Washington Post points out: they’re “making billions.” New regulations would help ensure that financial advisors actually benefit their clients, that cable bills include transparent pricing, that hotel “resort fees” are corralled, that fans understand the price of a concert ticket before checkout, and more. But pushing them through will entail battling it out with flush industry leaders and their lobbying associations.

+ From Eater San Francisco: “The State Attorney General’s Office Isn’t Sure If Its Junk Fee Law Will Ban Restaurant Service Fees.”

+ Research from 2021 “shows why ticket sellers hit you with last-second fees.” (Spoiler: it’s because they bring in more money. See above.)

Even more updates from the grifting economy

“Stay or pay” clauses in employment contracts that require workers to reimburse employers for training (which in many cases is useless or not even delivered). Glass repair shops exploiting loopholes to overcharge insurance companies on behalf of car owners. Dietician “influencers” paid by food and beverage companies “to promote industry-friendly messages on social media posts that often failed to disclose the names of sponsors.” And clothes that may be “Made in America” but are the product of illegal “piecework” sweatshops.

—Tim O’Reilly and Peyton Joyce
