People are using AI to quickly spin up junk websites in order to capture some of the programmatic advertising money that's sloshing around online, according to a new report by NewsGuard, exclusively shared with MIT Technology Review. That means that blue-chip advertisers and major brands are essentially funding the next wave of content farms, likely without their knowledge.

NewsGuard, which rates the quality of websites, found over 140 major brands advertising on sites using AI-generated text that it considers "unreliable," and the ads it found come from some of the most recognized companies in the world. Ninety percent of the ads from major brands were served through Google's ad technology, despite the company's own policies prohibiting sites from placing Google-served ads on pages with "spammy automatically generated content."

The ploy works because programmatic advertising allows companies to buy ad spots on the internet without human oversight: algorithms bid on placements to optimize the number of relevant eyeballs likely to see each ad. Even before generative AI entered the scene, around 21% of ad impressions were taking place on junk "made for advertising" websites, wasting about $13 billion each year.

Now, people are using generative AI to make sites that capture ad dollars. NewsGuard has tracked over 200 "unreliable AI-generated news and information sites" since April 2023, and most appear to be seeking to profit from advertising money that often comes from reputable companies.

NewsGuard identifies these websites by using AI to check whether they contain text matching the standard error messages produced by large language models like ChatGPT. Sites that are flagged are then reviewed by human researchers. Most of the websites' creators are completely anonymous, and some sites even feature fake, AI-generated creator bios and photos. As Lorenzo Arvanitis, a researcher at NewsGuard, told me, "This is just kind of the name of the game on the internet."
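The detection step described above, scanning pages for telltale chatbot error messages that careless operators paste in verbatim, can be sketched roughly as follows. This is not NewsGuard's actual tooling, and the phrase list is a hypothetical example; a real detector would use a much larger, curated set of signatures before handing candidates to human reviewers.

```python
# Minimal sketch: flag pages containing boilerplate LLM refusal/error
# phrases, a telltale sign of unedited AI-generated content.

# Hypothetical phrase list for illustration only.
ERROR_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my training data only goes up to",
]

def find_llm_error_phrases(page_text: str) -> list[str]:
    """Return any telltale LLM phrases found in a page's text."""
    lowered = page_text.lower()
    return [phrase for phrase in ERROR_PHRASES if phrase in lowered]

# A page whose "author" pasted a chatbot's refusal without editing it:
sample = "Breaking: As an AI language model, I cannot provide this article."
hits = find_llm_error_phrases(sample)
if hits:
    print("Flag for human review:", hits)
```

Matches would then be queued for human verification, mirroring the two-stage process the report describes, since a legitimate article quoting a chatbot could trigger the same strings.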
Often, perfectly well-meaning companies end up paying for junk (and sometimes inaccurate, misleading, or fake) content because they are so keen to compete for online user attention. (There's been some good stuff written about this before.) The big story here is that generative AI is being used to supercharge this whole ploy, and it's likely that this phenomenon is "going to become even more pervasive as these language models become more advanced and accessible," according to Arvanitis. And though we can expect it to be used by malign actors in disinformation campaigns, we shouldn't overlook the less dramatic but perhaps more likely consequence of generative AI: huge amounts of wasted money and resources.