Next-gen content farms are using AI-generated text to spin up junk websites


Google’s programmatic ad product, called Google Ads, is the largest exchange and made $168 billion in advertising revenue last year. The company has come under criticism for serving ads on content farms in the past, even though its own policies prohibit sites from placing Google-served ads on pages with “spammy automatically generated content.” Around a quarter of the sites flagged by NewsGuard featured programmatic ads from major brands. Of the 393 ads from big brands found on AI-generated sites, 356 were served by Google.

“We have strict policies that govern the type of content that can monetize on our platform,” Michael Aciman, a policy communications manager for Google, told MIT Technology Review in an email. “For example, we don’t allow ads to run alongside harmful content, spammy or low-value content, or content that’s been solely copied from other sites. When enforcing these policies, we focus on the quality of the content rather than how it was created, and we block or remove ads from serving if we detect violations.”

Most ad exchanges and platforms already have policies against serving ads on content farms, yet they “do not appear to uniformly enforce these policies,” and “many of these ad exchanges continue to serve ads on [made-for-advertising] sites even if they appear to be in violation of … quality policies,” says Krzysztof Franaszek, founder of Adalytics, a digital forensics and ad verification company.

Google said that the presence of AI-generated content on a page is not an inherent violation. “We also recognize that bad actors are always shifting their approach and may leverage technology, such as generative AI, to circumvent our policies and enforcement systems,” said Aciman. 

A new generation of misinformation sites

NewsGuard says that most of the AI-generated sites are considered "low quality" but "do not spread misinformation." Still, the economics of content farms already incentivize clickbaity websites riddled with junk and misinformation, and AI's ability to do the same thing at a far larger scale threatens to exacerbate the misinformation problem.

For example, one AI-written site, Medical Outline, had articles that spread harmful health misinformation with headlines like "Can lemon cure skin allergy?" "What are 5 natural remedies for ADHD?" and "How can you prevent cancer naturally?" According to NewsGuard, advertisements from nine major brands, including the bank Citigroup, the automaker Subaru, and the wellness company GNC, were placed on the site. Those ads were served via Google.

Adalytics confirmed to MIT Technology Review that ads on Medical Outline appeared to be placed via Google as of June 24. We reached out to Medical Outline, Citigroup, Subaru, and GNC for comment over the weekend, but the brands have not yet replied.  

After MIT Technology Review flagged the ads on Medical Outline and other sites to Google, Aciman said Google had removed ads that were being served on many of the sites “due to pervasive policy violations.” The ads were still visible on Medical Outline as of June 25. 
