In the wake of our reporting that CNET had been quietly publishing dozens of AI-generated articles, many expressed dismay at what seemed like an underhanded attempt to eliminate the jobs of entry-level human writers while downplaying the shoddy content to the site's readers.

One group was absolutely thrilled, however: spammers, who could scarcely contain their glee that a mainstream publisher was getting away with churning out bot-written content — and immediately expressed plans to do the same.

"Time to pump out content at ultra-high speed," rhapsodized one poster on BlackHatWorld, a notorious black hat search engine optimization forum where members trade dirty tricks and sell illicit services.

"Now is the time to maximize this pivotal moment that will cut down on cost with writers," another chimed in.

The implication was clear: that tools like ChatGPT will now allow scofflaws to pollute the internet with near-infinite quantities of bot-generated garbage, and that CNET — and its sister publication Bankrate — have now paved the way. In a way, it served as a perfect illustration of a recent warning by Stanford and Georgetown academics that AI tech could rapidly start to fill the internet with endless quantities of misinformation and profiteering.

The spammers were particularly fixated on Google's response to the CNET and Bankrate AI revelations, which they interpreted as a reversal of its previous stance that it would penalize AI-generated content in search results.

"Our ranking team focuses on the usefulness of content, rather than how the content is produced," the company's Public Search Liaison Danny Sullivan said at the time. "This allows us to create solutions that aim to reduce all types of unhelpful content in Search, whether it’s produced by humans or through automated processes."

On BlackHatWorld, spammers interpreted those remarks as kicking off an AI content free-for-all.

"This is nothing but the surrender of Google to AI," one wrote. "In short they are saying, 'we are not capable [of] distinguish[ing] between AI content and manual content and hence we are accepting it as way of life.'"

"My AI content serves the user... it serves them ads!" another poster taunted the search giant. "So all's good, right big G?"

It's true that the rise of ChatGPT-level AI-generated content represents an escalation in Google's perennial battle with spammers. But according to the company, any spammers who make a hard pivot into ChatGPT-style technology will find themselves in for a rude awakening.

As a Google spokesperson pointed out in response to questions, Sullivan and others at the search giant have expressed nuance about AI-generated content for some time now.

The spokesperson said that while some at the company have warned that Google will consider AI-generated content to be spam, they were specifically talking about bot-written content designed to manipulate the search engine's system. In reality, they said, AI-generated content isn't verboten per se — but any content designed with the primary purpose of rising in search rankings is still against the company's spam policies, and ripe for a crackdown.

"We haven't said AI content is bad," Sullivan wrote back in November 2022, for example. "We've said, pretty clearly, content written primarily for search engines rather than humans is the issue. That's what we're focused on. If someone fires up 100 humans to write content just to rank, or fires up a spinner, or a AI, same issue."

In other words, he said, the ultimate goal isn't necessarily to punish material simply because it was generated by AI, but to hold spam and unhelpful content accountable, regardless of its authorship.

"For more than 20 years, Search has adapted to address new spam techniques and low quality content, including mass-produced content in various forms," Sullivan said in a new statement provided to Futurism. "We detect more than 40 billion spammy pages every day and prevent them from appearing in search results. Content created with the aim of gaming search ranking — regardless of if it’s produced by humans or AI — is still spam and will be treated as such. "

Whether Google will be able to rise to the challenge of ubiquitous AI-generated content is likely to have momentous stakes for the quality of online information, since it's still the de facto tool used by most people to find stuff on the open web.

On the one hand, Google has vast engineering prowess, decades of experience, and immense AI tech investments of its own, all of which can be leveraged against AI-generated spam. But if CNET's efforts with AI-generated content are a sign of things to come, it will still be in for a tough fight for a simple reason: this new generation of AI-generated dreck is, on a surface level, very convincing.

Following our initial story, CNET editor-in-chief Connie Guglielmo published a rather tepid post publicly acknowledging the AI program and assuring readers that a human editor was still carefully fact-checking every article. But as we reported in response, a close read of one of CNET's AI-generated financial explainers revealed that while it was written with great confidence, it was riddled with blatant factual errors.

The AI was unable to properly explain how basic compound interest works and garbled rudimentary facts about personal banking. Worst of all, the explainer was clearly aimed at people with remedial financial literacy, meaning that if it ranked on Google, the people most in need of help would be exposed to the lowest-quality information imaginable.
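For readers unfamiliar with the concept the explainer botched, here is a minimal illustration of how compound interest actually accrues, using hypothetical figures rather than the numbers in CNET's article — the key distinction being between the interest an account earns each year and the total balance it grows to:

```python
# Hypothetical example of compound interest on a savings deposit.
# These figures are illustrative and are not taken from CNET's explainer.
principal = 10_000.00   # initial deposit, in dollars
rate = 0.03             # 3% annual interest rate
years = 3

balance = principal
for year in range(1, years + 1):
    interest = balance * rate   # interest earned this year
    balance += interest         # earned interest compounds into the balance
    print(f"Year {year}: earned ${interest:,.2f}, balance ${balance:,.2f}")

# The account *earns* only the interest each year; the balance grows toward
# principal * (1 + rate) ** years. Conflating the two is exactly the kind of
# rudimentary error a careless explainer can make.
```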

These troubling inaccuracies demonstrate that while AI can string together grammatically correct, albeit rather prosaic, sentences, it often fails to get even basic facts right despite adopting an authoritative tone.

Yet if Google's reiterated stance on spam and unhelpful content holds true, why are these AI-written explainers, plagued with embarrassing inaccuracies and honed to please search algorithms, still ranking highly? A quick search shows that CNET's AI-written explainers, complete with accuracy warnings, frequently remain among the first results on Google.

Are you a current or former CNET employee who wants to discuss the company's foray into AI-generated articles? Email tips@futurism.com to share your perspective. It's okay if you don't want to be identified by name.

Another wrinkle is that much AI content likely won't be labeled at all. Even at CNET, former staffers now allege, the AI-generated material that was labeled as such may have been only the tip of the iceberg.

At the end of the day, CNET and Bankrate still have staff who care about getting things right, and who are furious about the AI initiative. One Red Ventures employee told us this week that the internal outcry in the wake of Futurism's reporting had been so intense that Bankrate paused the publication of all AI-generated content on Wednesday. CNET appears to have done the same, at least temporarily.

The spammers on BlackHatWorld, though, are likely to have no such scruples. One in particular had a dark thought of their own about the rise of AI.

"What if Google fully accepts AI content only for them to roll out their own ChatGPT-styled search feature, where users get answers directly with no websites to click on?" they pondered. "That would be catastrophic for the entire internet."

Jon Christian contributed reporting. 

More on the CNET AI saga: CNET's Article-Writing AI Is Already Publishing Very Dumb Errors

