AI robots behind computers. Photo by Mohamed Nohassi on Unsplash

This Man Conducted a Fake News Experiment Using GPT. Here’s What Happened

Selman Seref, Head of Digital at tectrain GmbH, an IT course provider in Switzerland, had a novel idea. He decided to test the power of GPT-based AI to see whether he could fool journalists with fake news. He created an automated fake news bot that sent e-mails to more than 200 journalists. And his findings are eye-opening.

The Process

Aware of the growth in fake news, which often involves automated, mass-generated disinformation, Seref wanted to see just how dangerous the trend he calls Fake News 3.0 could be. He created a bot without malicious intent, solely as an experiment. He fine-tuned the messages it generated until they were indistinguishable from e-mails crafted by a human. Then, he got to work.

Seref built the bot as an eight-step pipeline, starting with the e-mail that would be sent to a journalist and ending with a humanized reply. The whole thing took him about three hours to create. In total, as noted, the bot processed more than 200 e-mails.

The e-mails he sent described things like how his company had introduced AI chatbots the previous year and seen improvements in satisfaction scores and response times, touted successful work with an eCommerce client, and even detailed the company’s tiered pricing strategy for SaaS. None of it was true.

The Results

Of the 200+ e-mails Seref sent, he received replies from about 25 people, half of whom asked for more stats or images. (Note: WiFi HiFi was not included on this list!) Most recipients did not reply at all, and a few responded only to say his information was too vague. What’s most troubling is that 10 people were ready to put his comments into their stories. “We selected your quote to include in our story,” wrote one journalist, who asked for screenshots to verify the stats provided. Another simply said they would be publishing his comments in an article and would let him know when it was live.

Seref alerted everyone to his experiment before things went further and anything was actually published. The reactions? Some laughed, he said, some cried, but most were flabbergasted and silent.

What Does This Mean?

If you receive an e-mail and can verify the sender’s identity through a social site like LinkedIn and the company e-mail address it was sent from, most people wouldn’t question the validity of what the sender is saying. Fact checking is crucial in journalism, of course. But does it now also need to include asking someone to confirm they’re really human and not a bot?

What’s particularly eye-opening about Seref’s experiment is that while it was conducted on a small scale, a more advanced AI bot could create thousands, even millions, of “fake news” e-mails like this. They could be rife with made-up stats, quotes from satisfied customers who don’t exist, and details about company offerings pulled from the imagination, all sent from a seemingly authentic source. Imagine this with far more sensitive information than mere company performance. (Indeed, that has already happened on numerous occasions, but it’s beyond the scope of this article.)

With AI able to create such detailed stories, fact checking by merely looking back at source e-mails may not suffice. More thorough verification is needed, particularly when dealing with sensitive and controversial subject matter.

Seref believes even he could have done more. It’s simple to obtain a list of e-mail addresses for journalists, he says, if you know where to look and are willing to pay the right price. (He found mine, after all.) With such a list in hand, he could easily categorize the addresses by area of coverage, craft personalized e-mails for each person, and completely make up stories. In essence, he could control the narrative and sculpt public opinion by sending information that appears completely legitimate. Scaled up far beyond one man with a point to prove, this process is downright terrifying.

“The staggering implication,” he writes, “isn’t merely that Fake News 3.0 is automated; it’s that this machinery can be scaled to an almost unfathomable extent. The lines between reality and fabrication blur, threatening to disrupt not just individual opinions but the very foundation of societal truth.”

What Can We Do?

As a journalist, I find this terrifying. But it’s even more terrifying as an everyday reader of news. How do you know what was generated by a real person versus an AI bot, and which stories contain factual information and which don’t? Now you also need to figure out which stories by real journalists were vetted to ensure that the sources they spoke to weren’t “fake news” bots. The other day, I received an e-mail from a reader of an article I wrote for a TV entertainment news site insisting they knew I was not real. I decided to shock them by replying that I had just checked my pulse and am most certainly alive!

Seref says the simplest way to protect yourself is to go old school. Use your critical thinking and skepticism. When you read something, don’t automatically consider it to be factual. That goes for both everyday Joes and journalists. Do your research and verify through reputable outlets that have real humans writing and researching.

Let go of biases. It’s natural to believe something that aligns with your existing belief system and to dismiss something as false just because it doesn’t. Look for multiple sources on a specific topic, even ones you might not agree with, to get a more holistic picture. As the saying goes, there’s your story, there’s my story, and somewhere in the middle lies the truth.

For journalists, fact check everything. Just because someone claims to have done something, delivers jaw-dropping stats, or offers oddly specific information doesn’t mean it’s true: do your due diligence. Ask for sourcing and verify it before publishing anything. Now more than ever, it’s crucial to ensure that information is presented by real, human journalists, and that it’s accurate and, most importantly, true. “In a world where even established journalists and respected news outlets fall victim to sophisticated fakes,” says Seref, “a thorough understanding of media processes becomes our most reliable safeguard.”

Was Seref’s study even real? I can’t say I have verified it beyond conversing with Seref via e-mail (was it even really him?), examining his full case study, and cross-referencing his existence through social profiles and the company website. If it (or he?) is fake, he has gone to elaborate lengths to fool me (and others). Real or not, however, the study brings crucial points about AI, journalism, and fake news to the forefront, all of them worth examining.

See Seref’s full case study here.