Tech

Study makes troubling discovery about AI chatbots: 'Amplifying these patterns and behaviors'

"People sometimes think that it's easier to trust a machine than it is to trust a person."

"People sometimes think that it's easier to trust a machine than it is to trust a person."

Photo Credit: Getty Images

If artificial intelligence is going to be the downfall of humankind, it may be a long way off.

A NewsGuard study found that generative AI struggles to "effectively respond to false narratives," Voice of America reported.

While this is alarming on the cusp of the U.S. presidential election, the research showed that 70% of the mistakes came in response to "bad actor" prompts; the other prompt styles tested were innocent user prompts and leading questions, and the topics covered global wars and American politics. Overall, the 10 leading chatbots repeated misinformation 18% of the time and offered nonresponses 38.3% of the time.


"Misinformation isn't new, but generative AI is definitely amplifying these patterns and behaviors," AI researcher Sejin Paik told VOA.

The "garbage in, garbage out" model, media scholar Matt Jordan said, is because AI mostly learns from low-quality sources that can be rife with misinformation and disinformation.

That's concerning because ChatGPT alone has more than doubled its user base over the last year to 200 million weekly users. "Foreign-influence campaigns are able to take advantage of such flaws," VOA reported, and Russia, Iran, and China have used generative AI in election interference efforts.

The stakes don't get much higher, and AI may also be amplifying other disinformation campaigns. In the aftermath of hurricanes Helene and Milton, for example, conspiracy theories proliferated online, sowing discord with claims among conservatives that liberals were controlling the extreme weather events to boost their candidate in the election.

The "junk science" at the root of the issue "doesn't even make sense," as one TikToker put it. A climate expert also explained how the idea was preposterous, and the News Literacy Project detailed how and why the theories snowball.

"The antidote to misinformation is to trust in reporters and news outlets instead of AI," Jordan told VOA. "People sometimes think that it's easier to trust a machine than it is to trust a person. But in this case, it's just a machine spewing out what untrustworthy people have said."

