
New academic study examines the future of trust in AI-generated news

by Richard Carufel | Jan 18, 2024 | Public Relations

Trust in AI continues to be the biggest obstacle to overcome before the tech is welcomed into business and customer service. Business leaders and consumers alike have voiced concerns about the accuracy of AI insights and the largely undefined ramifications of irresponsible use. So if people are skeptical about AI’s reliability for things like data safety and accurate product information, what are the chances they would trust it to reliably produce news stories?

The adoption of AI in the production and distribution of news has raised particular concerns about the erosion of journalistic authority, the inclusion of bias, and the spread of misinformation. These concerns are especially worrisome given that trust in news is already low in many places worldwide. New research from AI content generation firm HeyWire AI finds that scholars and practitioners are wary of how the public will respond to news generated through automated methods, prompting calls for labeling of AI-generated content.

The academic study, conducted by the University of Oxford and the University of Minnesota in collaboration with HeyWire AI, which supplied the AI-generated news stories, examines the state of public trust in AI for the future of news. The survey asked respondents, “Can AI-generated journalism help build trust among skeptical audiences?”

Two sides to the story

According to the researchers, there is a clear fear that the use of AI in news production could further damage trust, with related knock-on effects on publishers’ credibility with the audiences they seek to serve. While an increasing number of publishers have begun responding to these concerns by adding labels to AI-generated content, there is no shared consensus about what the disclosure should look like, nor is there agreement over what level of AI involvement should trigger labeling. At the same time, it’s possible that some audiences might view AI-generated news more positively precisely because of the low esteem in which many in the public already hold traditional journalism.

Proper labeling is the key

Although the findings were inconclusive, the majority of those surveyed said they were comfortable with AI-generated news as long as it was labeled as such. Not surprisingly, this acceptance varied by topic: it was highest for routine reporting such as weather or stock market developments and lowest for hard-news areas like culture, science, and politics.

“The findings of the study validate the trends in the news industry and we’re pleased to see the research of the academic community support these industry trends and our related product development methodology,” said Von Raees, founder and CEO of HeyWire AI, in a news release.

Download the full report here.

The methodology for the University of Oxford and University of Minnesota study includes a preregistered survey experiment fielded in September 2023, utilizing a quasi-representative sample of U.S. public demographics. The study’s stimuli included news stories generated by HeyWire AI on timely news topics. These included stories on Barbie, Hunter Biden’s legal troubles, and the BRICS summit in South Africa.

Richard Carufel
Richard Carufel is editor of Bulldog Reporter and the Daily ’Dog, one of the web’s leading sources of PR and marketing communications news and opinions. He has been reporting on the PR and communications industry for over 17 years, and has interviewed hundreds of journalists and PR industry leaders. Reach him at richard.carufel@bulldogreporter.com; @BulldogReporter
