
AI is creating a tsunami of change in journalism: What does that mean for the newsroom?

by David Wamsley | Mar 11, 2024 | Public Relations

In just the first two months of 2024, more than 500 journalists have been laid off at top publications, including the Los Angeles Times, The Wall Street Journal, Time, and Business Insider. While some publications are downsizing to protect themselves from the tides of economic uncertainty, others are gearing up to ride the latest wave of generative AI.

In my opinion, the rapid emergence of AI has pushed American media to an inflection point, and the publications and journalists willing to embrace the new technology are positioning themselves as the changemakers that will write the future.

For centuries, newsrooms have been entrusted to provide breaking news and smart, concise analysis of complex issues—sometimes within minutes of an event taking place. With nearly every sector of the economy now using AI algorithms to analyze data and deliver insights with unprecedented speed and efficiency, the American media should be no exception. Put plainly, given AI’s sheer force and rapid adoption across industries, I think it’s critical that newsrooms get on board now—or risk falling behind.

As a former Silicon Valley CEO during the dot-com boom and a current PR practitioner who works with AI-native, venture-backed companies, I’m confident that generative AI will prove even more transformative than the Internet has been.

Some publishers are leading the way as agents of change

The New York Times, for example, is building a team of editors and engineers to explore the use of generative AI and machine learning in the newsroom. (The company emphasizes that humans will still report, write and edit the news.) 

Newsquest has seven AI-assisted reporters who have produced thousands of stories using ChatGPT, with no errors reaching the publishing stage. Journalists review the work of these “hyper-efficient copywriters” before publication.

Global news platform Semafor launched a news feed called Signals, powered by journalists using tools from Microsoft and OpenAI, that provides diverse perspectives on developing news stories. 

The Online News Association (ONA), a nonprofit membership organization for digital journalists, rolled out the AI in Journalism Initiative—with the financial support of Microsoft—to educate media professionals through virtual training sessions and resources.

Admittedly, the media’s use of AI has encountered some real challenges

After Microsoft’s MSN.com site enlisted the help of AI last year, some fake news stories emerged, such as the false claim that President Joe Biden fell asleep during a moment of silence for victims of the Maui wildfire. 

At other times, AI’s lack of emotional intelligence was striking. When a Guardian article about Lilie James—a 21-year-old found dead with serious head injuries at an Australian school—was republished on MSN, an AI-generated poll asked readers, “What do you think is the reason behind the woman’s death?” The options: murder, accident or suicide.

These are oversights that any trained fact-checker should catch before a story is published, especially when working with AI. 

Understandably, using AI in the newsroom is facing resistance

One such skeptic is Matt Pearce, a reporter for the Los Angeles Times. During an online panel hosted by the USC Annenberg School for Communication and Journalism, he said, “I am just extremely leery of presuming that there is necessarily a future for generative AI in some types of work when the premise of what journalism is is that it has to be accurate.”

Pearce added, “A lot of us are covering the Israel-Gaza conflict right now, and you have to be able to account for every word that you choose to use in a story because people are watching like a hawk, and they expect that you’re going to be using highly accurate and justifiable language in every element of this entirely contentious conflict.”

On the other hand, AI in the newsroom has its ardent supporters

The New York Times’ technology columnist Kevin Roose admitted back in 2023, “I still write my columns myself. But over the past few months, I’ve enlisted ChatGPT as my assistant. When I’m stuck, I often paste in a few sentences and see if it can spark any ideas. If I’m trying to tighten an argument, I’ll ask it to poke holes in my reasoning.”

And just two weeks ago, Roose surprised the tech community and his readers when he publicly disclosed his newfound allegiance to the AI-powered search engine Perplexity over Google.

Of course, it’s not a one-way street. The newsroom and AI must strike a balance of give and take. Generative AI shouldn’t come at the expense of hands-on journalism, and publishers and editors must learn to integrate these powerful new tools into their workflows.

So should AI companies invest more in tools to detect and eliminate misinformation and deepfakes? Of course.

Should editors, journalists and media staffers be trained on the pros and cons of these new AI tools? Absolutely.

Should publishers be compensated for their copyrighted work—an idea that has gained traction with The New York Times’ recent lawsuit against Microsoft and OpenAI? This one is more complicated, with good arguments on both sides of the issue.

One thing is clear: newsrooms and AI can—and should—coexist

News organizations that incorporate AI will see improvements in the breadth and speed of disseminating information, while still maintaining the human perspective and critical analysis essential for reporting complex stories.

Any major technological shift of this magnitude can be not only challenging but downright scary. That said, newsrooms can either learn the new technology, adjust to the new reality, and embrace the change, or they can hunker down and try to resist. In the case of AI, I believe the writing is on the wall.

David Wamsley
David Wamsley is CEO of Rosebud Communications.
