Reuters experiment spotlights newsroom test for deep fake content

An experiment by Reuters to create fake video content has helped develop best-practice techniques for identifying so-called deep fake content.

Fake news is disinformation spread by social and traditional media. It has always been a feature of the media but rose to prominence in 2016 during the propaganda-fuelled EU Referendum and US Election.

Deep fake goes a step further. It’s the manipulation of images and video of people to remove the original context and superimpose an alternative meaning. It’s a growing concern for democracy and public discourse.

Manipulation can range from altering the words an individual appears to speak to depicting them performing sexual acts.

Deep fake content is created by manipulating existing images and video. Audio and animation are added to the original footage, and machine learning is used to sync the audio with the images.

It’s an issue that has news organisations on high alert. Social media producers at Reuters describe receiving videos that have been stripped of context, mislabelled, edited, staged or modified using CGI.

“Overall, levels of video manipulation are escalating, making day-to-day operations challenging for newsrooms,” said journalist and editor Giles Crosse.

“Checking and verification demands time, skill and judgement within an increasingly fast-paced, news-first dynamic.”

A newsgathering team at Reuters conducted an experiment to understand how easy it is to create deep fake content, with the goal of developing best practice for identifying it.

One subject was interviewed on camera in one language, while a second subject was interviewed in a different language.

The two recordings were combined using artificial intelligence production techniques, and captions were added to the final cut. I’d challenge you to identify the result as fake.

“These types of video fakes pose a serious challenge to our ability to make factual, accurate judgements,” said Hazel Baker, head of UGC newsgathering.

“They’re a danger, harming the reliable facts essential to democracy, politics and society, and as such we need to be prepared to detect them.”

The newsroom team identified three red flags that help expose deep fake content; a rough sketch of automating the first check follows the list.

  • audio-to-video synchronisation issues

  • unusual mouth shape, particularly with sibilant sounds

  • a static subject
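
To make the first check concrete, here is a minimal Python sketch, not Reuters’ method, that correlates motion in the mouth region of each frame against the loudness of the audio track. The file names, the hard-coded mouth crop and the 16-bit mono WAV input are all placeholder assumptions; a real verification tool would locate the mouth with face-landmark detection and use a trained lip-sync model.

```python
# Rough sketch of the first red flag: does mouth movement line up with speech?
# Assumptions (placeholders): clip.mp4 / clip.wav file names, a fixed mouth
# crop, and 16-bit mono PCM audio extracted beforehand (e.g. with ffmpeg).
import wave

import cv2  # pip install opencv-python
import numpy as np

VIDEO = "clip.mp4"                       # clip under review (assumed name)
AUDIO = "clip.wav"                       # its audio track (assumed name)
Y0, Y1, X0, X1 = 300, 420, 260, 380      # placeholder mouth crop

# Per-frame motion energy inside the mouth crop.
cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS)
motion, prev = [], None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    crop = cv2.cvtColor(frame[Y0:Y1, X0:X1], cv2.COLOR_BGR2GRAY).astype(float)
    if prev is not None:
        motion.append(np.abs(crop - prev).mean())
    prev = crop
cap.release()
motion = np.asarray(motion)

# Audio loudness (RMS) per video frame, assuming 16-bit mono PCM.
with wave.open(AUDIO) as w:
    rate = w.getframerate()
    samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
hop = int(rate / fps)                      # audio samples per video frame
n = min(len(motion), len(samples) // hop)  # align the two signals
motion = motion[:n]
rms = np.asarray([np.sqrt(np.mean(samples[i * hop:(i + 1) * hop].astype(float) ** 2))
                  for i in range(n)])

def zscore(x):
    return (x - x.mean()) / (x.std() + 1e-9)

# Try small time shifts; genuine footage should correlate best near lag 0.
lags = range(-15, 16)
scores = [float(np.dot(zscore(motion), zscore(np.roll(rms, lag)))) for lag in lags]
best = lags[int(np.argmax(scores))]
print(f"Strongest audio/video alignment at {best:+d} frames "
      f"({best / fps * 1000:+.0f} ms); a large offset warrants closer review.")
```

This is only a heuristic: it flags clips whose mouth movement and speech energy align badly, which then deserve the human judgement and verification framework Reuters recommends.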

Reuters cautions that even these tests may not be sufficient as the technology continues to improve, and suggests that news organisations employ a combination of human judgement, subject expertise and a well-established verification framework to minimise the impact of fakes.
