Publishers need to prepare for the onset of widely available deepfake technology, which uses deep-learning tools to manipulate videos and audio, according to Reuters.
“The ability to spread such content is evolving much, much faster than anyone’s ability to control it,” observed Nick Cohen, global head of video products at the news agency, speaking at the SXSW 2019 event in Austin, Texas. (For a full treatment, read WARC’s exclusive report: Publishers, brands, and the threat of "deep fakes".)
Two key examples, both involving doctored footage of President Barack Obama, caused a splash by demonstrating the power of the new technology. One used the voice of the comedian Jordan Peele while manipulating video of Obama speaking. The other was created by University of Washington researchers, who developed a video of Obama speaking that spliced in quotations from a talk-show appearance made several years earlier.
“The results aren’t perfect by any means, but it’s a small leap of imagination to see how these techniques could be used, whether intentionally or accidentally, to manipulate,” Cohen said.
“Thankfully, we’ve not yet had an explosive news story resulting from a ‘deep fake’, but many in our industry fear it is simply a matter of time.”
The problem is not that video can be manipulated; that has been possible for a long time, but making it convincing required significant resources, studios, and an army of video editors. With “deep-fake” technology, that process is suddenly democratised.
Reuters, whose entire business is based on providing breaking news reporting, photography, and video to publishers and financial institutions around the world, has a crucial role in responding to the technology. “We’ve got a key role in the global information ecosystem, and it’s one we take very seriously,” Cohen said.
Alongside the 2,500 journalists that Reuters employs, the agency also draws on user-generated content in its reporting. While it doesn’t replace professional coverage, “It enhances our storytelling and is now a vital component of digital storytelling in 2019”, explained Hazel Baker, head of user-generated content (UGC) newsgathering at the news agency. Verification is now a top priority: “There are so many ways that people try to mislead us,” she said.
Though there are numerous techniques that Reuters and others use, “deep fakes” constitute a shift in the landscape and require Reuters to respond in new ways.
Reuters has developed a framework for classifying fake and misleading video:
- Zero manipulation: genuine videos that are misattributed or re-used from an earlier event. Journalists verify this content using reverse searches.
- Edited video: less frequent, but often involves replacing the audio on a fuzzy or pixelated video to suggest a different location.
- Staged video: often not made with malicious intent, but can be reported without the necessary supporting information.
- Computer-generated imagery (CGI): an example from the 2018 FIFA World Cup showed how a CGI video can spread on social media and then be reported by mainstream news outlets (in this case, the UK’s Daily Mail).
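The reverse-search idea behind catching "zero manipulation" re-use can be sketched in code. The snippet below is a minimal, hypothetical illustration (not Reuters' actual tooling): it computes a simple perceptual "average hash" of a small grayscale frame and flags a candidate clip as likely re-used when its hash nearly matches archived footage. All names and thresholds here are illustrative assumptions.

```python
def average_hash(pixels):
    """Hash an 8x8 grayscale frame: one bit per pixel, set if the pixel
    is brighter than the frame's mean. Similar frames get similar hashes."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def looks_reused(candidate_hash, archive_hashes, max_distance=5):
    """Flag a frame as likely re-used footage if it is within a small
    Hamming distance of anything already in the archive index."""
    return any(hamming(candidate_hash, h) <= max_distance for h in archive_hashes)
```

In practice, services like Google reverse image search or dedicated tools apply far more robust variants of this idea at scale; the point is that a near-duplicate match against known footage is strong evidence of misattribution rather than new events.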
Sourced from WARC