I recently told you about how Facebook and Google have funded research into detecting “deepfakes,” videos that use AI to create realistic footage of a person appearing to say or do something they never did. Well, it’s a good thing those two wealthy corporations are looking into it, because, apparently, you can’t fight fire with fire.
Or, in this case, you can’t fight AI with AI.
The Verge recently cited a report from Data & Society which argues that deepfakes require more than just a technological fix.
“Relying on AI could actually make things worse by concentrating more data and power in the hands of private corporations,” says The Verge.
AI is New, But Media Manipulation is Not
The report says that the relationship between media and truth has never been stable. Consider these examples:
- 1850s: When judges began allowing photographic evidence in court, people mistrusted the new technology and preferred witness testimony and written records
- 1990s: Media companies misrepresented events by editing images from evening broadcasts — especially in reports about the Gulf War
What’s Being Done About It (& Will It Work?)
Facebook and Google are sponsoring research aimed at stopping deepfake videos. Truepic, which uses AI and blockchain technology to detect manipulated images, has also gained attention. And the US Defense Advanced Research Projects Agency (DARPA) has funded MediFor (Media Forensics), a program that examines videos at the pixel level to suss out deepfakes.
But these fixes aim to stop deepfake videos either at the point of capture or at the detection level, says The Verge. The report, however, worries that AI content filters “make things better for some but could make things worse for others…” by creating “openings for companies to capture all sorts of images and create a repository of online life.”