“Threats to U.S. national security will expand and diversify in the coming year, driven in part by China and Russia as they respectively compete more intensely with the United States and its traditional allies and partners.”
This eye-opening statement comes from a 2019 report by the Senate Select Committee on Intelligence, which attempts to highlight the main threats to U.S. security and integrity. One of the main chapters addresses the threats coming from online influence operations and election interference, specifically pointing toward the use of “deepfakes or similar machine-learning technologies to create convincing — but false — image, audio and video files to augment influence campaigns directed against the United States and our allies and partners.”
As explained in the recent report by NYU Stern, the term deepfake comes from the combination of “deep learning” and “fake.” Deep learning has gained tremendous attention in recent years in the field of artificial intelligence due to outstanding results in many tasks, often outperforming human specialists in the fields of natural language and image processing.
The democratization of deep learning frameworks, together with tremendous progress in speech and video processing, has given rise to the popularity of deepfakes. As further elaborated in the NYU Stern report, these artificial neural network-backed solutions study photographs and videos of a target person, say a politician, as well as of a second person, typically an actor, to create a video in which the target person appears to be behaving and speaking exactly as the original actor did.
In other words, deepfakes are used to manipulate the appearance and voices of people to simulate real-looking footage through the use of machine learning algorithms. Probably the most well-known deepfake is the one in which Barack Obama gives a public service announcement using words that are far from politically correct.
Other examples that attracted attention in 2019 include Mark Zuckerberg boasting of how the platform “owns” its users or the heavily discussed alleged deepfake of Gabon’s president delivering a New Year’s address. One week after the release of the second video, Gabon’s military attempted a coup, citing the video’s odd nature as proof something was wrong with the president.
Deepfakes are becoming increasingly sophisticated, and the limitless potential of how they can be used raises justified concerns. For example, Congress held its first hearing on the emerging disinformation threat of deepfakes earlier this year. With upcoming elections in both the European Union and the United States, the task of automatically detecting deepfakes has become one of the hottest research topics in AI.
Current detection methods typically focus on unmasking the tiny imperfections left behind by the algorithms, such as boundary artifacts like unnatural corners of the lips, shadow inconsistencies or doubled eyebrows. Yet the technology is ever-evolving, and today's weaknesses may very well become the strengths of tomorrow (you can try it for yourself with this online test). One of the main limitations researchers face these days is the lack of good-quality data — actual deepfakes that can act as training material for AI tasked with detecting the forgery.
To address this issue and help research teams around the world work on robust deepfake-detection algorithms, some of the industry's biggest companies, together with several renowned universities, decided to act. These organizations developed the Deepfake Detection Challenge (DFDC).
The aim of the challenge is to stimulate more research and development and ensure there are better open-source tools to detect doctored videos. The competition will run until the middle of 2020, and one of its main features is a set of deepfakes featuring professional actors in a variety of scenes, orchestrated to resemble real-user Facebook videos.
“The ultimate goal [of the challenge] isn’t to create a system that will stop all deepfakes forever, but to find ways to make it harder and more expensive to create passable deepfakes,” said Mike Schroepfer, CTO of Facebook.
Initiatives like this are of paramount importance for limiting the options available to bad actors, but while we wait for tangible results, there are steps you can take to watch out for false videos and keep your guard up.
Because today's algorithms process videos on a frame-by-frame basis, the resulting videos often lack full temporal coherence. This leads to odd artifacts that appear every now and then. Slowing a video down can reveal inconsistencies in the way mouths are moving.
More concretely, these algorithms still struggle with faithful representations of teeth, tongues and the interior of mouths. In addition, the boundaries between the edges of one’s face or hair and the background of a scene often contain unnatural digital smearings, which make these areas of the video look as if they’re filtered in an unrealistic way. Visual anomalies can also be seen around the eyebrows or the chin, which sometimes look like they’ve been doubled, as well as areas of the skin that have different textures compared to the rest of the face.
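The frame-by-frame incoherence described above can even be checked programmatically in a crude way: because each frame is synthesized somewhat independently, genuine footage tends to change smoothly from frame to frame, while a glitchy fake can show abrupt jumps. The sketch below is purely illustrative — the function name, threshold, and synthetic "video" are our own, not part of any detection tool mentioned in this article — and it flags frames whose difference from the previous frame is a statistical outlier.

```python
import numpy as np

def flag_inconsistent_frames(frames, threshold=10.0):
    """Flag frames whose change from the previous frame is anomalously large.

    frames: list of equally sized grayscale images (2-D NumPy arrays).
    Returns the indices of frames whose mean absolute difference from the
    previous frame deviates from the median difference by more than
    `threshold` times the median absolute deviation (MAD).
    """
    diffs = np.array([
        np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        for i in range(1, len(frames))
    ])
    median = np.median(diffs)
    mad = np.median(np.abs(diffs - median)) or 1e-9  # guard against zero MAD
    # diffs[i] compares frame i and frame i + 1, so shift indices by one.
    return [i + 1 for i, d in enumerate(diffs) if abs(d - median) / mad > threshold]

# Synthetic demo: a smoothly brightening "video" with an abrupt glitch at frame 5.
rng = np.random.default_rng(0)
frames = [np.full((8, 8), i, dtype=float) + rng.normal(0, 0.01, (8, 8))
          for i in range(10)]
frames[5] += 50  # simulate a temporal artifact
print(flag_inconsistent_frames(frames))  # flags the jump into and out of frame 5
```

A real detector would of course work on decoded video frames and far subtler cues (mouth interiors, boundary smearing), but the same idea — compare consecutive frames and look for statistical outliers — underlies many simple temporal-consistency checks.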
As the fight against deepfakes continues, you and your business can safeguard against these forgeries by keeping best practices like those mentioned above in mind. Always be a little skeptical at first glance, and you may save yourself from error down the line.