Don’t Believe Your Eyes: Deepfake Videos Are Coming to Fool Us All

In 2017, an anonymous Reddit user under the pseudonym “deepfakes” posted links to pornographic videos that appeared to feature famous mainstream celebrities. The videos were fake. And the user created them using off-the-shelf artificial intelligence (AI) tools.

Two months later, Reddit banned the deepfakes account and related subreddit. But the ensuing scandal revealed a range of university, corporate and government research projects under way to perfect both the creation and detection of deepfake videos.

Where Deepfakes Come From (and Where They’re Going)

Deepfakes are created using an AI technique called generative adversarial networks (GANs), which can be used broadly to create fake data that passes as real data. To oversimplify how GANs work, two machine learning (ML) models are pitted against each other: a generator creates fake data, and a discriminator judges that fake data against a set of real data. They repeat this contest at massive scale, with the generator steadily getting better at faking and the discriminator steadily getting better at judging. When both models become extremely good at their respective tasks, the product is a set of high-quality fake data.
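To make that adversarial loop concrete, here is a minimal sketch in PyTorch (a hypothetical toy example, not code from any deepfake tool): a generator learns to mimic a simple one-dimensional Gaussian "real" distribution while a discriminator learns to tell its output from genuine samples.

```python
# Minimal GAN sketch: a generator learns to mimic a 1-D Gaussian "real"
# distribution while a discriminator learns to tell real from fake.
# Toy illustration of the adversarial loop only; real deepfake models
# are far larger and operate on images, not scalars.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "authentic" data: N(4, 1.5^2)
    fake = generator(torch.randn(64, 8))    # generated candidates

    # Discriminator turn: score real samples as 1, generated ones as 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator turn: try to make the discriminator score fakes as 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

samples = generator(torch.randn(1000, 8))
print(f"fake mean={samples.mean().item():.2f}, std={samples.std().item():.2f}")
```

In an actual deepfake pipeline, the scalar samples become face images and both networks are deep convolutional models, but the back-and-forth contest is the same.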

In the case of deepfakes, the authentic data set consists of hundreds or thousands of still photographs of a person's face, giving the networks a wide selection of images showing the face from different angles and with different expressions to draw on and judge against as they learn to insert the face convincingly into the video.

Carnegie Mellon University scientists even figured out how to impose the style of one video onto another using a technique called Recycle-GAN. Instead of convincingly replacing someone's face with another's, the Recycle-GAN process lets the target be used like a puppet, mirroring every head movement, facial expression and mouth movement of the source video. The process is also more automated than previous methods.
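As a rough illustration (a simplified sketch in the spirit of the published Recycle-GAN loss, not the authors' code), the core "recycle" constraint translates consecutive frames into the target domain, predicts the next frame there, translates that prediction back, and penalizes any mismatch with the real next frame. The functions g_xy, g_yx and p_y below are hypothetical stand-ins for the paper's cross-domain generators and temporal predictor.

```python
# Simplified "recycle" consistency loss in the spirit of Recycle-GAN.
# g_xy and g_yx are hypothetical stand-ins for the generators mapping
# domain X -> Y and Y -> X; p_y stands in for the temporal predictor
# that guesses the next Y-frame from the two previous ones.
import torch

def recycle_loss(frames_x, g_xy, g_yx, p_y):
    """frames_x: consecutive video frames, shape (T, C, H, W)."""
    loss = 0.0
    for t in range(frames_x.shape[0] - 2):
        # Translate two consecutive X-frames into domain Y ...
        y_t = g_xy(frames_x[t:t + 1])
        y_t1 = g_xy(frames_x[t + 1:t + 2])
        # ... predict the following Y-frame from them ...
        y_t2_pred = p_y(torch.cat([y_t, y_t1], dim=1))
        # ... then map the prediction back to X and penalize any mismatch
        # with the real next frame, forcing temporally coherent mappings.
        loss = loss + torch.mean((g_yx(y_t2_pred) - frames_x[t + 2:t + 3]) ** 2)
    return loss

# Toy check with identity stand-ins (the real networks are conv nets):
frames = torch.zeros(4, 3, 8, 8)
ident = lambda x: x
pred = lambda pair: pair[:, :3]  # "predict" by copying the first frame
print(recycle_loss(frames, ident, ident, pred))  # tensor(0.)
```

The temporal prediction step is what distinguishes this from a plain cycle-consistency loss, and it is part of what makes the process more automated than earlier face-swapping methods.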

Most of these videos today are pornography featuring celebrities, satire created for entertainment or research projects demonstrating rapidly advancing techniques. But deepfakes are likely to become a major security concern. Today's security systems rely heavily on surveillance video and image-based biometric authentication, and since the majority of breaches start with social engineering-based phishing attacks, it's all but certain that criminals will turn to deepfakes to make those attacks more convincing.

Deepfake Videos Are Getting Really Good, Really Fast

The earliest publicly demonstrated deepfake videos tended to show seated talking heads. Now, full-body deepfakes developed in separate research projects at Heidelberg University and the University of California, Berkeley can transfer the movements of one person onto another. This has security implications: one form of biometric authentication relies on gait analysis, and full-body deepfakes suggest that the gait of an authorized person could be transferred in video to an unauthorized one.

Here’s another example: Many cryptocurrency exchanges authenticate users by having them photograph themselves holding up their passport or another form of identification, along with a piece of paper bearing something like the current date. Such a check is easily defeated with Photoshop. Some exchanges, such as Binance, saw so many attempts by criminals to access accounts using doctored photos that they and others moved from photos to video. Security analysts worry that it’s only a matter of time before deepfakes become good enough that neither photos nor videos like these can be trusted.

The biggest immediate threat at the intersection of deepfakes and security, however, is social engineering. Imagine a video call or message that appears to come from your supervisor or IT administrator, instructing you to divulge a password or send a sensitive file. That’s a scary future.

What’s Being Done About It?

Increasingly realistic deepfakes have enormous implications for fake news, propaganda, social disruption, reputational damage, evidence tampering, evidence fabrication, blackmail and election meddling. Another concern is that the perfection and mainstreaming of deepfakes will cause the public to doubt the authenticity of all videos.

Security specialists, of course, will need to have such doubts as a basic job requirement. Deepfakes are a major concern for digital security specifically, but also for society at large. So what can be done?

University Research

Some researchers say that analyzing the way a person in a video blinks, and how often, is one way to detect a deepfake. In general, deepfakes show insufficient or even nonexistent blinking, and the blinking that does occur often appears unnatural. Breathing is another movement usually missing from deepfakes, and hair is a frequent giveaway as well: it often looks blurry or painted on.
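One simple heuristic in this family is the eye aspect ratio (EAR), a standard measure from blink-detection research. The sketch below is a hypothetical illustration, not any particular research team's detector; it assumes an external landmark detector has already located six points around each eye, and it flags clips whose blink count is implausibly low.

```python
# Blink-rate heuristic built on the eye aspect ratio (EAR), a standard
# measure from blink-detection research. Assumes an external landmark
# detector (dlib, MediaPipe, etc.) has already produced six (x, y)
# points per eye: [outer corner, top-1, top-2, inner corner, bottom-2, bottom-1].
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of shape (6, 2). The ratio falls toward 0 as the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal corner-to-corner distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, closed_thresh=0.2, min_frames=2):
    """Count dips below closed_thresh lasting at least min_frames frames."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# People blink roughly 15-20 times per minute, so a multi-minute clip of a
# talking head with a near-zero blink count deserves closer inspection.
```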

Researchers from the State University of New York (SUNY) at Albany developed a deepfake detection method that uses AI technology to look for natural blinking, breathing and even a pulse. It’s only a matter of time, however, before deepfakes make these characteristics look truly “natural.”

Government Action

The U.S. government is also taking precautions: Congress could consider a bill in the coming months to criminalize both the creation and distribution of deepfakes. Such a law would likely be challenged in court as a violation of the First Amendment, and would be difficult to enforce without automated technology for identifying deepfakes.

The government is working on the technology problem, too. The National Science Foundation (NSF), Defense Advanced Research Projects Agency (DARPA) and Intelligence Advanced Research Projects Agency (IARPA) are all looking for technology to automate the identification of deepfakes. DARPA alone has spent $68 million on media forensics capabilities to spot deepfakes, according to the CBC.

Private Technology

Private companies are also getting in on the action. A new cryptographic authentication tool called Amber Authenticate runs in the background while a device records video. As reported by Wired, the tool generates hashes, or "scrambled representations," of the data at user-determined intervals, which are then recorded on a public blockchain. If the video is manipulated in any way, the hashes no longer match, alerting the viewer that the video has likely been tampered with. A dedicated player shows a green frame around portions of video that are faithful to the original and a red frame around segments that have been altered. The system has been proposed for police body cameras and surveillance video.
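The mechanism is simple to sketch. The following is a hypothetical illustration of interval hashing for tamper-evident video, not Amber's actual implementation; the ledger here is a plain list standing in for the public blockchain.

```python
# Sketch of interval hashing for tamper-evident video, in the spirit of
# tools like Amber Authenticate (hypothetical illustration, not its code).
# The "ledger" is a plain list; a real system would anchor each digest
# on a public blockchain so it cannot be silently rewritten.
import hashlib

SEGMENT_BYTES = 1 << 20  # hash every 1 MiB; the interval is user-tunable

def record_hashes(video_bytes, ledger):
    """Called while recording: append one SHA-256 digest per segment."""
    for i in range(0, len(video_bytes), SEGMENT_BYTES):
        ledger.append(hashlib.sha256(video_bytes[i:i + SEGMENT_BYTES]).hexdigest())

def verify(video_bytes, ledger):
    """Called at playback: green/red per segment, like the player described above."""
    results = []
    for n, i in enumerate(range(0, len(video_bytes), SEGMENT_BYTES)):
        digest = hashlib.sha256(video_bytes[i:i + SEGMENT_BYTES]).hexdigest()
        results.append("green" if n < len(ledger) and digest == ledger[n] else "red")
    return results

ledger = []
original = bytes(3 * SEGMENT_BYTES)  # stand-in for raw footage
record_hashes(original, ledger)
tampered = original[:SEGMENT_BYTES] + b"\x01" + original[SEGMENT_BYTES + 1:]
print(verify(original, ledger))   # ['green', 'green', 'green']
print(verify(tampered, ledger))   # ['green', 'red', 'green']
```

Because each digest covers only one interval, a single altered frame flips only that segment's hash, which is what lets the player localize tampering rather than merely reject the whole file.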

A similar approach was taken by a company called Factom, whose blockchain technology is being tested for border video by the Department of Homeland Security (DHS), according to Wired.

Security Teams Should Prepare for Anything and Everything

The solution to deepfakes may lie in some combination of education, technology and legislation, but none of it will work without the technology piece. When deepfakes get really good, as they inevitably will, only machines will be able to tell real videos from fake ones. That detection technology is coming, but nobody knows when it will arrive. We should also assume an arms race, with malicious deepfake actors inventing new methods to defeat the latest detection systems.

Security professionals need to consider the coming deepfake wars when analyzing future security systems. If they’re video or image based — everything from facial recognition to gait analysis — additional scrutiny is warranted.

In addition, you should add video to the long list of media you cannot implicitly trust. Just as training programs and digital policies make clear that email may not come from who it appears to come from, video will need to be met with similar skepticism, no matter how convincing the footage. Deepfake technology will also inevitably be deployed for blackmail, extracting sensitive information from companies and individuals.

The bottom line is that deepfake videos that are indistinguishable from authentic videos are coming, and we can scarcely imagine what they’ll be used for. We should start preparing for the worst.
