It’s time to face a stark reality: Threat actors will soon gain access to artificial intelligence (AI) tools that will enable them to defeat multiple forms of authentication, from passwords to biometric security systems and even facial recognition software, as well as identify targets on networks and evade detection. And they’ll be able to do all of this on a massive scale.
Sounds far-fetched, right? After all, AI is difficult to use, expensive and can only be produced by deep-pocketed research and development labs. Unfortunately, this just isn’t true anymore; we’re now entering an era in which AI is a commodity. Threat actors will soon be able to simply go shopping on the dark web for the AI tools they need to automate new kinds of attacks at unprecedented scales. As I’ll detail below, researchers are already demonstrating how some of this will work.
When Fake Data Looks Real
Understanding the coming wave of AI-powered cyberattacks requires a shift in thinking; AI-based unified endpoint management (UEM) solutions can help defenders make that shift. Many in the cybersecurity industry assume that AI will be used to simulate human users, and that’s true in some cases. But a better way to understand the AI threat is to recognize that security systems are built on data. Passwords are data. Biometrics are data. Photos and videos are data, and new AI is coming online that can generate fake data that passes as the real thing.
One of the most challenging AI technologies for security teams is a relatively new class of algorithms called generative adversarial networks (GANs). In a nutshell, a GAN can learn to imitate or simulate virtually any distribution of data, including biometric data.
In simplified terms, a GAN pits one neural network against a second in a kind of game. One neural net, the generator, tries to simulate a specific kind of data, while the other, the discriminator, judges the generator’s attempts against real data and reports back on the quality of the simulated samples. As this progresses, both networks learn: the generator gets better at simulating data, and the discriminator gets better at judging it. The product of this “contest” is a large amount of fake data, produced by the generator, that can pass as the real thing.
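To make that game concrete, here is a minimal, hypothetical sketch in PyTorch. The toy task (mimicking a one-dimensional Gaussian) and all the names are mine, not anything from the research discussed below; the same training loop, scaled up, is what produces convincing fake images and biometric data.

```python
# Toy GAN: the generator learns to mimic samples from N(4.0, 1.5).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data distribution
    fake = G(torch.randn(64, 8))

    # Discriminator turn: learn to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator turn: learn to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"fake mean={samples.mean().item():.2f}, std={samples.std().item():.2f}")
```

On this toy task, the generator’s output converges toward the target mean and spread of roughly 4.0 and 1.5: fake data that the discriminator can no longer tell from the real thing.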
GANs are best known as the foundational technology behind deepfake videos that convincingly show people doing or saying things they never did or said. Applied to hacking consumer security systems, GANs have been demonstrated, at least in theory, to be keys that can unlock a range of biometric security controls.
Machines That Can Prove They’re Human
CAPTCHAs are a form of lightweight website security you’re likely familiar with. By making visitors “prove” they’re human, CAPTCHAs act as a filter to block automated systems from gaining access. One typical kind of CAPTCHA asks users to identify numbers, letters and characters that have been jumbled, distorted and obfuscated. The idea is that humans can pick out the right symbols, but machines can’t.
However, researchers at Northwest University and Peking University in China and Lancaster University in the U.K. claimed to have developed a GAN-based algorithm that can break most text-based CAPTCHAs within 0.05 seconds. In other words, they’ve trained a machine that can prove it’s human. The researchers concluded that CAPTCHAs should no longer be relied upon as a front-line website defense: their technique needs only a small amount of training data (around 500 test CAPTCHAs selected from 11 major CAPTCHA services), and both the machine learning step and the cracking step run quickly on a single standard desktop PC.
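The paper’s actual pipeline uses a GAN to synthesize training CAPTCHAs; as a much simpler, hypothetical stand-in, the sketch below trains a solver entirely on synthetically generated distorted digits. Everything here is my own construction, but it illustrates the core idea that a small amount of generated data can teach a machine to read obfuscated text.

```python
# Toy CAPTCHA solver: learn to read rotated, noisy digits from synthetic data.
import numpy as np
from PIL import Image, ImageDraw, ImageFont
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
font = ImageFont.load_default()

def synth_digit(d):
    """Render one digit with random rotation and pixel noise."""
    img = Image.new("L", (28, 28), 0)
    ImageDraw.Draw(img).text((9, 7), str(d), fill=255, font=font)
    img = img.rotate(rng.uniform(-30, 30), fillcolor=0)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return arr.ravel() + rng.normal(0, 0.1, arr.size)

# 500 synthetic samples, echoing the small training set in the paper.
labels = rng.integers(0, 10, 500)
data = np.stack([synth_digit(d) for d in labels])
x_tr, x_te, y_tr, y_te = train_test_split(data, labels, random_state=0)

solver = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
print(f"accuracy on held-out distorted digits: {solver.score(x_te, y_te):.2f}")
```

A real solver would use a deeper network and far heavier distortions, but the design point stands: the defender’s “only humans can read this” assumption fails once an attacker can generate unlimited labeled examples of the distortion.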
Faking Fingerprints
One of the oldest tricks in the book is the brute-force password attack. The most commonly used passwords have been well-known for some time, and many people use passwords that can be found in the dictionary. So if an attacker throws a list of common passwords, or the dictionary, at a large number of accounts, they’re going to gain access to some percentage of those targets.
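A minimal sketch of the idea follows, with a made-up wordlist and a single target hash. Real attacks run lists of millions of leaked passwords, and real systems salt and slow their hashes, which this toy deliberately omits.

```python
# Classic dictionary attack: hash each common password and compare.
import hashlib

# Hypothetical unsalted hash recovered from a breach dump.
leaked_hash = hashlib.sha256(b"sunshine").hexdigest()
wordlist = ["123456", "password", "qwerty", "sunshine", "dragon"]

for guess in wordlist:
    if hashlib.sha256(guess.encode()).hexdigest() == leaked_hash:
        print(f"cracked: {guess}")
        break
```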
As you might expect, GANs can produce high-quality password guesses. Thanks to this technology, it’s now also possible to launch a brute-force fingerprint attack. Fingerprint identification — like the kind used by major banks to grant access to customer accounts — is no longer safe, at least in theory.
Researchers at New York University and Michigan State University recently conducted a study in which GANs were used to produce fake-but-functional fingerprints that also look convincing to human observers. They said their method worked because of a flaw in the way many fingerprint ID systems operate: instead of matching against the full fingerprint, most consumer fingerprint systems try to match only a portion of it.
The GAN approach enables the creation of thousands of fake fingerprints with the highest likelihood of matching the partial prints the authentication software looks for. Once a large set of high-quality fake fingerprints is produced, the attack is essentially a brute-force attack that uses fingerprint patterns instead of passwords. The good news is that many consumer fingerprint sensors use heat or pressure to detect whether an actual human finger is providing the biometric data.
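To see why partial matching favors the attacker, here is a back-of-the-envelope simulation. The binary templates, similarity threshold and counts are invented for illustration and have nothing to do with the study’s actual matcher; the point is only that a fake print needs to hit any one of several enrolled partial templates, which multiplies its chances.

```python
# Toy model: false-accept rate vs. number of enrolled partial templates.
import numpy as np

rng = np.random.default_rng(1)
BITS, THRESH = 64, 0.625   # made-up template size and match threshold

def matches(a, b):
    """Accept if the two templates agree on enough bits."""
    return np.mean(a == b) >= THRESH

def false_accept_rate(templates_per_user, attempts=2000):
    """Fraction of random fake prints accepted against one enrolled user."""
    enrolled = rng.integers(0, 2, (templates_per_user, BITS))
    fakes = rng.integers(0, 2, (attempts, BITS))
    hits = sum(any(matches(f, t) for t in enrolled) for f in fakes)
    return hits / attempts

print("1 full template :", false_accept_rate(1))   # roughly a few percent
print("8 partial views :", false_accept_rate(8))   # several times higher
```

In this toy model, storing eight partial views instead of one full template raises the false-accept rate severalfold, and a GAN that optimizes its fakes toward likely partial patterns does even better than these random guesses.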
Is Face ID Next?
One of the most outlandish schemes for fooling biometric security involves tricking facial recognition software with fake faces. This was a trivial task with 2D technologies, in part because 2D facial data could be captured with an ordinary camera, at a distance and without the target’s knowledge. But with the emergence of the high-definition 3D technologies found in many smartphones, the task becomes much harder.
A journalist working at Forbes tested four popular Android phones, plus an iPhone, using 3D-printed heads made by a company called Backface in Birmingham, U.K. The studio used 50 cameras and sophisticated software to scan the “victim.” Once a complete 3D image was created, the life-size head was 3D-printed, colored and, finally, placed in front of the various phones.
The results: All four Android phones unlocked with the phony faces, but the iPhone didn’t.
This method is, of course, difficult to pull off in real life because it requires the target to be scanned by a special array of cameras. Or does it? Constructing a 3D head from a series of 2D photos of a person, extracted from, say, Facebook or another social network, is exactly the kind of fake data GANs are great at producing. It wouldn’t surprise me to hear in the next year or two that this same kind of unlocking has been accomplished using GAN-processed 2D photos to produce 3D-printed faces that pass as real.
Stay Ahead of the Unknown
Researchers can only demonstrate the AI-based attacks they can imagine — there are probably hundreds or thousands of ways to use AI for cyberattacks that we haven’t yet considered. For example, McAfee Labs predicted that cybercriminals will increasingly use AI-based evasion techniques during cyberattacks.
What we do know is that as we enter a new age in which artificial intelligence is everywhere, we’re also going to see it deployed creatively for the purpose of cybercrime. It’s a futuristic arms race, and your only choice is to stay ahead with leading-edge security based on AI.