Just when you thought you had enough to keep you up at night, there’s another threat to add to the list of enterprise security nightmares lurking under the bed. The deepfake, once a threat only to celebrities, has now become a genuine risk to the organization.
According to Axios, deepfake audio technology has already begun wreaking havoc on the business world, as threat actors use the tech to impersonate CEOs. Symantec has reported three successful audio attacks on private companies that involved a call from the “CEO” to a senior financial officer requesting an urgent money transfer. Just imagine how an incident like this would affect your company.
Make no mistake: The threat is real. Especially because we don’t yet have tools reliable enough to distinguish between deepfake audio and the genuine article. So what can the enterprise do? Are there any steps we can take to mitigate the risk?
Taking Social Engineering to the Next Level
Independent cybersecurity expert Rod Soto views deepfakes as the next level of social engineering attacks.
“Deepfakes, either in video or audio form, go far beyond the simple email link, well-crafted SMS/text, or a phone call that many criminals use to abuse people’s trust and mislead them into harmful actions,” Soto said. “They can indeed extend the way social engineering techniques are employed.”
Simulated “leaked” audio may arrive sooner rather than later, possibly featuring cloned recordings of executives with entire conversations fabricated for malicious purposes. Such recordings could easily affect investments and give a company’s competitors a way to inflict reputational damage.
Soto’s primary concern when he first read about these attacks was that we are not prepared for them, and that it is only a matter of time before we start seeing significant consequences.
“Further on, as the technologies to create these audios and videos become more prevalent and easy to use, the attacks will become more widespread, affecting more than just executives, VIPs or government officials,” he said.
Soto is aware of deepfake technology that can already convincingly emulate or clone people’s voices. Even without perfect technology, attackers can layer other artifacts over a cloned voice, such as airport background noise or the sound of a moving car. Obfuscating the voice in these ways, Soto noted, can mask the imperfections that would otherwise let a potential victim recognize the clone, making the message more believable.
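To see why this obfuscation is so cheap, consider that mixing ambient noise into a clip is only a few lines of signal processing. The sketch below is a minimal illustration, assuming mono WAV files and the third-party soundfile library; the file names are hypothetical. Security teams can use the same trick in reverse, generating realistic test clips for awareness training and for stress-testing detection tooling.

```python
import numpy as np
import soundfile as sf  # pip install soundfile

def mix_noise(voice_path, noise_path, out_path, snr_db=10.0):
    """Mix background noise into a voice clip at a target SNR.

    A lower SNR buries more of the voice under noise, which can
    mask the artifacts that give a cloned voice away.
    """
    voice, sr = sf.read(voice_path)
    noise, _ = sf.read(noise_path)

    # Loop or trim the noise to match the voice clip's length.
    if len(noise) < len(voice):
        noise = np.tile(noise, int(np.ceil(len(voice) / len(noise))))
    noise = noise[: len(voice)]

    # Scale the noise so the mix hits the requested signal-to-noise ratio.
    voice_power = np.mean(voice ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(voice_power / (noise_power * 10 ** (snr_db / 10)))
    mixed = voice + scale * noise

    # Normalize to prevent clipping before writing the result.
    mixed /= max(1.0, np.abs(mixed).max())
    sf.write(out_path, mixed, sr)

# Hypothetical inputs: a cloned voice sample and an airport ambience track.
mix_noise("cloned_voice.wav", "airport_ambience.wav", "call_audio.wav", snr_db=8.0)
```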
The Silver Linings
Unlike with zero-day attacks, one thing we have going for us is time. As deepfake audio technology stands today, threat actors need sophisticated tools to pull one over on unsuspecting victims. Moreover, the barrier to entry is higher than that of the average attack kit available to anyone with cash to spend on the darknet.
Another positive is that training a very convincing deepfake audio model costs thousands of dollars in computing resources, according to CPO Magazine. However, if there’s a threat group with lots of money behind it, isn’t that cause for concern?
“There is certainly a computational cost and technology that is likely not available for the common criminal or script kiddie-type of threat actor,” said Soto. “But higher levels of organized crime or professional criminals can absolutely do it. As long as they have resources, it is possible to perform these types of attacks.”
Ultimately, the technology is still in development, and at this point social engineering attacks can’t rely solely on deepfakes, as trained eyes and ears can still detect them. However, as Soto warned, “this may not be the case in the near future.”
How to Fend Off Deepfake Audio Attacks
Even if the audio is convincing enough to dupe most employees, not all hope is lost.
“For this type of attack to be successful, it needs to be supported by other social engineering means, such as emails or texts,” Soto explained. “As these technologies advance and become more difficult to detect, it will become necessary to create anti-deepfake protocols, which will probably involve multiple checks and verifications.”
As with similar attacks, you can train employees never to execute or follow instructions based solely on an audio or email message. It is crucial for organizations to ensure that employees learn the lingo and understand cutting-edge social engineering methods. And awareness isn’t the enterprise’s only prevention strategy.
“While awareness always works, when facing these types of threats, it is necessary to develop anti-deepfake protocols that can provide users and employees with tools to detect or mitigate these types of attacks,” he said.
In addition to deepfake protocols, Soto sees the need for multifactor authentication (MFA) across the corporate environment, because most attacks are combined with other social engineering techniques that can be prevented — or, at least, mitigated — with solid identity and access management (IAM) solutions.
“This will force all of us to implement new verification protocols, in addition to simply listening to a voice mail, or reading an email or text message,” he said. “Regulation will likely be needed as well to address the widespread use of these technologies that can be weaponized and, potentially, cause harm.”
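What might such a verification protocol look like in practice? Below is a minimal sketch of the principle that no single channel, however convincing, ever authorizes a transfer on its own. The field names, thresholds, and specific checks are illustrative assumptions, not a standard; any real policy would be tailored to the organization.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransferRequest:
    requester: str    # claimed identity, e.g. "CEO"
    amount: float
    destination: str
    channel: str      # how the request arrived: "voice", "email", "sms"

def verify_transfer(request: TransferRequest,
                    callback_confirmed: bool,
                    second_approver: Optional[str]) -> bool:
    """Approve only if every independent check passes.

    The point: a convincing deepfake voice call fails on its own,
    because no single channel is ever sufficient authorization.
    """
    checks = [
        # 1. Out-of-band callback to a number already on record,
        #    never to a number supplied in the request itself.
        callback_confirmed,
        # 2. Dual control: an independent second approver is required.
        second_approver is not None and second_approver != request.requester,
        # 3. Illustrative policy: large transfers cannot be authorized
        #    over voice or SMS alone, regardless of the other checks.
        request.amount < 100_000 or request.channel not in ("voice", "sms"),
    ]
    return all(checks)

req = TransferRequest("CEO", 250_000, "acct-4417", channel="voice")
# False: the callback and dual-control checks pass, but the
# large-transfer-over-voice policy blocks the request.
print(verify_transfer(req, callback_confirmed=True, second_approver="CFO"))
```

The value of encoding the policy this way is that an urgent, emotional phone call can’t short-circuit it; the checks run the same way whether the caller is genuine or cloned.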
While I’m not trying to paint a picture of doom and gloom here, recent deepfake audio and video trends should serve as a serious warning to the enterprise. The deepfake threat is real, but with airtight security awareness training, carefully developed protocols and advanced security tools, organizations can greatly increase their chances of defeating deepfake-based attacks.