CyberArk's 2023 Identity Security Threat Landscape Report offers some valuable insights. The 2,300 security professionals surveyed reported some sobering figures:
- 68% are concerned about insider threats from employee layoffs and churn
- 99% expect some type of identity compromise driven by financial cutbacks, geopolitical factors, cloud applications and hybrid work environments
- 74% are concerned about confidential data loss through employees, ex-employees and third-party vendors
Additionally, many feel digital identity proliferation is on the rise and the attack surface is at risk from artificial intelligence (AI) attacks, credential attacks and double extortion. For now, let’s focus on digital identity proliferation and AI-powered attacks.
Digital identities: The solution or the ultimate Trojan horse?
For some time now, digital identities have been considered a potential solution to improve cybersecurity and reduce data loss. The general thinking goes like this: Every individual has unique markers, ranging from biometric signatures to behavioral actions. This means digitizing and associating these markers to an individual should minimize authorization and authentication risks.
Loosely, it is a “trust and verify” model.
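As a minimal sketch of that "trust and verify" model (the marker names, scores and threshold here are illustrative assumptions, not any vendor's implementation), verification amounts to comparing presented identity markers against enrolled ones:

```python
# Toy "trust and verify" check: an identity is verified only if every
# enrolled marker (biometric or behavioral) scores above a similarity
# threshold. All names and values are hypothetical.

ENROLLED_MARKERS = {
    # identity -> markers that must match during verification
    "alice": ("face_match", "typing_cadence"),
}

THRESHOLD = 0.85  # assumed minimum similarity score per marker

def verify(identity: str, similarities: dict) -> bool:
    """Grant access only if every enrolled marker scores above the threshold."""
    markers = ENROLLED_MARKERS.get(identity)
    if markers is None:
        return False  # unknown identity: nothing to trust
    return all(similarities.get(m, 0.0) >= THRESHOLD for m in markers)
```

Note the hidden constant: the check assumes the presented markers are genuine. An AI-generated face or voice that scores above the threshold is verified just the same.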
But what if the “trust” is no longer reliable? What if, instead, something fake is verified — something that should never be trusted in the first place? Where is the risk analysis happening to remedy this situation?
The hard sell on digital identities has, in part, rested on a skewed assumption about the technology world: namely, that information security technology and malicious actors' tactics, techniques and procedures (TTPs) evolve at a similar rate. Reality tells us otherwise: TTPs, especially with the assistance of AI, are blasting right past security controls.
You see, a hallmark of AI-enabled attacks is that the AI can learn about the IT estate faster than humans can. As a result, both technical and social engineering attacks can be tailored to an environment and individual. Imagine, for example, spearphishing campaigns based on large data sets (e.g., your social media posts, data that has been scraped off the internet about you, public surveillance systems, etc.). This is the road we are on.
Digital identities may have stood a chance of operating successfully in a non-AI world, where they could be inherently trusted. But in an AI-driven world, that trust is being effectively wiped away, turning digital identities into something inherently untrustworthy.
Trust needs to be rebuilt, as a road where nothing is trusted only logically leads to one place: total surveillance.
Artificial intelligence as an identity
Identity verification solutions have become quite powerful. They improve access request time, manage billions of login attempts and, of course, use AI. But in principle, verification solutions rely on a constant: trusting the identity to be real.
The AI world changes that by turning “identity trust” into a variable.
Assume the following to be true: We are relatively early into the AI journey but moving fast. Large language models can replace human interactions and conduct malware analysis to write new malicious code. Artistry can be performed at scale, and filters can make a screeching voice sound like a professional singer. Deepfakes, in both voice and visual representations, have moved from "blatantly fake" territory to "wait a minute, is this real?" territory. Thankfully, careful analysis still allows us to distinguish the two.
There is another hallmark of AI-enabled attacks: machine learning capabilities. Models will get faster, better and, ultimately, prone to manipulation. Remember, it is not the algorithm itself that has a bias; it is the programmer who inputs their inherent bias into the algorithm. So, with open-source and commercial AI technology increasingly available, how long can we maintain the ability to distinguish real from fake?
Overlay technologies to make the perfect avatar
Think of the powerful monitoring technologies available today. Biometrics, personal nuances (walking patterns, facial expression, voice inflections, etc.), body temperatures, social habits, communication trends and everything else that makes you unique can be captured, much of it by stealth. Now, overlay increasing computational power, data transfer speeds and memory capacity.
Finally, add in an AI-driven world, one where malicious actors can access large databases and perform sophisticated data mining. The delta to create a convincing digital replica shrinks. Paradoxically, as we create more data about ourselves for security measures, we grow our digital risk profile.
Reduce the attack surface by limiting the amount of data
Imagine our security as a dam and data as the water behind it. To date, we have leveraged data mostly for good (e.g., water harnessed for hydroelectricity). There are some maintenance issues (e.g., attackers, data leaks, poor upkeep), but they have been mostly manageable thus far, if exhausting.
But what if the dam fills faster than the infrastructure was designed to manage and hold? The dam fails. In terms of the analogy, the play is either to divert excess water and reinforce the dam, or to limit data and rebuild trust.
What are some methods to achieve this?
- The top-down approach creates guardrails (strategy). Generate and hold only the data you need, and even go as far as disincentivizing excess data holdings, especially data tied to individuals. Fight the temptation to scrape and mine absolutely everything for the sake of micro-targeting. Every extra data point is more water in the reservoir, unless it flows into separate, more secure reservoirs (hint: segmentation).
- The bottom-up approach limits access (operations). Whitelisting is your friend. Limit permissions and start to rebuild identity trust. No more "opt-in" by default; move to "opt-out" by default. This lets you better manage the water flowing through the dam (i.e., a reduced attack surface and less data exposure).
- Focus on what matters (tactics). We have demonstrated we cannot secure everything. This is not a criticism; it is reality. Focus on risk, especially for identity and access management. Coupled with limited access, the risk-based approach prioritizes the cracks in the dam for remediation.
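The bottom-up and tactical points above can be sketched together: a deny-by-default allowlist check plus a toy risk score for prioritizing remediation. All resource names, risk factors and weights here are hypothetical, not a real IAM product's API.

```python
# Sketch of deny-by-default access control with risk-based prioritization.
# Risk factors and weights are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    # Deny by default: only identities explicitly on the allowlist get access.
    allowlist: set = field(default_factory=set)
    # Illustrative risk factors used to rank remediation work.
    holds_personal_data: bool = False
    exposed_to_internet: bool = False

def is_allowed(identity: str, resource: Resource) -> bool:
    """Access is granted only if the identity is explicitly allowlisted."""
    return identity in resource.allowlist

def risk_score(resource: Resource) -> int:
    """Toy score: more identities, sensitive data and exposure raise risk."""
    score = len(resource.allowlist)
    if resource.holds_personal_data:
        score += 5
    if resource.exposed_to_internet:
        score += 3
    return score

def remediation_queue(resources: list) -> list:
    """Work the biggest 'cracks in the dam' first."""
    return sorted(resources, key=risk_score, reverse=True)
```

The design choice mirrors the article: unknown identities are denied without special-casing, and the risk score (however crude) decides where limited remediation effort goes first.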
In closing, risk must be taken to realize future rewards. "Risk-free" is for fantasy books. Therefore, in an age awash with data, the biggest "risk" may be to generate and hold less of it. The reward? Minimized impact from data loss, allowing you to bend while others break.
The post Artificial intelligence threats in identity management appeared first on Security Intelligence.