AI Security: Curation, Context and Other Keys to the Future


Security leaders need to cut through the hype when it comes to artificial intelligence (AI) security. While AI offers promise, buzzwords and big-picture thinking aren’t enough to deliver practical results. Instead, putting AI security to work starts with understanding what it looks like today and where it’s headed tomorrow.

Improved curation, enhanced context and the growing field of stateful solutions are three trends that can help you better understand the AI of the future.

The State of AI Cybersecurity Today 

The AI security market has undergone major growth, surpassing $8.6 billion in 2019. More recently, Forbes reported that 76% of enterprises “prioritize AI and machine learning (ML) over other IT initiatives in 2021.”

While current AI deployments focus largely on key tasks, such as incident reporting and analysis, the Institute of Electrical and Electronics Engineers notes that ongoing improvement of AI security techniques can increase threat detection rates, reduce false positives and improve behavioral analysis. But what does this look like in practice?

Curation: Distilling the Digital Impact of AI Security

First, take a look at curation: Intelligent tools can sort through millions of research papers, blogs, news stories and network events and then deliver relevant and real-time threat intelligence that helps people make data-driven decisions and improve front-line defensive posture.

In effect, curation reduces the scaled-up problem of alert fatigue for IT teams, a problem that now extends well beyond perimeter security detection and application issue notifications. By empowering AI to consume and then curate multiple sources, infosec experts can get a bird’s-eye view of what’s happening across the security landscape and of what steps are needed to improve overall protection.
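As a rough illustration of the idea (not any specific product’s method), the Python sketch below ranks incoming feed items by keyword relevance and recency, surfacing only the top few. The FeedItem type, watchlist terms and weights are all hypothetical placeholders; a production system would use a trained relevance model rather than keyword matching.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical feed item: could come from research papers, blogs or network events.
@dataclass
class FeedItem:
    title: str
    body: str
    published: datetime

# Illustrative watchlist; real systems would score relevance with a trained model.
WATCHLIST = {"ransomware": 3.0, "zero-day": 3.0, "phishing": 2.0, "cve": 2.0}

def relevance_score(item: FeedItem, now: datetime) -> float:
    """Combine keyword hits with a recency decay so fresh, relevant items rank first."""
    text = f"{item.title} {item.body}".lower()
    keyword_score = sum(w for kw, w in WATCHLIST.items() if kw in text)
    age_days = max((now - item.published).total_seconds() / 86400, 0.0)
    recency = 1.0 / (1.0 + age_days)  # older items decay toward zero
    return keyword_score * recency

def curate(items: list[FeedItem], top_n: int = 5) -> list[FeedItem]:
    """Return only the highest-value items, cutting alert noise for the team."""
    now = datetime.now(timezone.utc)
    scored = sorted(((relevance_score(i, now), i) for i in items),
                    key=lambda pair: pair[0], reverse=True)
    return [item for score, item in scored[:top_n] if score > 0]
```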

AI and Cybersecurity: Context and Beyond

Context comes next. This is the algorithmic infrastructure needed to go beyond the ‘what’ offered by curation tools and help people understand why specific events are occurring.

For enterprises, a contextual approach offers two key benefits: improved root-cause response and reduced access complexity. Consider an attack on public-facing enterprise apps. While existing tools can detect forbidden actions and close application sessions on their own, machine learning analysis makes it possible to pinpoint the nature and type of the specific risk.
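As a minimal sketch of how that pinpointing might work, the example below trains a small decision tree (using scikit-learn, assumed here purely for illustration) on synthetic event features to label why a session was blocked, not just that it was. The features, labels and data are invented for the example.

```python
from sklearn.tree import DecisionTreeClassifier

# Each synthetic event: [requests_per_minute, distinct_endpoints_hit, payload_contains_sql]
X_train = [
    [300, 40, 0],  # high-rate traffic sweeping many endpoints
    [5, 1, 1],     # slow, targeted requests carrying SQL fragments
    [250, 35, 0],
    [3, 2, 1],
]
y_train = ["scraping_burst", "sql_injection", "scraping_burst", "sql_injection"]

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# After a session is closed, classify why it looked risky, not just that it did.
blocked_event = [[280, 38, 0]]
print(clf.predict(blocked_event))  # -> ['scraping_burst']
```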

When it comes to user permissions, meanwhile, AI security tools can leverage context cues to approve or deny access. For example, if access requests are made from a new user location at an odd time of day, AI tools can deny entry and flag these events for further review. On the flip side, keeping tabs on users with familiar and repeating access patterns makes it possible for AI tools to approve specific sign-on requests without the need for more verification.

In addition to ease of access, this AI security approach also has knock-on revenue effects.

“The establishment of low-friction end user experiences has the potential to help boost security effectiveness while reducing management efforts and related costs,” says Steve Brasen, Research Director, Enterprise Management Associates.

Stateless Versus Stateful Applications

No matter how advanced AI becomes, humans remain a critical part of the cybersecurity loop. On the infosec side, humans will always be required for oversight and interpretation. On the end-user side, meanwhile, humans introduce the risk of randomness: what people do and why they do it isn’t always obvious.

As a result, enterprises are often best served by a mix of stateless and stateful applications. Stateless applications have no stored knowledge and therefore no reference frame for past transactions. Stateful apps, meanwhile, leverage the context of previous actions to help assess user requests.

While stateless solutions offer a way to gate one-time transactions, such as 2FA access requests, stateful ones make it possible to understand the impact of people. They aggregate historical and contextual datasets to form a framework that helps better model and manage incidents driven by people.
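To make the distinction concrete, here is a small illustrative sketch (not a production pattern): a stateless 2FA-style check that judges each request in isolation, next to a stateful gate that scores requests against a rolling history of past actions. All names and the history-window size are assumptions.

```python
import hmac
from collections import deque

# Stateless check: each call stands alone, with no stored knowledge of past transactions.
def stateless_gate(otp_submitted: str, otp_expected: str) -> bool:
    """A 2FA-style one-time check: a constant-time compare, nothing more."""
    return hmac.compare_digest(otp_submitted, otp_expected)

# Stateful check: decisions lean on the context of previous actions.
class StatefulGate:
    def __init__(self, history_size: int = 100):
        # Rolling window of a user's recent actions serves as the reference frame.
        self.history: deque[str] = deque(maxlen=history_size)

    def assess(self, action: str) -> str:
        """Pass actions consistent with history; route novel ones to review."""
        verdict = "allow" if action in self.history else "review"
        self.history.append(action)  # every action extends the behavioral baseline
        return verdict
```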

The Human Side of AI Security

So what’s next for AI security? Recent survey data suggests worry among IT leaders that intelligent tools will replace their roles: fully 41% believe they’ll be replaced by 2030. But that outcome isn’t likely.

Here, trust is the tipping point. Experts agree that building trust across AI and security is critical for widespread adoption. Consider the ongoing challenge of bias, which occurs when systems include unconscious preference for specific actions or outcomes. This bias could lead to the under-representation or over-weighting of specific events, in turn exposing enterprises to risk.

The solution is twofold: better data to train AI on and expert human oversight.

“Machines get biased because the training and data they’re fed may not be fully representative of what you’re trying to teach them,” says IBM Chief Science Officer for Cognitive Computing Guru Banavar.
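A simple representativeness check illustrates the point. This sketch, with invented labels and an arbitrary 5% floor, flags event classes that make up too little of a training set to be learned reliably.

```python
from collections import Counter

def representation_report(labels: list[str], floor: float = 0.05) -> dict[str, float]:
    """Flag event classes whose share of training data falls below a minimum floor,
    a simple proxy for the under-representation that drives biased detections."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items() if n / total < floor}

# Example: 'insider_threat' events make up just 1% of this synthetic training set.
labels = ["phishing"] * 60 + ["malware"] * 39 + ["insider_threat"] * 1
print(representation_report(labels))  # -> {'insider_threat': 0.01}
```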

Malicious actors, meanwhile, may seek to exploit biased AI to gain access to key system or network resources.

Solving this issue, therefore, demands human oversight. Just as human action (or inaction) can cause problems, expert oversight of AI-driven results can help ensure that tools are targeting the right incidents at the right time for the right reasons. Tools capable of curation, context and stateful analysis enhance this ability, helping to give human-led infosec teams the edge over threat actors.

Bottom line? The future of AI security depends on curation tempered by critical context, informed by stateful analysis and watched over by human experts.
