Machine learning (ML) has brought us self-driving cars, machine vision, speech recognition, biometric authentication and the ability to unlock the human genome. But it has also given attackers a variety of new attack surfaces and ways to wreak havoc.
Machine learning applications are unlike those that came before them, making it all the more important to understand their risks. What are the potential consequences of an attack on a model that controls networks of connected autonomous vehicles or coordinates access controls for hospital staff? The results of a compromised model can be catastrophic in these scenarios, but there are also more prosaic threats to consider, such as fooling biometric security controls into granting access to unauthorized users.
Machine learning is still in its early stages of development, and its attack vectors are not yet fully mapped. Cyberdefense strategies are likewise in their infancy. While we can’t prevent every form of attack, understanding why and how attacks occur helps us narrow down our response strategies.
A Structured Approach to Machine Learning Security
Threat modeling is a security optimization process that applies a structured approach to identifying and addressing threats. Machine learning security threat modeling does the same thing for ML models. It’s used at the early stages of building and deploying ML models to identify all possible threats and attack vectors.
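As a minimal sketch (the structure and field names here are illustrative, not part of any standard), each identified threat can be recorded as a simple entry capturing the actor, motivation, asset at risk, attack vector, life-cycle phase and planned mitigation:

```python
from dataclasses import dataclass


@dataclass
class MLThreat:
    """One entry in an ML threat model: who, why, what and how."""
    actor: str        # who might attack (nation-state, hacktivist, insider, ...)
    motivation: str   # why: what the attacker stands to gain
    asset: str        # what is at risk (training data, model parameters, predictions)
    vector: str       # how: evasion, poisoning, privacy/extraction, ...
    phase: str        # life-cycle phase: "training" or "inference"
    mitigation: str   # planned defense or response


# Hypothetical entry for a facial recognition access-control model.
example = MLThreat(
    actor="rogue employee",
    motivation="gain physical access to restricted areas",
    asset="face recognition model integrity",
    vector="evasion (adversarial input at the camera)",
    phase="inference",
    mitigation="liveness checks, anomaly detection on confidence scores",
)
print(example)
```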
There are four fundamental questions to ask.
Who Are the Threat Actors?
Threat actors can range from nation-states to hacktivists to rogue employees. Each category of potential adversaries has different characteristics that require different defense/response strategies. Their reasons for attacking also vary, which is why the “why” and “what” questions described below are so critical.
Why Are They Threats and What Are Their Motivations?
A wide range of factors can influence attackers to target ML systems. Defense strategies should proceed from the CIA triad, which is a three-sided information security governance model that encompasses confidentiality, integrity and availability:
- Confidentiality is about ensuring that only those with appropriate rights can access information. These protections can guard against an attacker who wants to extract sensitive data by compromising training data.
- An integrity attack might attempt to influence the model’s behavior, such as returning false positives in a facial recognition system. Protections such as frequent backups, digital signatures and audits help ensure that information isn’t altered or tampered with (see the integrity-check sketch after this list).
- An availability attack may be aimed at reducing the consistency, performance or access to the machine learning model. Good availability practices, such as maintaining redundant servers and applying data loss prevention tools, make information available when needed.
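As a small illustration of the integrity side, one common practice is to record a cryptographic digest of an approved training data set and verify it before each training run. The file name and stored digest below are placeholders; in practice the known-good digest would be produced when the data set is approved and kept (ideally signed) separately from the data itself:

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Placeholder values: replace with the real file and its recorded digest.
EXPECTED_DIGEST = "replace-with-recorded-digest"
data_file = Path("training_data.csv")

if data_file.exists():
    actual = sha256_of_file(data_file)
    if actual != EXPECTED_DIGEST:
        raise RuntimeError("Training data does not match its recorded digest; "
                           "possible tampering or corruption.")
```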
How Will They Attack?
Machine learning systems open new avenues of attack that don’t exist in conventional procedural programs. One of these is the evasion or adversarial attack, in which an adversary feeds the model inputs deliberately crafted to trigger mistakes. The data may look normal to humans, but subtle perturbations can send ML algorithms wildly off track.
Such attacks may occur at inference time and typically take one of two forms. In a white box attack, the attacker has information about the model’s internals, obtained either directly or through untrusted actors in the data processing pipeline. In a black box scenario, the attacker knows nothing about the system’s internal workings but identifies vulnerabilities by repeatedly probing it and finding patterns in the responses that betray the underlying model.
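In the white box case, one widely cited way to craft such adversarial inputs is the fast gradient sign method (FGSM), which nudges each pixel of an image in the direction that most increases the model’s loss. The sketch below is a minimal illustration using PyTorch; the trained classifier `model`, the input `image` and the perturbation budget `epsilon` are assumptions standing in for a real target:

```python
import torch
import torch.nn.functional as F


def fgsm_example(model, image, true_label, epsilon=0.03):
    """Craft an adversarial image with the fast gradient sign method (FGSM).

    `model` is assumed to be a trained classifier (in eval mode) mapping a
    batch of images to class logits; `image` is a single input tensor with
    pixel values in [0, 1] and `true_label` its correct class index.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()

    # Step each pixel slightly in the direction that increases the loss.
    perturbed = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

    adversarial_pred = model(perturbed.unsqueeze(0)).argmax(dim=1).item()
    return perturbed, adversarial_pred
```

Because the perturbation is bounded by `epsilon`, the altered image usually looks identical to a human observer, yet the returned prediction can change completely.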
New Threat Vectors
There are two dimensions we can use to classify the “how” of an ML attack: inference and training. In an attack at the inference phase, the attacker has specific information about the model and/or the data used to train it. Direct access to the system isn’t necessary to obtain this information. Exploratory techniques, such as side-channel and remote attacks, can allow an adversary to compromise deployed ML systems by inferring their logic from responses to carefully chosen inputs or by targeting the hardware directly.
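A hedged sketch of that probing idea follows. The victim model here is a local stand-in function rather than a real service, but the attacker’s position is the same: they can only submit inputs and observe outputs, and patterns in the responses gradually reveal where the decision boundary lies.

```python
import random


def victim_predict(features):
    """Stand-in for the deployed model: the attacker can query it but not inspect it.
    In a real black box attack this would be a call to a remote prediction API."""
    score = 0.7 * features[0] - 0.4 * features[1]
    return 1 if score > 0.1 else 0


# Probe the model with many inputs and record (input, output) pairs.
probes = []
for _ in range(1000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    probes.append((x, victim_predict(x)))

# Even simple statistics over the responses start to expose the model's logic.
positives = sum(1 for _, label in probes if label == 1)
print(f"{positives} of {len(probes)} probes were classified as positive")
```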
Attacks at the training phase attempt to learn about or corrupt the model. Depending on the availability of data, the attacker may train substitute models to test potential inputs before submitting them to the victim.
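Building on the probing sketch above, here is a minimal illustration of the substitute-model idea: the attacker fits their own classifier to the victim’s query responses and can then evaluate candidate inputs offline, sending only the promising ones to the real target. The stand-in victim function and the choice of a scikit-learn decision tree are both assumptions made for the example:

```python
import random

from sklearn.tree import DecisionTreeClassifier


def victim_predict(features):
    """Stand-in for the remote victim model; the attacker sees only its outputs."""
    return 1 if 0.7 * features[0] - 0.4 * features[1] > 0.1 else 0


# Harvest labeled pairs by querying the victim, then fit a local substitute.
queries = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(2000)]
labels = [victim_predict(q) for q in queries]

substitute = DecisionTreeClassifier(max_depth=5).fit(queries, labels)

# Candidate attack inputs can now be screened against the substitute for free.
candidate = [0.2, 0.1]
print("substitute prediction:", substitute.predict([candidate])[0])
```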
There are also two ways to alter the model. The injection method modifies existing data by inserting untrusted components, causing the results of the model to become untrustworthy. A particularly dangerous alternative approach is logic corruption, in which the attacker changes the learning algorithm itself. This tactic is extremely dangerous because intruders can effectively take control of systems and direct them to produce whatever output they want.
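As a hedged illustration of the injection approach, the toy example below slips mislabeled points into an otherwise clean training set for a scikit-learn logistic regression. The data, labels and feature ranges are fabricated for the example, but they show how untrusted records can shift the boundary the model learns:

```python
import random

from sklearn.linear_model import LogisticRegression

random.seed(0)

# Clean training data: the true label is 1 whenever the first feature is positive.
X_clean = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(500)]
y_clean = [1 if x[0] > 0 else 0 for x in X_clean]

# Injection: the attacker inserts deliberately mislabeled points concentrated in
# one region, nudging the learned boundary away from the true one.
X_poison = [[random.uniform(0.0, 0.3), random.uniform(-1, 1)] for _ in range(100)]
y_poison = [0] * len(X_poison)

clean_model = LogisticRegression().fit(X_clean, y_clean)
poisoned_model = LogisticRegression().fit(X_clean + X_poison, y_clean + y_poison)

probe = [[0.15, 0.0]]
print("clean model:   ", clean_model.predict(probe)[0])     # expected: 1
print("poisoned model:", poisoned_model.predict(probe)[0])   # likely flipped to 0
```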
Machine Learning Attacks
Putting all of these factors together, we can identify three distinct attacks that target different phases of the machine learning life cycle:
- Evasion attacks — These attacks are the most common. Typically performed during inference time, evasion attacks are intended to introduce inputs that cause the model to produce incorrect results.
- Poisoning attacks — These attacks are carried out during the training stage and are intended to threaten integrity and availability. Poisoning alters training data sets by inserting, removing or editing data points to change the decision boundaries of the target model.
- Privacy attacks — These attacks exploit what the model has memorized about its training data. The intent is not to corrupt the model, but to extract sensitive information, typically by querying the trained model (see the sketch after this list).
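The post doesn’t single out a specific privacy attack, but membership inference is a common example: the attacker guesses whether a particular record was in the training set by checking whether the model is unusually confident about it. The sketch below is a toy illustration; the victim model, data and confidence threshold are all assumptions, and this simple victim is only lightly overfit, so the gap is small, but the mechanics are the same:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy victim model trained on "private" records the attacker wants to expose.
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_train, y_train)


def looks_like_training_member(record, threshold=0.95):
    """Naive confidence-thresholding membership test.

    Overfit models tend to assign higher confidence to examples they were
    trained on; if the top predicted probability exceeds the threshold,
    guess "member". The threshold is an illustrative assumption, not a
    tuned value.
    """
    confidence = victim.predict_proba(record.reshape(1, -1)).max()
    return confidence >= threshold


unseen = rng.normal(size=5)
print("training record flagged as member:", looks_like_training_member(X_train[0]))
print("fresh record flagged as member:   ", looks_like_training_member(unseen))
```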
In addition to the above, there are several attacks that may occur at either or both of the training and inference stages. They go by names such as anchor points, mimicry, model extraction, path finding, least cost, and constrained and gradient descent.
Unfortunately, we can expect new attack types to emerge as ML goes mainstream, but understanding the basic vulnerabilities and prevention tactics is the first step toward combating them.