DeepSloth: Researchers find denial-of-service equivalent against machine learning systems

A new adversarial attack developed by scientists at the University of Maryland, College Park, can force machine learning systems to slow to a crawl, taxing servers and possibly causing critical failures in some applications.

Presented at the International Conference on Learning Representations (ICLR), the attack neutralizes optimization techniques designed to speed up the operation of deep neural networks.

Multi-exit architectures

Deep neural networks, a popular type of machine learning algorithm, sometimes require gigabytes of memory and powerful processors, making them inaccessible to resource-constrained IoT devices, smartphones, and wearables.

Many of these devices must send their data to a cloud server that can run deep learning models.

To overcome these challenges, researchers have invented different techniques to optimize neural networks for small devices.

So-called ‘multi-exit architectures’, one such optimization technique, cause a neural network to stop computing as soon as its prediction reaches an acceptable confidence threshold.

“Early-exit models are a relatively new concept, but there is a growing interest,” Tudor Dumitras, a researcher at the University of Maryland, told The Daily Swig.

“This is because deep learning models are getting more and more expensive, computationally, and researchers look for ways to make them more efficient.”
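To make the idea concrete, the sketch below shows how an early exit might work in practice. It is a minimal PyTorch illustration assuming a toy two-block classifier; the layer sizes, exit placement, and 0.9 confidence threshold are illustrative choices, not details taken from the research.

```python
# Minimal sketch of a multi-exit ("early-exit") network in PyTorch.
# Layer sizes, exit placement, and the 0.9 threshold are illustrative
# assumptions, not details from the DeepSloth paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold
        self.block1 = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
        self.exit1 = nn.Linear(256, num_classes)    # early exit head
        self.block2 = nn.Sequential(nn.Linear(256, 256), nn.ReLU())
        self.exit2 = nn.Linear(256, num_classes)    # final exit head

    def forward(self, x):
        h = self.block1(x)
        logits1 = self.exit1(h)
        conf1 = F.softmax(logits1, dim=-1).max().item()
        if conf1 >= self.threshold:
            return logits1, "exit 1"                  # confident: stop early
        return self.exit2(self.block2(h)), "exit 2"   # pay the full cost

net = MultiExitNet()
logits, used_exit = net(torch.randn(1, 784))
print(used_exit)
```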

DeepSloth attacks

Dumitras and his collaborators developed a slowdown adversarial attack that targets the efficacy of multi-exit neural networks. Called DeepSloth, the attack makes subtle changes to input data that prevent a neural network from taking its early exits, forcing it to perform its full computation.

“Slowdown attacks have the potential of negating the benefits of multi-exit architectures,” Dumitras said. “These architectures can halve the energy consumption of a DNN model at inference time, and we showed that for any input we can craft a perturbation that wipes out those savings completely.”
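The code below sketches what such a perturbation search could look like, reusing the MultiExitNet toy model from the earlier sketch. It follows a generic PGD-style loop that pushes the early exit’s output toward a uniform distribution so its confidence never clears the threshold; the loss, step size, and perturbation budget are illustrative assumptions, not the exact DeepSloth formulation.

```python
# PGD-style sketch of a slowdown perturbation against the MultiExitNet
# toy model above. The objective (cross-entropy toward a uniform
# distribution), step size (alpha), and L-infinity budget (eps) are
# illustrative assumptions, not the paper's exact attack.
import torch
import torch.nn.functional as F

def slowdown_attack(net, x, eps=0.03, alpha=0.005, steps=30):
    x_adv = x.clone().detach()
    uniform = torch.full((1, 10), 0.1)           # flat target over 10 classes
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits1 = net.exit1(net.block1(x_adv))   # early-exit logits only
        # Minimising this loss flattens the early exit's confidence.
        loss = F.cross_entropy(logits1, uniform)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv - alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # stay inside the budget
        x_adv = x_adv.detach()
    return x_adv

x = torch.randn(1, 784)
_, before = net(x)
_, after = net(slowdown_attack(net, x))
print(before, "->", after)   # ideally "exit 1 -> exit 2"
```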

The researchers tested DeepSloth on various multi-exit architectures. In cases where attackers had full knowledge of the architecture of the target model, slowdown attacks reduced early-exit efficacy by 90-100%.

Even when the attacker didn’t have exact information about the target model, DeepSloth still reduced efficacy by 5-45%.

This is the equivalent of a denial-of-service (DoS) attack on neural networks. When models are served directly from a server, DeepSloth can occupy the server’s resources and prevent it from using its full capacity.

In cases where a multi-exit network is split between an edge device and the cloud, the attack could force the device to send all of its data to the server, which can cause harm in several ways.

“In a scenario typical for IoT deployments, where the model is partitioned between edge devices and the cloud, DeepSloth amplifies the latency by 1.5-5X, negating the benefits of model partitioning,” Dumitras said.

“This could cause the edge device to miss critical deadlines, for instance in an elderly monitoring program that uses AI to quickly detect accidents and call for help if necessary.”
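Below is a sketch of that partitioned setup under the same toy assumptions as above: the first block and its exit run on the device, and only inputs the early exit cannot resolve are shipped to the cloud. The send_to_cloud stub is a hypothetical stand-in for a real RPC call, not part of any real deployment API.

```python
# Sketch of edge/cloud model partitioning with the MultiExitNet toy model.
# send_to_cloud is a hypothetical stand-in for a network round trip to
# the cloud half of the model.
import torch.nn.functional as F

def send_to_cloud(net, features):
    return net.exit2(net.block2(features))       # runs server-side

def edge_inference(net, x):
    h = net.block1(x)                            # cheap, runs on-device
    logits = net.exit1(h)
    conf = F.softmax(logits, dim=-1).max().item()
    if conf >= net.threshold:
        return logits                            # resolved locally, nothing sent
    # A slowdown perturbation drives confidence down, forcing this upload
    # (and the cloud-side computation) for every single input.
    return send_to_cloud(net, h)
```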

New directions for security research

The researchers found that adversarial training, the standard method for protecting machine learning models against adversarial attacks, is not effective against DeepSloth attacks.

“I want to bring this threat model to the attention of the machine learning community,” Dumitras said. “DeepSloth is just the first attack that works in this threat model, and I am sure that more devastating slowdown attacks will be discovered.”

In the future, Dumitras and his colleagues will further explore vulnerabilities in early-exit models and develop methods to make them more secure and robust.

Source: https://portswigger.net/daily-swig/deepsloth-researchers-find-denial-of-service-equivalent-against-machine-learning-systems
