July 27, 2021


Hack The Box: Fooling Deep Learning Abstraction-Based Monitors. (arXiv:2107.04764v3 [cs.LG] UPDATED)

Deep learning is a type of machine learning that learns a deep hierarchy of
concepts. Deep learning classifiers link the most basic version of a concept at
the input layer to its most abstract version at the output layer, also known as
a class or label. However, once trained over a finite set of classes, many deep
learning models have no way to say that a given input belongs to none of those
classes: every input is forcibly linked to one of them.
Correctly rejecting inputs that belong to none of the trained classes is a
challenging problem that has been tackled in many ways in the literature.
Novelty detection gives deep learning the ability to output “do not know” for
novel/unseen classes. Still, little attention has been given to the security of
novelty detection itself. In this paper, we consider the case study of
abstraction-based novelty detection, where a runtime monitor accepts a
prediction only if the network's hidden-layer activations fall inside an
abstraction (a set of boxes) built from the activations seen during training,
and we show that it is not robust against adversarial samples.
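To make the setting concrete, here is a minimal sketch of the box-abstraction
monitor the title alludes to: one axis-aligned bounding box per class, fitted
over hidden-layer activations of training samples. The class name `BoxMonitor`,
its methods, and the single-box-per-class design are our simplification for
illustration, not the paper's exact construction (real monitors typically
refine each class into several boxes via clustering).

```python
import numpy as np

class BoxMonitor:
    """One axis-aligned box per class over monitored activations.

    A prediction is accepted only if the activation vector lies
    inside the box recorded for the predicted class; otherwise the
    monitor reports "do not know" (novelty).
    """

    def __init__(self):
        self.lo = {}  # class label -> per-dimension activation minima
        self.hi = {}  # class label -> per-dimension activation maxima

    def fit(self, features, labels):
        # features: (N, D) hidden-layer activations; labels: (N,)
        for c in np.unique(labels):
            feats_c = features[labels == c]
            self.lo[c] = feats_c.min(axis=0)
            self.hi[c] = feats_c.max(axis=0)

    def accepts(self, feature, predicted_class):
        # True if `feature` is inside the predicted class's box.
        lo, hi = self.lo[predicted_class], self.hi[predicted_class]
        return bool(np.all(feature >= lo) and np.all(feature <= hi))
```

At run time, an input whose activations fall outside the box of its predicted
class is flagged as novel; the attack described next perturbs inputs so their
activations land back inside the box.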
Moreover, we show the feasibility of crafting adversarial samples that fool
the deep learning classifier and bypass the novelty-detection monitor at the
same time. In other words, these monitoring boxes are hackable: we demonstrate
that novelty detection itself ends up as an attack surface. A sketch of such a
joint attack follows.
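For intuition only, the following sketch shows one way such a joint attack
could be formulated, assuming white-box access to both the classifier and the
monitor's boxes: a standard targeted-misclassification loss is combined with a
penalty that is zero whenever the monitored activations lie inside the target
class's box. The function names (`hack_the_box`, `features_of`) and all
hyperparameters are illustrative assumptions, not the paper's algorithm.

```python
import torch
import torch.nn.functional as F

def hack_the_box(model, features_of, x, target, lo, hi,
                 steps=200, lr=0.01, eps=0.05):
    """Perturb x (within an L-infinity ball of radius eps) so that
    (1) the classifier predicts `target`, and
    (2) the monitored activations stay inside the box [lo, hi]."""
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        logits = model(x_adv)
        h = features_of(x_adv)  # monitored hidden-layer activations
        # Term 1: push the prediction toward the target label.
        cls_loss = F.cross_entropy(logits, target)
        # Term 2: zero inside the box, grows linearly outside it.
        box_loss = (F.relu(lo - h) + F.relu(h - hi)).sum()
        opt.zero_grad()
        (cls_loss + box_loss).backward()
        opt.step()
        with torch.no_grad():
            # Project back into the eps-ball around the original input.
            x_adv.copy_(torch.max(torch.min(x_adv, x + eps), x - eps))
    return x_adv.detach()
```

An input returned by such a procedure is misclassified as the target class,
yet its activations sit inside that class's box, so the monitor accepts it:
both the classifier and its safety net are fooled at once.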