Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

August 5, 2017

Overlap – Attacking Machine Learning Models

Filed under: Machine Learning,XML — Patrick Durusau @ 4:48 pm

Robust Physical-World Attacks on Machine Learning Models by Ivan Evtimov, et al.

Abstract:

Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated that such adversarial examples are not very effective in the physical world–they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this paper we propose a new attack algorithm–Robust Physical Perturbations (RP2)– that generates perturbations by taking images under different conditions into account. Our algorithm can create spatially-constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer. We show that adversarial examples generated by RP2 achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions. We physically realized and evaluated two attacks, one that causes a Stop sign to be misclassified as a Speed Limit sign in 100% of the testing conditions, and one that causes a Right Turn sign to be misclassified as either a Stop or Added Lane sign in 100% of the testing conditions.
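To make the attack concrete, here is a minimal sketch of the general idea rather than the authors' implementation: optimize a single perturbation against many photographs of the same sign taken under different conditions, optionally restricted to a sticker-shaped region. The classifier `model`, the image batch `images`, the `target` class index and the `mask` are all hypothetical stand-ins.

```python
# A minimal sketch (not the authors' code) of the idea behind RP2: optimize one
# perturbation over many images of the same sign captured under different
# conditions, so the misclassification survives changes in distance, angle and
# lighting. `model`, `images`, `target` and `mask` are hypothetical inputs.
import torch
import torch.nn.functional as F

def robust_perturbation(model, images, target, mask, steps=500, lr=0.01, lam=0.05):
    """Search for one perturbation that fools `model` on every image in `images`.

    `mask` confines the perturbation to a region of the sign (e.g. a
    graffiti-shaped patch), in the spirit of the paper's camouflage attacks.
    """
    delta = torch.zeros_like(images[0], requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    targets = torch.full((images.shape[0],), target, dtype=torch.long)

    for _ in range(steps):
        optimizer.zero_grad()
        adversarial = torch.clamp(images + mask * delta, 0.0, 1.0)
        logits = model(adversarial)
        # Push every view of the sign toward the target class while keeping the
        # perturbation small; the L1 term stands in for "inconspicuous".
        loss = F.cross_entropy(logits, targets) + lam * delta.abs().mean()
        loss.backward()
        optimizer.step()

    return (mask * delta).detach()
```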

I was struck by the image used for this paper in a tweet.

I recognized this as an “overlapping” markup problem before discovering the authors were attacking machine learning models. On overlapping markup, see: Towards the unification of formats for overlapping markup by Paolo Marinelli, Fabio Vitali, Stefano Zacchiroli, or more recently, It’s more than just overlap: Text As Graph – Refining our notion of what text really is—this time for sure! by Ronald Haentjens Dekker and David J. Birnbaum.

From the conclusion:


In this paper, we introduced Robust Physical Perturbations (RP2), an algorithm that generates robust, physically realizable adversarial perturbations. Previous algorithms assume that the inputs of DNNs can be modified digitally to achieve misclassification, but such an assumption is infeasible, as an attacker with control over DNN inputs can simply replace it with an input of his choice. Therefore, adversarial attack algorithms must apply perturbations physically, and in doing so, need to account for new challenges such as a changing viewpoint due to distances, camera angles, different lighting conditions, and occlusion of the sign. Furthermore, fabrication of a perturbation introduces a new source of error due to a limited color gamut in printers.

We use RP2 to create two types of perturbations: subtle perturbations, which are small, undetectable changes to the entire sign, and camouflage perturbations, which are visible perturbations in the shape of graffiti or art. When the Stop sign was overlayed with a print out, subtle perturbations fooled the classifier 100% of the time under different physical conditions. When only the perturbations were added to the sign, the classifier was fooled by camouflage graffiti and art perturbations 66.7% and 100% of the time respectively under different physical conditions. Finally, when an untargeted poster-printed camouflage perturbation was overlayed on a Right Turn sign, the classifier was fooled 100% of the time. In future work, we plan to test our algorithm further by varying some of the other conditions we did not consider in this paper, such as sign occlusion.
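The difference between the paper's "subtle" and "camouflage" attacks can be read as a choice of mask in the sketch above; the region chosen here is only illustrative.

```python
# Hypothetical usage of the sketch above: the two attack styles differ mainly
# in where the perturbation is allowed to live.
import torch

H, W = 224, 224
subtle_mask = torch.ones(3, H, W)        # whole sign: small changes everywhere
camouflage_mask = torch.zeros(3, H, W)
camouflage_mask[:, 150:200, 60:160] = 1  # sticker-shaped region only
# delta = robust_perturbation(model, images, target, camouflage_mask)
```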

Excellent work, but my question: Is the inability of the classifier to recognize overlapping images similar to the issues encountered with overlapping markup?

To be sure, overlapping markup is in part an artifact of unimaginative XML rules, since overlapping texts are far more common than non-overlapping ones, especially when talking about critical editions or even differing analyses of the same text.
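A toy illustration of why overlap fights the XML rules, mine rather than anything from the papers cited above: two analyses of the same text whose spans cross, so neither can contain the other and no single well-formed tree holds both. Standoff ranges over the raw text are one escape hatch.

```python
# Two "layers" of analysis over the same line; character offsets are half-open.
text = "Sing, O goddess, the anger of Achilles"

metrical = [("foot", 0, 16)]       # text[0:16] == "Sing, O goddess,"
syntactic = [("phrase", 6, 26)]    # text[6:26] == "O goddess, the anger"

# The spans (0, 16) and (6, 26) overlap, but neither contains the other, so a
# single XML document cannot nest <foot> and <phrase> around this text without
# resorting to milestones, fragmentation or similar workarounds.
assert not (0 <= 6 and 26 <= 16) and not (6 <= 0 and 16 <= 26)
```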

But beyond syntax, there is the subtlety of keeping separate “layers” or stacks of a text distinct while still tracking the relationships between two or more such stacks, when arbitrary additions or deletions can occur in any of them. Those additions and deletions must be accounted for across all layers/stacks.
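A rough sketch of that bookkeeping, in my own terms: if each layer points into the base text by character offset, an insertion in the text has to shift every affected span in every layer at once.

```python
def apply_insertion(layers, position, length):
    """Shift standoff spans in every layer after `length` characters are
    inserted into the base text at `position` (half-open offsets)."""
    updated = {}
    for name, spans in layers.items():
        shifted = []
        for label, start, end in spans:
            if start >= position:       # span begins after the edit: move it
                shifted.append((label, start + length, end + length))
            elif end > position:        # edit lands inside the span: stretch it
                shifted.append((label, start, end + length))
            else:                       # span ends before the edit: untouched
                shifted.append((label, start, end))
        updated[name] = shifted
    return updated

layers = {
    "metrical": [("foot", 0, 16)],
    "syntactic": [("phrase", 6, 26)],
}
# Inserting 3 characters at offset 10 must adjust both layers consistently.
print(apply_insertion(layers, 10, 3))
```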

I don’t have a solution to offer, but I pose the question of layers of recognition in the hope that machine learning models can capitalize on the lessons learned from a very similar problem with overlapping markup.
