Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

November 9, 2017

Is That a Turtle in Your Pocket or Are You Just Glad To See Me?

Filed under: Image Recognition, Machine Learning — Patrick Durusau @ 10:14 am

Apologies to Mae West for spoiling her famous line from Sextette:

Is that a gun in your pocket, or are you just glad to see me?

Seems appropriate since Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok have created a 3-D printed turtle that neural networks mistake for a rifle.

You can find the details in: Synthesizing Robust Adversarial Examples.

Abstract:

Neural network-based classifiers parallel or exceed human-level accuracy on many common tasks and are used in practical systems. Yet, neural networks are susceptible to adversarial examples, carefully perturbed inputs that cause networks to misbehave in arbitrarily chosen ways. When generated with standard methods, these examples do not consistently fool a classifier in the physical world due to viewpoint shifts, camera noise, and other natural transformations. Adversarial examples generated using standard techniques require complete control over direct input to the classifier, which is impossible in many real-world systems.

We introduce the first method for constructing real-world 3D objects that consistently fool a neural network across a wide distribution of angles and viewpoints. We present a general-purpose algorithm for generating adversarial examples that are robust across any chosen distribution of transformations. We demonstrate its application in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation. Finally, we apply the algorithm to produce arbitrary physical 3D-printed adversarial objects, demonstrating that our approach works end-to-end in the real world. Our results show that adversarial examples are a practical concern for real-world systems.
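The approach described in the abstract optimizes the adversarial input against a whole distribution of transformations rather than a single fixed image, so the misclassification survives rotation, scaling, and camera noise. Below is a minimal, hypothetical Python/PyTorch sketch of that idea; the model, target class, transformation distribution, and every parameter value are placeholders of my own, not the authors' code or settings.

# Sketch only: optimize a perturbation whose EXPECTED loss over random
# transformations is low, so the attack survives viewpoint and camera changes.
# Model, target class, and the transformation sampler are assumptions.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def random_transform(img):
    """Sample one transformation from the chosen distribution (placeholder choices)."""
    t = T.Compose([
        T.RandomAffine(degrees=30, translate=(0.1, 0.1), scale=(0.8, 1.2)),
    ])
    return torch.clamp(t(img) + 0.05 * torch.randn_like(img), 0, 1)

def robust_adversarial(model, img, target_class, steps=200, lr=0.01, eps=0.05):
    """Optimize a small perturbation delta against many sampled transformations."""
    delta = torch.zeros_like(img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Monte Carlo estimate of the expected loss over the transformation distribution
        loss = 0.0
        for _ in range(8):
            x = random_transform(torch.clamp(img + delta, 0, 1))
            logits = model(x.unsqueeze(0))
            loss = loss + F.cross_entropy(logits, torch.tensor([target_class]))
        loss = loss / 8
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small so the object still looks like a turtle
        delta.data.clamp_(-eps, eps)
    return torch.clamp(img + delta.detach(), 0, 1)

Run the same kind of loop against a textured 3-D model instead of a flat image and you get the paper's headline result: a printed turtle that keeps reading as a rifle from almost any angle.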

All in good fun until you remember that neural networks feed classification decisions to humans who make fire/no-fire decisions, and that soon such decisions will be made by autonomous systems. Classification errors such as turtle vs. rifle will have deadly results.

What are the stakes in your neural net classification system? How easily can it be fooled by adversaries?
