Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

December 12, 2014

Deep Neural Networks are Easily Fooled:…

Filed under: Deep Learning, Machine Learning, Neural Networks — Patrick Durusau @ 7:47 pm

Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images by Anh Nguyen, Jason Yosinski, and Jeff Clune.

Abstract:

Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects. Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
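To get a feel for the gradient-ascent technique the abstract mentions, here is a minimal sketch in PyTorch (my choice of framework, not the authors’): start from random noise and nudge the pixels to maximize a pretrained classifier’s confidence in an arbitrary class. The model, learning rate, step count, and class index are illustrative assumptions, and the paper’s evolutionary-algorithm approach is not shown.

```python
# Illustrative sketch only: push a noise image toward a high-confidence
# prediction for a chosen ImageNet class via gradient ascent on the input.
import torch
import torchvision.models as models

model = models.alexnet(pretrained=True).eval()  # any pretrained classifier would do
target_class = 291  # assumed to be "lion" in ImageNet-1k; purely for illustration

# Start from random noise shaped like an ImageNet input.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.SGD([image], lr=0.5)

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    # Maximize the target-class logit by minimizing its negative.
    loss = -logits[0, target_class]
    loss.backward()
    optimizer.step()

confidence = torch.softmax(model(image), dim=1)[0, target_class].item()
print(f"target-class confidence: {confidence:.4f}")
# The confidence often climbs toward 1.0 even though the image still looks
# like noise to a human observer, which is the paper's central point.
```

This is only one of the two search strategies the authors describe; their evolutionary-algorithm images are arguably the more striking ones.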

This is a great paper for weekend reading, even if computer vision isn’t your field, in part because the results were unexpected. Computer science is moving toward being an experimental science, at least in some situations.

Before you read the article, spend a few minutes thinking about how DNNs and human vision differ.

I haven’t run it to ground yet, but I wonder whether the authors have stumbled upon a way to deceive deep neural networks outside of computer vision applications. If so, does that suggest experiments that could identify ways to deceive other classification algorithms? And how would you detect such deception if it were employed? Still confident about your data processing results?

I first saw this in a tweet by Gregory Piatetsky.
