Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

December 27, 2014

A Common Logic to Seeing Cats and Cosmos

Filed under: Machine Learning — Patrick Durusau @ 5:40 pm

A Common Logic to Seeing Cats and Cosmos by Natalie Wolchover.

From the post:


There may be a universal logic to how physicists, computers and brains tease out important features from among other irrelevant bits of data.

When in 2012 a computer learned to recognize cats in YouTube videos and just last month another correctly captioned a photo of “a group of young people playing a game of Frisbee,” artificial intelligence researchers hailed yet more triumphs in “deep learning,” the wildly successful set of algorithms loosely modeled on the way brains grow sensitive to features of the real world simply through exposure.

Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress training computers to pick out salient features from other, irrelevant bits of data, researchers have never fully understood why the algorithms or biological learning work.

Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos.

The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called “renormalization,” which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables the artificial neural networks to categorize data as, say, “a cat” regardless of its color, size or posture in a given video.

“They actually wrote down on paper, with exact proofs, something that people only dreamed existed,” said Ilya Nemenman, a biophysicist at Emory University. “Extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”

As for our own remarkable knack for spotting a cat in the bushes, a familiar face in a crowd or indeed any object amid the swirl of color, texture and sound that surrounds us, strong similarities between deep learning and biological learning suggest that the brain may also employ a form of renormalization to make sense of the world.

“Maybe there is some universal logic to how you can pick out relevant features from data,” said Mehta. “I would say this is a hint that maybe something like that exists.”

The finding formalizes what Schwab, Mehta and others saw as a philosophical similarity between physicists’ techniques and the learning procedure behind object or speech recognition. Renormalization is “taking a really complicated system and distilling it down to the fundamental parts,” Schwab said. “And that’s what deep neural networks are trying to do as well. And what brains are trying to do.”
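The "distilling down to the fundamental parts" that Schwab describes has a textbook illustration in physics: block-spin coarse-graining, where each block of spins is replaced by a single representative spin. The sketch below is my own toy illustration of that idea on a 1D chain of ±1 spins, not code from the Mehta-Schwab paper (which works with restricted Boltzmann machines); names like `coarse_grain` are mine.

```python
def majority(block):
    """Collapse a block of +/-1 spins to a single +/-1 spin by majority rule."""
    return 1 if sum(block) > 0 else -1

def coarse_grain(spins, block_size=3):
    """One renormalization step: replace each block by its majority spin,
    discarding small-scale detail while keeping large-scale structure."""
    return [majority(spins[i:i + block_size])
            for i in range(0, len(spins), block_size)]

# A mostly-up chain with small-scale noise:
chain = [1, 1, -1,  1, -1, 1,  -1, -1, -1,  1, 1, 1]
step1 = coarse_grain(chain)   # [1, 1, -1, 1]
step2 = coarse_grain(step1)   # coarser still
```

Each step throws away fine-grained configuration while preserving the features that matter at larger scales, which is the same spirit in which a deep network's successive layers keep "cat" and discard pixel-level noise.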

If you weren’t already planning to learn or catch up on deep learning in 2015, this article should tip the balance. Not simply because deep learning appears to be “the” idea for 2015, but because you are likely to be called upon to respond to analysis and conclusions based on deep learning techniques.

Unlike Stephen Hawking, I don’t fear the rise of artificial intelligence. What I fear is the uncritical acceptance of machine learning results, whether artificial intelligence ever arrives or not.

Critical discussion of deep learning results and techniques is going to require people as informed as the advocates of deep learning on all sides. How can you oppose a policy justified by an algorithm that considers far more factors than any person could and that has no racial prejudice? How could it? It is simply an algorithm.

Calling a result or algorithm racist isn’t very scientific. Opposing the policies of tomorrow will require detailed analysis of both the data and the algorithms, leaving little or no doubt that a racist outcome was an intentional one.

Here’s a concrete example of how greater knowledge allows someone to deceive the general public while claiming to be completely open. In the Michael Brown case, prosecutor McCulloch claimed to have allowed everyone with knowledge of the case to testify. That is true, as far as it goes. What he failed to say was that every witness who supported the theory that Darren Wilson was guilty of murdering Michael Brown had their prior statements presented to the grand jury and was heavily cross-examined by the prosecutors. On the surface, fair; just beneath, extremely unfair. But you have to know the domain to see the unfairness.

The same is going to be the case when results of deep learning are presented. How much do you trust the person presenting the results? And the people they trusted with the data and analysis?
