Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

December 24, 2018

Intel Neural Compute Stick 2

Filed under: Neural Information Processing,Neural Networks — Patrick Durusau @ 3:20 pm

Intel Neural Compute Stick 2 (Mouser Electronics)

From the webpage:

Intel® Neural Compute Stick 2 is powered by the Intel® Movidius™ X VPU to deliver industry leading performance, wattage, and power. The NCS 2 supports OpenVINO™, a toolkit that accelerates solution development and streamlines deployment. The Neural Compute Stick 2 offers plug-and-play simplicity, support for common frameworks and out-of-the-box sample applications. Use any platform with a USB port to prototype and operate without cloud compute dependence. The Intel NCS 2 delivers 4 trillion operations per second with an 8X performance boost over previous generations.

At $99 (US) with a USB stick form factor, the Intel® Neural Compute Stick 2 makes a great gift any time of the year. It also offers the opportunity to test your hacking skills on the “out-of-the-box sample applications,” which are the ones you are most likely to see in the wild.
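If you do pick one up, the basic workflow is short. Here is a minimal Python sketch, assuming a recent OpenVINO release with the IECore inference API and a model already converted to OpenVINO’s IR format (the model.xml/model.bin names and the random input are placeholders, and exact API details vary by OpenVINO version):

```python
# Minimal sketch: run inference on the Neural Compute Stick 2 via OpenVINO.
# Assumes the IECore Python API and a model already converted to IR format;
# file names and the input tensor below are placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # MYRIAD = NCS 2

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))
shape = net.input_info[input_name].input_data.shape  # e.g. [1, 3, 224, 224]

image = np.random.rand(*shape).astype(np.float32)    # stand-in for a real image
result = exec_net.infer(inputs={input_name: image})
print(output_name, result[output_name].shape)
```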

Enjoy!

August 5, 2017

Neuroscience-Inspired Artificial Intelligence

Neuroscience-Inspired Artificial Intelligence by Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick.

Abstract:

The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.

Extremely rich article with nearly four (4) pages of citations.

Reading this paper closely and chasing the citations is a non-trivial task, but you will be prepared to understand and/or participate in the next big neuroscience/AI breakthrough.

Enjoy!

December 6, 2016

Four Experiments in Handwriting with a Neural Network

Four Experiments in Handwriting with a Neural Network by Shan Carter, David Ha, Ian Johnson, and Chris Olah.

While the handwriting experiments are compelling and entertaining, the authors have a more profound goal for this activity:


The black box reputation of machine learning models is well deserved, but we believe part of that reputation has been born from the programming context into which they have been locked into. The experience of having an easily inspectable model available in the same programming context as the interactive visualization environment (here, javascript) proved to be very productive for prototyping and exploring new ideas for this post.

As we are able to move them more and more into the same programming context that user interface work is done, we believe we will see richer modes of human-ai interactions flourish. This could have a marked impact on debugging and building models, for sure, but also in how the models are used. Machine learning research typically seeks to mimic and substitute humans, and increasingly it’s able to. What seems less explored is using machine learning to augment humans. This sort of complicated human-machine interaction is best explored when the full capabilities of the model are available in the user interface context.

Setting up a search alert for future work from these authors!

November 12, 2015

Why Neurons Have Thousands of Synapses! (Quick! Someone Call the EU Brain Project!)

Single Artificial Neuron Taught to Recognize Hundreds of Patterns.

From the post:

Artificial intelligence is a field in the midst of rapid, exciting change. That’s largely because of an improved understanding of how neural networks work and the creation of vast databases to help train them. The result is machines that have suddenly become better at things like face and object recognition, tasks that humans have always held the upper hand in (see “Teaching Machines to Understand Us”).

But there’s a puzzle at the heart of these breakthroughs. Although neural networks are ostensibly modeled on the way the human brain works, the artificial neurons they contain are nothing like the ones at work in our own wetware. Artificial neurons, for example, generally have just a handful of synapses and entirely lack the short, branched nerve extensions known as dendrites and the thousands of synapses that form along them. Indeed, nobody really knows why real neurons have so many synapses.

Today, that changes thanks to the work of Jeff Hawkins and Subutai Ahmad at Numenta, a Silicon Valley startup focused on understanding and exploiting the principles behind biological information processing. The breakthrough these guys have made is to come up with a new theory that finally explains the role of the vast number of synapses in real neurons and to create a model based on this theory that reproduces many of the intelligent behaviors of real neurons.

A very enjoyable and accessible summary of a paper on the cutting edge of neuroscience!

It is relevant to another concern, one that I will be covering in the near future, but the post concludes with:


One final point is that this new thinking does not come from an academic environment but from a Silicon Valley startup. This company is the brain child of Jeff Hawkins, an entrepreneur, inventor and neuroscientist. Hawkins invented the Palm Pilot in the 1990s and has since turned his attention to neuroscience full-time.

That’s an unusual combination of expertise but one that makes it highly likely that we will see these new artificial neurons at work on real world problems in the not too distant future. Incidentally, Hawkins and Ahmad call their new toys Hierarchical Temporal Memory neurons or HTM neurons. Expect to hear a lot more about them.

If you want all the details, see:

Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex by Jeff Hawkins, Subutai Ahmad.

Abstract:

Neocortical neurons have thousands of excitatory synapses. It is a mystery how neurons integrate the input from so many synapses and what kind of large-scale network behavior this enables. It has been previously proposed that non-linear properties of dendrites enable neurons to recognize multiple patterns. In this paper we extend this idea by showing that a neuron with several thousand synapses arranged along active dendrites can learn to accurately and robustly recognize hundreds of unique patterns of cellular activity, even in the presence of large amounts of noise and pattern variation. We then propose a neuron model where some of the patterns recognized by a neuron lead to action potentials and define the classic receptive field of the neuron, whereas the majority of the patterns recognized by a neuron act as predictions by slightly depolarizing the neuron without immediately generating an action potential. We then present a network model based on neurons with these properties and show that the network learns a robust model of time-based sequences. Given the similarity of excitatory neurons throughout the neocortex and the importance of sequence memory in inference and behavior, we propose that this form of sequence memory is a universal property of neocortical tissue. We further propose that cellular layers in the neocortex implement variations of the same sequence memory algorithm to achieve different aspects of inference and behavior. The neuron and network models we introduce are robust over a wide range of parameters as long as the network uses a sparse distributed code of cellular activations. The sequence capacity of the network scales linearly with the number of synapses on each neuron. Thus neurons need thousands of synapses to learn the many temporal patterns in sensory stimuli and motor sequences.

BTW, did I mention the full source code is available at: https://github.com/numenta/nupic?
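For a rough feel of why so many synapses matter, here is a toy Python sketch (mine, not Numenta’s) of the core mechanism: a dendritic segment stores a sparse set of synapses and fires when enough of them see activity, which keeps recognition robust to noisy or partial patterns while random patterns almost never trigger it:

```python
# Toy sketch (not Numenta's code): a dendritic segment recognizes a sparse
# pattern when the overlap between its synapses and the currently active
# cells exceeds a threshold, so recognition tolerates noise and dropout.
import random

N_CELLS = 2048       # size of the sparse distributed representation
PATTERN_SIZE = 40    # active cells per pattern (~2% sparsity)
THRESHOLD = 12       # active synapses needed for the segment to fire

def random_pattern():
    return set(random.sample(range(N_CELLS), PATTERN_SIZE))

def segment_fires(synapses, active_cells):
    return len(synapses & active_cells) >= THRESHOLD

pattern = random_pattern()
segment = set(random.sample(sorted(pattern), 20))  # synapses sampled from the pattern

noisy = set(random.sample(sorted(pattern), 32)) | random_pattern()  # dropout plus noise
unrelated = random_pattern()

print(segment_fires(segment, pattern))    # True: the learned pattern is recognized
print(segment_fires(segment, noisy))      # almost always True: robust to noise
print(segment_fires(segment, unrelated))  # almost always False: chance overlap is tiny
```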

Coming from a startup, this discovery doesn’t come with a decade of support for travel, meals, lodging, support staff, publications, administrative overhead, etc., for a cast of hundreds across the EU. But then, that decade would not have resulted in such a fundamental discovery in any event.

Is that a hint about the appropriate vehicle for advancing fundamental discoveries in science?

April 7, 2015

Exploring the Unknown Frontier of the Brain

Filed under: Neural Information Processing,Neural Networks,Neuroinformatics,Science — Patrick Durusau @ 1:33 pm

Exploring the Unknown Frontier of the Brain by James L. Olds.

From the post:

To a large degree, your brain is what makes you… you. It controls your thinking, problem solving and voluntary behaviors. At the same time, your brain helps regulate critical aspects of your physiology, such as your heart rate and breathing.

And yet your brain — a nonstop multitasking marvel — runs on only about 20 watts of energy, the same wattage as an energy-saving light bulb.

Still, for the most part, the brain remains an unknown frontier. Neuroscientists don’t yet fully understand how information is processed by the brain of a worm that has several hundred neurons, let alone by the brain of a human that has 80 billion to 100 billion neurons. The chain of events in the brain that generates a thought, behavior or physiological response remains mysterious.

Building on these and other recent innovations, President Barack Obama launched the Brain Research through Advancing Innovative Neurotechnologies Initiative (BRAIN Initiative) in April 2013. Federally funded in 2015 at $200 million, the initiative is a public-private research effort to revolutionize researchers’ understanding of the brain.

James reviews currently funded efforts under the BRAIN Initiative, each of which is pursuing possible ways to explore, model and understand brain activity. Exploration in its purest sense. The researchers don’t know what they will find.

I suspect the leap from not understanding the 302 neurons of a worm to understanding the 80 to 100 billion neurons in each person isn’t going to happen anytime soon. Just as well; think of all the papers, conferences and publications along the way!

March 15, 2015

Artificial Neurons and Single-Layer Neural Networks…

Artificial Neurons and Single-Layer Neural Networks – How Machine Learning Algorithms Work Part 1 by Sebastian Raschka.

From the post:

This article offers a brief glimpse of the history and basic concepts of machine learning. We will take a look at the first algorithmically described neural network and the gradient descent algorithm in context of adaptive linear neurons, which will not only introduce the principles of machine learning but also serve as the basis for modern multilayer neural networks in future articles.

Machine learning is one of the hottest and most exciting fields in the modern age of technology. Thanks to machine learning, we enjoy robust email spam filters, convenient text and voice recognition, reliable web search engines, challenging chess players, and, hopefully soon, safe and efficient self-driving cars.

Without any doubt, machine learning has become a big and popular field, and sometimes it may be challenging to see the (random) forest for the (decision) trees. Thus, I thought that it might be worthwhile to explore different machine learning algorithms in more detail by not only discussing the theory but also by implementing them step by step.
To briefly summarize what machine learning is all about: “[Machine learning is the] field of study that gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959). Machine learning is about the development and use of algorithms that can recognize patterns in data in order to make decisions based on statistics, probability theory, combinatorics, and optimization.

The first article in this series will introduce perceptrons and the adaline (ADAptive LINear NEuron), which fall into the category of single-layer neural networks. The perceptron is not only the first algorithmically described learning algorithm [1], but it is also very intuitive, easy to implement, and a good entry point to the (re-discovered) modern state-of-the-art machine learning algorithms: Artificial neural networks (or “deep learning” if you like). As we will see later, the adaline is a consequent improvement of the perceptron algorithm and offers a good opportunity to learn about a popular optimization algorithm in machine learning: gradient descent.

Starting point for what appears to be a great introduction to neural networks.
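If you want to see the perceptron learning rule in action before (or while) reading Part 1, here is a minimal sketch along the lines Sebastian describes; the toy data and parameter values are mine, not his:

```python
# Minimal perceptron sketch: learn a linearly separable toy problem with the
# classic perceptron rule (weights nudged toward each misclassified point).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two linearly separable clusters, labels in {-1, +1}.
X = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
               rng.normal(loc=+2.0, size=(50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w = np.zeros(2)
b = 0.0
eta = 0.1  # learning rate

for epoch in range(10):
    errors = 0
    for xi, target in zip(X, y):
        pred = 1 if (xi @ w + b) >= 0 else -1
        if pred != target:          # update only on mistakes
            w += eta * target * xi
            b += eta * target
            errors += 1
    if errors == 0:                 # converged: every point classified correctly
        break

print("weights:", w, "bias:", b, "epochs used:", epoch + 1)
```

The adaline that follows in Sebastian’s series swaps this mistake-driven update for gradient descent on a continuous cost function, which is the stepping stone to modern multilayer networks.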

While you are at Sebastian’s blog, it is very much worthwhile to look around. You will be pleasantly surprised.

November 30, 2014

BRAIN WORKSHOP [Dec. 3-5, 2014]

Filed under: Artificial Intelligence,Neural Information Processing,Neuroinformatics — Patrick Durusau @ 10:18 am

BRAIN WORKSHOP: Workshop on the Research Interfaces between Brain Science and Computer Science

From the post:

brain conference logo

Computer science and brain science share deep intellectual roots – after all, computer science sprang out of Alan Turing’s musings about the brain in the spring of 1936.  Today, understanding the structure and function of the human brain is one of the greatest scientific challenges of our generation. Decades of study and continued progress in our knowledge of neural function and brain architecture have led to important advances in brain science, but a comprehensive understanding of the brain still lies well beyond the horizon.  How might computer science and brain science benefit from one another? Computer science, in addition to staggering advances in its core mission, has been instrumental in scientific progress in physical and social sciences. Yet among all scientific objects of study, the brain seems by far the most blatantly computational in nature, and thus presumably most conducive to algorithmic insights, and more apt to inspire computational research. Models of the brain are naturally thought of as graphs and networks; machine learning seeks inspiration in human learning; neuromorphic computing models attempt to use biological insight to solve complex problems. Conversely, the study of the brain depends crucially on interpretation of data: imaging data that reveals structure, activity data that relates to the function of individual or groups of neurons, and behavioral data that embodies the complex interaction of all of these elements.

 

This two-day workshop, sponsored by the Computing Community Consortium (CCC) and National Science Foundation (NSF), brings together brain researchers and computer scientists for a scientific dialogue aimed at exposing new opportunities for joint research in the many exciting facets, established and new, of the interface between the two fields.   The workshop will be aimed at questions such as these:

  • What are the current barriers to mapping the architecture of the brain, and how can they be overcome?
  • What scale of data suffices for the discovery of “neural motifs,” and what might they look like?
  • What would be required to truly have a “neuron in-silico,” and how far are we from that?
  • How can we connect models across the various scales (biophysics – neural function – cortical functional units – cognition)?
  • Which computational principles of brain function can be employed to solve computational problems? What sort of platforms would support such work?
  • What advances are needed in hardware and software to enable true brain-computer interfaces? What is the right “neural language” for communicating with the brain?
  • How would one be able to test equivalence between a computational model and the modeled brain subsystem?
  • Suppose we could map the network of billions of nodes and trillions of connections that is the brain: how would we infer structure?
  • Can we create open-science platforms enabling computational science on enormous amounts of heterogeneous brain data (as has happened in genomics)?
  • Is there a productive algorithmic theory of the brain, which can inform our search for answers to such questions?

Plenary addresses to be live-streamed at: http://www.cra.org/ccc/visioning/visioning-activities/brain

December 4, 2014 (EST):

8:40 AM Plenary: Jack Gallant, UC Berkeley, A Big Data Approach to Functional Characterization of the Mammalian Brain

2:00 PM Plenary: Aude Oliva, MIT, Time, Space and Computation: Converging Human Neuroscience and Computer Science

7:30 PM Plenary: Leslie Valiant, Harvard, Can Models of Computation in Neuroscience be Experimentally Validated?

December 5, 2014 (EST)

10:05 AM Plenary: Terrence Sejnowski, Salk Institute, Theory, Computation, Modeling and Statistics: Connecting the Dots from the BRAIN Initiative

Mark your calendars today!

April 9, 2014

clortex

Filed under: Clojure,Neural Information Processing,Neural Networks,Neuroinformatics — Patrick Durusau @ 7:24 pm

clortex – Clojure Library for Jeff Hawkins’ Hierarchical Temporal Memory

From the webpage:

Hierarchical Temporal Memory (HTM) is a theory of the neocortex developed by Jeff Hawkins in the early-mid 2000’s. HTM explains the working of the neocortex as a hierarchy of regions, each of which performs a similar algorithm. The algorithm performed in each region is known in the theory as the Cortical Learning Algorithm (CLA).

Clortex is a reimagining and reimplementation of the Numenta Platform for Intelligent Computing (NuPIC), which is also an Open Source project released by Grok Solutions (formerly Numenta), the company founded by Jeff to make his theories a practical and commercial reality. NuPIC is a mature, excellent and useful software platform, with a vibrant community, so please join us at Numenta.org.

Warning: pre-alpha software. This project is only beginning, and everything you see here will eventually be thrown away as we develop better ways to do things. The design and the APIs are subject to drastic change without a moment’s notice.

Clortex is Open Source software, released under the GPL Version 3 (see the end of the README). You are free to use, copy, modify, and redistribute this software according to the terms of that license. For commercial use of the algorithms used in Clortex, please contact Grok Solutions, where they’ll be happy to discuss commercial licensing.

An interesting project, both for its learning theory and for the requirements it imposes on the software implementing that theory.

The first two requirements capture the main points:

2.1 Directly Analogous to HTM/CLA Theory

In order to be a platform for demonstration, exploration and experimentation of Jeff Hawkins’ theories, the system must at all levels of relevant detail match the theory directly (ie 1:1). Any optimisations introduced may only occur following an effectively mathematical proof that this correspondence is maintained under the change.

2.2 Transparently Understandable Implementation in Source Code

All source code must at all times be readable by a non-developer. This can only be achieved if a person familiar with the theory and the models (but not a trained programmer) can read any part of the source code and understand precisely what it is doing and how it is implementing the algorithms.

This requirement is again deliberately very stringent, and requires the utmost discipline on the part of the developers of the software. Again, there are several benefits to this requirement.

Firstly, the extreme constraint forces the programmer to work in the model of the domain rather than in the model of the software. This constraint, by being adhered to over the lifecycle of the project, will ensure that the only complexity introduced in the software comes solely from the domain. Any other complexity introduced by the design or programming is known as incidental complexity and is the cause of most problems in software.

Secondly, this constraint provides a mechanism for verifying the first requirement. Any expert in the theory must be able to inspect the code for an aspect of the system and verify that it is transparently analogous to the theory.

Despite my misgivings about choosing the domain in which you stand, I found it interesting that the project recognizes that the domain of its theory and the domain of the software implementing that theory are separate and distinct.

How closely two distinct domains can be mapped one to the other should be an interesting exercise.

BTW, some other resources you will find helpful:

NuPIC: Numenta Platform for Intelligent Computing

Cortical Learning Algorithm (CLA) white paper in eight languages.

Real Machine Intelligence with Clortex and NuPIC (book)

December 8, 2013

Advances in Neural Information Processing Systems 26

Advances in Neural Information Processing Systems 26

The NIPS 2013 conference ended today.

All of the NIPS 2013 papers were posted today.

I count three hundred and sixty (360) papers.

From the NIPS Foundation homepage:

The Foundation: The Neural Information Processing Systems (NIPS) Foundation is a non-profit corporation whose purpose is to foster the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects. Neural information processing is a field which benefits from a combined view of biological, physical, mathematical, and computational sciences.

The primary focus of the NIPS Foundation is the presentation of a continuing series of professional meetings known as the Neural Information Processing Systems Conference, held over the years at various locations in the United States, Canada and Spain.

Enjoy the proceedings collection!

I first saw this in a tweet by Benoit Maison.
