Archive for the ‘Neuroinformatics’ Category

Neuroscience-Inspired Artificial Intelligence

Saturday, August 5th, 2017

Neuroscience-Inspired Artificial Intelligence by Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick.


The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.

Extremely rich article with nearly four (4) pages of citations.

Reading this paper closely and chasing the citations is a non-trivial task, but it will prepare you to understand and/or participate in the next big neuroscience/AI breakthrough.


More Bad News For EC Brain Project Wood Pigeons

Sunday, February 14th, 2016

I heard the story of how the magpie tried to instruct other birds, particularly the wood pigeon, on how to build nests in a different form but the lesson was much the same.

The EC Brain project reminds me of the wood pigeon hearing “…take two sticks…” and running off to build its nest.

With no understanding of the human brain, the EC set out to build one, on a ten year deadline.

Byron Spice’s report in: Project Aims to Reverse-engineer Brain Algorithms, Make Computers Learn Like Humans casts further doubt upon that project:

Carnegie Mellon University is embarking on a five-year, $12 million research effort to reverse-engineer the brain, seeking to unlock the secrets of neural circuitry and the brain’s learning methods. Researchers will use these insights to make computers think more like humans.

The research project, led by Tai Sing Lee, professor in the Computer Science Department and the Center for the Neural Basis of Cognition (CNBC), is funded by the Intelligence Advanced Research Projects Activity (IARPA) through its Machine Intelligence from Cortical Networks (MICrONS) research program. MICrONS is advancing President Barack Obama’s BRAIN Initiative to revolutionize the understanding of the human brain.

“MICrONS is similar in design and scope to the Human Genome Project, which first sequenced and mapped all human genes,” Lee said. “Its impact will likely be long-lasting and promises to be a game changer in neuroscience and artificial intelligence.”

Artificial neural nets process information in one direction, from input nodes to output nodes. But the brain likely works in quite a different way. Neurons in the brain are highly interconnected, suggesting possible feedback loops at each processing step. What these connections are doing computationally is a mystery; solving that mystery could enable the design of more capable neural nets.
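The feedforward-versus-feedback distinction can be made concrete with a toy sketch (layer sizes and random weights are invented for illustration; nothing here models actual cortical circuitry). A feedforward layer computes its output once; a recurrent layer feeds its own output back into its input at each step:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                  # input signal
W_in = rng.standard_normal((8, 4))          # input -> hidden weights
W_rec = rng.standard_normal((8, 8)) * 0.1   # hidden -> hidden feedback weights

# Feedforward: information flows once, input to output
h_ff = np.tanh(W_in @ x)

# Recurrent: hidden activity feeds back into the same layer, so
# later activity depends on earlier activity at the same stage
h = np.zeros(8)
for _ in range(5):
    h = np.tanh(W_in @ x + W_rec @ h)
```

What those loops compute, and whether they help, is exactly the open question the quoted passage raises.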

My goodness! Unknown loops in algorithms?

The Carnegie Mellon project is exploring potential algorithms, not trying to engineer the unknown.

If the EC had titled its project the Graduate Assistant and Hospitality Industry Support Project, one could object to the use of funds for travel junkets but it would otherwise be intellectually honest.

Exploring the Unknown Frontier of the Brain

Tuesday, April 7th, 2015

Exploring the Unknown Frontier of the Brain by James L. Olds.

From the post:

To a large degree, your brain is what makes you… you. It controls your thinking, problem solving and voluntary behaviors. At the same time, your brain helps regulate critical aspects of your physiology, such as your heart rate and breathing.

And yet your brain — a nonstop multitasking marvel — runs on only about 20 watts of energy, the same wattage as an energy-saving light bulb.

Still, for the most part, the brain remains an unknown frontier. Neuroscientists don’t yet fully understand how information is processed by the brain of a worm that has several hundred neurons, let alone by the brain of a human that has 80 billion to 100 billion neurons. The chain of events in the brain that generates a thought, behavior or physiological response remains mysterious.

Building on these and other recent innovations, President Barack Obama launched the Brain Research through Advancing Innovative Neurotechnologies Initiative (BRAIN Initiative) in April 2013. Federally funded in 2015 at $200 million, the initiative is a public-private research effort to revolutionize researchers’ understanding of the brain.

James reviews currently funded efforts under the BRAIN Initiative, each of which is pursuing possible ways to explore, model and understand brain activity. Exploration in its purest sense. The researchers don’t know what they will find.

I suspect the leap from not understanding the 302 neurons in a worm to understanding the 80 to 100 billion neurons in each person isn’t going to happen anytime soon. Just as well: think of all the papers, conferences and publications along the way!

Artificial Neurons and Single-Layer Neural Networks…

Sunday, March 15th, 2015

Artificial Neurons and Single-Layer Neural Networks – How Machine Learning Algorithms Work Part 1 by Sebastian Raschka.

From the post:

This article offers a brief glimpse of the history and basic concepts of machine learning. We will take a look at the first algorithmically described neural network and the gradient descent algorithm in context of adaptive linear neurons, which will not only introduce the principles of machine learning but also serve as the basis for modern multilayer neural networks in future articles.

Machine learning is one of the hottest and most exciting fields in the modern age of technology. Thanks to machine learning, we enjoy robust email spam filters, convenient text and voice recognition, reliable web search engines, challenging chess players, and, hopefully soon, safe and efficient self-driving cars.

Without any doubt, machine learning has become a big and popular field, and sometimes it may be challenging to see the (random) forest for the (decision) trees. Thus, I thought that it might be worthwhile to explore different machine learning algorithms in more detail by not only discussing the theory but also by implementing them step by step.
To briefly summarize what machine learning is all about: “[Machine learning is the] field of study that gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959). Machine learning is about the development and use of algorithms that can recognize patterns in data in order to make decisions based on statistics, probability theory, combinatorics, and optimization.

The first article in this series will introduce perceptrons and the adaline (ADAptive LINear NEuron), which fall into the category of single-layer neural networks. The perceptron is not only the first algorithmically described learning algorithm [1], but it is also very intuitive, easy to implement, and a good entry point to the (re-discovered) modern state-of-the-art machine learning algorithms: Artificial neural networks (or “deep learning” if you like). As we will see later, the adaline is a consequent improvement of the perceptron algorithm and offers a good opportunity to learn about a popular optimization algorithm in machine learning: gradient descent.
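The perceptron the article introduces can be sketched in a few lines. This is my own toy implementation of the classic Rosenblatt update rule, not Sebastian's code; the dataset and learning rate are arbitrary:

```python
import numpy as np

class Perceptron:
    """Single-layer perceptron with a threshold activation."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return 1 if np.dot(self.w, x) + self.b >= 0 else 0

    def fit(self, X, y, epochs=10):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                err = yi - self.predict(xi)   # 0 if correct, +/-1 if not
                self.w += self.lr * err * xi  # w <- w + lr * (y - y_hat) * x
                self.b += self.lr * err

# Learn logical AND, which is linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
p = Perceptron(2)
p.fit(X, y)
print([p.predict(xi) for xi in X])  # [0, 0, 0, 1]
```

The adaline the article pairs with it replaces the hard threshold with a linear output during training, which is what makes gradient descent applicable.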

Starting point for what appears to be a great introduction to neural networks.

While you are at Sebastian’s blog, it is very much worthwhile to look around. You will be pleasantly surprised.

BRAIN WORKSHOP [Dec. 3-5, 2014]

Sunday, November 30th, 2014

BRAIN WORKSHOP: Workshop on the Research Interfaces between Brain Science and Computer Science

From the post:

[image: brain conference logo]

Computer science and brain science share deep intellectual roots – after all, computer science sprang out of Alan Turing’s musings about the brain in the spring of 1936. Today, understanding the structure and function of the human brain is one of the greatest scientific challenges of our generation. Decades of study and continued progress in our knowledge of neural function and brain architecture have led to important advances in brain science, but a comprehensive understanding of the brain still lies well beyond the horizon. How might computer science and brain science benefit from one another? Computer science, in addition to staggering advances in its core mission, has been instrumental in scientific progress in physical and social sciences. Yet among all scientific objects of study, the brain seems by far the most blatantly computational in nature, and thus presumably most conducive to algorithmic insights, and more apt to inspire computational research. Models of the brain are naturally thought of as graphs and networks; machine learning seeks inspiration in human learning; neuromorphic computing models attempt to use biological insight to solve complex problems. Conversely, the study of the brain depends crucially on interpretation of data: imaging data that reveals structure, activity data that relates to the function of individual or groups of neurons, and behavioral data that embodies the complex interaction of all of these elements.


This two-day workshop, sponsored by the Computing Community Consortium (CCC) and National Science Foundation (NSF), brings together brain researchers and computer scientists for a scientific dialogue aimed at exposing new opportunities for joint research in the many exciting facets, established and new, of the interface between the two fields.   The workshop will be aimed at questions such as these:

  • What are the current barriers to mapping the architecture of the brain, and how can they be overcome?
  • What scale of data suffices for the discovery of “neural motifs,” and what might they look like?
  • What would be required to truly have a “neuron in-silico,” and how far are we from that?
  • How can we connect models across the various scales (biophysics – neural function – cortical functional units – cognition)?
  • Which computational principles of brain function can be employed to solve computational problems? What sort of platforms would support such work?
  • What advances are needed in hardware and software to enable true brain-computer interfaces? What is the right “neural language” for communicating with the brain?
  • How would one be able to test equivalence between a computational model and the modeled brain subsystem?
  • Suppose we could map the network of billions of nodes and trillions of connections that is the brain; how would we infer structure?
  • Can we create open-science platforms enabling computational science on enormous amounts of heterogeneous brain data (as has happened in genomics)?
  • Is there a productive algorithmic theory of the brain, which can inform our search for answers to such questions?

Plenary addresses to be live-streamed at:

December 4, 2014 (EST):

8:40 AM Plenary: Jack Gallant, UC Berkeley, A Big Data Approach to Functional Characterization of the Mammalian Brain

2:00 PM Plenary: Aude Oliva, MIT, Time, Space and Computation: Converging Human Neuroscience and Computer Science

7:30 PM Plenary: Leslie Valiant, Harvard, Can Models of Computation in Neuroscience be Experimentally Validated?

December 5, 2014 (EST)

10:05 AM Plenary: Terrence Sejnowski, Salk Institute, Theory, Computation, Modeling and Statistics: Connecting the Dots from the BRAIN Initiative

Mark your calendars today!


clortex – Clojure Library for Jeff Hawkins’ Hierarchical Temporal Memory

Wednesday, April 9th, 2014

From the webpage:

Hierarchical Temporal Memory (HTM) is a theory of the neocortex developed by Jeff Hawkins in the early-mid 2000’s. HTM explains the working of the neocortex as a hierarchy of regions, each of which performs a similar algorithm. The algorithm performed in each region is known in the theory as the Cortical Learning Algorithm (CLA).

Clortex is a reimagining and reimplementation of the Numenta Platform for Intelligent Computing (NuPIC), which is also an Open Source project released by Grok Solutions (formerly Numenta), the company founded by Jeff to make his theories a practical and commercial reality. NuPIC is a mature, excellent and useful software platform, with a vibrant community, so please join us at

Warning: pre-alpha software. This project is only beginning, and everything you see here will eventually be thrown away as we develop better ways to do things. The design and the APIs are subject to drastic change without a moment’s notice.

Clortex is Open Source software, released under the GPL Version 3 (see the end of the README). You are free to use, copy, modify, and redistribute this software according to the terms of that license. For commercial use of the algorithms used in Clortex, please contact Grok Solutions, where they’ll be happy to discuss commercial licensing.

An interesting project both in terms of learning theory but also for the requirements for the software implementing the theory.

The first two requirements capture the main points:

2.1 Directly Analogous to HTM/CLA Theory

In order to be a platform for demonstration, exploration and experimentation of Jeff Hawkins’ theories, the system must at all levels of relevant detail match the theory directly (ie 1:1). Any optimisations introduced may only occur following an effectively mathematical proof that this correspondence is maintained under the change.

2.2 Transparently Understandable Implementation in Source Code

All source code must at all times be readable by a non-developer. This can only be achieved if a person familiar with the theory and the models (but not a trained programmer) can read any part of the source code and understand precisely what it is doing and how it is implementing the algorithms.

This requirement is again deliberately very stringent, and requires the utmost discipline on the part of the developers of the software. Again, there are several benefits to this requirement.

Firstly, the extreme constraint forces the programmer to work in the model of the domain rather than in the model of the software. This constraint, by being adhered to over the lifecycle of the project, will ensure that the only complexity introduced in the software comes solely from the domain. Any other complexity introduced by the design or programming is known as incidental complexity and is the cause of most problems in software.

Secondly, this constraint provides a mechanism for verifying the first requirement. Any expert in the theory must be able to inspect the code for an aspect of the system and verify that it is transparently analogous to the theory.

Despite my misgivings about choosing the domain in which you stand, I found it interesting the project recognizes the domain of its theory and the domain of software to implement that theory are separate and distinct.

How closely two distinct domains can be mapped one to the other should be an interesting exercise.

BTW, some other resources you will find helpful:

NuPIC – Numenta Platform for Intelligent Computing

Cortical Learning Algorithm (CLA) white paper in eight languages.

Real Machine Intelligence with Clortex and NuPIC (book)

Advances in Neural Information Processing Systems (NIPS)

Sunday, April 7th, 2013

Advances in Neural Information Processing Systems (NIPS)

From the homepage:

The Neural Information Processing Systems (NIPS) Foundation is a non-profit corporation whose purpose is to foster the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects. Neural information processing is a field which benefits from a combined view of biological, physical, mathematical, and computational sciences.

Links to videos from NIPS 2012 meetings are featured on the homepage. The topics are as wide ranging as the foundation’s description.

A tweet from Chris Diehl, wondering what to do with “old hardbound NIPS proceedings (NIPS 11)” led me to: Advances in Neural Information Processing Systems (NIPS) [Online Papers], which has the papers from 1987 to 2012 by volume and a search interface to the same.

Quite a remarkable collection just from a casual skim of some of the volumes.

Unless you need to fill book shelf space, suggest you bookmark the NIPS Online Papers.

Neuroscience Information Framework (NIF)

Saturday, December 15th, 2012

Neuroscience Information Framework (NIF)

From the about page:

The Neuroscience Information Framework is a dynamic inventory of Web-based neuroscience resources: data, materials, and tools accessible via any computer connected to the Internet. An initiative of the NIH Blueprint for Neuroscience Research, NIF advances neuroscience research by enabling discovery and access to public research data and tools worldwide through an open source, networked environment.

Example of a subject specific information resource that provides much deeper coverage than possible with Google, for example.

If you aren’t trying to index everything, you can outperform more general search solutions.

Streaming Analytics: with sparse distributed representations

Monday, May 28th, 2012

Streaming Analytics: with sparse distributed representations by Jeff Hawkins.


Sparse distributed representations appear to be the means by which brains encode information. They have several advantageous properties including the ability to encode semantic meaning. We have created a distributed memory system for learning sequences of sparse distributed representations. In addition we have created a means of encoding structured and unstructured data into sparse distributed representations. The resulting memory system learns in an on-line fashion making it suitable for high velocity data streams. We are currently applying it to commercially valuable data streams for prediction, classification, and anomaly detection. In this talk I will describe this distributed memory system and illustrate how it can be used to build models and make predictions from data streams.


Looking forward to learning more about “sparse distributed representation (SDR).”

Not certain about Jeff’s claim that matching across SDRs = semantic similarity.

Design of the SDR determines the meaning of each bit and consequently of matching.

Which feeds back into the encoders that produce the SDRs.
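To make that point concrete, here is a toy sketch of SDR matching as overlap counting (the 2048-bit/40-active sizes follow common HTM conventions; the "encoder" here is just random sampling, so any overlap is coincidence rather than semantics, which is exactly the point about encoder design):

```python
import random

def make_sdr(n_bits=2048, n_active=40, seed=None):
    """A sparse distributed representation as a set of active bit indices."""
    rng = random.Random(seed)
    return frozenset(rng.sample(range(n_bits), n_active))

def overlap(a, b):
    """Matching in SDR terms: the count of shared active bits."""
    return len(a & b)

a = make_sdr(seed=1)
b = make_sdr(seed=2)
print(overlap(a, a))  # 40: identical SDRs overlap fully
print(overlap(a, b))  # near zero for unrelated random SDRs
```

Overlap only tracks semantic similarity if the encoder deliberately places related inputs on shared bits; the match operation itself is agnostic.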

Other resources:

The core paper: Hierarchical Temporal Memory including HTM Cortical Learning Algorithms. Check the FAQ link if you need the paper in Chinese, Japanese, Korean, Portuguese, Russian, or Spanish. (unverified translations)

Grok – Frequently Asked Questions

A very good FAQ that goes a long way to explaining the capabilities and limitations (currently) of Grok. “Unstructured text” for example isn’t appropriate input into Grok.

Jeff Hawkins and Sandra Blakeslee co-authored On Intelligence in 2004. The FAQ describes the current work as an extension of “On Intelligence.”

BTW, if you think you have heard the name Jeff Hawkins before, you have. Inventor of the Palm Pilot among other things.

Structural Abstractions in Brains and Graphs

Wednesday, May 9th, 2012

Structural Abstractions in Brains and Graphs.

Marko Rodriguez compares the brain to a graph saying (in part):

A graph database is a software system that persists and represents data as a collection of vertices (i.e. nodes, dots) connected to one another by a collection of edges (i.e. links, lines). These databases are optimized for executing a type of process known as a graph traversal. At various levels of abstraction, both the structure and function of a graph yield a striking similarity to neural systems such as the human brain. It is posited that as graph systems scale to encompass more heterogenous data, a multi-level structural understanding can help facilitate the study of graphs and the engineering of graph systems. Finally, neuroscience may foster an appreciation and understanding of the various structural abstractions that exist within the graph.

It is a very suggestive post for thinking about graphs and I commend it to you for reading, close reading.
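For readers new to the terminology, the "graph traversal" the post centers on can be sketched in a few lines (toy graph and plain breadth-first search; real graph databases optimize far richer traversals than this):

```python
from collections import deque

# A toy graph as an adjacency list: vertex -> neighbors along outgoing edges
graph = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d"],
    "d": [],
}

def traverse(graph, start):
    """Breadth-first traversal: visit each reachable vertex once."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

print(traverse(graph, "a"))  # ['a', 'b', 'c', 'd']
```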

Picking the Connectome Data Lock

Tuesday, May 1st, 2012

Picking the Connectome Data Lock by Nicole Hemsoth

From the post:

Back in 2005, researchers at Indiana University and Lausanne University simultaneously (yet independently) spawned a concept and pet term that would become the hot topic in neuroscience for the next several years—connectomics.

The concept itself isn’t necessarily new, even though the use of “connectomics” in popular science circles is relatively so.


A hybrid between the study of genomics (the biological blueprint) and neural networks (the “connect”) this term quickly caught on, including with large organizations like the National Institutes of Health (NIH) and its Human Connectome Project.

For instance, the NIH is in the midst of a five-year effort (starting in 2009) to map the neural pathways that underlie human brain function. The purpose is to acquire and share data about the structural and functional connectivity of the human brain to advance imaging and analysis capabilities and make strides in understanding brain circuitry and associated disorders.

[images omitted]

And talk about data… just to reconstruct the neural and synaptic connections in a mouse retina and primary visual cortex involved a 12 TB data set (which incidentally is now available to all at the Open Connectome Project).

Mapping the connectome requires a complete mapping process of the neural systems on a neuron-by-neuron basis, a task that requires accounting for billions of neurons, at least for most larger, complex mammals. According to the Open Connectome Project, the human cerebral cortex alone contains something in the neighborhood of 10^10 neurons linked by 10^14 synaptic connections.

That number is a bit difficult to digest without context, so how about this: the number of base-pairs in a human genome is about 10^9.
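A quick back-of-envelope using the post's own orders of magnitude (the one-byte-per-synapse figure is my assumption, purely for scale):

```python
neurons = 10**10      # human cerebral cortex, per the Open Connectome estimate
synapses = 10**14
base_pairs = 10**9    # human genome, rough order of magnitude

# Synapses outnumber genome base pairs by five orders of magnitude
print(synapses // base_pairs)  # 100000

# Even at one byte per synapse, a raw connectivity table is ~100 TB
print(synapses / 10**12)       # 100.0 (terabytes)
```

Which puts the 12 TB mouse-retina data set in perspective: it is a sliver of one small brain.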

I didn’t want anyone to feel I was neglecting the “big data” side of things, although 12 TB of data will only be “big data” for your home computer. 😉

Moreover, Sebastian Seung, Professor of Computational Neuroscience at MIT and author of the book, Connectome, is quoted as speculating that memories may be represented in the patterns of connections between neurons. Which sounds familiar to anyone who has heard Steve Newcomb talk about the subjects that are implicit in associations.

I wonder if it is possible to represent a summation of the connectome, much in the same way that we accept lower resolution images for some purposes? So that the task isn’t a one-to-one representation of the connectome, which would be a connectome itself (a map equivalent to the territory itself is the territory, one of those philosophy things).

That’s a nice data structure/information theory problem that would not require dimming the lights in your neighborhood when your system boots up. At least until you wanted to run a simulation. 😉
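One naive sketch of that "lower resolution" idea: partition the neurons into groups (here simply by index; anatomically meaningful regions would be the realistic choice) and keep only the connection density between groups, much like down-sampling an image. The sizes and random "connectome" below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 8, 2  # 8 "neurons" summarized in blocks of k
A = (rng.random((n, n)) < 0.3).astype(float)  # toy connectome: 1 = synapse present

# Coarse-grain: average connection density between each pair of groups
coarse = A.reshape(n // k, k, n // k, k).mean(axis=(1, 3))
print(A.shape, coarse.shape)  # (8, 8) (4, 4)
```

Whether such a summary preserves anything functionally meaningful is, of course, the open question.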

If you are interested in a game to make discoveries about the neural structure of the retina, see:

Data mining opens the door to predictive neuroscience (Google Hazing Rituals)

Tuesday, April 17th, 2012

Data mining opens the door to predictive neuroscience

From the post:

Ecole Polytechnique Fédérale de Lausanne (EPFL) researchers have discovered rules that relate the genes that a neuron switches on and off to the shape of that neuron, its electrical properties, and its location in the brain.

The discovery, using state-of-the-art computational tools, increases the likelihood that it will be possible to predict much of the fundamental structure and function of the brain without having to measure every aspect of it.

That in turn makes modeling the brain in silico — the goal of the proposed Human Brain Project — a more realistic, less Herculean, prospect.

The fulcrum of predictive analytics is finding the “basis” for prediction and within what measurement of error.

Curious how that would work in an employment situation?

Rather than Google’s intellectual hazing rituals, project a thirty-minute questionnaire on Google hires against their evaluations at six-month intervals. Give prospective hires the same questionnaire and then make “up” or “down” hiring decisions based on the projection. Likely to be as accurate as the current rituals.

Announcing Google-hosted workshop videos from NIPS 2011

Wednesday, February 29th, 2012

Announcing Google-hosted workshop videos from NIPS 2011 by John Blitzer and Douglas Eck.

From the post:

At the 25th Neural Information Processing Systems (NIPS) conference in Granada, Spain last December, we engaged in dialogue with a diverse population of neuroscientists, cognitive scientists, statistical learning theorists, and machine learning researchers. More than twenty Googlers participated in an intensive single-track program of talks, nightly poster sessions and a workshop weekend in the Spanish Sierra Nevada mountains. Check out the NIPS 2011 blog post for full information on Google at NIPS.

In conjunction with our technical involvement and gold sponsorship of NIPS, we recorded the five workshops that Googlers helped to organize on various topics from big learning to music. We’re now pleased to provide access to these rich workshop experiences to the wider technical community.

Watch videos of Googler-led workshops on the YouTube Tech Talks Channel:

Not to mention several other videos you will find at the original post.

Suspect everyone will find something they will enjoy!

Comments on any of these that you find particularly useful?

A Well-Woven Study of Graphs, Brains, and Gremlins

Friday, February 24th, 2012

A Well-Woven Study of Graphs, Brains, and Gremlins by Marko Rodriguez.

From the post:

What do graphs and brains have in common? First, they both share a relatively similar structure: Vertices/neurons are connected to each other by edges/axons. Second, they both share a similar process: traversers/action potentials propagate to effect some computation that is a function of the topology of the structure. If there exists a mapping between two domains, then it is possible to apply the processes of one domain (the brain) to the structure of the other (the graph). The purpose of this post is to explore the application of neural algorithms to graph systems.

Entertaining and informative post by Marko Rodriguez comparing graphs, brains and the graph query language Gremlin.

I agree with Marko on the potential of graphs but am less certain than he seems to be about how well we understand the brain. Both the brain and graphs have many dark areas yet to be explored. As we shine light on one place, more unknown places appear just beyond its reach.
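A minimal spreading-activation sketch along the lines Marko describes, where "traversers" carry activation over edges the way action potentials propagate (the graph, starting activation, and decay value are all invented for illustration):

```python
# Each step, every vertex passes a decayed share of its activation
# along its outgoing edges, split evenly among them
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
activation = {"a": 1.0, "b": 0.0, "c": 0.0}
decay = 0.5

for _ in range(3):
    nxt = {v: 0.0 for v in graph}
    for v, out in graph.items():
        for w in out:
            nxt[w] += decay * activation[v] / len(out)
    activation = nxt

print(activation)
```

Total activation halves each step, so the process settles rather than runs away; which vertices end up "hot" is a function of the topology, which is the post's central analogy.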

Clojure and XNAT: Introduction

Saturday, February 4th, 2012

Clojure and XNAT: Introduction

Over the last two years, I’ve been using Clojure quite a bit for managing, testing, and exploratory development in XNAT. Clojure is a new member of the Lisp family of languages that runs in the Java Virtual Machine. Two features of Clojure that I’ve found particularly useful are seamless Java interoperability and good support for interactive development.

“Interactive development” is a term that may need some explanation: With many languages — Java, C, and C++ come to mind — you write your code, compile it, and then run your program to test. Most Lisps, including Clojure, have a different model: you start the environment, write some code, test a function, make changes, and rerun your test with the new code. Any state necessary for the test stays in memory, so each write/compile/test iteration is fast. Developing in Clojure feels a lot like running an interpreted environment like Matlab, Mathematica, or R, but Clojure is a general-purpose language that compiles to JVM bytecode, with performance comparable to plain old Java.

One problem that comes up again and again on the XNAT discussion group and in our local XNAT support is that received DICOM files land in the unassigned prearchive rather than the intended project. Usually when this happens, there’s a custom rule for project identification where the regular expression doesn’t quite match what’s in the DICOM headers. Regular expressions are a wonderfully concise way of representing text patterns, but this sentence is equally true if you replace “wonderfully concise” with “maddeningly cryptic.”

Interesting “introduction” that focuses on regular expressions.
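A hypothetical illustration of the pitfall described above (the header value and patterns are invented; this is not XNAT's actual rule syntax):

```python
import re

# Suppose a project-routing rule matches against a DICOM header field
header_value = "Proj_042 Baseline"   # e.g. a Study Description value

rule = re.compile(r"^Proj-\d+")      # rule author expected a hyphen...
print(bool(rule.match(header_value)))   # False -> lands in "unassigned"

fixed = re.compile(r"^Proj[-_]\d+")  # accept hyphen or underscore
print(bool(fixed.match(header_value)))  # True -> routed to the project
```

One character class is the difference between "wonderfully concise" and "maddeningly cryptic."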

If you don’t know XNAT (I didn’t):

XNAT is an open source imaging informatics platform, developed by the Neuroinformatics Research Group at Washington University. It facilitates common management, productivity, and quality assurance tasks for imaging and associated data. Thanks to its extensibility, XNAT can be used to support a wide range of imaging-based projects.

Important neuroinformatics project based at Washington University, which has a history of very successful public technology projects.

Never hurts to learn more about any informatics project, particularly one in the medical sciences. With an introduction to Clojure as well, what more could you want?