Building fast Bayesian computing machines out of intentionally stochastic, digital parts by Vikash Mansinghka and Eric Jonas.
Abstract:
The brain interprets ambiguous sensory information faster and more reliably than modern computers, using neurons that are slower and less reliable than logic gates. But Bayesian inference, which underpins many computational models of perception and cognition, appears computationally challenging even given modern transistor speeds and energy budgets. The computational principles and structures needed to narrow this gap are unknown. Here we show how to build fast Bayesian computing machines using intentionally stochastic, digital parts, narrowing this efficiency gap by multiple orders of magnitude. We find that by connecting stochastic digital components according to simple mathematical rules, one can build massively parallel, low precision circuits that solve Bayesian inference problems and are compatible with the Poisson firing statistics of cortical neurons. We evaluate circuits for depth and motion perception, perceptual learning and causal reasoning, each performing inference over 10,000+ latent variables in real time – a 1,000x speed advantage over commodity microprocessors. These results suggest a new role for randomness in the engineering and reverse-engineering of intelligent computation.
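To make the idea of "intentionally stochastic, digital parts" concrete, here is a minimal software sketch of the general technique the abstract points at: Gibbs sampling over binary latent variables, where each update is a coin flip whose bias is computed at deliberately low precision. The toy Ising-style grid model, the coupling strength, and the 8-bit probability quantization are my assumptions for illustration only, not the circuits described in the paper.

```python
# Illustrative sketch, not the authors' circuit design: a toy Gibbs sampler for
# a small binary Markov random field, where each variable is resampled by a
# Bernoulli draw whose probability is quantized to 8 bits -- a software stand-in
# for a low-precision, intentionally stochastic digital sampling element.
import numpy as np

rng = np.random.default_rng(0)

N = 32                      # 32x32 grid of binary latent variables (assumed size)
J = 0.7                     # pairwise coupling strength (assumed)
state = rng.integers(0, 2, size=(N, N)) * 2 - 1   # spins in {-1, +1}

def low_precision_bernoulli(p, bits=8):
    """Draw a Bernoulli sample from a probability quantized to `bits` bits."""
    q = np.round(p * (2 ** bits)) / (2 ** bits)
    return 1 if rng.random() < q else -1

def gibbs_sweep(state):
    """One full Gibbs sweep: each site is resampled from its conditional,
    which depends only on its four neighbours (toroidal boundary)."""
    for i in range(N):
        for j in range(N):
            neigh = (state[(i - 1) % N, j] + state[(i + 1) % N, j]
                     + state[i, (j - 1) % N] + state[i, (j + 1) % N])
            # Conditional P(x_ij = +1 | neighbours) for an Ising-style model.
            p_up = 1.0 / (1.0 + np.exp(-2.0 * J * neigh))
            state[i, j] = low_precision_bernoulli(p_up)
    return state

for sweep in range(50):
    state = gibbs_sweep(state)

print("mean spin after 50 sweeps:", state.mean())
```

In the paper's framing, each such conditional update could be carried out by an independent stochastic element, and many of them can run in parallel, which is where the claimed speed advantage over sequential microprocessors comes from.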
It is ironic that the greater precision and repeatability of our digital computers may be design choices that are holding back advances in Bayesian computing machines.
I have written before about the RDF ecosystem being overly complex and overly precise for everyday users.
We should strive to capture semantics as understood by scientists, researchers, students, and others. Less precise than professional semantics, but precise enough to be usable?
I first saw this in a tweet by Stefano Bertolo.