The Complexity of Neurons are Beyond Our Current Imagination by Carlos E. Perez.
From the post:
One of the biggest misconceptions around is the idea that Deep Learning or Artificial Neural Networks (ANN) mimic biological neurons. At best, ANN mimic a cartoonish version of a 1957 model of a neuron. Neurons in Deep Learning are essentially mathematical functions that perform a similarity function of their inputs against internal weights. The closer the match, the more likely an action is performed (i.e., a signal is sent rather than zero). There are exceptions to this model (see: Autoregressive networks), but it is general enough to include the perceptron, convolutional networks, and RNNs.
Jeff Hawkins of Numenta has always lamented that a more biologically inspired approach is needed. So, in his research on building cognitive machinery, he has architected systems that more closely mimic the structure of the neocortex. Numenta’s model of a neuron is considerably more elaborate than the Deep Learning model of a neuron:
…
I rather like the line “ANN mimic a cartoonish version of a 1957 model of a neuron.”
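That "1957 model" is essentially Rosenblatt's perceptron: a weighted sum of inputs compared against internal weights, thresholded into a fire/no-fire decision. A minimal sketch (the function names and example values are illustrative, not from the post):

```python
# Sketch of the "1957 model": Rosenblatt's perceptron.
# A neuron here is just a dot product of inputs against internal
# weights plus a bias, thresholded to decide whether to "fire."
# All names and values below are illustrative assumptions.

def perceptron(inputs, weights, bias):
    # Similarity of inputs to weights: a weighted sum.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Fire (1) if the match is close enough, otherwise stay silent (0).
    return 1 if activation > 0 else 0

# Example: weights tuned so the neuron fires only when both inputs
# are on -- i.e., it computes a logical AND.
weights = [1.0, 1.0]
bias = -1.5
print(perceptron([1, 1], weights, bias))  # fires: 1
print(perceptron([1, 0], weights, bias))  # silent: 0
```

That really is the whole model, which is why "cartoonish" is apt: a biological neuron's dendritic integration, timing, and plasticity are nowhere to be found in it.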
You need not worry about the MIT Intelligence Quest replicating neurons anytime soon.
In part because no one really knows how neurons work, or how much more we need to learn before we could replicate them.
The AI crowd could train a neural network to recognize people and to fire weapons at them. That would qualify as destruction of humanity by an AI, but if we are really that stupid, perhaps it's time to make space for others.