The Perceptron, the basis of Artificial Neural Networks (ANN), was conceived in 1957.
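The classic perceptron is simple enough to sketch in a few lines: a weighted sum of inputs passed through a hard threshold. This is a minimal illustration only; the function and variable names are mine.

```python
def perceptron(inputs, weights, bias):
    """Fire (return 1) when the weighted sum of inputs crosses the threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

# Example: a perceptron that computes logical AND.
and_weights, and_bias = [1.0, 1.0], -1.5
print(perceptron([1, 1], and_weights, and_bias))  # -> 1
print(perceptron([1, 0], and_weights, and_bias))  # -> 0
```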
It is, of course, an outdated model of how neurons actually work. Current neural network research and development is driven more by mathematical techniques that ensure continuity and convergence than by anything biologically inspired.
However, other research groups, such as Numenta and IBM's TrueNorth team, are investigating more biologically inspired systems. These systems are referred to as “Spiking Neural Networks” (SNNs) (see: https://en.wikipedia.org/wiki/Spiking_neural_network ) or, alternatively, as “Neuromorphic computing”.
These SNNs have unfortunately not proven to be as effective as their less biologically inspired cousins (ANNs). In a recent paper (see: http://arxiv.org/abs/1603.08270 ), however, IBM's TrueNorth has shown competitive results simulating a Convolutional Network. This is direct evidence that an “integrate-and-spike” mechanism has computational capability similar to that of the more proven ANNs. The IBM paper, however, highlighted one major weakness of SNNs: training the TrueNorth system required simulating back-propagation on a conventional GPU:
Training was performed offline on conventional GPUs, using a library of custom training layers built upon functions from the MatConvNet toolbox. Network specification and training complexity using these layers is on par with standard deep learning.
Said in a different way: training still relied on “back-propagation”. Biologically inspired SNNs seem to lack a mechanism for receiving feedback, although it had previously been conjectured that such a mechanism was not necessary.
Prior to the invention of machine-powered flight, many people could observe what a bird does to fly. They could point out the flapping of wings, the large ratio of wing size to body weight, or the presence of feathers. However, none of these features leads one to the actual mechanics of flight. This is one of the arguments against biologically inspired research. Yet birds and planes are both able to fly because the underlying dynamics are the same: the air pressure under a wing is higher than the pressure above it, creating an upward force.
There is a commonality between the brain and neural networks: both are dynamical systems. ANN researchers have observed that if we assign weights to input signals, multiply the signals by the weights, and sum up the results, then we have a network that can perform impressive pattern classification. The discovery of the weights is done through what is called “training”, which adjusts all the weights slightly in a way that reduces the observed error in the pattern classification. Learning is achieved when the observed error settles to one that is consistently acceptable. This training mechanism is what is called “back-propagation”.
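The training loop described above, nudging each weight slightly in the direction that reduces the observed error, can be sketched for a single linear neuron using the delta rule. This is an illustrative toy, not the full multi-layer algorithm; all names and the target function are mine.

```python
def train_step(weights, bias, x, target, lr=0.1):
    """One delta-rule update: move each weight a little to shrink the error."""
    prediction = sum(w * xi for w, xi in zip(weights, x)) + bias
    error = target - prediction
    # Each weight is adjusted in proportion to its input's contribution.
    new_weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    new_bias = bias + lr * error
    return new_weights, new_bias, error

# Toy task: learn target = x0 + x1 from four examples.
weights, bias = [0.0, 0.0], 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 2)]
for _ in range(1000):
    for x, t in data:
        weights, bias, err = train_step(weights, bias, x, t)
# After training, weights approach [1.0, 1.0] and bias approaches 0.0.
print(weights, bias)
```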
There are many variants of “back-propagation”. The most common is gradient descent, with a variant called RProp (see: https://en.wikipedia.org/wiki/Rprop ) that is an extreme simplification, using only the sign of the gradient to perform its update. Natural-gradient-based methods are second-order update mechanisms; an interesting variant called NES (https://en.wikipedia.org/wiki/Natural_evolution_strategy) employs evolutionary methods. Feedback Alignment is another simple method that is extremely efficient (see: http://arxiv.org/pdf/1411.0247.pdf ). In general, back-propagation does not necessarily require that the implementation be a strict application of an analytic gradient calculation. What is essential is that there is some approximation of an appropriate weight-change update and a corresponding structure to propagate the updates. Incidentally, recent research (see: http://cbmm.mit.edu/publications/how-important-weight-symmetry-backpropagation-0 ) appears to conclude that the magnitude of the gradient update isn’t as important as its sign.
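The sign-only idea behind RProp can be sketched in a few lines. This is a stripped-down illustration: real RProp also adapts a per-weight step size based on sign changes, which is omitted here; the names are mine.

```python
def sign_update(weights, gradients, step=0.01):
    """Move each weight a fixed step against its gradient's sign,
    ignoring the gradient's magnitude entirely."""
    def sgn(g):
        return (g > 0) - (g < 0)  # -1, 0, or +1
    return [w - step * sgn(g) for w, g in zip(weights, gradients)]

w = [0.5, -0.3, 0.0]
g = [12.7, -0.004, 0.0]  # magnitudes differ wildly; only the sign matters
print(sign_update(w, g))  # each weight moves by exactly `step`, or not at all
```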
There has, however, been no biological evidence of a structural mechanism for “back-propagation” in biological brains. Yoshua Bengio published a paper in 2015 (see: http://arxiv.org/abs/1502.04156 ), “Towards Biologically Plausible Deep Learning”. The investigation attempts to show that a mechanism for back-propagation exists in the Spike-Timing-Dependent Plasticity (STDP) of biological neurons. It is, however, questionable whether neurons are able to learn by themselves without an external feedback pathway that spans multiple layers.
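The classic STDP learning window can be sketched as follows: when a presynaptic spike precedes a postsynaptic one, the synapse is strengthened; when the order is reversed, it is weakened, with the effect decaying exponentially in the spike-time difference. The constants below are illustrative, not fitted to biology.

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for a pre-to-post spike-time difference dt (in ms).

    dt > 0: pre fired before post -> potentiation (strengthen).
    dt < 0: post fired before pre -> depression (weaken).
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

print(stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0)  # -> True True
```

Note that this rule is purely local to one synapse, which is exactly the concern raised above: it offers no obvious way to carry an error signal back across multiple layers.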
There is, however, an alternative, recently discovered mechanism that may make a more convincing argument, one based on a structure independent of the brain’s neurons. There is a large class of cells in the brain called glia, including the microglia (see: https://www.technologyreview.com/s/601137/the-rogue-immune-cells-that-wreck-the-brain ), that are responsible for regulating the neurons and their connectivity.
It turns out that in the absence of chemicals released by glia, the neurons committed the biochemical version of suicide. Barres also showed that the astrocytes appeared to play a crucial role in forming synapses, the microscopic connections between neurons that encode memory. In isolation, neurons were capable of forming the spiny appendages necessary to reach the synapses. But without astrocytes, they were incapable of connecting to one another.
Research on the nature of these glial cells was ignored until very recently:
Hardly anyone believed him. When he was a young faculty member at Stanford in the 1990s, one of his grant applications to the National Institutes of Health was rejected seven times. “Reviewers kept saying, ‘Nah, there’s no way glia could be doing this,’” Barres recalls. “And even after we published two papers in Science showing that [astrocytes] had profound, almost all-or-nothing effects in controlling synapses’ formation or synapse activity, I still couldn’t get funded! I think it’s still hard to get people to think about glia as doing anything active in the nervous system.”
In fact, conventional wisdom was that the lymphatic system did not interface with the brain (see: https://cosmosmagazine.com/life-sciences/lymphatic-drain-inside-your-brain ):
Generations of medical students have been taught the mammalian brain has no connection to the lymphatic system, to help keep the brain isolated and protected from infection. Louveau’s discovery will force a rewrite of anatomy textbooks…
However, recent research ( see: http://www.nih.gov/news-events/nih-research-matters/lymphatic-vessels-discovered-central-nervous-system ) has debunked that conventional understanding.
Glia make up 50% of the brain’s volume (see: http://brainblogger.com/2015/03/16/what-have-your-glia-done-for-you-today/ ). This new model of the brain provides a more convincing pathway to explaining the notion of “back-propagation”, and it hints at an explanation for the lack of convincing results from SNN-based systems. SNNs have unfortunately been formulated with only a partial understanding of how the brain works, and are thus an incomplete model that is missing a regulatory mechanism.
In addition, learning does appear to be influenced by the microglia in the brain (see: http://www.cell.com/trends/immunology/abstract/S1471-4906(15)00200-8 ):
Microglia processes constantly move as they survey the surrounding environment.
Microglia can modify activity-dependent changes in synaptic strength between neurons that underlie memory and learning using classical immunological signaling pathways…
This is further validated in a more recent comprehensive study (see: http://journal.frontiersin.org/article/10.3389/fnint.2015.00073/full ).
Their dynamism and functional capabilities position them perfectly to regulate individual synapses and to be undoubtedly involved in optimizing information processing, learning and memory, and cognition.
Microglia are even able to communicate with neurons via neurotransmitters (see: http://www.ncbi.nlm.nih.gov/pubmed/25451814 ):
The presence of specific receptors on the surface of microglia suggests communication with neurons by neurotransmitters. Here, we demonstrate expression of serotonin receptors, including 5-HT2a,b and 5-HT4 in microglial cells and their functional involvement in the modulation of exosome release by serotonin.
There is additional experimental evidence that sleep, which is also vital to learning, involves the glial cells (see: http://www.bbc.com/news/health-24567412 ):
Scientists, who imaged the brains of mice, showed that the glymphatic system became 10-times more active when the mice were asleep.
Back-propagation, perhaps, at work while we sleep?
In summary, biological brains have a regulatory mechanism in the form of microglia, which are highly dynamic in regulating synapse connectivity and pruning neural growth. This activity is most pronounced during sleep. SNNs have been shown to have inference capabilities equivalent to Convolutional Networks; however, they have not been shown to learn effectively on their own without a “back-propagation” mechanism. That mechanism is most plausibly provided by the microglia.