Authors: Alessandro Betti, Marco Gori, Giuseppe Marra
ArXiv: 1808.06934
Abstract URL: http://arxiv.org/abs/1808.06934v1
By and large, Backpropagation (BP) is regarded as one of the most important
neural computation algorithms underlying progress in machine learning,
including the recent advances in deep learning. However, its computational
structure has been the source of many debates on its biological plausibility.
In this paper, it is shown that when supervised learning is framed in the
Lagrangian framework, Backpropagation emerges naturally, but biologically
plausible local algorithms can also be devised that search for saddle points
in the adjoint space of the learning problem, composed of weights, neural
outputs, and Lagrangian multipliers. This might open the door to a truly novel
class of learning algorithms in which, owing to the introduction of the notion
of support neurons, the optimization scheme also plays a fundamental role in
shaping the architecture.
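As a rough illustration of the Lagrangian framing mentioned in the abstract (a minimal sketch in the spirit of the classical Lagrangian derivation of BP; the symbols V, sigma, w_ij, x_i, lambda_i are assumed here for illustration and need not match the paper's notation), the forward equations of the network can be imposed as constraints on a loss V:

    % Hypothetical Lagrangian for supervised learning with network
    % constraints x_i = \sigma(\sum_j w_{ij} x_j) enforced by multipliers.
    \mathcal{L}(w, x, \lambda)
      = V(x_{\mathrm{out}}, y)
      + \sum_i \lambda_i \Bigl( x_i - \sigma\bigl(\textstyle\sum_j w_{ij} x_j\bigr) \Bigr)

At a saddle point of this (assumed) Lagrangian, stationarity with respect to the multipliers, \partial \mathcal{L} / \partial \lambda_i = 0, recovers the forward pass; stationarity with respect to the neural outputs, \partial \mathcal{L} / \partial x_i = 0, yields adjoint equations of the Backpropagation type; and \partial \mathcal{L} / \partial w_{ij} gives the weight update. A local algorithm of the kind described in the abstract would instead search for the saddle point directly, e.g. by descending in (w, x) while ascending in \lambda, using only quantities attached to each neuron and its neighbors.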