
A Radically New Theory of how the Brain Represents and Computes with Probabilities

lib:91cf7d7efd2c42aa (v1.0.0)

Authors: Gerard Rinkus
arXiv: 1701.07879
Abstract URL: http://arxiv.org/abs/1701.07879v4


The brain is believed to implement probabilistic reasoning and to represent information via population, or distributed, coding. Most previous probabilistic population coding (PPC) theories share several basic properties: 1) continuous-valued neurons; 2) fully (densely) distributed codes, i.e., all (most) units participate in every code; 3) graded synapses; 4) rate coding; 5) units have innate unimodal tuning functions (TFs); 6) intrinsically noisy units; and 7) noise/correlation is considered harmful. We present a radically different theory that assumes: 1) binary units; 2) only a small subset of units, i.e., a sparse distributed representation (SDR), or cell assembly, comprises any individual code; 3) binary synapses; 4) signaling formally requires only single (i.e., first) spikes; 5) units initially have completely flat TFs (all weights zero); 6) units are far less intrinsically noisy than traditionally thought; rather, 7) noise is a resource, generated and used to cause similar inputs to map to similar codes; it controls a tradeoff between storage capacity and embedding the input-space statistics in the pattern of intersections over stored codes, and thereby epiphenomenally determines the correlation patterns across neurons. The theory, Sparsey, was introduced 20+ years ago as a canonical cortical circuit/algorithm model achieving efficient sequence learning and recognition, but it was not elaborated as an alternative to PPC theories. Here, we show that: a) the active SDR code simultaneously represents both the most similar/likely stored input and the entire (coarsely ranked) similarity/likelihood distribution over all stored inputs (hypotheses); and b) given an input, the SDR code selection algorithm, which underlies both learning and inference, updates both the most likely hypothesis and the entire likelihood distribution (cf. belief update) in a number of steps that remains constant as the number of stored items increases.
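To make the abstract's central mechanism concrete, below is a minimal, self-contained Python sketch of the general idea, not Sparsey's actual Code Selection Algorithm. The module/cell sizes (Q, K), the familiarity measure G, the novelty-scaled temperature T, and the binary Hebbian update are all illustrative assumptions, chosen only to show two things the abstract claims: noise scaled by novelty makes similar inputs map to overlapping SDR codes, and the single active code's intersections with stored codes coarsely rank all stored hypotheses at once.

import numpy as np

rng = np.random.default_rng(0)

Q, K, N_IN = 20, 8, 64        # Q winner-take-all modules, K binary cells each; binary input size
W = np.zeros((Q, K, N_IN))    # binary weights; all-zero = completely flat tuning functions

def choose_code(x, learn=True):
    """Toy SDR code selection: pick one winner per module, with novelty-scaled noise."""
    u = (W @ x) / max(x.sum(), 1.0)   # normalized match per cell, shape (Q, K), in [0, 1]
    G = u.max(axis=1).mean()          # global familiarity: near 1 for stored inputs, near 0 for novel
    T = (1.0 - G) + 1e-3              # noise (softmax temperature) rises with novelty
    code = np.empty(Q, dtype=int)
    for q in range(Q):
        logits = (u[q] - u[q].max()) / T   # stabilized softmax over the module's K cells
        p = np.exp(logits)
        p /= p.sum()
        code[q] = rng.choice(K, p=p)       # noisy winner-take-all in module q
    if learn:
        # Binary Hebbian update: weights from active inputs to the winners become 1.
        W[np.arange(Q), code] = np.maximum(W[np.arange(Q), code], x)
    return code

def overlap(c1, c2):
    # Fraction of modules whose winners agree: the code intersection.
    return float((c1 == c2).mean())

# Store three random binary inputs.
stored = {}
for name in "ABC":
    x = (rng.random(N_IN) < 0.2).astype(float)
    stored[name] = (x, choose_code(x))

# Query with a perturbed version of B (a few bits flipped).
xq = stored["B"][0].copy()
flip = rng.choice(N_IN, size=6, replace=False)
xq[flip] = 1.0 - xq[flip]
cq = choose_code(xq, learn=False)

# One pass selects one code, yet its intersections with all stored codes
# coarsely rank every stored hypothesis; B should rank highest.
for name, (_, c) in stored.items():
    print(name, round(overlap(cq, c), 2))

Note that the cost of choose_code depends only on Q, K, and the input size, not on how many items have been stored, which mirrors the abstract's claim that updating the most likely hypothesis and the whole likelihood distribution takes a number of steps that stays constant as the stored set grows.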

Relevant initiatives

- Related knowledge about this paper
- Reproduced results (crowd-benchmarking and competitions)
- Artifact and reproducibility checklists
- Common formats for research projects and shared artifacts
- Reproducibility initiatives
