The brain is believed to implement probabilistic reasoning and to represent
information via population, or distributed, coding. Most previous
probabilistic population coding (PPC) theories share several basic properties:
1) continuous-valued neurons; 2) fully (densely) distributed codes, i.e., all
(most) units participate in every code; 3) graded synapses; 4) rate coding;
5) units have innate unimodal tuning functions (TFs); 6) intrinsically noisy
units; and 7) noise/correlation is considered harmful. We present a radically
different theory that assumes: 1) binary units; 2) only a small subset of
units, i.e., a sparse distributed representation (SDR), or cell assembly,
constitutes any individual code; 3) binary synapses; 4) signaling formally
requires only single (i.e., first) spikes; 5) units initially have completely
flat TFs (all weights zero); 6) units are far less intrinsically noisy than
traditionally thought; rather, 7) noise is a resource generated/used to cause
similar inputs to map to similar codes, controlling a tradeoff between storage
capacity and embedding the input space statistics in the pattern of
intersections over stored codes, epiphenomenally determining correlation
patterns across neurons. The theory, Sparsey, was introduced 20+ years ago as a
canonical cortical circuit/algorithm model achieving efficient sequence
learning/recognition, but not elaborated as an alternative to PPC theories.
Here, we show that: a) the active SDR simultaneously represents both the most
similar/likely input and the entire (coarsely-ranked) similarity/likelihood
distribution over all stored inputs (hypotheses); and b) given an
input, the SDR code selection algorithm, which underlies both learning and
inference, updates both the most likely hypothesis and the entire likelihood
distribution (cf. belief update) with a number of steps that remains constant
as the number of stored items increases.
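
The following is a minimal, illustrative sketch of claims (a) and (b), not the
theory's actual code selection algorithm. It assumes, purely for exposition, a
coding field of Q small winner-take-all groups of K binary units (so every code
is a set of Q active units), binary synapses that start at zero (flat TFs), and
a softmax-style winner choice standing in for noise-modulated selection; the
sizes, function names, and selection rule are this sketch's assumptions, not
parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only (assumed for this sketch).
N_IN = 64    # binary input units
Q    = 20    # winner-take-all groups in the coding field
K    = 8     # binary units per group; a code = one active unit per group

# Binary synapses from every input unit to every coding unit,
# all zero initially (completely flat tuning functions).
W = np.zeros((N_IN, Q, K), dtype=np.uint8)

def choose_code(x, beta):
    """Pick one winner per group for binary input vector x.

    beta sets the amount of selection noise (a stand-in for the theory's
    noise-as-a-resource principle): large beta -> nearly deterministic, so a
    familiar input reactivates most of its original code; small beta ->
    nearly uniform, so a novel input receives a mostly new code. The number
    of steps depends only on (N_IN, Q, K), i.e., it stays constant as the
    number of stored items grows.
    """
    u = np.einsum('i,iqk->qk', x.astype(float), W.astype(float))  # input sums
    p = np.exp(beta * (u - u.max(axis=1, keepdims=True)))         # soft WTA
    p /= p.sum(axis=1, keepdims=True)
    return np.array([rng.choice(K, p=p[q]) for q in range(Q)])

def learn(x, code):
    """Hebbian learning: set binary weights from active inputs to the chosen code."""
    for q, k in enumerate(code):
        W[x.astype(bool), q, k] = 1

def intersection(code_a, code_b):
    """Code overlap: a coarse likelihood for the hypothesis whose stored code
    is code_b, given that code_a is currently active."""
    return int(np.sum(code_a == code_b))
```

In this sketch, storing an input pairs it with a code via learn(); presenting a
new input and calling choose_code() yields an active code whose intersections
with the previously assigned codes simultaneously rank all stored hypotheses by
similarity/likelihood (claim a), and the cost of that single call does not grow
with the number of stored codes (claim b).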