TY - JOUR

T1 - On the choice of a sparse prior

AU - Körding, Konrad P.

AU - Kayser, Christoph

AU - König, Peter

N1 - Funding Information:
We thank Bruno Olshausen for inspiring discussions, and the EU IST-2000-28127 and the BBW 01.0208-1, Collegium Helveticum (KPK), the Center of Neuroscience Zürich (CK) and the SNF (PK, Grant Nr 31-65415.01) for financial support.

PY - 2003

Y1 - 2003

N2 - An emerging paradigm analyses in what respect the properties of the nervous system reflect properties of natural scenes. It is hypothesized that neurons form sparse representations of natural stimuli: each neuron should respond strongly to some stimuli while being inactive upon presentation of most others. For a given network, sparse representations need the fewest spikes, and thus the nervous system can consume the least energy. To obtain optimally sparse responses, the receptive fields of simulated neurons are optimized. Algorithmically, this is identical to searching for basis functions that allow coding of the stimuli with sparse coefficients. The problem is identical to maximizing the log likelihood of a generative model with prior knowledge of natural images. It is found that the resulting simulated neurons share most properties of simple cells found in primary visual cortex. Thus, forming optimally sparse representations is a very compact approach to describing simple cell properties. Many ways of defining sparse responses exist, and it is widely believed that the particular choice of the sparse prior of the generative model does not significantly influence the estimated basis functions. Here we examine this assumption more closely. We include the constraint of unit variance of neuronal activity, used in most studies, into the objective functions. We then analyze learning on a database of natural (cat-cam™) visual stimuli. We show that the effective objective functions are largely dominated by the constraint, and are therefore very similar. The resulting receptive fields show some similarities but also qualitative differences. Even for coefficient values for which the objective functions are dissimilar, the distributions of coefficients are similar and do not match the priors of the assumed generative model. In conclusion, the specific choice of the sparse prior is relevant, as is the choice of additional constraints, such as normalization of variance.

AB - An emerging paradigm analyses in what respect the properties of the nervous system reflect properties of natural scenes. It is hypothesized that neurons form sparse representations of natural stimuli: each neuron should respond strongly to some stimuli while being inactive upon presentation of most others. For a given network, sparse representations need the fewest spikes, and thus the nervous system can consume the least energy. To obtain optimally sparse responses, the receptive fields of simulated neurons are optimized. Algorithmically, this is identical to searching for basis functions that allow coding of the stimuli with sparse coefficients. The problem is identical to maximizing the log likelihood of a generative model with prior knowledge of natural images. It is found that the resulting simulated neurons share most properties of simple cells found in primary visual cortex. Thus, forming optimally sparse representations is a very compact approach to describing simple cell properties. Many ways of defining sparse responses exist, and it is widely believed that the particular choice of the sparse prior of the generative model does not significantly influence the estimated basis functions. Here we examine this assumption more closely. We include the constraint of unit variance of neuronal activity, used in most studies, into the objective functions. We then analyze learning on a database of natural (cat-cam™) visual stimuli. We show that the effective objective functions are largely dominated by the constraint, and are therefore very similar. The resulting receptive fields show some similarities but also qualitative differences. Even for coefficient values for which the objective functions are dissimilar, the distributions of coefficients are similar and do not match the priors of the assumed generative model. In conclusion, the specific choice of the sparse prior is relevant, as is the choice of additional constraints, such as normalization of variance.

KW - Independent component analysis

KW - Natural scenes

KW - Optimal coding

KW - Sparse coding

UR - http://www.scopus.com/inward/record.url?scp=18844473370&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=18844473370&partnerID=8YFLogxK

U2 - 10.1515/REVNEURO.2003.14.1-2.53

DO - 10.1515/REVNEURO.2003.14.1-2.53

M3 - Article

C2 - 12929918

AN - SCOPUS:18844473370

SN - 0334-1763

VL - 14

SP - 53

EP - 62

JO - Reviews in the Neurosciences

JF - Reviews in the Neurosciences

IS - 1-2

ER -