Overview. Understanding how living systems perform computations and learn from their environment is one of the grand challenges in biophysics. These feats are often performed by large populations of cells or organisms, and it has recently become possible to jointly measure the states of many components of biological systems engaged in strongly correlated collective computations. It is vital that theorists in biology engage directly with these experiments to develop data-driven phenomenology, as a step toward a more mechanistic understanding. For example, an intriguing feature of these data has been the realization that such systems often reside close to a critical point [1, 2]. While this surprising fact is a clear feature of the data, how it arises and what functional benefits it confers remain a mystery. In an effort to understand this, I discovered a novel mechanism, employing latent variables, by which such phenomena may arise robustly without fine-tuning.

The representation of visual stimulus information in the brain is performed by large populations of neurons. While the response properties of single neurons in numerous brain regions along the visual pathway have been characterized in great detail, much less is known about how neurons act in concert to encode a visual scene. With the development of experimental techniques that allow us to monitor the electrical activity of hundreds to thousands of neurons and to image the anatomical connectome, the search for a convincing description of these collective behaviors is increasingly urgent. Separately, deep neural networks now provide examples of artificial learning systems that achieve remarkable results. In these models, we can observe every unit's activity and have access to the entire connectivity matrix. Nonetheless, we lack a fundamental understanding of how they work.
These models thus provide an ideal test-bed for guiding theoretical efforts to extract knowledge from big neural data.

In this proposal, I will first demonstrate that the observed signatures of statistical criticality in retinal responses arise from the hidden-variable mechanism I previously proposed. Next, I will use techniques from deep learning, which make vital use of hidden variables, to model and decode the joint activity of hundreds of retinal ganglion cells; through this, I hope to discover how the population activity structure facilitates downstream decoding of the visual input. Finally, I will use my recently discovered connection between deep neural networks and the variational renormalization group (RG) from statistical physics, both to provide a framework for understanding deep learning systems and to suggest a novel perspective on the computations performed in the visual cortex, focusing on the transformation from V1 to V2. I will also develop new RG-based data analysis methods.

The broad questions motivating this proposal are: (1) What computations do biological systems perform to construct effective models of their sensory environment? (2) How can we extract meaningful insight from statistical models of complex biological data?

I maintain diverse interests in theoretical neuroscience and biological physics, unified by the study of collective behavior in living systems. In addition to the work described here, I have made significant contributions to a variety of other areas, including the fundamental energetic costs of cellular computations and the compensation for processing delays through gap junctions in retinal circuits.
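The latent-variable mechanism for criticality referenced above can be illustrated with a minimal numerical sketch. This is not the proposal's actual model; it assumes conditionally independent binary spins coupled to a single Gaussian-distributed hidden field (all parameter choices here, such as the population size and prior width, are illustrative). Marginalizing over the hidden field yields a signature of Zipf's law: the "energy" of a pattern, E = -log p(pattern), grows roughly linearly with the log-multiplicity (entropy) with slope near 1, even though nothing has been tuned to a critical point.

```python
import numpy as np
from math import lgamma

# Sketch: N binary spins s_i = +/-1, conditionally independent given a
# hidden field h, with p(s_i | h) = exp(h * s_i) / (2 cosh h).
N = 50        # population size (illustrative)
sigma = 2.0   # width of the Gaussian prior on the hidden field (illustrative)

# Discretized prior p(h) over the latent field.
hs = np.linspace(-8.0, 8.0, 4001)
dh = hs[1] - hs[0]
prior = np.exp(-hs**2 / (2 * sigma**2))
prior /= prior.sum() * dh

# log p(one specific pattern with k up-spins | h) for each k and h.
ks = np.arange(N + 1)
log_cond = np.outer(2 * ks - N, hs) - N * np.log(2 * np.cosh(hs))

# Marginal probability of each pattern, integrating out the latent field.
pat_prob = (np.exp(log_cond) * prior).sum(axis=1) * dh

E = -np.log(pat_prob)  # pattern "energy"
S = np.array([lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1) for k in ks])

# Zipf's law corresponds to E growing linearly in S with slope near 1.
slope = np.polyfit(S, E, 1)[0]
print(f"E-vs-S slope: {slope:.2f}")
```

Without the broad prior on h (e.g. a fixed field), the marginal distribution factorizes and this linear energy-entropy relation disappears, which is the sense in which the latent variable, not fine-tuning, produces the critical signature.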
Effective start/end date: 8/1/17 → 8/31/17
- Simons Foundation (510976)