Plenary Speakers


Paul Bressloff (University of Utah): Wednesday May 31, 9:10-9:50am
Beyond the neural master equation: Stochastic hybrid neural networks
One of the major challenges in neuroscience is to determine how noise that is present at the molecular and cellular levels affects dynamics and information processing at the macroscopic level of synaptically-coupled neuronal populations. Often noise is incorporated into deterministic network models using extrinsic noise sources. An alternative approach is to assume that noise arises intrinsically as a collective population effect, which has led to a master equation formulation of stochastic neural networks. In this talk we extend the master equation formulation by introducing a stochastic model of neural population dynamics in the form of a stochastic hybrid system. The latter has the advantage of keeping track of synaptic processing as well as spiking activity, and reduces to the neural master equation in a particular limit. We consider the particular problem of noise-induced transitions between metastable states of a stochastic network operating in a bistable regime.
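The abstract does not give equations, but the master-equation picture it extends can be illustrated with a toy one-population birth-death process simulated exactly with Gillespie's algorithm. All parameter values and the sigmoidal gain below are illustrative assumptions, not the speaker's model; the sketch only shows the kind of intrinsic, population-level noise the talk builds on.

```python
import numpy as np

def gillespie_population(T=5.0, n0=0, N=100, w=5.0, theta=2.0, beta=2.0, seed=0):
    """Exact (Gillespie) simulation of a toy birth-death neural master equation:
    birth rate N*f(w*n/N) (activation of a quiescent neuron), death rate n
    (decay of an active neuron), with a sigmoidal gain f.
    Illustrative sketch only, not the speaker's specific formulation."""
    rng = np.random.default_rng(seed)
    f = lambda x: 1.0 / (1.0 + np.exp(-beta * (x - theta)))  # sigmoidal gain
    t, n = 0.0, n0
    ts, ns = [t], [n]
    while t < T:
        birth = N * f(w * n / N)
        death = float(n)
        total = birth + death
        t += rng.exponential(1.0 / total)       # exponential waiting time
        n += 1 if rng.random() < birth / total else -1
        ts.append(t)
        ns.append(n)
    return np.array(ts), np.array(ns)
```

With a steep enough gain this process is bistable, and finite-size fluctuations drive the rare transitions between metastable states that the abstract analyzes.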

Ila Fiete (University of Texas): Wednesday May 31, 1:10-1:50pm
How fast is neural winner-take-all when deciding between many options?

Stefano Fusi (Columbia University): Wednesday May 31, 1:50-2:30pm
The importance of biological complexity in synaptic memory consolidation
Memories are stored and retained through complex, coupled processes operating on multiple timescales. To understand the computational principles behind these intricate networks of interactions, we construct a broad class of synaptic models that efficiently harness biological complexity to preserve numerous memories by protecting them against the adverse effects of overwriting. The memory capacity scales almost linearly with the number of synapses, which is a substantial improvement over the square root scaling of previous models. This was achieved by combining multiple dynamical processes that initially store memories in fast variables and then progressively transfer them to slower variables. Notably, the interactions between fast and slow variables are bidirectional. We finally show that the proposed synaptic model can be used to efficiently store and retrieve memories in simple neural circuits. Work done in collaboration with M. Benna.
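The fast-to-slow transfer with bidirectional coupling can be caricatured as a diffusion chain of variables whose effective timescales grow geometrically along the chain. The sketch below is an illustrative toy in that spirit; the chain length, doubling factor, and coupling strength are assumptions, not the parameters of the speaker's model.

```python
import numpy as np

def memory_chain(inputs, m=5, dt=0.1, g=0.25):
    """Chain of m coupled variables with geometrically increasing "capacities"
    C_k = 2**k. New memories enter through the fastest variable u[0] and
    diffuse toward slower variables; coupling is bidirectional, so slow
    variables also feed back onto fast ones. Illustrative sketch only."""
    u = np.zeros(m)
    C = 2.0 ** np.arange(m)          # capacity doubles along the chain
    history = []
    for x in inputs:
        du = np.zeros(m)
        du[0] += x                    # memory trace enters the fastest variable
        for k in range(m):
            if k > 0:                 # coupling to the faster neighbour
                du[k] += g * (C[k - 1] / C[k]) * (u[k - 1] - u[k])
            if k < m - 1:             # coupling to the slower neighbour
                du[k] += g * (C[k + 1] / C[k]) * (u[k + 1] - u[k])
        u += dt * du
        history.append(u.copy())
    return np.array(history)
```

A single pulse through `inputs` decays quickly in the fast variable but leaves a long-lived residue in the slow end of the chain, which is the qualitative behavior behind the protection against overwriting described above.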

Marla Feller (University of California Berkeley): Thursday June 1, 9:00-9:40am
Wiring up a circuit to perform computations: development of direction selectivity
How are circuits wired up during development to perform specific computations? We address this question in the retina, which comprises multiple circuits that encode different features of the visual scene, culminating in over 20 different types of retinal ganglion cells. Direction-selective ganglion cells respond strongly to an image moving in the preferred direction and weakly to an image moving in the opposite, or null, direction. These directional responses are produced by an asymmetry in overall inhibitory conductance onto direction selective ganglion cells, such that object motion in the null direction elicits a greater amount of inhibition from GABAergic interneurons. I will present recent progress in the lab in determining the relative role of wiring and direction-selective dendritic integration as the basis of this circuit computation and how these properties emerge during development.

Sophie Deneve (Ecole Normale Superieure): Thursday June 1, 9:40-10:20am
Efficient and robust adaptive learning in neural circuits
Understanding how the brain learns to compute functions reliably, efficiently and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent networks: the contribution of each connection to the global output error cannot be determined from quantities locally accessible to the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules, provided they combine two experimentally established neural mechanisms. First, they should receive top-down feedback driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks can learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet this variability in single neurons may hide an extremely efficient and robust computation at the population level.

Danielle Bassett (University of Pennsylvania): Thursday June 1, 1:40-2:20pm
A developmental arc of white matter supporting a growing diversity of brain dynamics
As the human brain develops, it increasingly supports the coordinated synchronization and control of neural activity. The mechanism by which white matter evolves to support this coordination is not well understood. We use a network representation of diffusion imaging data to show that white matter connectivity becomes increasingly optimized for a diverse range of predicted dynamics during development. Such optimized topologies emerge across 882 youth from ages 8 to 22, evidencing increasing local specialization. Notably, stable controllers in subcortical areas are negatively related to cognitive performance. To investigate the structural mechanisms that support these changes, we simulate network evolution with a set of growth rules and find that all brain networks - from child to adult - are structured in a manner highly optimized for network control. We further observe differences in the predicted control mechanisms of the child and adult brains, suggesting that the white matter architecture in children has a greater potential to increasingly support brain state transitions, potentially underlying cognitive switching. This work suggests mechanisms for the propagation and stabilization of brain activity at various spatial scales, revealing a possible mechanism of human brain development that preferentially optimizes dynamic network control over static features of network architecture.

Peter Thomas (Case Western Reserve University): Thursday June 1, 2:20-3:00pm
Defining the Phase of a Stochastic Oscillator
Noisy oscillations are ubiquitous in neural systems, occurring at the cell, circuit, and population levels. For deterministic oscillators, synchronization and entrainment have been successfully analyzed via reduction to a one-dimensional phase variable. The level curves of the phase function, the isochrons, form a system of Poincaré sections for which the return time from a section to itself is invariant across the section and equals the oscillator period. Schwabedal and Pikovsky [Phys. Rev. Lett. 2013] proposed a definition of oscillator phase for stochastic systems by numerically constructing Poincaré sections for which the mean first passage time, after completing one full oscillation, is invariant across the section. We recast their definition in terms of a partial differential equation with a jump boundary condition; the solution of the PDE simultaneously gives the geometry of the isochrons and their temporal relation. Finally, we compare the resulting "mean isophase" isochrons to the spectral asymptotic phase proposed by Thomas and Lindner [Phys. Rev. Lett. 2014]. Joint work with Alexander Cao (Case Western Reserve University) and Benjamin Lindner (Humboldt University).
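The PDE formulation mentioned above can be sketched in standard backward-Kolmogorov notation (the notation and sign conventions here are assumptions for illustration; the talk's precise formulation may differ). For a diffusion dX_t = f(X_t) dt + g(X_t) dW_t, the mean-return-time function T(x) solves an adjoint equation with a jump condition across the Poincaré section:

```latex
% Backward (adjoint) generator of the diffusion dX_t = f(X_t)\,dt + g(X_t)\,dW_t:
\mathcal{L}^{\dagger} = \sum_i f_i(x)\,\partial_{x_i}
  + \tfrac{1}{2}\sum_{i,j}\bigl[g(x)\,g(x)^{\top}\bigr]_{ij}\,
    \partial_{x_i}\partial_{x_j}

% Mean-return-time function T(x):
\mathcal{L}^{\dagger} T(x) = -1,

% subject to a jump of one mean period \bar{T} across the section:
T(x^{-}) - T(x^{+}) = \bar{T}.
```

The level sets of T(x) are then the "mean isophase" isochrons, and a phase can be read off as, e.g., Θ(x) = 2π(1 − T(x)/T̄).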

Nicolas Brunel (University of Chicago): Friday June 2, 9:00-9:40am
Minimal biophysical models of synaptic plasticity
Synaptic plasticity is widely assumed to be one of the main mechanisms by which memories are stored. In spite of decades of research in experimental and theoretical neuroscience, there is still no consensus on what a good synaptic plasticity model should look like, with a myriad of different models in use at the single-synapse, neuron, and network levels. In this talk, I will describe a class of minimal biophysical synaptic plasticity models in which plasticity depends exclusively on a single variable, the calcium concentration in the post-synaptic spine. These models naturally reproduce a wide diversity of experimental results on how synaptic plasticity is controlled by spike timing, spike patterns and firing rates. They predict how plasticity should be affected by changes in extracellular calcium concentration. These predictions have been tested successfully in hippocampal slices. These experiments show that at physiological extracellular calcium concentrations, plasticity rules are completely different from what is commonly assumed: in particular, protocols using pairs of single pre- and post-synaptic spikes at low frequency lead to no changes. Plasticity is restored only in the presence of bursts of spikes, and/or at high pre- and post-synaptic firing rates.
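The general logic of calcium-threshold plasticity models can be sketched as follows: spikes add calcium transients, and the synaptic weight is potentiated while calcium exceeds a high threshold and depressed while it exceeds a lower one. Every parameter value below is an illustrative assumption (not the speaker's fitted model), chosen only so that a single isolated presynaptic spike crosses neither threshold while a coincident pre/post pair does.

```python
import numpy as np

def calcium_plasticity(pre_spikes, post_spikes, T=1.0, dt=1e-4,
                       tau_ca=0.02, C_pre=1.0, C_post=2.0,
                       theta_d=1.0, theta_p=1.3,
                       gamma_d=200.0, gamma_p=300.0, tau_w=150.0):
    """Minimal calcium-threshold plasticity sketch. Pre/post spikes (times in
    seconds) add calcium transients that decay with tau_ca; the weight w is
    potentiated while c > theta_p and depressed while c > theta_d.
    All parameters are illustrative assumptions."""
    n = int(T / dt)
    c, w = 0.0, 0.5
    pre = set(np.round(np.asarray(pre_spikes) / dt).astype(int))
    post = set(np.round(np.asarray(post_spikes) / dt).astype(int))
    for i in range(n):
        if i in pre:
            c += C_pre                # presynaptic calcium transient
        if i in post:
            c += C_post               # postsynaptic (bAP-driven) transient
        c -= dt * c / tau_ca          # calcium decay
        dw = gamma_p * (1 - w) * (c > theta_p) - gamma_d * w * (c > theta_d)
        w += dt * dw / tau_w
    return w
```

In this toy setting a lone presynaptic spike leaves the weight unchanged (calcium never clears theta_d), while a near-coincident pre/post pair drives the calcium above theta_p and produces net potentiation, mirroring the qualitative dependence on spike patterns described in the abstract.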

Brent Doiron (University of Pittsburgh): Friday June 2, 1:00-1:40pm
Watch this space (in balanced networks)
A characteristic feature of cortex is the large temporal and trial-to-trial variability of its spiking activity. Seminal theoretical and modeling studies have explored how such variability can be an emergent property of networks that have a dynamic and balanced tension between large excitation and inhibition. While balanced networks readily capture the statistics of single-unit cortical spiking activity, the population activity in these models is asynchronous. This is at odds with the abundant experimental evidence from population recordings showing a weakly correlated cortical state. Further, these recordings show that a large fraction of population correlations are low dimensional, and that cortical state (attention, arousal, anesthetics, etc.) modulates this low-dimensional component. These discrepancies between the variability exhibited in real and model networks limit the contributions of mechanism-based theoretical neuroscience to systems neuroscience. We explore how spatially structured connectivity in balanced network models decouples balanced firing rate solutions from asynchronous network dynamics. We establish the circuit requirements for an internally generated, low-dimensional correlated network state, and one that is easily modulated through top-down recruitment of inhibitory activity. Our theoretical work captures the population-wide variability recorded during attention-modulated tasks in the primate visual system (joint work with Marlene Cohen). As population recording and targeted, cell-specific manipulation techniques advance, we will require mechanistically derived theories such as the one we propose to give interpretation to emerging data sets.

Taro Toyoizumi (RIKEN Brain Science Institute): Friday June 2, 1:40-2:20pm
A theory of how active behavior affects spontaneous activity and neural responses: neural gain modulation by closed-loop environmental feedback
During active behaviours (e.g. running, swimming, whisking, sniffing), motor actions shape sensory input, and sensory percepts inform future motor commands, forming a closed-loop feedback signal between the brain and the body/environment. Closed-loop feedback is mediated by neural circuits across the brain, but how it impacts the dynamics and function of neurons is not well understood. Here we present a theory suggesting that closed-loop feedback between brain, body and environment can change neural gain, modulating endogenous neural dynamics and responses to sensory input. We support this theory with modeling and data analysis in two vertebrate systems. First, in a model of rodent whisking we show that negative closed-loop feedback mediated by the whisking vibrissae can suppress coherent neural fluctuations and neural responses to sensory input in the barrel cortex. We argue this mechanism provides an appealing account of a brain state transition (a marked change in global brain activity) coincident with the onset of whisking in rodents. We also show it suggests a novel mechanism that selectively accentuates active, rather than passive, whisker touch signals. Second, we test this theory by re-analysing previously published two-photon data recorded in zebrafish larvae performing closed-loop optomotor behaviour, showing that the same simple theory can account for the interplay between neural and environmental dynamics and predicts changes in coherent neural activity. Overall, this theory highlights the dependence of brain function on closed-loop brain/body/environment interactions, suggesting it cannot be fully understood through open-loop approaches.
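The gain-modulation idea can be caricatured with a single noisy linear rate unit: environmental reafference that opposes the neuron's output acts like an extra leak, lowering the effective gain and suppressing coherent fluctuations. The model and all parameters below are a toy sketch of that intuition, not the speaker's model.

```python
import numpy as np

def fluctuation_variance(k, w=0.8, tau=0.02, sigma=1.0, T=20.0, dt=1e-3, seed=0):
    """Variance of a noisy linear rate unit with recurrent gain w and
    negative environmental feedback -k*r (the closed loop). Effective
    dynamics: tau dr/dt = -(1 - w + k) r + noise, so k > 0 deepens the
    effective leak and shrinks endogenous fluctuations. Toy sketch only."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    r = 0.0
    rs = np.empty(n)
    for i in range(n):
        drive = w * r - k * r          # recurrent drive minus reafference
        r += dt * (-r + drive) / tau + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        rs[i] = r
    return rs.var()
```

Comparing the open loop (k = 0) with a closed loop (k > 0) shows the variance reduction that, in the abstract's account, accompanies the brain-state transition at whisking onset.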