Nonlinear dynamics and stochastic methods: from neuroscience to other biological applications
O'Hara Student Center, University of Pittsburgh
3900 O'Hara Street, Pittsburgh, PA 15260
Note: Wyndham Hotel is very close to O'Hara Student Center (see map)
Easy transportation from the airport to the hotel by bus: take 28X Airport Flyer (fare: $3.75 - exact change is required); get off at stop S. Bellefield Ave at Fifth Avenue (in Oakland); then walk 4 min to Wyndham Hotel
March 10-12, 2014
University of Pittsburgh - Pittsburgh, PA
In conjunction with the conference, we plan to honor Professor Bard Ermentrout who has been at the forefront of the conference's topics throughout his career (see conference flier).
Rodica Curtu, University of Iowa
Angela Reynolds, Virginia Commonwealth University
Brent Doiron, University of Pittsburgh
Jonathan Rubin, University of Pittsburgh
This three-day conference will bring together a mix of senior and junior scientists
to report on theoretical methods that have proved successful in mathematical neuroscience,
and to encourage their dissemination and application to modeling in computational medicine
and other biological fields. The invited speakers will present on mathematical topics such
as dynamical systems, multi-scale modeling, phase resetting curves, pattern formation, and stochastic methods.
The mathematical tools will be demonstrated in the context of the following main topics:
1. Rhythms in biological systems;
2. The geometry of systems with multiple time scales;
3. Pattern formation in biological systems;
4. Stochastic models: statistical methods and mean field approximations.
Paul Bressloff (University of Utah)
Carson Chow (National Institutes of Health)
Jack Cowan (University of Chicago)
Sharon Crook (Arizona State University)
Lance Davidson (University of Pittsburgh, Bioengineering)
Jonathan Drover (Cornell Medical College)
Leah Edelstein-Keshet (University of British Columbia)
Roberto Fernandez Galan (Case Western Reserve University)
Pranay Goel (Indian Institute of Science, Education and Research, Pune - India)
Boris Gutkin (Ecole Normale Superieure, Paris - France)
Zachary Kilpatrick (University of Houston)
Nancy Kopell (Boston University)
Cheng Ly (Virginia Commonwealth University)
Remus Osan (Georgia State University)
John Rinzel (New York University)
Jonathan Rubin (University of Pittsburgh, Mathematics)
Daniel Simons (University of Pittsburgh, Neurobiology)
David Terman (Ohio State University)
Maximum dimensions for posters: 48 inches (height) x 46 inches (width)
Abstracts of the invited talks:
Paul Bressloff (University of Utah)
Breakdown of fast-slow analysis in an excitable neuron with channel noise
Subtitle: Bard doesn't like path-integrals, but they can be useful sometimes!
We consider a stochastic version of an excitable system based on the Morris-Lecar model of a neuron, in which the noise originates from the stochastic opening and closing of Na and K ion channels. It is well known that in the deterministic limit, neural excitability can be analyzed using a separation of time scales involving a fast voltage variable and a slow recovery variable, which represents the fraction of open K channels. Using large deviation theory and WKB methods, we show that when the number of K channels is finite, fluctuations in the opening and closing of these channels lead to a breakdown in the standard fast-slow analysis, and the level of spontaneous spiking activity cannot be estimated using standard Kramers rate theory. We also show that an exit time problem for spontaneous action potentials can be formulated by considering maximum likelihood trajectories of the stochastic process.
Carson C. Chow (National Institutes of Health)
How many neurons code a percept?
I will present some arguments for how the dominance time distribution in binocular rivalry can be used to estimate the number of neurons that code for a stimulus, using finite network size effects as a source of noise.
Jack Cowan (University of Chicago)
Geometric Visual Hallucinations: what they tell us about the architecture of the brain
In 1979 Ermentrout and Cowan showed how to analyze spatial pattern formation in a simplified model of visual cortex, using equivariant bifurcation theory applied to the mean-field Wilson-Cowan equations. They provided the first account of how geometric visual hallucinations might be generated in the brain. In 2001 Bressloff, Cowan, Golubitsky, Thomas, and Wiener introduced a more elaborate model of visual cortex, and used the equivariant branching lemma of Golubitsky, Stewart, and Schaeffer (1988) to extend the classes of hallucinatory images generated by such models. Further work by Rule, Stoffregen, and Ermentrout (2011) on flicker-induced hallucinations, and by Butler, Benayoun, Wallace, van Drongelen, Goldenfeld, and Cowan (2011) on fluctuation-driven spatial pattern formation in stochastic Wilson-Cowan equations, has further developed our understanding of the origins of visual hallucinations. In this talk I will provide an overview of these developments, and suggest some further extensions of the Ermentrout-Cowan model.
Sharon Crook (Arizona State University)
A continuum model approach for exploring the role of neuronal structure
One of the most prevalent models in neuroscience is the dimensionless cable equation, which is used to model changes in the transmembrane potential in a passive dendrite. Here we extend the cable equation with a continuum approach that allows for modeling a density of dendritic spines or dendritic branches along the cable. Several examples will demonstrate how we are using this approach to address fundamental questions in neuroscience relating to how excitability depends on neuronal structure and structural plasticity.
This work is done jointly with Steve Baer and Sandra Berger.
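As a rough illustration of the classical starting point for this line of work (my sketch, not the speakers' model), the dimensionless passive cable equation dV/dt = d2V/dx2 - V can be solved with an explicit finite-difference scheme; the grid size, pulse duration, and boundary handling below are arbitrary illustrative choices:

```python
import numpy as np

# Dimensionless passive cable: dV/dt = d2V/dx2 - V, sealed (no-flux) ends,
# with a brief current injection at x = 0. All parameters are illustrative.
nx, dx, dt, steps = 101, 0.1, 0.004, 500   # dt < dx^2 / 2 for stability
V = np.zeros(nx)
for t in range(steps):
    lap = np.empty_like(V)
    lap[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    lap[0] = 2 * (V[1] - V[0]) / dx**2        # sealed-end boundary
    lap[-1] = 2 * (V[-2] - V[-1]) / dx**2
    inj = 1.0 if t * dt < 1.0 else 0.0        # pulse for the first time unit
    V = V + dt * (lap - V)                    # leak plus diffusion
    V[0] += dt * inj / dx                     # inject current at x = 0
```

After the pulse, the voltage profile decays with distance from the injection site, as expected for a passive cable; a spine or branch density, as in the talk, would enter as additional terms coupled to this equation.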
Lance Davidson (University of Pittsburgh)
Engines of cell shape change: actomyosin dynamics within the cell cortex
Embryonic morphogenesis is driven by physical processes that include the coordinated activity of cytoskeletal actomyosin complexes to generate force and establish material properties. Actomyosin networks within the cell cortex are composed of filamentous actin (F-actin), non-muscle myosin II, and a variety of regulatory proteins. Work in our group uses both experimental and theoretical biophysics to explore the role of these complexes during morphogenesis in the aquatic frog Xenopus laevis. Specifically, we ask how these complexes produce force and control cell shape changes during the earliest tissue movements that establish the three principal germ layers (gastrulation) and create the central nervous system (neurulation). We use live-cell confocal time-lapse imaging and biophysical testing to understand how signaling pathways modulate actomyosin dynamics and the biomechanical properties of cells and tissues. We use computational and mathematical tools to investigate how actomyosin complexes form and how the dynamics of networks of F-actin and myosin motors are responsible for the mechanical properties of cells within the embryo. In this talk I will describe some of the diverse actomyosin-dependent phenomena in the embryo and efforts to use models to understand the organizing principles behind those phenomena.
Jonathan Drover (Cornell Medical College)
A mean-field model suggests a novel EEG analysis technique to index thalamocortical dynamics
The mechanisms responsible for recovery after a severe brain injury are poorly understood. Subcortical structures, such as the thalamus, are known to play an important role in the transfer of information from one cortical area to another, so some knowledge of the behavior of these structures would yield information about the recovery process. However, thalamocortical activity is extremely difficult to assay in a healthy individual, and generally more so in an impaired subject. The electroencephalogram (EEG) has many advantages over other imaging techniques, but it only records brain activity from the cortex. In an attempt to glean information about the dynamics of the thalamus and other structures from EEG, we develop a mean field model describing the different interactions between distant cortical areas. Our model is based on the model described by Robinson et al. (2002) and consists of cortex, thalamic relay nuclei, and reticular nucleus. The primary result is that shared thalamic inhibition (reticular nucleus) can allow multistable behavior. This provides a mechanism for spontaneous fluctuations of activity, and corresponding changes in the coherence between distant cortical regions, that occur on short time scales. We look for this behavior in the EEG of a subject who demonstrates a marked (paradoxical) change in behavior upon the administration of Zolpidem (e.g., Ambien). Using techniques that I will describe, we found that, following Zolpidem administration, there are changing patterns of coherence between frontal and distant cortical regions. We suggest that these fluctuations involve thalamic contributions and are characteristic of normal thalamic function.
Leah Edelstein-Keshet (University of British Columbia)
From actin assembly to cell motility, with a little help from my friend
In this talk I will survey a project that started nearly 30 years ago, in the 1980s, when I first met and had the luck to work with Bard Ermentrout. What started out as a project on branching and alignment then led to our work on actin polymers and the cytoskeleton. In more recent years, this morphed into multiscale projects in my group on cytoskeleton-based cell motility, complete with pattern formation aspects (of regulatory proteins) and reaction-diffusion systems on deforming cell-shaped domains. I plan to give an overview of this work and of the inspiration derived from working with my good friend, and role model, Bard Ermentrout.
Roberto Fernandez Galan (Case Western Reserve University)
Stochastic Neural Dynamics and Information Processing in the Autistic Brain
I will show that large-scale brain dynamics at rest can be accurately modeled with a multivariate Ornstein-Uhlenbeck process (mOUP). This allows us to: 1) define functional connectivity in a rigorous way as the drift operator; 2) determine the stochastic inputs driving the brain as the stochastic term of the mOUP. Applying this method to magnetoencephalography (MEG) data from children with and without autism, we found that functional connectivity in children with autism is mainly altered between frontal and parietal/occipital areas, with increased excitation relative to controls. In addition, the spatial distribution of stochastic inputs is significantly more homogeneous and less complex. Determining the stochastic inputs also allows us to quantify the information gain, or the amount of information that the brain creates spontaneously at rest. The information gain is significantly higher in children with autism. Finally, I will discuss the relevance of these findings and their interpretation from a cognitive science perspective.
This work was done in collaboration with Dr. J. L. Perez Velazquez and Dr. L. Garcia Dominguez from the University of Toronto and SickKids Hospital.
- L. Garcia Dominguez, J. L. Perez Velazquez and R. F. Galan (2013). A Model of Functional Brain Connectivity and Background Noise as a Biomarker for Cognitive Phenotypes: Application to Autism. PLoS One, 8(4): e61493.
- J. L. Perez Velazquez and R. F. Galan (in press). Information Gain in the Brain's Resting State: A New Perspective on Autism. Frontiers in Neuroinformatics.
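A minimal sketch of the drift-estimation idea (my illustration, not the authors' pipeline): for a multivariate Ornstein-Uhlenbeck process dX = A X dt + S dW, the drift matrix A, read as functional connectivity, can be recovered from a long recording by regressing the increments on the state. The network size, dynamics, and noise level below are invented for the example:

```python
import numpy as np

# Simulate a small 3-channel mOUP with a known drift matrix, then
# recover the drift ("functional connectivity") by least squares.
rng = np.random.default_rng(1)
A_true = np.array([[-1.0,  0.4,  0.0],
                   [ 0.2, -1.2,  0.3],
                   [ 0.0,  0.1, -0.8]])
n, dt, steps = 3, 0.01, 100_000
X = np.empty((steps, n))
x = np.zeros(n)
for t in range(steps):
    X[t] = x
    # Euler-Maruyama step: drift plus isotropic noise
    x = x + A_true @ x * dt + 0.5 * np.sqrt(dt) * rng.standard_normal(n)

dX = np.diff(X, axis=0)
# Least squares: dX ~ (X dt) B, with the drift estimate A_hat = B^T
B, *_ = np.linalg.lstsq(X[:-1] * dt, dX, rcond=None)
A_hat = B.T
```

With a sufficiently long recording, A_hat closely matches A_true; the residuals dX - (X dt) A_hat^T then estimate the stochastic input term, which is the quantity the abstract uses to compute information gain.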
Pranay Goel (Indian Institute of Science, Education and Research, Pune - India)
Using the Dual Oscillator Model (DOM) to study bursting in pancreatic islets
Insulin secretion is closely tied to bursting in the pancreatic islets of Langerhans. The Dual Oscillator Model (DOM) was introduced by Bertram et al. to study bursting that is driven not only by electrical activity but by glucose metabolism as well. In the first part of my talk I shall describe the geometry of bursting as we understand it in the DOM, how it explains oscillations of various timescales observed in experiments, and the changes that take place with glucose stimulation. In the second part I will describe our recent attempts at estimating the network connectivity of gap junctions between beta-cells in islets.
Boris Gutkin (Ecole Normale Superieure, Paris - France)
Working with Gamma, Theta, Alpha Oscillations (and Noise Correlations) to Make Working Memory Work
Working memory is tightly capacity limited: it requires selective information gating, active information maintenance, and rapid active updating. Hence performing a working memory task requires rapid, controlled transitions between neural persistent activity and the resting state. We propose that changes in the spike-time structure of background neural activity provide a mechanism for the required working memory operations. As a proof of principle, we implement sustained activity and working memory in a recurrently coupled spiking network whose neurons receive excitatory random background activity, with background correlations induced by a common source. We first show that with sufficiently high correlations, the sustained state becomes practically unstable, so it cannot be initiated by a transient stimulus. We find that activity termination and non-initiation can be controlled separately. We exploit this in a working memory model implementing the delayed match-to-sample task by flexibly modulating the correlation level at different phases of the task. We then show that similar control can be exerted with oscillatory background inputs by shifting the frequency of the oscillations. We find that the beta/gamma band ensures memory initiation, theta provides robust maintenance in the face of distractors, and alpha leads to memory clearance. We construct the full task using this mechanism and relate our theory to human and primate data.
Zachary Kilpatrick (University of Houston)
Getting the most out of bumps
We explore the effects of network connectivity and noise on the dynamics of bump attractors. Wilson and Cowan (1973) originally proposed lateral inhibitory neuronal networks as canonical models of large-scale neural pattern formation. Stationary, spatially localized activity is often referred to as a "bump" (Amari 1977). Bard and many authors since (e.g., Laing, Gutkin, Chow, Pinto) have studied how bumps respond to external inputs, adaptation, heterogeneity, and noise. Recent experimental evidence suggests the positions of bumps can encode spatial location in working memory (Compte et al 2013) and navigation (Yoon et al 2013) tasks. I will discuss work I began with Bard concerning how spatial heterogeneity in network connectivity can stabilize bumps against noisy perturbations that may change their position. The spatial frequency of heterogeneity can be tuned to obtain optimal information transfer (obeying Bard's law of interior optima) by balancing information loss from position quantization with information preservation through bump stabilization. In addition, I will discuss bump attractor networks capable of encoding certainty. One key feature is a balance of network excitation and inhibition that leads to bump solutions lying in a two-dimensional, neutrally stable manifold: the bump's position and amplitude can each take on one of a continuum of possible values.
Nancy Kopell (Boston University)
Brain Rhythms: Multiple Roles of Inhibition
Brain rhythms are known to be associated with all aspects of cognition, from sensory processing to attention and decision making to motor planning. Nevertheless, the functions of brain rhythms in cognition are far from understood. This talk focuses on how inhibition participates in some brain rhythms, and how the physiology of the rhythms produces both mathematical structure and implications for function.
Cheng Ly (Virginia Commonwealth University)
Networks of heterogeneous neural oscillators
Heterogeneity is a realistic physiological attribute neglected in many mean field models. To this end, we consider pulse-coupled noisy phase oscillators that have distinct/heterogeneous phase resetting curves (PRCs). A full probability density or mean field description of this system is inherently high dimensional, requiring reduced descriptions for tractability. We present some preliminary results of reduction methods to capture various statistical quantities and explore their robustness. Using asymptotic methods, we also analyze how the variability of the time between spikes changes with coupling. A straightforward explanation is provided for how PRC type alters the variability in a heterogeneous network.
Remus Osan (Georgia State University)
Targeting performances for stochastic models of neural growth with uniform branching and pruning
The highly interconnected and functional circuitry of the nervous system relies on the complex architecture of neuronal networks elaborated during development, which in turn depends on the ability of neurons to branch and establish connections with appropriate targets. Traditional mathematical and computational research has focused on characterizing arborization patterns of functionally specific neuronal populations and showing how particular growth dynamics shape their morphological distinctiveness. Less effort, however, has been placed on understanding how the growth rules associated with neuronal morphogenesis translate into effective search/avoid strategies for reaching desirable targets. In this work, we start with a simplified growth model to evaluate the search efficiency of targets by neurons evolving within a stochastic arborization algorithm under a conditional finite resource constraint. Specifically, for a set time interval of evolution t0, each active neurite branches independently of other branches with set probability (p) and prunes with set probability (r), while total tree length is limited to a maximal value (L). Targeting effectiveness (the number of active branches at a certain distance away from the origin) is evaluated as described in Osan et al., PLoS One 2011. We then proceed with a mathematical treatment of the search problem, using probability rules and statistics to derive the trends induced by variations in the parameters p, r, and L. We use the probability distribution obtained from the analytical treatment to compute the expected performance and variability. These results match the full-fledged numerical simulations of stochastically growing neurons, thus showing that the optimal arborization dynamics associated with target effectiveness can be similarly derived from two separate treatments.
The agreement between computational simulations and non-algorithmic statistical analysis demonstrates the ability of in charta (on paper) approaches to build on standard in silico evaluations of in vivo observations. Future work will examine parameter-based branching and pruning, allowing us to formulate predictions on how changing specific aspects of arborization can fine-tune targeting performances. The in charta work, expanded to include aspects of arbor geometry (e.g., branch angle), can provide insight into phenomena observed during neurogenesis, such as sibling competition, fasciculation, and tiling. This parallel approach to uncovering the biological explanations associated with the dynamics of growth contributes to our understanding of design principles that operate during neural development and regeneration.
This work was done in collaboration with Jun Xia (Georgia State University) and Emily Su (Rutgers University).
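The growth rule described in the abstract can be sketched as a simple simulation (my illustration, not the authors' code): unit-length segments grown over t0 steps, with each active tip pruned with probability r, branching with probability p, and the total tree length capped at L. The specific parameter values and the convention that pruned material is not reclaimed are arbitrary choices for the example:

```python
import random

def grow(p, r, L, t0, d, seed=0):
    """Grow one stochastic arbor; return the number of active tips that
    end at least distance d from the origin (targeting effectiveness)."""
    rng = random.Random(seed)
    tips = [0]            # distances of active tips from the origin
    total_len = 1
    for _ in range(t0):
        survivors = []
        for tip in tips:
            if rng.random() < r:                  # prune this branch
                continue
            if rng.random() < p and total_len + 2 <= L:
                survivors += [tip + 1, tip + 1]   # two daughter branches
                total_len += 2
            elif total_len + 1 <= L:
                survivors.append(tip + 1)         # extend without branching
                total_len += 1
            else:
                survivors.append(tip)             # resource cap: stall
        tips = survivors
    return sum(tip >= d for tip in tips)

# Fraction of runs in which at least one branch reaches the target distance.
runs = 50
hits = sum(grow(p=0.2, r=0.05, L=5000, t0=40, d=30, seed=s) > 0
           for s in range(runs))
```

Because each step produces on average (1 - r)(1 + p) tips per active tip, these parameters give a supercritical process, so most runs reach the target; sweeping p, r, and L in this loop is the kind of numerical experiment the analytical treatment in the abstract is compared against.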
John Rinzel (New York University)
Biased competition for context-dependent perceptual choice
For an ambiguous sensory stimulus, your perception may vary with each new presentation or alternate haphazardly during long, steady presentations (e.g., when viewing the faces-vase image). Competition models for the dynamics of percept choice involve mutual inhibition and slow adaptation, either half-center oscillators or multistable units, with noise affecting dominance durations. The neuronal computation may be biased if the sensory input is imbalanced (say, in binocular rivalry with different contrasts for the images to the left and right eyes) or if influenced "top down" by the subject trying to "hold" one percept or the other. When biased, the mean dominance durations are not identical. Percept probability is also affected by pre-conditioning stimuli ("context") before the presentation of the target stimulus. Models for biased competition dynamics will be presented for visual and auditory cases.
Jonathan Rubin (University of Pittsburgh)
...and Out Come the Boundary Conditions
In this talk, we explore the effects of working within 7 m of Bard Ermentrout's office over the better part of 14 years. Furthermore, we consider the hypothesis that every dynamics problem can be posed in a form that allows it to be simulated using the boundary value solver in an obscure program known as XPPAUT.
Daniel J. Simons (University of Pittsburgh)
Receptive field transformations in feedforward thalamocortical circuits
In thalamocortical circuits of the rat whisker-to-barrel system, inhibitory interneurons are contacted strongly by convergent inputs from numerous thalamic input neurons. In rats, strong feedforward inhibition renders excitatory barrel circuitry especially responsive to synchronous or near-synchronous firing of thalamic neurons. For example, in the thalamus, deflection amplitude is encoded by the magnitude of the entire response, whereas initial population firing rates reflect deflection velocity. Barrel neurons are more sensitive to deflection velocity than amplitude. Spatial receptive fields, which in the thalamus typically include a number of nearby whiskers, become focused by similar means, because deflections of the barrel's "principal" whisker evoke greater thalamic population firing synchrony than do deflections of neighboring whiskers. Computer simulations show that activity in model barrel circuits is dominated by inhibition throughout the brief response evoked by a whisker deflection. With synchronous inputs, excitatory barrel neurons are able to fire, albeit momentarily, before they are overwhelmed by strong inhibition.
Feedforward inhibition establishes a temporally elastic window of opportunity for thalamic inputs to engage excitatory circuitry within the barrel for a period of time that reflects the synchrony of thalamic firing. The window is open "wider" in young animals and in adult animals whose whiskers were trimmed early in life and then allowed to regrow in adulthood. The window is also wider in mice wherein inhibitory cell firing is less robust; compared to rat barrels, velocity sensitivity is reduced and receptive fields are less spatially focused. The whisker-hairs themselves are thinner and hence less velocity-sensitive than comparable whiskers in rats. Thus, the strength of feedforward inhibition may be regulated by the physical properties of the sensory periphery, providing a means for species-specific regulation of cortical sensory function via engagement of feedforward inhibitory circuitry.
David Terman (Ohio State University)
What do small toy models tell us about large complicated networks?
I will review mathematically rigorous results concerning oscillatory dynamics in a reduced model for two mutually coupled neurons. I will then consider larger networks and discuss mechanisms underlying both rhythmic and irregular activity patterns.
The general goal of the conference is to present and demonstrate both the successes and challenges of mathematical modeling in neuroscience, and to encourage the dissemination and application of such techniques to modeling in other biological fields. Therefore, the organizing committee will strive to balance the number of neuroscience-related posters with posters presenting mathematical and computational results for other biological problems, such as disease dynamics, epidemiology, and physiology (please see abstract submission). When selecting the poster presentations, priority will be given to young researchers, including graduate students, women, and minorities.
Researchers interested in attending the conference should submit a registration application. Attendees will be selected from among the applicants based on research interests overlapping with the conference theme, and to ensure diversity and breadth of participation by individuals and institutions.
To ensure accessibility to a general mathematical and computational biology audience, the invited speakers will be asked to assume no prior knowledge of any particular biological area when preparing their talks.
!!! NEW: Deadline for registration application is January 14th, 2014 at 11:59 PM EST !!!
September 10, 2013: Registration application and travel award application open.
Abstract submission open.
December 1, 2013: Travel award application closes.
Abstract submission closes.
!!! Extended deadline for abstract submission and travel award application: December 22, 2013 !!!
!!! Given the wide interest shown in the conference, we reopen the abstract submission and travel award application with FINAL DEADLINE: January 2, 2014. NOTE: The website may still accept submissions after the above date, but those received after 11:59 PM EST on 1/2/2014 will not be considered !!!
January 17, 2014: Authors notified about outcome of review
(Poster presentation: accept/reject).
Travel award notifications.
Please complete the registration application online by providing your name, home institution, position, research interest and email address.
Registration Application
The deadline for registration applications (January 14, 2014 at 11:59 PM EST) has passed!!!
Travel awards for students, postdocs and other early career researchers presenting a poster may become available. If you are interested in being considered for a travel award, you will have an opportunity to apply via the registration application. In order to complete the travel award application, you will need to email a copy of your CV. Please have it available.
Travel Award Application
The deadline for travel award applications has passed!!!
The conference will start Monday 3/10/2014 in the morning and will end Wednesday 3/12/2014 at noon. There will be four 40-minute long invited talks each morning and three invited talks each afternoon.
There will be no parallel sessions, and all lectures will be held in the same conference hall.
A two-hour poster session is scheduled for Monday evening. Selected poster presenters will be notified by January 17, 2014 .
Abstract Submission
The deadline for abstract submission has passed!!!
Wyndham Pittsburgh University Center
100 Lytton Avenue Pittsburgh, PA 15213
A group rate ("Nonlinear Dynamic") is available for conference attendees for the interval from Sunday, March 9th (check-in) through Thursday, March 13th (check-out). The rate is $135/night plus taxes. Parking is an additional $14/day. The hotel is within walking distance of the conference site.
Hotel reservation deadline for this group rate: Sunday February 16th, 2014, via the following website:
Wyndham Pittsburgh University Center
Group code: 03096841ND (enter this code in the special rate field marked "group code")