
Applied Dynamics Seminar Series


Thursdays, 12:30 p.m.


IREAP Large Conference Room (ERF 1207)

The organizers of the Applied Dynamics seminar gratefully acknowledge support from the Institute for Research in Electronics and Applied Physics and the Institute for Physical Science and Technology.

Subscribe to our mailing list for announcements by sending an email to listserv@listserv.umd.edu with no subject and with SUBSCRIBE APPLIED-DYNAMICS [Your full name] or SUBSCRIBE APPLIED-DYNAMICS [Anonymous] (without the square brackets) in the body of the email. Note that an email signature might prevent your subscription command from working.


September 5, 2019

Population collapse in elite-dominated societies: A differential equation model without differential equations

James Yorke and Naghmeh Akhavan

University of Maryland | Department of Mathematics

Abstract: We discuss models of interactions between human populations and the environment, with the population split into poor and rich classes: “Commoners” and “Elites”. The Elites control the society's wealth and consume it at a higher rate than the Commoners, whose work produces the wealth. We say a model is “Elite-dominated” when the Elites' per capita population change rate is always at least as large as the Commoners'. We show that the model exhibits population crashes for all choices of parameter values for which it is Elite-dominated. But any such model with explicit equations raises the question of how the resulting behaviors depend on the details of the model. How important are the particular design features codified in the differential equations? We discard the differential equations, replacing them with qualitative conditions that the original model satisfies, and we prove that these conditions imply population collapse must occur. In particular, one condition is that the model is Elite-dominated. Our approach of introducing qualitative mathematical hypotheses better exposes the underlying features of the model that lead to collapse. We also ask how societies can avoid collapse.
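
The talk's equations are not reproduced here, but the minimal sketch below integrates a hypothetical Commoner/Elite/wealth system of the kind described above, in which the Elites' per capita change rate never falls below the Commoners'. The functional forms and parameter values are invented for illustration and are not the speakers' model.

```python
# Illustrative sketch (not the speakers' model): Commoners x_C, Elites x_E, and wealth w.
# Commoners produce wealth; both groups consume it, Elites at a higher per capita rate,
# and Elites are buffered from scarcity so the model is "Elite-dominated".
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, birth=0.03, death=0.01, famine=0.07, prod=0.5, c_C=1.0, c_E=5.0):
    xC, xE, w = y
    scarcity = 1.0 / (1.0 + max(w, 0.0))          # scarcity rises as wealth runs out
    dxC = (birth - death - famine * scarcity) * xC
    dxE = (birth - death - 0.5 * famine * scarcity) * xE   # Elite rate >= Commoner rate
    dw = prod * xC - (c_C * xC + c_E * xE) * 0.01 * max(w, 0.0)
    return [dxC, dxE, dw]

sol = solve_ivp(rhs, (0, 500), [100.0, 10.0, 100.0],
                t_eval=np.linspace(0, 500, 2000))
print("final populations (Commoners, Elites):", sol.y[0, -1], sol.y[1, -1])
```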

September 12, 2019

Strong neuron-to-body coupling implies weak neuron-to-neuron coupling in motor cortex

Woodrow Shew

University of Arkansas | Department of Physics

Abstract: Cortical neurons can be strongly or weakly coupled to the network in which they are embedded, firing in sync with the majority or firing independently. Both scenarios have potential computational advantages in motor cortex: commands to the body might be more robustly conveyed by a strongly coupled population, whereas a motor code with greater information capacity could be implemented by neurons that fire more independently. Which of these scenarios prevails? Here we measure neuron-to-body coupling and neuron-to-population coupling for neurons in the motor cortex of freely moving rats. We find that neurons with high and low population coupling coexist, and that population coupling is tunable by manipulating inhibitory signaling. Importantly, neurons with different population coupling tend to serve different functional roles. Those with strong population coupling are not involved with body movement. In contrast, neurons with high neuron-to-body coupling are weakly coupled to other neurons in the cortical population.
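
Neuron-to-population coupling of the kind measured here is often quantified as the correlation between one neuron's binned spike counts and the summed counts of the rest of the population. The sketch below illustrates that calculation on synthetic data; the data and coupling strengths are made up, and this is not the study's analysis pipeline.

```python
# Sketch: compute neuron-to-population coupling from binned spike counts.
# `counts` is (n_neurons, n_bins); real data would come from the recordings.
import numpy as np

rng = np.random.default_rng(0)
shared_drive = rng.poisson(3, size=5000)                 # common population fluctuation
counts = rng.poisson(1 + 0.5 * shared_drive[None, :] *
                     rng.uniform(0, 1, size=(50, 1)))    # 50 neurons, varying coupling

def population_coupling(counts):
    """Correlation of each neuron with the summed activity of all other neurons."""
    total = counts.sum(axis=0)
    coupling = np.empty(counts.shape[0])
    for i, row in enumerate(counts):
        rest = total - row                               # exclude the neuron itself
        coupling[i] = np.corrcoef(row, rest)[0, 1]
    return coupling

pc = population_coupling(counts)
print("population coupling range:", pc.min(), "to", pc.max())
```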

September 19, 2019

No seminar. We invite you to attend the Paint Branch Distinguished Lecture at 4 p.m. in 1101 A. James Clark Hall.

Paint Branch Distinguished Lecture: From Nonlinear Optics to High-Intensity Laser Physics

Donna Strickland

University of Waterloo | Department of Physics and Astronomy

Abstract: The laser increased the intensity of light that can be generated by orders of magnitude and thus brought about nonlinear optical interactions with matter. Chirped pulse amplification, also known as CPA, raised the intensity level by a few more orders of magnitude and helped usher in a new type of laser-matter interaction referred to as high-intensity laser physics. In this talk, I will discuss the differences between nonlinear optics and high-intensity laser physics, describe the development of CPA, explain why short, intense laser pulses can cut transparent material, and discuss future applications.

September 26, 2019

Tree-like approximations and critical network cascades

Sarthak Chandra

University of Maryland | Department of Physics

Abstract: Network science is a rapidly expanding field, with a large and growing body of work on network-based dynamical processes. Most theoretical results in this area rely on the so-called “locally tree-like approximation”, which assumes that one can ignore small loops in a network. This is, however, usually an uncontrolled approximation, in the sense that the magnitude of the error is typically unknown, although numerical results show that this error is often surprisingly small. In our work, we place this approximation on more rigorous footing by calculating the magnitude of deviations from tree-based theories in the context of network cascades (a network dynamical process describing the spread of activity through a network). For this widely applicable problem, we discuss the conditions under which tree-like approximations give good results, and we explain the reasons for deviations from the approximation. More specifically, we show that these deviations are negligible for networks with a large number of network links, explaining why tree-based theories appear to work well for most real-world networks.
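
For readers unfamiliar with the setting, the toy simulation below runs a simple probabilistic activation cascade on a sparse random network near its critical point. It only illustrates the kind of process analyzed in the talk, not the calculation itself; the network model and parameters are invented.

```python
# Toy cascade on a sparse random network (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n, mean_degree, p_transmit = 10_000, 4, 0.25   # p_transmit * mean_degree = 1 -> near critical

# Random directed graph stored as adjacency lists (duplicates/self-loops allowed in this toy).
neighbors = [rng.integers(0, n, size=rng.poisson(mean_degree)) for _ in range(n)]

def cascade_size(seed_node):
    """Activate a seed node and spread activity to each neighbor with probability p_transmit."""
    active = {seed_node}
    frontier = [seed_node]
    while frontier:
        new_frontier = []
        for node in frontier:
            for nb in neighbors[node]:
                if nb not in active and rng.random() < p_transmit:
                    active.add(nb)
                    new_frontier.append(nb)
        frontier = new_frontier
    return len(active)

sizes = [cascade_size(rng.integers(0, n)) for _ in range(200)]
print("mean cascade size:", np.mean(sizes))
```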

October 3, 2019

Can we use future observations to improve current forecasts without cheating?

Eugenia Kalnay

University of Maryland | Department of Civil and Environmental Engineering, Department of Mechanical Engineering, Department of Atmospheric and Oceanic Science

Abstract: Co-authored with Tse-Chun Chen and Daisuke Hotta. The National Weather Service computes operational weather forecasts using a process called “data assimilation”: a 6-hour forecast is computed starting from the current “analysis”, and that forecast is then optimally combined with the observations collected 6 hours later to create the new analysis, which serves as the initial condition for the next forecast. This process, known as the “analysis cycle”, is repeated every 6 hours. Miyakoda (personal communication, ~1980) pointed out that using any future information to improve current forecasts should be considered “cheating” because it cannot be done in operational forecasting. Chen (2018, PhD thesis), Chen and Kalnay (2019a, MWR), and Chen and Kalnay (2019b, under review) developed an application of Ensemble Forecast Sensitivity to Observations (EFSO, Kalnay et al., 2012, Tellus) combined with Proactive Quality Control (PQC, Hotta et al., 2017). It uses future data (e.g., observations obtained 6 hours after the present analysis) to identify and delete detrimental observations in the present analysis. We found that making such a late correction to every analysis after the new observations have been received accumulates improvements over time. The accumulated improvement is much larger than the final correction, which cannot be used in order to avoid cheating, so forecasts are significantly improved “without cheating”.
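
To make the forecast/analysis alternation concrete, here is a toy scalar analysis cycle with a simple optimally weighted (Kalman-style) update. It only illustrates the standard cycle described above and has nothing to do with the EFSO/PQC machinery; the model, error variances, and noise levels are invented.

```python
# Toy analysis cycle: forecast 6 h ahead, then blend the forecast with a new observation.
import numpy as np

rng = np.random.default_rng(2)

def forecast(x):
    """Stand-in for a 6-hour model integration (a simple nonlinear map here)."""
    return x + 0.4 * np.sin(x)

truth, analysis = 1.0, 1.3
sigma_f2, sigma_o2 = 0.2, 0.1          # assumed forecast and observation error variances

for cycle in range(8):
    truth = forecast(truth) + rng.normal(0, 0.05)         # real system evolves with noise
    xf = forecast(analysis)                               # 6-hour forecast from last analysis
    obs = truth + rng.normal(0, np.sqrt(sigma_o2))        # new observation arrives
    gain = sigma_f2 / (sigma_f2 + sigma_o2)               # optimal weight for the observation
    analysis = xf + gain * (obs - xf)                     # new analysis starts the next cycle
    print(f"cycle {cycle}: forecast error {abs(xf - truth):.3f}, "
          f"analysis error {abs(analysis - truth):.3f}")
```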

October 10, 2019

Quantum impulse control

Christopher Jarzynski

University of Maryland | Department of Chemistry & Biochemistry and Department of Physics

Abstract: The quantum adiabatic theorem governs the evolution of a wavefunction under a slowly time-varying Hamiltonian. I will consider the opposite limit of a Hamiltonian that is varied impulsively: a strong perturbation U(x,t) is applied over a time interval of infinitesimal duration ε → 0. When the strength of the perturbation scales like 1/ε², there emerges an interesting dynamical behavior characterized by an abrupt displacement of the wavefunction in coordinate space. I will solve for the evolution of the wavefunction in this situation. Remarkably, the solution involves a purely classical construction, yet describes the quantum evolution exactly, rather than approximately. I will use these results to show how appropriately tailored impulses can be used to control the behavior of a quantum wavefunction.
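
In symbols, one natural reading of this scaling (my notation, not necessarily the speaker's) is a Hamiltonian of the form

```latex
% Assumed form of the impulsive protocol (illustrative notation only):
H_\epsilon(t) \;=\; H_0 \;+\; \frac{1}{\epsilon^{2}}\, U\!\left(x, \tfrac{t}{\epsilon}\right),
\qquad 0 \le t \le \epsilon, \quad \epsilon \to 0,
```

where H_0 is the unperturbed Hamiltonian, so the perturbation grows stronger as its duration shrinks; the abstract's claim is that in this limit the net effect on the wavefunction is an abrupt, exactly computable displacement in coordinate space.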

October 17, 2019

Using machine learning to assess short term causal dependence and infer network links

Amitava Banerjee

University of Maryland | Department of Physics and IREAP

Abstract: The general problem of determining causal dependences in an unknown time-evolving system from observations is of great interest in many fields. Examples include inferring neuronal connections from spiking data, deducing causal dependences between genes from expression data, and discovering long spatial-range influences in climate variations. Previous work has tackled such problems by considering correlations, prediction impact, or information transfer metrics. Here we propose a new method that leverages the ability of machine learning to generalize from examples, combined with concepts from dynamical systems theory. We test our proposed technique on numerical examples, obtaining results that suggest excellent performance over a large range of situations. An important, somewhat surprising, conclusion is that, although our rationale is based on noiseless deterministic systems, dynamical noise can greatly enhance our technique's effectiveness.
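
The specific architecture used in the talk is not described here. As a generic illustration of the link-inference problem, the sketch below fits a simple one-step predictor to simulated network data and scores candidate links by the fitted input sensitivities; everything about this toy (the dynamics, the predictor, and the scoring) is invented for illustration and is not the proposed method.

```python
# Toy link inference: fit a one-step predictor and score links by input sensitivity.
import numpy as np

rng = np.random.default_rng(3)
n, T = 5, 5000
A_true = (rng.random((n, n)) < 0.3).astype(float)        # unknown network to recover
np.fill_diagonal(A_true, 0)

# Generate a noisy nonlinear time series driven by the network.
x = np.zeros((T, n))
x[0] = rng.normal(size=n)
for t in range(T - 1):
    x[t + 1] = 0.5 * np.tanh(x[t] @ A_true.T) + 0.3 * x[t] + 0.1 * rng.normal(size=n)

# Fit a linear one-step predictor W by least squares: x[t+1] ~ x[t] @ W.
W, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
scores = np.abs(W.T)                                     # scores[j, i]: influence of i on j
np.fill_diagonal(scores, 0)                              # ignore self-links

# Keep the largest scores and compare against the true links.
threshold = np.sort(scores.ravel())[-int(A_true.sum())]
A_inferred = (scores >= threshold).astype(float)
print("fraction of true links recovered:",
      (A_inferred * A_true).sum() / A_true.sum())
```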

October 24, 2019

Title TBA

Speaker TBA

Institution TBA | Unit TBA

Abstract: TBA

October 31, 2019

A putative mechanism for implicit learning in biological and artificial neural systems

Zhixin Lu

University of Pennsylvania | School of Engineering and Applied Science

Abstract: The human brain is capable of diverse feats of intelligence. A particularly salient example is the ability to implicitly learn dynamics from experiencing the physical world. Analogously, artificial neural systems such as reservoir computing (RC) networks have shown great success in learning the long-term behavior of various complex dynamical systems from data, without knowing the explicit governing equations. Despite the marked differences between biological and artificial neural systems, one fundamental similarity is that they are essentially dynamical systems that are fine-tuned toward the imitation of other dynamical systems. To shed some light on how such a learning function may emerge from biological systems, we draw inspiration from observations of the human brain to propose a first-principles framework explicating its putative mechanisms. Within this framework, one biological or artificial dynamical system, regardless of its specific composition, implicitly and adaptively learns other dynamical attractors (chaotic or non-chaotic) by embedding those attractors into its own phase space through invertible generalized synchronization, and it imitates those attractors by sustaining the embedded attractors through fine-tuned feedback loops. To demonstrate this general framework, we construct several distinct neural network models that adaptively learn and imitate multiple attractors. With these, we observe and explain the emergence of five distinct phenomena reminiscent of cognitive functions: (i) imitation of a dynamical system purely from learning the time series, (ii) learning of multiple dynamics by a single system, (iii) switching among the imitations of multiple dynamical systems, either spontaneously or driven by external cues, (iv) filling in missing variables from incomplete observations of a learned dynamical system, and (v) deciphering superimposed input from different dynamical systems.
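
As a concrete reference for the reservoir-computing side of this picture, the sketch below trains a small random recurrent network, via a ridge-regression readout, to run the Lorenz system in closed loop. The sizes and parameters are illustrative and are not those of the speaker's models.

```python
# Minimal reservoir-computing sketch: learn to imitate the Lorenz attractor.
import numpy as np

rng = np.random.default_rng(4)

def lorenz_step(u, dt=0.01, s=10.0, r=28.0, b=8 / 3):
    x, y, z = u
    return u + dt * np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

# Generate and normalize training data.
T = 20000
data = np.empty((T, 3))
data[0] = [1.0, 1.0, 1.0]
for t in range(T - 1):
    data[t + 1] = lorenz_step(data[t])
data = (data - data.mean(axis=0)) / data.std(axis=0)

# Sparse random reservoir with spectral radius 0.9.
N = 500
A = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.02)
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))
Win = rng.uniform(-0.5, 0.5, size=(N, 3))

# Drive the reservoir with the training data.
states = np.zeros((T, N))
for t in range(T - 1):
    states[t + 1] = np.tanh(A @ states[t] + Win @ data[t])

# Ridge-regression readout mapping reservoir state -> current input.
skip, beta = 200, 1e-6
X, Y = states[skip:], data[skip:]
Wout = np.linalg.solve(X.T @ X + beta * np.eye(N), X.T @ Y).T

# Closed-loop imitation: feed the readout's prediction back as input.
s, u = states[-1], data[-1]
for _ in range(1000):
    s = np.tanh(A @ s + Win @ u)
    u = Wout @ s
print("closed-loop state after 1000 steps:", u)
```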

November 7, 2019

Title TBA

Speaker TBA

Institution TBA | Unit TBA

Abstract: TBA

November 14, 2019

Title TBA

Speaker TBA

Institution TBA | Unit TBA

Abstract: TBA

November 21, 2019

Title TBA

Henry Abarbanel

UCSD | Department of Physics

Abstract: TBA

November 28, 2019

Thanksgiving Break - No seminar

December 5, 2019

Critical phenomena with interacting photons in driven-dissipative micro-cavities

Said Rodriguez

AMOLF | Interacting Photons (Group Leader)

Abstract: Networks of nonlinear optical cavities offer unprecedented opportunities for exploring novel phases of light and matter, with potential applications to simulation and computation. In this talk, I will discuss the fundamental building block of such networks: single nonlinear cavities. First I will present measurements of the dynamic optical hysteresis of a semiconductor micro-cavity with a Kerr nonlinearity. Due to the influence of quantum fluctuations, the hysteresis area follows a double power-law decay as a function of the scanning time across the bistability. I will explain how this behavior can be understood within the framework of the Kibble-Zurek mechanism, which describes the onset of non-adiabatic dynamics near a critical point. I will introduce the concept of a dissipative phase transition, and I will explain how to measure its main quantity – the Liouvillian gap – based on the statistics of quantum jumps in the nonlinear cavity. In the second part of the talk, I will explain how the optical response of a micro-cavity is modified by a non-instantaneous nonlinearity. I will present dynamic optical hysteresis experiments based on tunable micro-cavities filled with thermal nonlinear media. The dynamic hysteresis area displays non-monotonic behavior as the scanning time across the bistability approaches the thermal relaxation time. For large scanning speeds, the non-instantaneous nonlinearity leads to a power-law decay of the hysteresis area with a universal exponent. I will conclude with perspectives for realizing lattices of bistable optical cavities, and for exploring non-Markovian nonlinear dynamics, in a tunable system at room temperature.
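
For orientation, the deterministic mean-field description of a single driven Kerr cavity, and a drive sweep across its bistability, can be sketched as below. This toy shows the bare optical hysteresis only, with none of the quantum-fluctuation or thermal effects that are the subject of the talk; the equation's sign conventions and the parameter values are assumptions.

```python
# Deterministic mean-field sketch of a driven Kerr cavity swept across its bistability.
import numpy as np

kappa, delta, U = 1.0, 2.0, 0.05      # loss rate, detuning, Kerr shift per photon
dt, steps_per_F = 0.01, 400

def sweep(F_values):
    """Slowly step the drive amplitude F and record the steady photon number at each step."""
    alpha = 0.0 + 0.0j
    photons = []
    for F in F_values:
        for _ in range(steps_per_F):
            dalpha = (1j * (delta - U * abs(alpha) ** 2) - kappa / 2) * alpha + 1j * F
            alpha += dt * dalpha
        photons.append(abs(alpha) ** 2)
    return np.array(photons)

F_up = np.linspace(0.1, 6.0, 200)
n_up = sweep(F_up)                    # forward scan
n_down = sweep(F_up[::-1])[::-1]      # backward scan, re-indexed to ascending F
hysteresis_area = np.sum(np.abs(n_up - n_down)) * (F_up[1] - F_up[0])
print("hysteresis area (arb. units):", hysteresis_area)
```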
