Meditations on the future of particle physics.

In the summer of 1918, Emmy Noether published the theorem that now bears her name, establishing a profound two-way connection between symmetries and conservation laws. The influence of this insight is pervasive in physics; it underlies all of our theories of the fundamental interactions and gives meaning to conservation laws that elevates them beyond useful empirical rules. Noether's papers, lectures, and personal interactions with students and colleagues drove the development of abstract algebra, establishing her in the pantheon of twentieth-century mathematicians. This essay traces her path from Erlangen through Göttingen to a brief but happy exile at Bryn Mawr College, illustrating the importance of "Noether's Theorem" for the way we think today.

Neutrinos are tiny subatomic particles with surprising properties. In particular, neutrinos oscillate: they convert from one type to another as they travel, a phenomenon under active study. The origin of neutrino mass is important for astrophysics, cosmology, and particle physics, and many open questions surrounding neutrino oscillation remain. The Tokai-to-Kamioka (T2K) neutrino oscillation experiment sends a beam of muon-flavor neutrinos or antineutrinos 295 km across Japan. This seminar will discuss the state of the field of neutrino oscillation physics, including recent results from T2K and T2K's exciting future program.
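The phenomenon has a compact quantitative core: in the two-flavor approximation, a muon neutrino's survival probability depends only on the baseline L, the energy E, a mixing angle, and a mass-squared splitting. A minimal sketch (the parameter values below are illustrative, roughly atmospheric-scale numbers, not T2K's fitted results):

```python
import math

def survival_prob(L_km, E_GeV, sin2_2theta=0.99, dm2_eV2=2.5e-3):
    """Two-flavor nu_mu survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with L in km, E in GeV, and dm2 in eV^2 (standard convention)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# T2K's 295 km baseline with a ~0.6 GeV beam sits near the first
# oscillation maximum, where muon-neutrino disappearance is largest.
p = survival_prob(L_km=295, E_GeV=0.6)
```

With these illustrative parameters, the survival probability drops to a few percent, which is why the experiment sits at this particular baseline-to-energy ratio.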

Join Zoom Meeting

https://umich.zoom.us/j/92296304951

Meeting ID: 922 9630 4951

Passcode: 665739

Sensitive and accurate detection of time-dependent magnetic fields has myriad applications, from fundamental physics to astronomy, medicine, geophysics and, of course, defense. Motivated by two experiments that challenge the Standard Model of elementary particle physics, we have developed new approaches to magnetometry using optically pumped 3He at fields from tesla to microtesla. For the recently announced measurement of the muon magnetic moment anomaly at Fermilab, an absolute 3He NMR magnetometer accurate to 30 ppb was introduced into the calibration chain for the 1.45 T magnetic field of the storage ring. For experiments under development to extend sensitivity to the neutron electric dipole moment, 3He magnetometers are read out by micro-fabricated rubidium optical magnetometers. I'll describe the motivations and our continuing efforts to develop and improve these sensors.
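For scale, the quantity such a probe actually measures is a free-precession frequency proportional to the field. A back-of-the-envelope sketch (the 3He gyromagnetic ratio used here, |γ|/2π ≈ 32.434 MHz/T, is a standard reference value supplied for illustration, not a number from the abstract):

```python
GAMMA_HE3_MHZ_PER_T = 32.434  # |gamma|/2pi for 3He in MHz/T (reference value)

def precession_freq_mhz(b_tesla):
    """Free-precession frequency of 3He nuclei in a field b_tesla, in MHz."""
    return GAMMA_HE3_MHZ_PER_T * b_tesla

f_mhz = precession_freq_mhz(1.45)  # ~47 MHz in the 1.45 T storage-ring field
df_hz = f_mhz * 1e6 * 30e-9        # what a 30 ppb accuracy goal means in Hz
```

So a 30 ppb calibration corresponds to pinning down a roughly 47 MHz precession frequency at about the hertz level.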

Abstract: Disease transmission systems are highly nonlinear, stochastic, and imperfectly observable. However, high-dimensional parameter learning for partially observed, nonlinear, and stochastic spatiotemporal processes remains an open methodological problem. We propose the iterated block particle filter (IBPF) algorithm for learning high-dimensional parameters over graphical state space models with general state spaces, measures, transition densities, and graph structure. Theoretical performance guarantees are obtained on beating the curse of dimensionality (COD), algorithm convergence, and likelihood maximization. Experiments on a highly nonlinear and non-Gaussian spatiotemporal model of measles transmission reveal that the iterated ensemble Kalman filter algorithm (Li et al. (2020), Science) is ineffective and the iterated filtering algorithm (Ionides et al. (2015), PNAS) suffers from the COD, while our IBPF algorithm beats the COD consistently across experiments with a variety of metrics.
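For readers unfamiliar with the building block, a plain bootstrap particle filter for a toy one-dimensional model is sketched below. The IBPF of the abstract extends this idea with block-wise resampling over graph nodes and iterated parameter perturbations, none of which is shown here; the model and all numbers are hypothetical.

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=500, sigma_x=1.0,
                              sigma_y=1.0, seed=0):
    """Minimal bootstrap particle filter for a 1-D Gaussian random-walk
    state observed with Gaussian noise; returns a log-likelihood
    estimate of the observation sequence ys."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)  # initial particle cloud
    loglik = 0.0
    for y in ys:
        x = x + rng.normal(0.0, sigma_x, n_particles)   # propagate
        w = np.exp(-0.5 * ((y - x) / sigma_y) ** 2)     # unnormalized weights
        loglik += np.log(w.mean()) - 0.5 * np.log(2 * np.pi * sigma_y ** 2)
        x = rng.choice(x, size=n_particles, p=w / w.sum())  # resample
    return loglik

ll = bootstrap_particle_filter([0.1, -0.2, 0.3])
```

The COD the abstract refers to is the collapse of such weights as the state dimension grows, which is what blocking over the graph structure is designed to avoid.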

Talk based on the paper: "Iterated Block Particle Filter for High-dimensional Parameter Learning: Beating the Curse of Dimensionality", Ning Ning and Edward Ionides, arXiv: https://arxiv.org/abs/2110.10745, 2021.

Ning Ning is currently a Postdoctoral Research Fellow in the Dept. of Statistics at the University of Michigan, Ann Arbor. Her research interests include stochastic processes, Markov chains, time series, networks, and machine learning. She received her PhD in Statistics and Applied Probability at UCSB. Prior to joining the University of Michigan, she was a Postdoctoral Research Associate in the Dept. of Applied Math at the Univ. of Washington, Seattle. Her personal website is https://sites.google.com/site/patricianing/

Materials composed of d^5 magnetic atoms in a strong octahedral crystal field with large spin-orbit coupling host not only Heisenberg and pseudo-dipolar interactions but also unconventional, spatially anisotropic magnetic interactions. Of particular interest are Ir^{4+} compounds with a network of edge-sharing IrO_6 octahedra arranged in a honeycomb lattice, for example α-Li_2IrO_3 and Na_2IrO_3, in which strong Kitaev interactions are present. Recently, aiming to push the magnetic interactions in these compounds toward the Kitaev quantum spin liquid (QSL) limit, the new honeycomb iridates Ag_3LiIr_2O_6 and H_3LiIr_2O_6 have been synthesized by introducing alternative atomic species between the LiIr_2O_6 layers. I will present resonant x-ray spectroscopy measurements on these two new-generation honeycomb iridates. Motivated by recent proposals to use coherent light-matter interaction to nudge these materials toward the Kitaev QSL, I will also discuss time-resolved optical polarimetry and time-resolved resonant techniques as probes of competing magnetic phases far from equilibrium.

Abstract:

Well-conducted field experiments, broadly construed to include both randomized controlled trials and quasi-experiments, involve extensive planning with substantive deliberation. Such deliberation has the potential to fuel and strengthen the analysis stage of the study. Each field experiment is unique, from the subgroups on which effects are expected to concentrate to the design of the study itself. Reliance on off-the-shelf methods to analyze field experiments may exclude this potentially valuable information that, if handled properly, would provide a greater opportunity to detect an effect. In this dissertation, we propose two novel methods that seek to extract information unique to a specific study and translate it into additional power. We demonstrate these methods on a large-scale education intervention aimed at correcting the stalled reading trajectories of early elementary students.

The first method, PWRD aggregation, converts the theory of change behind a class of education interventions into a test statistic that maximizes the Pitman efficiency over standard methods, thus providing greater power. The scheme emphasizes cohorts and years of follow-up in which effects are expected to accrue, with appropriate attention paid to the relative precision of estimates within cohorts. While PWRD aggregation increases power, confidence interval estimation is more difficult. To alleviate this problem, we partition our parameter space into three regions: equivalence, superiority, and inferiority. In the first, we employ PWRD aggregation to provide the greatest opportunity to detect an effect. In the latter two regions, we employ a standard method, so that when we are able to detect an effect, interpretation of the point estimate and confidence interval proceeds in a typical fashion.
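The precision-weighting intuition can be illustrated with the generic inverse-variance rule it builds on. This is not the PWRD statistic itself, and the cohort-level numbers below are hypothetical:

```python
import numpy as np

def precision_weighted_mean(estimates, std_errors):
    """Combine per-cohort impact estimates by inverse-variance
    (precision) weighting: cohorts estimated more precisely get
    more weight. Returns the pooled estimate and its standard error."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    combined = np.sum(w * est) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return combined, se

# Three hypothetical cohort-level impact estimates (effect-size units).
est, se = precision_weighted_mean([0.10, 0.25, 0.18], [0.05, 0.10, 0.08])
```

PWRD goes further by weighting according to where the theory of change predicts effects will accrue, not precision alone.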

The second method we propose is a dry run simulation scheme that creates a pseudo-experiment replicating the initial randomized trial in a manner that preserves blinding to impact estimates. This procedure, which uses real rather than synthetic data, provides a sandbox in which various models may be tested to discover the model specification that most precisely estimates an artificially imposed treatment effect. The dry run method allows the statistician advising field experiments to estimate expected losses for each of a variety of methods, enabling them to elect a novel or unfamiliar method if it demonstrably outperforms methods more familiar to the broader team. When applied to the reading intervention that motivated dry runs, results from this method challenged received notions about covariate choice, suggesting we control for covariates beyond pre-test scores.

A grasshopper lands at a random point on a planar lawn of area one. It then makes one jump of fixed distance d in a random direction. What shape should the lawn be to maximize the chance that the grasshopper remains on the lawn after jumping? This easily stated yet hard to solve mathematical problem has intriguing connections to quantum information and statistical physics. A generalized version on the sphere can provide insight into a new class of Bell inequalities. A discrete version can be modeled by a spin system, representing a new class of statistical models with fixed-range interactions, where the range d can be large. I will show that, perhaps surprisingly, there is no d > 0 for which a disc shaped lawn is optimal. If the jump distance is smaller than the radius of the unit disc, the optimal lawn resembles a cogwheel, with transitions to more complex, disconnected shapes at larger d. Using parallel tempering Monte Carlo for the discrete spin model, several classes of optimal lawn shapes with different symmetry properties can be identified.
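The disc case is easy to explore numerically. A minimal Monte Carlo sketch of the stay-on-lawn probability for a unit-area disc (just the forward probability estimate, not the shape optimization of the talk):

```python
import math
import random

R = 1.0 / math.sqrt(math.pi)  # radius of the unit-area disc

def stays_on_disc(d, rng):
    """One trial: uniform point on the disc, one jump of length d in a
    uniform random direction; True if the grasshopper stays on the disc."""
    r = R * math.sqrt(rng.random())     # sqrt gives an area-uniform radius
    phi = 2 * math.pi * rng.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    theta = 2 * math.pi * rng.random()  # jump direction
    x, y = x + d * math.cos(theta), y + d * math.sin(theta)
    return x * x + y * y <= R * R

def disc_success_prob(d, n=50_000, seed=1):
    rng = random.Random(seed)
    return sum(stays_on_disc(d, rng) for _ in range(n)) / n

p_short = disc_success_prob(0.1)  # short jumps usually stay on the lawn
p_long = disc_success_prob(0.5)   # jumps comparable to the diameter rarely do
```

The talk's result is that some non-disc shape always beats these numbers, no matter how small d is.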

I will present the first oscillation-related results from the MicroBooNE neutrino experiment at Fermilab. These measurements, featuring extensive searches for anomalous rates of both electron neutrinos and neutrino-induced gammas from the Booster Neutrino Beamline with multiple final-state topologies, directly address the 4.8σ excess of electron-like events seen by the MiniBooNE experiment.

Practical applications of the 5D_J levels of Rb include their role in portable vapor-cell atomic clocks [1] and the definition of the meter [2]. I will outline a recent experiment in which the 5D_{3/2} state's dynamic polarizability and photoionization (PI) cross section at 1.064 μm are measured simultaneously using laser spectroscopy. In this experiment, cold ^{85}Rb atoms are trapped in a deep 1.064-μm optical lattice, with a depth of ~10^5 photon recoils, and probed with two lasers scanning over the D1 and 5P_{1/2}-to-5D_{3/2} lines. The 5D_{3/2} population is detected as a function of the laser detunings by collecting and counting photo-ions. This procedure yields a dynamic scalar polarizability of -524(17) atomic units and a PI cross section of 44(1) Mb.

[1] Kyle W. Martin, Gretchen Phelps, Nathan D. Lemke, Matthew S. Bigelow, Benjamin Stuhl, Michael Wojcik, Michael Holt, Ian Coddington, Michael W. Bishop, and John H. Burke, Phys. Rev. Applied 9, 014019 (2018).

[2] T. J. Quinn, Metrologia 40, 103 (2003).

Infinite towers of massive modes arise in every compactification of higher-dimensional theories. Understanding the properties of these Kaluza-Klein towers on non-trivial solutions with an AdS factor has been a longstanding issue of clear holographic interest, as they describe the spectrum of single-trace operators of the dual CFTs at strong coupling and large N. In this talk, I will focus on two classes of solutions of this kind. The first class consists of AdS4 S-fold solutions of Type IIB supergravity that can be obtained from maximal gauged supergravity in D=4. In the second part, I will describe new families of solutions of N=(1,1) supergravity in D=6 which uplift from half-maximal supergravity in D=3. In both cases, the spectra can be computed using recent techniques from Exceptional Field Theory, and the information thus obtained leads to several unexpected conclusions.

Quantum hardware has advanced to the point where it is now possible to perform simulations of physical systems and elucidate their topological and thermodynamic properties, which we will discuss in this talk. I will give a brief introduction to quantum computing and explain why quantum computers might be useful tools for solving problems in condensed matter physics and beyond. Following that, I will present a perspective on the thermodynamics of quantum systems that is ideally suited to quantum computers, namely the zeros of the partition function, or Lee-Yang zeros. We developed quantum circuits to measure the Lee-Yang zeros and used them to reconstruct the thermodynamic partition function of the XXZ model. The zeros qualitatively show the cross-over from an Ising-like regime to an XY-like regime, making this measurement well suited to a NISQ environment. If time permits, I will discuss our demonstration of how topological properties of physical systems can be measured on quantum computers. We leverage the holonomy of the wavefunctions to obtain a noise-free measurement of the Chern number, which we apply to an interacting fermion model.
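As a classical toy illustration of the object being measured (not the quantum-circuit procedure of the talk, and using a small ferromagnetic Ising ring rather than the XXZ model), the Lee-Yang zeros of an exactly enumerable partition function can be computed directly:

```python
import itertools
import numpy as np

def lee_yang_zeros(n=6, beta=0.5, J=1.0):
    """Exact Lee-Yang zeros of an n-site 1D Ising ring.

    Z(h) = sum_s exp(beta*J*sum_i s_i*s_{i+1} + beta*h*sum_i s_i) is,
    up to an overall factor, a polynomial in the fugacity-like variable
    z = exp(-2*beta*h); its complex roots are the Lee-Yang zeros."""
    coeffs = np.zeros(n + 1)  # coeffs[k] multiplies z^k, k = #down spins
    for spins in itertools.product([1, -1], repeat=n):
        e_int = sum(spins[i] * spins[(i + 1) % n] for i in range(n))
        coeffs[spins.count(-1)] += np.exp(beta * J * e_int)
    return np.roots(coeffs[::-1])  # np.roots wants highest power first

zeros = lee_yang_zeros()
# Lee-Yang circle theorem: for ferromagnetic couplings (J > 0),
# every zero lies on the unit circle |z| = 1.
```

The talk's measurement targets exactly these zeros, but extracted from circuit measurements rather than exact enumeration.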

Despite tremendous progress in precision cosmology, several core mysteries remain, including the nature of dark energy, dark matter, and gravity. Galaxy surveys, which observe the positions and shapes of galaxies across large areas of the sky, are able to map a significant fraction of our cosmic volume, providing one of the most powerful probes of the Universe. I will describe how we use millions (or billions) of galaxies to measure both galaxy clustering and weak gravitational lensing, and how we learn about the Universe by analyzing these statistics. A significant challenge is that most of the Universe is “dark,” and we must infer from the roughly 5% visible fraction what the remaining 95% is doing. I will focus on the current state of the art, the Year 3 cosmology results from the Dark Energy Survey. I will also preview the exciting future of the field, with the Rubin Observatory and the Dark Energy Spectroscopic Instrument.

No matter your passion, interests, previous entrepreneurial experiences, or ambitions, the Center for Entrepreneurship (CFE) has specialized opportunities that will expose you to new ways of thinking and support your unique goals. Since its inception in 2008, the CFE has developed a rich and diverse set of offerings that cater first to the needs of students. Please join us to learn about these offerings.

Enrico Fermi's 1934 paper proposing the weak interaction suggested that we should try to measure the neutrino mass via the endpoints of nuclear beta decays. Eighty-seven years later, we are still trying: the world's largest electrostatic spectrometer, KATRIN, recently showed that m < 0.8 eV---still far from the scale suggested by neutrino-oscillation mass splittings (0.05-0.008 eV). The Project 8 collaboration is using radio-frequency cyclotron radiation, rather than traditional spectrometers, to detect nuclear beta decay electrons (including, recently, a small-scale tritium endpoint measurement). In this talk, I'll survey the current science of neutrino mass measurement and show how Project 8 is planning a campaign to study an atomic tritium source with 0.05 eV neutrino mass sensitivity.
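The kinematic effect all these experiments chase is compactly captured by the neutrino phase-space factor near the spectrum endpoint. A hedged sketch (E0 below is the nominal tritium endpoint of roughly 18.6 keV, and m_nu = 0.8 eV simply reuses the KATRIN limit for illustration):

```python
import math

def endpoint_factor(E, E0=18574.0, m_nu=0.8):
    """Neutrino phase-space factor shaping the tritium beta spectrum
    near its endpoint E0 (all energies in eV):
        (E0 - E) * sqrt((E0 - E)^2 - m_nu^2)  for E <= E0 - m_nu, else 0.
    A nonzero neutrino mass cuts the spectrum off m_nu below the
    massless-neutrino endpoint."""
    eps = E0 - E
    if eps < m_nu:
        return 0.0
    return eps * math.sqrt(eps * eps - m_nu * m_nu)
```

The entire mass signal lives in the last few eV of an 18.6 keV spectrum, which is why endpoint measurements demand such extreme energy resolution.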

Atomic Fountain Interferometry (AFI) is a disruptive technology for measuring gravitational gradients and accelerations with remarkable precision. AFI is based on manipulating an atom cloud in a free-fall tower with laser pulses to create a superposition of two momentum-space pathways. The interferometric signal contrast is limited by variations in the initial velocity of the atoms in the cloud and by variations in the laser amplitude over the cross-section of the cloud. A robust pulse scheme must provide separation, mirroring, and recombination of the atoms to high precision over a realistic range of these variations. In our work, we apply optimal control theory to design pulse sequences that improve the efficiency of the AFI device. Our methodology relies on simulating the interferometer's full quantum dynamics. We test the efficacy of the proposed pulse schemes using adiabatic passage with frequency-chirped pulses, and we explore numerical optimal control theory to generate robust pulse schemes and formulate the most general control conditions for implementing an interferometer.

I will also discuss applying optimal control theory to the efficient generation of N-atom non-classical states and their use in atom interferometry and quantum metrology. As an example, we design a novel pulse sequence that drives an ensemble of cold trapped atoms into an optimal squeezed state. Such states have a fundamental precision scaling proportional to the inverse of the number of atoms, known as the Heisenberg limit.
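The scaling claim is simple arithmetic: uncorrelated atoms give a phase uncertainty of 1/sqrt(N) (the standard quantum limit), while optimally entangled states reach 1/N. A minimal sketch:

```python
import math

def standard_quantum_limit(n):
    """Phase uncertainty with n uncorrelated atoms: 1/sqrt(n)."""
    return 1.0 / math.sqrt(n)

def heisenberg_limit(n):
    """Best phase uncertainty quantum mechanics allows: 1/n."""
    return 1.0 / n

# With 10^4 atoms, an ideally entangled state could in principle beat
# the uncorrelated-atom limit by a factor of sqrt(n) = 100.
gain = standard_quantum_limit(10_000) / heisenberg_limit(10_000)
```

Realistic squeezed states land somewhere between the two limits, which is what the optimal-control design aims to improve.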

To be announced

Emergent physical phenomena and broken symmetries can be linked through one-dimensional objects and their dot or cross products. Eight kinds of one-dimensional (1D) objects (four vector-like, four director-like) and their dot/cross products are defined in terms of symmetry. These dot and cross products form certain mathematical groups. Each 1D object is associated with characteristic physical phenomena, and when a 3D system has identical or lower (but not higher) symmetry than a 1D object with particular phenomena, the 3D system can exhibit those phenomena. Using this straightforward concept, we can understand, and also predict, numerous emergent phenomena in known materials, or identify new complex materials for desired phenomena.

Ab initio molecular simulation provides a sub-atomic understanding of chemical reactions and properties. However, due to the exponential scaling of many-body simulations, one can only obtain approximate solutions to the electronic Schrodinger equation. Although in principle one can increase the sophistication of an approximation arbitrarily until a desired accuracy is reached, more sophisticated calculations rapidly become intractable even for the largest supercomputers. Quantum computers provide a promising route to bypass these limitations in molecular simulation. However, current and near-term quantum devices suffer from environmental noise and gate errors, limiting the calculations that can be achieved. In this talk, I will discuss basic ideas concerning the simulation of molecules on quantum devices and describe two different algorithmic directions our collaborative team has recently devised to extract the most computational work out of noisy devices.

To be announced

Recent years have witnessed enormous progress toward the development of quantum computers---novel devices that exploit quantum mechanics to perform tasks far beyond the reach of the world's best supercomputers. Qubits based on semiconductor spins are particularly promising because of their long coherence times and prospects for scaling up to large processors by leveraging the existing semiconductor electronics infrastructure. However, many fundamental challenges related to decoherence, controllability, and device architecture remain. I will describe our efforts to address these challenges on multiple fronts using smart control schemes, bath-state engineering with Floquet physics, and entanglement generation between remote spins.

CM-AMO Seminar