WILLIAM T. REDMAN

I am a Dynamical Neuroscience (DYNS) PhD student at UC Santa Barbara, where I work in Prof. Michael Goard's systems neuroscience lab as a Chancellor's Fellow. I am also a research intern at AIMdyn, working under Prof. Igor Mezić and Dr. Maria Fonoberova. Before coming to UCSB, I received bachelor's degrees in mathematics and physics from NYU. I am broadly interested in problems in neuroscience, Koopman operator theory, statistical physics, and machine learning. My CV.

UCSB is home to an exciting and growing neuroscience community, and DYNS is a wonderfully unique program for people interested in engaging with the many different facets of the field. Feel free to send any questions about DYNS (or UCSB in general) my way!


RESEARCH

I have two broad research interests that guide my work. The first is studying the stability of the neural code (i.e. how the brain represents, stores, maintains, and retrieves information). The second is examining how ideas from dynamical systems theory, statistical physics, and machine learning are connected, and what tools from one field may be able to tell us about problems in another. See below, as well as my Google Scholar and ResearchGate profiles, for more information.


OPTICALLY ACCESSING THE MOUSE HIPPOCAMPUS

The hippocampus (HPC) is known to play a critical role in episodic and spatial memory. However, the neural computations underlying these functions are not well understood. Part of this gap in knowledge stems from the limitations of current technologies, which struggle to: 1) record from multiple HPC subfields in the same animal; 2) record response dynamics and coordination across the HPC; 3) identify and distinguish between different neural subtypes; 4) chronically record from the same cells in multiple HPC regions; and 5) measure fine-scale structural changes, such as spine dynamics on apical dendrites. To address these challenges, part of my work in the Goard Lab has been helping to develop a novel in vivo imaging technique for optically accessing the HPC. Using this method, we are able to visualize and record from all three major HPC subfields in the same mice, and to definitively record from the same cells across multiple days. This method will enable deeper interrogation of the HPC as a circuit, and of how structural changes (e.g. spine turnover) affect the HPC's functional properties (e.g. the stability of place cells). For more details, see our recent preprint.


KOOPMAN OPERATOR THEORY, THE RENORMALIZATION GROUP, AND MACHINE LEARNING

It is well known that dynamical systems theory is connected to the renormalization group (RG, a powerful tool in theoretical physics) and to a variety of algorithms (e.g. gradient descent, the QR eigenvalue algorithm). Despite these recognized connections, physicists and computer scientists have historically used methods from outside dynamical systems theory to study their problems of interest.
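
The algorithms-as-dynamical-systems viewpoint is easy to see concretely. Here is a minimal Python sketch (a standard textbook fact, not tied to any of my papers; the matrix size, random seed, and iteration count are my own choices for illustration) of the QR eigenvalue algorithm as an iterated map on matrices:

```python
import numpy as np

# The QR eigenvalue algorithm as a dynamical system: iterate the map
# A -> R @ Q, where A = Q R is the QR decomposition. For a symmetric
# matrix, the iterates converge (generically) to a diagonal matrix
# whose entries are the eigenvalues of the original matrix.

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
S = M + M.T           # symmetrize so the eigenvalues are real
A = S.copy()

for _ in range(200):  # iterating the map = running the algorithm
    Q, R = np.linalg.qr(A)
    A = R @ Q

print(np.sort(np.diag(A)))             # the fixed point's diagonal...
print(np.sort(np.linalg.eigvalsh(S)))  # ...matches the true eigenvalues
```

Questions about the algorithm (does it converge? how fast?) become questions about the long-time behavior of this map, which is exactly what dynamical systems theory studies.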

My work has focused on exploring these connections in more detail using Koopman operator theory (a dynamical systems framework). More precisely, I showed that the RG is a Koopman operator, which allows for the fast computation of RG critical exponents (even for non-translationally invariant systems). This approach, the Koopman RG, also offers the promise of finding non-trivial RG fixed points in a data-driven manner. Building on this work, my friend and collaborator Akshunna S. Dogra and I showed that the training of neural networks (NNs) can be sped up by using Koopman operator theory to evolve NN weights and biases; see the sketch below.
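
The flavor of the NN result can be seen in a minimal sketch (a toy linear example of my own, not the exact procedure of the paper): collect a few snapshots of the weights during ordinary gradient descent, fit a linear approximation of the Koopman operator acting on them via dynamic mode decomposition (DMD), and then extrapolate training by powering that operator. The loss, learning rate, and step counts below are arbitrary illustrative choices.

```python
import numpy as np

A = np.diag([1.0, 0.5, 0.1])  # toy quadratic loss L(w) = 0.5 w^T A w

def gradient_step(w, lr=0.1):
    return w - lr * (A @ w)   # grad L(w) = A w

# Collect a short trajectory of weight snapshots from ordinary training.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
snapshots = [w]
for _ in range(10):
    w = gradient_step(w)
    snapshots.append(w)

X = np.stack(snapshots[:-1], axis=1)  # states at steps 0..9
Y = np.stack(snapshots[1:], axis=1)   # states at steps 1..10

# Exact DMD: least-squares fit of the linear map K with Y ~ K X.
K = Y @ np.linalg.pinv(X)

# Jump 1000 training steps ahead by powering K once, rather than
# running 1000 more gradient steps.
w_future = np.linalg.matrix_power(K, 1000) @ snapshots[0]
print(w_future)  # close to the minimizer (the origin)
```

Because this toy loss is quadratic, the weight dynamics are exactly linear and DMD recovers them perfectly; for real NNs the Koopman framework is what justifies applying this kind of linear model to nonlinear training dynamics.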

In the past few months, I have been lucky enough to start two collaborations with some fantastic researchers: Prof. Igor Mezić (the founder of modern Koopman operator theory) and his team at AIMdyn (Drs. Maria Fonoberova and Ryan Mohr), and Prof. Zhangyang Wang and his student Tianlong Chen at UT Austin. With both groups, I have been working on extending the above ideas to the field of sparse machine learning. For more details, see my newest preprints below.

 

PUBLICATIONS

PEER REVIEWED:

4) W. T. Redman, On Koopman Mode Decomposition and Tensor Component Analysis. Chaos Fast Track (2021).

3) A. S. Dogra* and W. T. Redman, Optimizing Neural Networks via Koopman Operator Theory. Advances in Neural Information Processing Systems 33 (NeurIPS 2020). (* contributed equally)

2) W. T. Redman, Renormalization group as a Koopman operator. Physical Review E Rapid Communication (2020).

1) W. T. Redman, An O(n) method of calculating Kendall correlations of spike trains. PLoS ONE (2019).

IN PROGRESS:

5) W. T. Redman, T. Chen, A. S. Dogra, and Z. Wang, Universality of Deep Neural Network Lottery Tickets: A Renormalization Group Perspective. Submitted. arXiv

4) W. T. Redman, M. Fonoberova, R. Mohr, Y. Kevrekidis, and I. Mezić, An Operator Theoretic View on Pruning Deep Neural Networks. Submitted. arXiv

3) W. T. Redman, N. S. Wolcott, L. Montelisciani, G. Luna, T. D. Marks, K. K. Sit, C.-H. Yu, S. L. Smith, and M. J. Goard, Long-term Transverse Imaging of the Hippocampus with Glass Microperiscopes. Under review. bioRxiv

2) E. R. J. Levy, E. H. Park, W. T. Redman, and A. A. Fenton, A neuronal code for space in hippocampal coactivity dynamics independent of place fields. Under review. bioRxiv

1) A. S. Dogra and W. T. Redman, Local error quantification for efficient neural network dynamical system solvers. Under review. arXiv


 

ABOUT

I grew up in Hopewell, New Jersey, a little town outside of Princeton and Trenton. I found math class repulsive and, until I had a rather sudden change of heart my senior year, spent most of the period working on ways to avoid paying attention. Outside of research, I enjoy hiking and cycling (both of which are complicated by my innately poor sense of direction), trying new beers, and "running" a (satirical) art collective.