WILLIAM T. REDMAN

I am a Dynamical Neuroscience (DYNS) PhD student at UC Santa Barbara, where I work in Prof. Michael Goard's systems neuroscience lab as a Chancellor's Fellow. I am also a researcher at AIMdyn, working under Prof. Igor Mezić and Dr. Maria Fonoberova. Before coming to UCSB, I received my bachelor's degrees in mathematics and physics from New York University (NYU). I am broadly interested in problems in neuroscience, Koopman operator theory, statistical physics, and machine learning. My CV.

UCSB is home to an exciting and growing neuroscience community, and DYNS is a wonderfully unique program for people interested in engaging with the many different facets of the field. Feel free to send any questions about DYNS (or UCSB in general) my way!

NEWS: I'll be at the Sparsity in Neural Networks (SNN) Workshop, the International Conference on Machine Learning (ICML), and the Third Symposium on Machine Learning and Dynamical Systems.


RESEARCH

I have two broad research interests that guide my work. The first is studying the stability of the neural code (i.e. how the brain represents, stores, maintains, and retrieves information). The second is examining how ideas from dynamical systems theory, statistical physics, and machine learning are connected, and what tools from one field may be able to tell us about problems in another. See below, as well as my Google Scholar and ResearchGate pages, for more information.


OPTICALLY ACCESSING THE MOUSE HIPPOCAMPUS

The hippocampus (HPC) is known to play a critical role in episodic and spatial memory. However, the neural computations underlying these functions are not well understood. Part of this gap in knowledge stems from current technologies being limited in their ability: 1) to record from multiple HPC subfields in the same animal; 2) to record response dynamics and coordination across the HPC; 3) to identify and distinguish between different neural subtypes; 4) to allow for the chronic recording of the same cells in multiple HPC regions; and 5) to measure structural changes in finer-scale structures, such as spine dynamics on apical dendrites. To address these challenges, part of my work in the Goard Lab has been developing a novel in vivo imaging technique for optically accessing the HPC. Using this method, we are able to visualize and record from all three major HPC subfields in the same mice, and to definitively record from the same cells across multiple days. This method will allow for greater interrogation of the HPC as a circuit, and of how structural changes (e.g. spine turnover) affect functional properties of the HPC (e.g. the stability of place cells). For more details, see our recent eLife paper.


KOOPMAN OPERATOR THEORY, THE RENORMALIZATION GROUP, AND MACHINE LEARNING

It is well known that dynamical systems theory is connected to the renormalization group (RG, a powerful tool in theoretical physics) and to a variety of algorithms (e.g. gradient descent, the QR eigenvalue algorithm). Despite this, physicists and computer scientists have historically used methods from outside dynamical systems theory to study their problems of interest.
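To make the gradient descent example concrete (this is my own illustration, with a hypothetical quadratic objective, not an example from any of the papers below): gradient descent is just repeated application of the map g(x) = x - η∇f(x), and its convergence rate is governed by the eigenvalues of that map, which are Koopman eigenvalues of the induced dynamical system. A minimal sketch in Python:

```python
import numpy as np

# Gradient descent viewed as a discrete dynamical system x_{k+1} = g(x_k),
# with g(x) = x - eta * grad f(x). We use a hypothetical quadratic
# objective f(x) = 0.5 * x^T H x purely for illustration.
H = np.diag([1.0, 10.0])   # Hessian of f
eta = 0.05                 # step size

def g(x):
    """One gradient-descent step, i.e. one application of the map."""
    return x - eta * (H @ x)

# Because f is quadratic, g is the linear map (I - eta*H); the algorithm's
# convergence rate is exactly the largest-magnitude eigenvalue of this
# map, which is also a Koopman eigenvalue of the dynamical system.
J = np.eye(2) - eta * H
print("eigenvalues of the GD map:", np.linalg.eigvals(J))   # 0.95 and 0.50

x = np.array([1.0, 1.0])
for _ in range(50):
    x = g(x)
print("slow coordinate after 50 steps:", x[0])
print("predicted by eigenvalue 0.95:  ", 0.95 ** 50)
```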


My work has focused on exploring these connections in more detail using Koopman operator theory (a dynamical systems framework). I showed that the RG is a Koopman operator, a result that allows for the fast computation of RG critical exponents (even for non-translationally invariant systems). This approach, called the Koopman RG, also offers the promise of finding non-trivial RG fixed points in a data-driven manner.
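To sketch the underlying idea (a toy example of my own choosing, not a system from the paper): an RG transformation K' = f(K) is itself a dynamical system on the space of couplings, so the eigenvalue of its linearization at a fixed point, which sets the critical exponent ν, can be estimated directly from RG-flow data, DMD-style, rather than by differentiating f analytically. Below, this is done for the standard Ising recursion on a diamond hierarchical lattice:

```python
import numpy as np

# Standard toy RG: Ising model on a diamond hierarchical lattice, with
# length rescaling factor b = 2 (a textbook recursion, used here only to
# illustrate the Koopman viewpoint on RG flows).
b = 2.0

def rg_step(K):
    """One decimation step for the coupling K."""
    return 2.0 * np.arctanh(np.tanh(K) ** 2)

# Find the nontrivial fixed point K* by bisection on rg_step(K) - K.
lo, hi = 0.1, 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if rg_step(mid) < mid:
        lo = mid       # flow runs toward weak coupling: K* is above
    else:
        hi = mid       # flow runs toward strong coupling: K* is below
K_star = 0.5 * (lo + hi)

# Koopman RG in miniature: estimate the leading eigenvalue of the
# linearized RG map from flow data near K* (a 1D dynamic mode
# decomposition), rather than by differentiating rg_step analytically.
K, pairs = K_star + 1e-5, []
for _ in range(8):
    K_next = rg_step(K)
    pairs.append((K - K_star, K_next - K_star))
    K = K_next
x = np.array([p[0] for p in pairs])
y = np.array([p[1] for p in pairs])
lam = float(x @ y / (x @ x))       # least-squares eigenvalue estimate

nu = np.log(b) / np.log(lam)       # correlation-length critical exponent
print(f"K* = {K_star:.4f}, lambda = {lam:.4f}, nu = {nu:.3f}")  # nu ~ 1.34
```

In this toy case the analytical answer is known, but the data-driven estimate requires only iterates of the flow, which is what makes the approach attractive for fixed points one cannot compute by hand.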


Building on this work, my friend and collaborator Akshunna S. Dogra and I showed that the training of deep neural networks (DNNs) can be sped up by using Koopman operator theory to evolve DNN weights and biases. A collaboration with Igor Mezić, Yannis Kevrekidis, Maria Fonoberova, and Ryan Mohr extended this idea, showing that the dynamical information captured by the Koopman mode decomposition (a key technique in Koopman operator theory) can be used to prune DNNs. This provides new algorithms for sparsifying DNNs and new theoretical tools for probing the nature of sparse DNNs.
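In caricature, the training-acceleration idea works like this (a minimal sketch under strong assumptions, not the papers' implementation): record weight snapshots over a few optimizer steps, fit a linear (DMD) model to the weight dynamics, and use that model to jump ahead, skipping many gradient steps. Here is a hypothetical best-case example, a least-squares problem where gradient-descent weight dynamics are exactly linear:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical best case: linear regression trained by full-batch gradient
# descent, so the weight dynamics w_{k+1} = M w_k + c are exactly linear.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true
eta = 0.1

def gd_step(w):
    """One full-batch gradient-descent step on the least-squares loss."""
    return w - eta * X.T @ (X @ w - y) / len(y)

# Collect a short trajectory of weight snapshots.
snaps = [np.zeros(5)]
for _ in range(20):
    snaps.append(gd_step(snaps[-1]))
S = np.array(snaps).T                       # (n_params, n_snapshots)

# DMD on successive differences d_k = w_{k+1} - w_k, which satisfy
# d_{k+1} = M d_k exactly for this problem.
D = np.diff(S, axis=1)
A = D[:, 1:] @ np.linalg.pinv(D[:, :-1])    # least-squares estimate of M

# Sum the geometric series of future differences to jump straight to the
# fixed point of the training dynamics: w_inf = w_20 + (I - A)^{-1} A d_19.
w_jump = snaps[-1] + np.linalg.solve(np.eye(5) - A, A @ D[:, -1])
print("error after 20 GD steps:", np.linalg.norm(snaps[-1] - w_true))
print("error after DMD jump   :", np.linalg.norm(w_jump - w_true))
```

For real DNNs the weight dynamics are only approximately linear over any finite window, which is where the subtleties (and the interesting Koopman theory) come in.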

I have also been interested in unsupervised tensor decompositions, a class of machine learning methods that has become increasingly popular in neuroscience (and a variety of other fields). I showed that, in certain cases, unsupervised tensor decompositions are equivalent to the Koopman mode decomposition. Beyond being a connection of academic interest, this work motivates new understanding of when to use which method.
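The intuition can be seen in a small synthetic example (my own construction; the CP fit assumes the tensorly package): if trials of neural data are generated by a common linear dynamical system, the trial × neuron × time data tensor is exactly low-rank, and the temporal factors of its CP decomposition (tensor component analysis) evolve as geometric sequences in the Koopman/DMD eigenvalues:

```python
import numpy as np
from tensorly.decomposition import parafac   # assumes the tensorly package

rng = np.random.default_rng(1)

# Latent linear dynamics x_{t+1} = A x_t with two real Koopman/DMD
# eigenvalues; each "trial" starts from a different initial condition.
lams = np.array([0.9, 0.6])                  # Koopman/DMD eigenvalues
V = rng.normal(size=(8, 2))                  # Koopman modes (8 "neurons")
n_trials, T = 6, 15

# x_t = sum_j c_ij * lam_j^t * v_j, so the trial x neuron x time tensor is
# *exactly* a rank-2 CP (tensor component analysis) model by construction.
C = rng.normal(size=(n_trials, 2))                        # trial loadings
time_courses = lams[None, :] ** np.arange(T)[:, None]     # (T, 2)
tensor = np.einsum('ij,nj,tj->int', C, V, time_courses)

# Fit a rank-2 CP decomposition and inspect its temporal factors: they are
# geometric sequences whose successive ratios recover the Koopman
# eigenvalues (up to component ordering).
weights, (trial_f, neuron_f, time_f) = parafac(tensor, rank=2, tol=1e-12)
ratios = (time_f[1:] / time_f[:-1]).mean(axis=0)
print("CP temporal ratios:", np.sort(ratios))
print("true eigenvalues  :", np.sort(lams))
```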


Lastly, I've recently been excited about connecting the RG to iterative magnitude pruning (IMP), a popular method for pruning DNNs. In particular, with Atlas Wang, Tianlong Chen, and Akshunna S. Dogra, I showed that IMP is an RG scheme, and that elements of RG theory can be used to gain insight into the nature of sparse DNNs.
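For readers unfamiliar with the procedure, IMP alternates training, pruning the smallest-magnitude weights, and rewinding the surviving weights to their initial values; each round discards the smallest degrees of freedom, which is what invites the coarse-graining analogy. A minimal sketch on a hypothetical sparse linear-regression "network" (standing in for a DNN):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for a DNN: sparse linear regression, with only the
# first 5 of 30 weights actually needed to fit the data.
X = rng.normal(size=(200, 30))
w_true = np.zeros(30)
w_true[:5] = rng.normal(size=5)
y = X @ w_true

def train(w, mask, steps=500, eta=0.05):
    """Gradient descent on the masked weights (pruned weights stay zero)."""
    for _ in range(steps):
        grad = X.T @ (X @ (w * mask) - y) / len(y)
        w = (w - eta * grad) * mask
    return w

w_init = 0.1 * rng.normal(size=30)   # the initialization we rewind to
mask = np.ones(30)

# IMP loop: train -> prune 20% of surviving weights by magnitude ->
# rewind to w_init -> retrain. Each round removes the smallest degrees
# of freedom, loosely analogous to an RG coarse-graining step.
for it in range(10):
    w = train(w_init.copy(), mask)
    alive = np.flatnonzero(mask)
    k = max(1, int(0.2 * len(alive)))
    prune = alive[np.argsort(np.abs(w[alive]))[:k]]
    mask[prune] = 0.0
    loss = np.mean((X @ (w * mask) - y) ** 2)
    print(f"round {it}: {int(mask.sum())} weights alive, loss {loss:.2e}")
```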

 

PUBLICATIONS

PEER REVIEWED:

8) W. T. Redman, M. Fonoberova, R. Mohr, Y. Kevrekidis, and I. Mezić, Algorithmic (Semi-)Conjugacy via Koopman Operator Theory. Accepted. IEEE Conference on Decision and Control (CDC 2022).

7) W. T. Redman, N. S. Wolcott, L. Montelisciani, G. Luna, T. D. Marks, K. K. Sit, C.-H. Yu, S. L. Smith, and M. J. Goard, Long-term Transverse Imaging of the Hippocampus with Glass Microperiscopes. eLife (2022).

6) W. T. Redman, T. Chen, Z. Wang, and A. S. Dogra, Universality of Winning Tickets: A Renormalization Group Perspective. International Conference on Machine Learning (ICML 2022).

5) W. T. Redman, M. Fonoberova, R. Mohr, Y. Kevrekidis, and I. Mezić, An Operator Theoretic View on Pruning Deep Neural Networks. International Conference on Learning Representations (ICLR 2022).

4) W. T. Redman, On Koopman Mode Decomposition and Tensor Component Analysis. Chaos Fast Track (2021).

3) A. S. Dogra* and W. T. Redman*, Optimizing Neural Networks via Koopman Operator Theory. Advances in Neural Information Processing Systems 33 (NeurIPS 2020). (* contributed equally)

2) W. T. Redman, Renormalization Group as a Koopman Operator. Physical Review E (Rapid Communication) (2020).

1) W. T. Redman, An O(n) Method of Calculating Kendall Correlations of Spike Trains. PLoS ONE (2019).

IN PROGRESS:

2) E. R. J. Levy, E. H. Park, W. T. Redman, and A. A. Fenton, A Neuronal Code for Space in Hippocampal Coactivity Dynamics Independent of Place Fields. Under review. bioRxiv.

1) A. S. Dogra and W. T. Redman, Local Error Quantification for Efficient Neural Network Dynamical System Solvers. Under review. arXiv.


 

ABOUT

I grew up in Hopewell, New Jersey, a little town outside of Princeton and Trenton. I found math class repulsive and, until I had a rather sudden change of heart my senior year, spent most of the period working on ways to avoid paying attention. Realizing I had made a "huge mistake", I taught myself calculus, passing the AP Calculus BC exam and developing an unexpected interest in math. While studying math and physics at NYU, I found myself in a computational neuroscience class and fell in love with the infinitude of open questions and the presence of all my favorite tools from physics. At UCSB, I simultaneously took a class on Koopman operator theory and a class on the renormalization group. Connecting ideas from the two has led me on a series of adventures, and I have spent the past few years living a split life, studying both the hippocampus and the Koopman operator.


Outside of research, I enjoy hiking and cycling (both of which are complicated by my innately poor sense of direction), trying new beers, and "running" a (satirical) art collective.
