
WILLIAM T. REDMAN

I am a Senior Research Scientist at the Johns Hopkins Applied Physics Lab (APL), working on problems at the intersection of machine learning, neuroscience, complex systems, and robotics in the Intelligent Systems Center (ISC). My CV.


Before joining APL, I was a Dynamical Neuroscience (DYNS) PhD student at UC Santa Barbara (UCSB), where I worked in Prof. Michael Goard's systems neuroscience lab as a Chancellor's Fellow. Prior to UCSB, I received my bachelor's degrees in mathematics and physics from New York University (NYU).


UCSB is home to an exciting and growing neuroscience community, and DYNS is a wonderfully unique program for people interested in engaging with the many different facets of the field. Feel free to send any questions about DYNS my way!


NEWS: I have 3 papers accepted to NeurIPS 2024 (including one Spotlight). I'll be in Vancouver in December and happy to meet!


I was recently invited to serve as an Action Editor for Transactions on Machine Learning Research (TMLR). I'm especially interested in supporting work at the intersection of Koopman operator theory and machine learning - feel free to reach out with any questions about submitting to TMLR!


RESEARCH

My research interests cover four broad areas: 1) neural representations underlying spatial navigation; 2) dynamics of learning; 3) learning of dynamics; 4) sparse machine learning. 

 

See below, as well as my Google Scholar and ResearchGate profiles, for more information.


Neural Representations Underlying Spatial Navigation

Spatial navigation is a critical cognitive process supported by the hippocampal formation. Despite decades of research, the shared and distinct roles played by the hippocampal subfields (CA1, CA3, and DG) remain unclear. This is in part due to the challenge of recording from multiple hippocampal subfields simultaneously. With my PhD advisor, Prof. Michael Goard (UCSB), I developed a novel microprism approach for optically accessing the transverse hippocampal circuit. This allowed, for the first time, simultaneous 2-photon imaging of CA1, CA3, and DG. See our 2022 eLife paper for more details.


The hippocampus is interconnected with the medial entorhinal cortex (MEC), which contains neurons that exhibit periodic firing fields (grid cells). The organization of grid cells into discrete modules, where grid properties are conserved within, but not between, modules, has played a major role in guiding the field's understanding of the computations grid cells perform. However, an assumption often made in the computational neuroscience community, that grid cells in an individual module are identical (up to translation), has not been rigorously tested. Analyzing large-scale MEC recordings, we found evidence for small, but robust, heterogeneity in grid properties within a single module. We showed that this variability can be beneficial, enabling a single grid module to encode local spatial information and broadening our perspective on the computational capacity of individual grid modules. See our 2024 eLife paper for more details.
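
To give a feel for why within-module heterogeneity helps, here is a toy Python sketch (my own illustration, not the analysis from the paper; the cell count, grid spacing, and jitter level are arbitrary assumptions). A perfectly homogeneous module responds identically at positions separated by one grid period, while a slightly heterogeneous module does not, so its population activity can distinguish them.

```python
# Toy comparison of a homogeneous vs. slightly heterogeneous grid module.
import numpy as np

rng = np.random.default_rng(0)

def grid_rate(pos, spacing, orientation, phase):
    """Standard three-cosine model of a grid cell's firing rate (arbitrary units)."""
    rate = 0.0
    for k in range(3):
        angle = orientation + k * np.pi / 3
        wave_vec = (4 * np.pi / (np.sqrt(3) * spacing)) * np.array([np.cos(angle), np.sin(angle)])
        rate += np.cos(wave_vec @ (pos - phase))
    return rate / 3

def population_activity(pos, spacings, orientations, phases):
    return np.array([grid_rate(pos, s, o, p)
                     for s, o, p in zip(spacings, orientations, phases)])

n_cells, base_spacing = 50, 0.5                       # assumed: 50 cells, 0.5 m spacing
phases = rng.uniform(0, base_spacing, size=(n_cells, 2))
x0 = np.array([0.10, 0.20])
x1 = x0 + base_spacing * np.array([0.0, 1.0])         # one full grid period away

for jitter in (0.0, 0.03):                            # 0 = homogeneous, 3% = heterogeneous
    spacings = base_spacing * (1 + jitter * rng.standard_normal(n_cells))
    orientations = jitter * rng.standard_normal(n_cells)
    a0 = population_activity(x0, spacings, orientations, phases)
    a1 = population_activity(x1, spacings, orientations, phases)
    print(f"jitter = {jitter:.0%}: population-vector distance = {np.linalg.norm(a0 - a1):.3f}")
```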


MEC is believed to play an important role in path integration (the ability to update an estimate of position based on movement direction and speed). In many ethologically relevant settings, it is important to keep track not only of our own movements, but also the movements of others (e.g., pursuit, competition over resources). Despite this, little work has been done on understanding the computations performed by MEC in multi-agent environments. With Prof. Nina Miolane (UCSB), I extended a recurrent neural network (RNN) model that had previously been shown to develop properties similar to neurons in MEC when trained to perform single-agent path integration. By training this extended RNN to path integrate two agents, we showed that representations different from grid cells emerged, and that the RNN learned to perform computations in a relative reference frame. Our RNN model makes direct predictions that can be tested in systems neuroscience experiments. See our 2024 NeurIPS paper for more details.
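
For concreteness, here is a minimal sketch of the two-agent path integration task in PyTorch (an assumed toy setup; the architecture and training details in the paper may differ, e.g., in how position is read out): an RNN receives the velocities of two agents and is trained to report both trajectories.

```python
# Minimal two-agent path integration task: velocities in, integrated positions out.
import torch
import torch.nn as nn

torch.manual_seed(0)

class PathIntegrator(nn.Module):
    def __init__(self, n_hidden=128):
        super().__init__()
        self.rnn = nn.RNN(input_size=4, hidden_size=n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, 4)   # (x, y) for each of the two agents

    def forward(self, velocities):
        hidden, _ = self.rnn(velocities)
        return self.readout(hidden)

def make_batch(batch=64, steps=50):
    """Random-walk velocities for two agents; targets are the integrated positions."""
    vel = 0.1 * torch.randn(batch, steps, 4)    # (vx1, vy1, vx2, vy2)
    pos = torch.cumsum(vel, dim=1)              # ground-truth trajectories
    return vel, pos

model = PathIntegrator()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    vel, pos = make_batch()
    loss = nn.functional.mse_loss(model(vel), pos)
    optim.zero_grad()
    loss.backward()
    optim.step()
    if step % 100 == 0:
        print(f"step {step}: loss = {loss.item():.4f}")
```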


Dynamics of learning

Unlike systems neuroscientists, machine learning researchers have access to every weight, activation, and input of the networks they train. This offers the potential to understand, in fine detail, exactly how learning occurs in deep neural networks (DNNs). However, the complexity of modern DNN architectures makes this challenging, and tools for studying the high-dimensional dynamics that occur during training are lacking. With Akshunna Dogra (Imperial College London), I showed that Koopman operator theory, a data-driven framework from dynamical systems theory, captures properties associated with DNN training. The inherent linearity of the Koopman operator enabled us to accelerate training, reducing training time in small DNNs by 10x-100x. See our 2020 NeurIPS paper for more details.
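
The core idea can be sketched in a few lines of Python (a simplified illustration using plain dynamic mode decomposition on flattened weight snapshots; the procedure in the paper differs in its details): fit a linear operator to successive weight vectors collected during training, then apply that operator repeatedly to extrapolate the weights forward in place of additional gradient steps.

```python
# Sketch: fit a linear (Koopman/DMD-style) operator to weight snapshots, then extrapolate.
import numpy as np

def fit_koopman(snapshots):
    """Least-squares fit of a linear map K with w_{t+1} ~ K w_t.

    snapshots: array of shape (n_params, n_steps) of flattened weights.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    # K has shape (n_params, n_params); in practice one works in a
    # low-dimensional subspace via the SVD of X rather than forming K directly.
    return Y @ np.linalg.pinv(X)

def koopman_extrapolate(K, w_last, n_steps):
    """Apply the fitted operator n_steps times in place of gradient updates."""
    w = w_last.copy()
    for _ in range(n_steps):
        w = K @ w
    return w

# Toy example: "training" is gradient descent on a quadratic, so the weight
# dynamics really are linear and the extrapolation is essentially exact.
rng = np.random.default_rng(0)
A = np.diag([1.0, 0.3, 0.1])
w, lr, history = rng.standard_normal(3), 0.1, []
for _ in range(10):
    history.append(w.copy())
    w = w - lr * (A @ w)          # gradient step on 0.5 * w^T A w

K = fit_koopman(np.stack(history, axis=1))
print("Koopman prediction 20 steps ahead:", koopman_extrapolate(K, history[-1], 20))
```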


To obtain a complete picture of DNN training, it is necessary to have a method by which equivalent dynamics can be identified and distinguished from non-equivalent dynamics. Until recently, no such method existed. Having found that we could compactly represent DNN training dynamics with Koopman operator theory, I showed, with Profs. Igor Mezic (UCSB) and Yannis Kevrekidis (JHU), that we could identify when the training of two DNNs has the same dynamics by comparing the eigenvalues of their associated Koopman operators. This enabled us to compare the early training dynamics of different convolutional neural network (CNN) architectures, as well as of Transformers that do and do not undergo grokking (i.e., delayed generalization). See our 2024 NeurIPS spotlight paper for more details.​​
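
A stripped-down version of the spectral comparison looks like this (a sketch under my own simplifying assumptions: low-rank DMD eigenvalues compared with a simple Hausdorff-style set distance, rather than the conjugacy-based comparison developed in the paper). Two runs of the same toy "training" dynamics have nearby spectra; a run of different dynamics does not.

```python
# Compare DMD eigenvalue spectra of two trajectories to ask if the dynamics match.
import numpy as np

def dmd_eigenvalues(snapshots, rank=5):
    """Eigenvalues of the low-rank DMD operator fit to a (n_dim, n_steps) snapshot matrix."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = min(rank, int((s > 1e-10).sum()))
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    K_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(K_tilde)

def spectral_distance(eigs_a, eigs_b):
    """Symmetric Hausdorff distance between two finite sets of eigenvalues."""
    d = np.abs(eigs_a[:, None] - eigs_b[None, :])
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def gd_trajectory(hessian, w0, lr=0.1, n_steps=40):
    """Weight snapshots from gradient descent on the quadratic 0.5 * w^T H w."""
    w, out = w0.copy(), []
    for _ in range(n_steps):
        out.append(w.copy())
        w = w - lr * (hessian @ w)
    return np.stack(out, axis=1)

rng = np.random.default_rng(1)
n = 10
H_a = np.diag(np.linspace(0.5, 5.0, n))      # one "loss landscape"
H_b = np.diag(np.linspace(0.5, 9.0, n))      # a different one

eigs_1 = dmd_eigenvalues(gd_trajectory(H_a, rng.standard_normal(n)))
eigs_2 = dmd_eigenvalues(gd_trajectory(H_a, rng.standard_normal(n)))  # same dynamics, new init
eigs_3 = dmd_eigenvalues(gd_trajectory(H_b, rng.standard_normal(n)))  # different dynamics
print("same dynamics:     ", spectral_distance(eigs_1, eigs_2))
print("different dynamics:", spectral_distance(eigs_1, eigs_3))
```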


Learning of dynamics

Many of the real-world dynamical systems we most care about predicting are non-autonomous (i.e., their underlying equations change with time). To deal with this challenge, numerical approaches to Koopman operator theory are often applied along a sliding window, restricting the model to temporally local episodes. While these methods have found strong success in predicting disease, climate, and traffic dynamics, they are limited by their explicit "forgetting" of the past. This can restrict their performance when dynamical regimes repeat, as previously encountered dynamics must be learned anew. To mitigate this, along with Prof. Igor Mezic (UCSB), I developed a new algorithm that "recalls" previous points in time where similar dynamics occurred. This is achieved by comparing Koopman representations of the current dynamics with those saved from the past. Because this approach remembers discrete episodes, we call it "Koopman Learning with Episodic Memory". On synthetic and real-world data, we find that our method can provide a significant increase in performance, at little computational cost. See our 2024 arXiv paper (under review at Chaos) for more details.
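
Schematically, the episodic-memory idea can be sketched as follows (a toy Python illustration under simplified assumptions; the published algorithm differs in how Koopman representations are built, compared, and reused): fit a model on each sliding window, store it in a memory bank, and when a new window's operator is close to a stored one, recall the old episode instead of learning from scratch.

```python
# Sliding-window Koopman models with a simple episodic memory of past operators.
import numpy as np

def window_operator(window):
    """Least-squares linear operator fit to one (n_dim, n_steps) window of data."""
    X, Y = window[:, :-1], window[:, 1:]
    return Y @ np.linalg.pinv(X)

class EpisodicKoopman:
    def __init__(self, tol=0.05):
        self.memory, self.tol = [], tol   # one stored operator per remembered episode

    def update(self, window):
        K = window_operator(window)
        for idx, K_old in enumerate(self.memory):
            if np.linalg.norm(K - K_old) / np.linalg.norm(K_old) < self.tol:
                return K_old, f"recalled episode {idx}"   # similar dynamics seen before
        self.memory.append(K)
        return K, f"stored new episode {len(self.memory) - 1}"

# Toy non-autonomous signal that alternates between two rotation speeds.
def regime(theta, n_steps, x0):
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    out = [x0]
    for _ in range(n_steps - 1):
        out.append(R @ out[-1])
    return np.stack(out, axis=1)

model, x = EpisodicKoopman(), np.array([1.0, 0.0])
for theta in [0.1, 0.5, 0.1, 0.5]:        # dynamical regimes repeat over time
    window = regime(theta, 30, x)
    x = window[:, -1]
    _, msg = model.update(window)
    print(f"theta={theta}: {msg}")
```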


Sparse machine learning

Efforts to reduce the size and computational cost of modern DNNs not only offer the potential to improve efficiency, but also the potential to gain insight into the core components that make a given DNN model successful. One such sparsification approach, iterative magnitude pruning (IMP), has been shown to extract subnetworks, embedded within large DNNs, that are 1-5% the size of the original network, yet can be trained to achieve similar performance. Despite its success, how IMP discovers good subnetworks remains unclear. With Prof. Zhangyang "Atlas" Wang (UT Austin), I showed that IMP performs a function analogous to the renormalization group (RG), a tool from statistical physics that performs iterative coarse-graining to identify relevant degrees of freedom. This enabled us to leverage the rich RG literature to explain when a sparse subnetwork found by IMP on one task will generalize to another, related task. See our 2022 ICML paper for more details.
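
For readers unfamiliar with IMP, here is a compact sketch of the procedure in the lottery-ticket style (train, prune the smallest surviving weights, rewind the survivors, repeat); the model, data, and pruning schedule below are toy placeholders, not those used in our work.

```python
# Iterative magnitude pruning (IMP): train, prune 20% of surviving weights, rewind, repeat.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
init_state = copy.deepcopy(model.state_dict())            # weights to rewind to
masks = {name: torch.ones_like(p) for name, p in model.named_parameters() if "weight" in name}
X, y = torch.randn(512, 20), torch.randint(0, 2, (512,))  # toy classification data

def train(n_steps=200):
    optim = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(n_steps):
        loss = nn.functional.cross_entropy(model(X), y)
        optim.zero_grad()
        loss.backward()
        optim.step()
        with torch.no_grad():                              # keep pruned weights at zero
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])
    return loss.item()

for round_idx in range(5):
    loss = train()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:                              # prune smallest surviving weights
                threshold = p[masks[name].bool()].abs().quantile(0.2)
                masks[name] = masks[name] * (p.abs() > threshold).float()
        model.load_state_dict(init_state)                  # rewind survivors to initialization
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
    density = float(sum(m.sum() for m in masks.values())) / sum(m.numel() for m in masks.values())
    print(f"round {round_idx}: final loss {loss:.3f}, weights remaining {100 * density:.1f}%")
```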


IMP has recently been shown to discover local receptive fields (RFs) in fully-connected neural networks. Local RFs are known to be good inductive biases for DNNs, and are found in the primary visual cortex of the mammalian brain. How IMP identifies these local RFs has been an unresolved question, whose answer may shed light on the general success of IMP. With Profs. Sebastian Goldt (SISSA), Alessandro Ingrosso (Radboud University), and Zhangyang "Atlas" Wang (UT Austin), I provided evidence for the hypothesis that IMP maximizes the non-Gaussian statistics present in the representations of the fully-connected neural network at each round of pruning. This amplification of non-Gaussian statistics leads to stronger localization of the remaining weights. See our upcoming CPAL submission for more details.
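
The intuition can be seen with a small numerical example (my own toy construction, not the analysis in the paper): when inputs are non-Gaussian, a sparse, localized weight vector "sees" far more of that non-Gaussianity than a dense one, here measured by the excess kurtosis of the resulting preactivations.

```python
# Dense vs. sparse weight vectors applied to non-Gaussian inputs: the sparse
# (localized) projection retains much more of the inputs' excess kurtosis.
import numpy as np

def excess_kurtosis(x):
    """Excess kurtosis of a 1D sample (0 for Gaussian data)."""
    x = x - x.mean()
    return (x**4).mean() / x.var()**2 - 3.0

rng = np.random.default_rng(0)
n_samples, n_inputs = 100_000, 50
inputs = rng.laplace(size=(n_samples, n_inputs))      # non-Gaussian inputs (kurtosis 3)

w_dense = np.ones(n_inputs) / np.sqrt(n_inputs)       # spread over all inputs
w_sparse = np.zeros(n_inputs)                         # "pruned": 2 surviving weights
w_sparse[:2] = 1 / np.sqrt(2)

print("input kurtosis:                ", excess_kurtosis(inputs[:, 0]))
print("preactivation kurtosis, dense: ", excess_kurtosis(inputs @ w_dense))
print("preactivation kurtosis, sparse:", excess_kurtosis(inputs @ w_sparse))
```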

PUBLICATIONS

PEER REVIEWED:


13) W. T. Redman, J. M. Bello-Rivas, M. Fonoberova, R. Mohr, I. G. Kevrekidis, and I. Mezić, Identifying Equivalent Training Dynamics. NeurIPS Spotlight (Top 5% of accepted papers) (2024)

12) W. T. Redman, F. Acosta, S. Acosta-Mendoza, and N. Miolane, Not so griddy: Internal representations of RNNs path integrating more than one agent. NeurIPS (2024)

11) F. Acosta, F. Dinc, W. T. Redman, M. Madhav, D. Klindt, and N. Miolane, Global Distortions from Local Rewards: Neural Coding Strategies in Path-Integrating Neural Systems. NeurIPS (2024)

10) W. T. Redman*, S. Acosta-Mendoza*, X. X. Wei, and M. J. Goard, Robust Variability of Grid Cell Properties Within Individual Grid Modules Enhances Encoding of Local Space. eLife (2024) (* contributed equally)

9) E. R. J. Levy, S. Carrillo-Segura, E. H. Park, W. T. Redman, J. Hurtado, S. Y. Chung, and A. A. Fenton, A Manifold Neural Population Code for Space in Hippocampal Coactivity Dynamics Independent of Place Fields. Cell Reports (2023)

8) W. T. Redman, M. Fonoberova, R. Mohr, Y. Kevrekidis, and I. Mezić, Algorithmic (Semi-)Conjugacy via Koopman Operator Theory. IEEE Conference on Decision and Control (CDC 2022)

7) W. T. Redman, N. S. Wolcott, L. Montelisciani, G. Luna, T. D. Marks, K. K. Sit, C.-H. Yu, S. L. Smith, and M. J. Goard, Long-term Transverse Imaging of the Hippocampus with Glass Microperiscopes. eLife (2022)

6) W. T. Redman, T. Chen, Z. Wang, and A. S. Dogra, Universality of Winning Tickets: A Renormalization Group Perspective. International Conference on Machine Learning (ICML 2022)

5) W. T. Redman, M. Fonoberova, R. Mohr, Y. Kevrekidis, and I. Mezić, An Operator Theoretic View on Pruning Deep Neural Networks. International Conference on Learning Representations (ICLR 2022).

4) W. T. Redman, On Koopman Mode Decomposition and Tensor Component Analysis. Chaos Fast Track (2021).

3) A. S. Dogra* and W. T. Redman*, Optimizing Neural Networks via Koopman Operator Theory. Advances in Neural Information Processing Systems 33 (NeurIPS 2020) (* contributed equally)

2) W. T. Redman, Renormalization Group as a Koopman Operator. Physical Review E Rapid Communication (2020)

1) W. T. Redman, An O(n) Method of Calculating Kendall Correlations of Spike Trains. PLoS One (2019)

IN PROGRESS:

3. W. T. Redman, Z. Wang, A. Ingrosso, and S. Goldt, Sparsity Enhances Non-Gaussian Data Statistics During Local Receptive Field Formation

2. W. T. Redman, D. Huang, M. Fonoberova, and I. Mezić, Koopman Learning with Episodic Memory

1. N. S. Wolcott, W. T. Redman, M. Karpinska, E. G. Jacobs, and M. J. Goard, The estrous cycle modulates hippocampal spine dynamics, dendritic processing, and spatial coding.


ABOUT

I grew up in Hopewell, New Jersey, a little town outside of Princeton and Trenton. I found math class repulsive and, until I had a rather sudden change of heart my senior year, spent most of the period working on ways to avoid paying attention. Outside of research, I enjoy hiking and cycling (both of which are complicated by my innately poor sense of direction), reading, and trying new beers.
