I am a final-year PhD student at the University of Edinburgh. My research interests lie at the interface between mathematics, machine learning and computational neuroscience.
I develop mathematical tools for better data-driven modeling of the brain, which I in turn use to test fundamental hypotheses about the geometry and dynamics of neural computations. In particular, I combine concepts from dynamical systems, differential geometry and tensor methods to probe for low-dimensional neural manifolds and to model dynamics on them.
Learning relies on coordinated synaptic changes in recurrently connected populations of neurons. Therefore, understanding the collective evolution of synaptic connectivity over learning is a key challenge in neuroscience and machine learning. In particular, recent work has shown that the weight matrices of task-trained RNNs are typically low rank, but how this low-rank structure unfolds over learning is unknown. To address this, we investigate the rank of the 3-tensor formed by the weight matrices throughout learning. By fitting RNNs of varying rank to large-scale neural recordings during a motor learning task, we find that the inferred weights are low-tensor-rank and therefore evolve within a fixed low-dimensional subspace throughout the entire course of learning. We next validate the observation of low-tensor-rank learning on an RNN trained to solve the same task. Finally, we present a set of mathematical results bounding the matrix and tensor ranks of gradient descent learning dynamics, which show that low-tensor-rank weights emerge naturally in RNNs trained to solve low-dimensional tasks. Taken together, our findings provide insight into the evolution of population connectivity over learning in both biological and artificial neural networks, and enable reverse engineering of learning-induced changes in recurrent dynamics from large-scale neural recordings.
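To make the central object concrete: the quantity of interest is the 3-tensor obtained by stacking the recurrent weight matrices saved at successive points in training. Below is a minimal sketch (not the code used in this work) of how one might probe its tensor rank by fitting CP decompositions of increasing rank with tensorly; the variable `checkpoints` and the function name are illustrative assumptions.

```python
# Illustrative sketch only: estimate how well low-rank CP decompositions
# approximate the 3-tensor of recurrent weights collected over learning.
# Assumes `checkpoints` is a list of (N x N) weight matrices saved during training.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def weight_tensor_rank_profile(checkpoints, max_rank=10):
    """Stack weight matrices into an (epochs x N x N) tensor and return the
    relative reconstruction error of rank-r CP approximations for r = 1..max_rank."""
    W = tl.tensor(np.stack(checkpoints, axis=0))   # shape: (epochs, N, N)
    norm_W = tl.norm(W)
    errors = {}
    for r in range(1, max_rank + 1):
        cp = parafac(W, rank=r, n_iter_max=500, tol=1e-8)
        errors[r] = float(tl.norm(W - tl.cp_to_tensor(cp)) / norm_W)
    return errors  # a sharp elbow at small r is consistent with low-tensor-rank learning
```

A low tensor rank in this sense means each checkpointed weight matrix is a different mixture of the same few rank-one matrices, which is exactly the "fixed low-dimensional subspace" picture described above.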
Disentangling Mixed Classes of Covariability in Large-Scale Neural Data
Arthur Pellegrino, Heike Stein, and N Alex Cayco-Gajic
Recent work has argued that large-scale neural recordings are often well described by patterns of co-activation across neurons. Yet, the view that neural variability is constrained to a fixed, low-dimensional subspace may overlook higher-dimensional structure, including stereotyped neural sequences or slowly evolving latent spaces. Here, we argue that task-relevant variability in neural data can also co-fluctuate over trials or time, defining distinct covariability classes that may co-occur within the same dataset. To demix these covariability classes, we develop a new unsupervised dimensionality reduction method for neural data tensors called sliceTCA. In three example datasets, including motor cortical activity during a classic reaching task in primates and recent multi-region recordings in mice, we show that sliceTCA can capture more task-relevant structure in neural data using fewer components than traditional methods. Overall, our theoretical framework extends the classic view of low-dimensional population activity by incorporating additional classes of latent variables capturing higher-dimensional structure.
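For intuition, each sliceTCA component pairs a loading vector along one mode of the data tensor (trials, neurons or time) with a matrix "slice" spanning the other two modes, and the three choices of loading mode correspond to the three covariability classes. The snippet below is a conceptual sketch of this model class (not the sliceTCA implementation), showing how components of different classes sum into a single trials x neurons x time tensor; all names and the toy data are illustrative.

```python
# Conceptual sketch of a slice-type decomposition (not the authors' code).
# Each component is an outer product of a 1-D loading along one mode with a
# 2-D slice over the remaining two modes.
import numpy as np

def reconstruct_from_slices(components, shape):
    """components: list of (mode, vector, slice_matrix), with mode in
    {0: trial, 1: neuron, 2: time}; shape = (trials, neurons, times)."""
    X = np.zeros(shape)
    for mode, vec, slab in components:
        term = np.tensordot(vec, slab, axes=0)   # shape: (len(vec), *slab.shape)
        X += np.moveaxis(term, 0, mode)          # put the loading axis back in its mode
    return X

# Toy example: one trial-loading component (neuron x time slice) plus one
# neuron-loading component (trial x time slice) mixed in the same tensor.
trials, neurons, times = 20, 50, 100
rng = np.random.default_rng(0)
components = [
    (0, rng.random(trials), rng.random((neurons, times))),
    (1, rng.random(neurons), rng.random((trials, times))),
]
X = reconstruct_from_slices(components, (trials, neurons, times))
```

Because a single slice can encode, for example, a full neuron-by-time sequence that merely rescales across trials, such components can capture structure that would require many components under a standard fixed-subspace (matrix) factorization.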