Akshay Gautam

Machine Learning & Computational Neuroscience Researcher

About

Research

I'm a PhD student at the Institute of Machine Learning, University of Edinburgh, working on applying machine learning methods to understand how the brain processes information.

My research addresses a fundamental challenge in neuroscience: when we record from neural circuits, we observe a noisy mixture of external inputs, internal dynamics, and measurement error. How do we disentangle these components to understand what the circuit is actually computing?

I develop identifiable generative models to recover the latent inputs and internal dynamics shaping neural activity. Currently, this involves low-rank input decompositions for linear dynamical systems and control-theoretic approaches for nonlinear recurrent networks. More broadly, I'm interested in inverse problems: given high-dimensional measurements from a complex system, what underlying structure or process generated them?

Entrepreneurship & Applications

Beyond research, I'm interested in building things that matter. I'm drawn to entrepreneurship and applying ML methods to solve real problems, whether in neurotechnology, computational tools, or other domains where these techniques can create practical impact. I'm motivated by opportunities to work across different contexts and contribute to projects with real-world value, from early-stage startups to grassroots initiatives.

Expertise
Machine Learning for Complex Systems
Research Focus
Dynamics and Geometry of Neural Computation
Methods & Tools
Probabilistic Modelling and Dynamical Systems using JAX/PyTorch

Projects

Recent work in computational neuroscience and machine learning

Neural Unmixing: Decoding Recurrent Dynamics and Structured Inputs

Implementation and evaluation of structured Multivariate Autoregressive (MVAR) and Latent Dynamical System (LDS) models with low-rank inputs for analyzing neural time series data. Developed tailored parameter estimation algorithms and a nested cross-validation framework to robustly recover latent dynamics and structured variability across trials. This work forms the foundation for my ongoing PhD research.

NumPy · SciPy
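To make the modelling setup concrete, here is a minimal, hypothetical sketch of the kind of MVAR model with a low-rank input term described above: the state evolves as x_t = A x_{t-1} + b s_t + noise, with the external input entering through a rank-1 loading b. All names, dimensions, and the plain least-squares estimator are illustrative stand-ins, not the tailored estimation algorithms from the project.

```python
# Illustrative sketch (not the project's code): a first-order MVAR model
# with a rank-1 input term, fit by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 2000                            # state dimension, time steps

A_true = 0.5 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b_true = rng.standard_normal(n)           # rank-1 input loading
s = np.sin(np.linspace(0.0, 20.0, T))     # known 1-D input time course

x = np.zeros((n, T))
for t in range(1, T):
    x[:, t] = A_true @ x[:, t - 1] + b_true * s[t] + 0.05 * rng.standard_normal(n)

# Stack regressors [x_{t-1}; s_t] and solve for [A, b] jointly.
X_prev = np.vstack([x[:, :-1], s[1:][None, :]])   # (n + 1, T - 1)
X_next = x[:, 1:]                                  # (n, T - 1)
W = X_next @ np.linalg.pinv(X_prev)                # (n, n + 1)
A_hat, b_hat = W[:, :n], W[:, n]

print(np.max(np.abs(A_hat - A_true)))  # small when recovery succeeds
```

In the project this basic idea is extended with structured low-rank decompositions across trials and model selection via nested cross-validation; the sketch only shows the single-trial regression at its core.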

Disentangling Input-Driven and Intrinsic Neural Variability

Integration of two neural dynamics modeling frameworks, Low Tensor Rank RNNs and iLQR-VAE, to analyze learning-induced changes in neural connectivity by jointly inferring external inputs and connectivity changes in nonlinear dynamical systems from neural data. The work involved wrestling with fundamental identifiability challenges in separating intrinsic dynamics from external forcing in nonlinear systems. Like the project above, this work feeds directly into my ongoing PhD research.

JAX · PyTorch
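A hedged sketch of the low-rank recurrent building block behind this kind of analysis (not the project's code): with connectivity J = M Nᵀ / n, any component of the activity orthogonal to the column span of M decays away, so after a transient the network's state is effectively rank-dimensional. The sizes, input, and integration scheme below are illustrative assumptions.

```python
# Illustrative sketch: a rank-2 recurrent rate network. The orthogonal
# component of the state decays, leaving dynamics in the span of M.
import numpy as np

rng = np.random.default_rng(1)
n_units, rank, dt, T = 200, 2, 0.1, 500

M = rng.standard_normal((n_units, rank))
Nmat = rng.standard_normal((n_units, rank))
J = M @ Nmat.T / n_units                  # low-rank connectivity

x = rng.standard_normal(n_units)
traj = np.empty((T, n_units))
for t in range(T):
    # Leaky rate dynamics with a constant input along M's first column,
    # which keeps the activity away from the zero fixed point.
    x = x + dt * (-x + J @ np.tanh(x) + 0.5 * M[:, 0])
    traj[t] = x

# After the transient, activity lies (numerically) in span(M).
late = traj[T // 2:]
P = M @ np.linalg.pinv(M)                 # orthogonal projector onto span(M)
resid = np.linalg.norm(late - late @ P) / np.linalg.norm(late)
print(resid)  # near zero: the state has collapsed onto the low-rank subspace
```

The identifiability tension mentioned above shows up exactly here: an observed trajectory confined to a low-dimensional subspace could reflect low-rank recurrence, structured external input, or both, and the two frameworks constrain that ambiguity in different ways.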

Multi-Label Safety Classification with Active Learning

Multi-label safety classification system for imbalanced datasets, integrating active learning with human annotator feedback. Fine-tuned transformer models (T5, LLaMA, BERT) using LoRA and 8-bit quantization, comparing generative multi-label (seq2seq) and discriminative binary classification approaches. Exploration of LLM-assisted annotation pipelines (LLMaAA methodology) combining prompt engineering with active learning strategies for scalable data collection.

PyTorch · Hugging Face Transformers · LoRA · T5 · LLaMA · BERT
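One step of the active learning loop can be sketched with a standard uncertainty-sampling rule for the multi-label setting: send the unlabeled examples whose per-label probabilities are closest to 0.5 (highest mean binary entropy) to human annotators. This is a minimal stand-in, assuming a generic acquisition rule rather than the project's specific strategy; the probabilities here are toy values, not outputs of the fine-tuned transformers.

```python
# Hypothetical sketch of an uncertainty-based acquisition step for
# multi-label active learning: rank examples by mean binary entropy.
import numpy as np

def select_for_annotation(probs: np.ndarray, k: int) -> np.ndarray:
    """probs: (n_examples, n_labels) per-label probabilities in (0, 1).
    Returns indices of the k most uncertain examples."""
    eps = 1e-12
    p = np.clip(probs, eps, 1 - eps)
    # Binary entropy of each label, averaged across labels per example.
    ent = -(p * np.log(p) + (1 - p) * np.log(1 - p)).mean(axis=1)
    return np.argsort(ent)[::-1][:k]

probs = np.array([[0.99, 0.01],   # confident on both labels
                  [0.55, 0.48],   # uncertain on both labels
                  [0.90, 0.10]])  # moderately confident
print(select_for_annotation(probs, 2))  # → [1 2]
```

For imbalanced datasets, rules like this are typically combined with class-aware weighting or diversity terms so the rare labels are not starved of annotations; the sketch keeps only the entropy ranking.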