I am a PhD student in the Department of Computer Science at the University of Oxford, advised by Prof. Varun Kanade and Prof. Phil Blunsom. My PhD is generously supported by Google DeepMind. I am broadly interested in combining empirical and theoretical approaches to understanding deep learning models. Most of my research focuses on analyzing the expressiveness and algorithmic learning abilities of neural network architectures, with the aim of gaining insights that can help us develop more effective models.
Interests: Sequence Modelling Architectures (Transformers, RNNs/SSMs, etc.); Pretraining LLMs; AI Safety and Verification
Last year, I was a student researcher at Google in Sunnyvale, where I worked on improving LLM agents. Before that, I interned twice at Cohere, where I worked on pretraining LLMs with non-Transformer architectures. Before joining Oxford, I spent two amazing years as a Research Fellow at Microsoft Research India, where I worked with Dr. Navin Goyal. I completed my undergraduate degree at BITS Pilani, India, in 2019. If you would like to chat about research or anything else, feel free to drop me an email.
Provably Learning Attention with Queries
, Kulin Shah, Michael Hahn, Varun Kanade
ICML 2026
pdf
abstract
Automata Learning and Identification of the Support of Language Models
, Michael Hahn, Varun Kanade
ICLR 2026
pdf
abstract
Separations in the Representational Capabilities of Transformers and Recurrent Architectures
, Michael Hahn, Phil Blunsom, Varun Kanade
NeurIPS 2024
pdf
abstract
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions
, Arkil Patel, Phil Blunsom, Varun Kanade
ICLR 2024 Oral
pdf
code
abstract
On the Ability and Limitations of Transformers to Recognize Formal Languages
, Kabir Ahuja, Navin Goyal
EMNLP 2020
pdf
code
abstract
On the Practical Ability of RNNs to Recognize Hierarchical Languages
Best Short Paper Award
, Kabir Ahuja, Navin Goyal
COLING 2020
pdf
code
abstract