I’m a post-doctoral researcher at MIT, supervised by Ev Fedorenko, and a future Assistant Professor at Stanford Linguistics (starting fall 2024). I did my PhD in computational linguistics at Ohio State, where I was advised by William Schuler and Micha Elsner.
I use computational and experimental methods to study language and the mind, particularly (1) the cognitive processes that allow us to understand the things we hear and read so quickly, (2) the learning signals that we leverage as children to acquire language from the environment, and (3) the role played by real-time information processing constraints in shaping language learning and comprehension.
I often build deep learning models to investigate these questions, and I’m actively developing machine learning techniques to help scientists understand complex dynamical systems like the human mind and brain. My work sits at the intersection of machine learning, cognitive science, neuroscience, artificial intelligence, natural language processing, statistics, and (psycho)linguistics.
I’ve also done some linguistic analysis of Guaraní (spoken in Paraguay) and Iyasa (spoken in Cameroon).
- Large-scale evidence for logarithmic effects of word predictability on reading time. Proceedings of the National Academy of Sciences, to appear.
- Word frequency and predictability dissociate in naturalistic reading. Open Mind, to appear.
- A deep learning approach to analyzing continuous-time cognitive processes. Open Mind, to appear.
- No evidence of theory of mind reasoning in the human language network. Cerebral Cortex, 2023.
- Robust effects of working memory demand during naturalistic language comprehension in language-selective cortex. Journal of Neuroscience, 2022.
- Acquiring language from speech by learning to remember and predict. In Proceedings of the 24th Conference on Computational Natural Language Learning, 2020. (Best Paper Award)
- fMRI reveals language-specific predictive coding during naturalistic sentence comprehension. Neuropsychologia, 2020.