My undergrad was at Dartmouth College, where I mostly did Computer Science and Engineering, which sparked an interest in the connection between AI and neuroscience. This led me in 2014 to a PhD program in Computer Science at the University of Rochester, where I quickly discovered that making "brain inspired AI" means first understanding "brains." I transferred to the Brain and Cognitive Science department in 2015, where I did my main PhD work on Bayesian inference in low-level visual perception, graduating in fall 2020.
Everyone keeps talking about optimal Bayesian brains (myself included). What does this really mean? How far can this metaphor take us? When, why, and how do optimal agents use probability to reason about the world? And, finally, what can all of this really tell us about the brain?
Along the way, I've been thinking about causal models, modular neural networks, and the philosophy of computation.