I am interested in the real-time incremental processes by which humans produce and (particularly) understand language. Language is fascinatingly complicated, yet somehow we generally manage to learn it without really trying and to use it efficiently for communication. How do we do that, and what are we really doing when we do? How do the patterns of language use relate to its underlying structure and complexity? I am interested in how computationally implemented models of comprehension can begin to answer such questions. In my current work, I use tools from deep learning and ideas from the study of inference algorithms to inform models of incremental language processing.

(I also wrangle this website. Let me know if there’s something that needs editing!)