We are a group of scientists working at the intersection of neuroscience, artificial intelligence, and causality. Our lab has become a leading voice in critically examining how causality works in complex systems where randomization isn’t possible—a fundamental challenge in both neuroscience and AI. We’re particularly focused on the credit assignment problem: understanding how brains and AI systems determine which components are responsible for success or failure. Our approach views deep learning through the lens of cultural evolution, arguing that “nothing makes sense in deep learning, except in the light of evolution.”
Our current work centers on three interconnected themes: causality in data science, artificial intelligence and deep learning, and democratizing scientific education. We’re pursuing ambitious projects, including simulating the complete C. elegans nervous system—an effort to reverse-engineer an entire organism’s neural computation. This connects to our broader NeuroAI agenda, in which we explore how insights from neuroscience can improve AI and vice versa, guided by the same evolutionary perspective.
The lab has pioneered several major educational initiatives aimed at democratizing scientific training. Neuromatch Academy, co-founded by Konrad, has trained thousands of students globally in computational neuroscience through free, accessible online programs. The Community for Rigor (C4R), launched in 2023 with NIH/NINDS support, provides open-source educational modules addressing research biases, logical fallacies around causality, and experimental design. Additionally, Konrad co-organizes Neuro4Pros with Gunnar Blohm—a summer school for early-career computational neuroscience professors focusing on rigorous science, mentoring, and lab management.
Recent projects range from developing methods for causal inference in high-dimensional neural data to exploring how evolution shapes both biological and artificial neural networks. We’ve shown that cultural evolution provides a powerful framework for understanding why certain deep learning architectures succeed while others fail. Our work increasingly focuses on the credit assignment problem—both how brains solve it as a fundamental learning problem and how we can better assign credit in real-world causal systems. The lab’s approach to AI and causality has been featured in recent talks at Stanford and Wharton.
Our lab is a great place for anyone who is driven by curiosity and likes to learn and move through ideas quickly. Instead of one big “lab project”, everyone generally leads their own individual projects.
Since our lab spans several fields, we don’t hold big lab meetings with everyone. Instead, we rely on a number of practices to keep communication flowing. Currently, these include roughly weekly check-ins where we discuss current lab practices and sometimes vote on new ones.