Prof. Michael Hahn, Ph.D.

- Academic Fellow
- Trans-disciplinary Applications
Hahn is a tenure-track professor at Saarland University. He received his PhD from Stanford University in 2022. His research focuses on the theoretical foundations of large language models (LLMs). He has established theoretical results on the abilities and limitations of the machine learning architectures underlying LLMs when performing reasoning. He also develops methods for interpreting the inner workings of LLMs and studies applications of machine learning to cognition and neuroscience. His research has been published in venues such as PNAS, NeurIPS, ICLR, and TACL, and received a Best Paper Award at ACL 2024.
Most influential publication:
Theoretical limitations of self-attention in neural sequence models (Michael Hahn)
Transactions of the Association for Computational Linguistics, 2020.
This was the first study to rigorously formalize inherent limitations in the computational expressivity of the transformer architecture. The reasoning limitations it predicted theoretically remain relevant even to current large language models.