The Normativity Lab

Lab Members


Gillian Hadfield

Website | LinkedIn | Bluesky | X

Gillian K. Hadfield is the Bloomberg Distinguished Professor of AI Alignment and Governance at the Whiting School of Engineering and the School of Government and Policy at Johns Hopkins University.

Research Interests: Computational models of human normative systems; Legal, regulatory, and technical systems for AI; Human and AI normative alignment; Multi-Agent RL systems


Rebekah Gelpí

Postdoctoral Researcher
Website | Email | Bluesky | LinkedIn | Google Scholar | GitHub

Rebekah Gelpí is a postdoctoral researcher at Johns Hopkins University and the Schwartz Reisman Institute for Technology and Society at the University of Toronto.

Research Interests: Social Norms, Coordination Problems, Social Learning, Belief Revision, Multi-Agent RL


Maria Ryskina (she/they)

CIFAR AI Safety Postdoctoral Fellow
Website | Email | Bluesky

Maria Ryskina is a CIFAR AI Safety Postdoctoral Fellow at the Vector Institute. They hold a PhD in Language and Information Technologies from Carnegie Mellon University, an MSc in Information Technology from the Skolkovo Institute of Science and Technology, and a BSc and MSc in Applied Mathematics and Physics from the Moscow Institute of Physics and Technology. Their research bridges natural language processing and AI safety.

Research Interests: Natural Language Processing, Computational Linguistics and Cognitive Sciences, Normative Reasoning for AI


Harsh Satija

Vector Distinguished Postdoctoral Fellow
Website | Email | X

Harsh Satija is a postdoctoral researcher at the Vector Institute. He obtained his PhD from McGill University.

Research Interests: AI Alignment, Collective Intelligence, Sequential Decision-Making Under Uncertainty, Reasoning In Large Language Models


Alexander Bernier

Visiting Doctoral Student (JHU)
Email

Alexander Bernier is an SJD candidate at the University of Toronto. He holds an LLM (University of Toronto), a JD (McGill University), and a BCL (McGill University).

Research Interests: AI Alignment, Normative Institutions, Institutional Economics, Open Science, Privacy Law


Matthew Renze

Doctor of Engineering Student
Website | Email | LinkedIn

Matthew Renze is a Doctor of Engineering student at Johns Hopkins University. He holds an MS in Artificial Intelligence (Johns Hopkins University), a BS in Computer Science (Iowa State University), and a BA in Philosophy (Iowa State University).

Research Interests: Alignment of AI Agents


Austen Liao

PhD Student
Email | Website

Austen Liao is a PhD student at Johns Hopkins University. He has a B.A. in Computer Science from the University of California, Berkeley.

Research Interests: AI Alignment, Scalable Oversight


Binze Li

PhD Student
Email

Binze Li is a PhD student at Johns Hopkins University. She has a B.S. in Statistics & Data Science and Cognitive Science from UCLA.

Research Interests: AI Safety, Alignment, Social Norms, Multi-Agent Systems


Kuleen Sasse

PhD Student
Website | Email | X

Kuleen Sasse is a PhD student at Johns Hopkins University. He holds a B.S. in Computer Science and a B.S. in Applied Mathematics and Statistics, both from Johns Hopkins University.

Research Interests: AI Safety, Responsible AI, Large Language Models


Andrea Wynn

PhD Student
Website | Email | LinkedIn

Andrea Wynn is a PhD student in the Johns Hopkins University Department of Computer Science and the JHU Institute for Assured Autonomy (IAA). She holds an MSE in Computer Science from Princeton University and a BS in Computer Science & Mathematics from the Rose-Hulman Institute of Technology.

Research Interests: AI Safety, AI Alignment, Human-AI Collaboration, Cognitive Science


Shuhui Zhu

PhD Candidate
Website | LinkedIn | Email

Shuhui Zhu is a PhD candidate at the University of Waterloo and the Vector Institute.

Research Interests: Cooperative AI, Reinforcement Learning, LLMs, Mechanism Design, Game Theory


Seokhyun Baek

Undergraduate Student
LinkedIn | Email

Seokhyun Baek is an undergraduate student at Johns Hopkins University.

Research Interests: Multi-Agent Reinforcement Learning, Alignment of AI agents, Large Language Models, AI Safety, AI Governance


Andrew Gold

Communications Associate
Email | LinkedIn

Andrew Gold is a communications associate working with the Normativity Lab at Johns Hopkins.


Chris LaRosa

Normativity Lab Research Program Manager, Chief of Staff to Gillian Hadfield
Email | LinkedIn

Chris LaRosa is a research program manager working with the Normativity Lab at Johns Hopkins.


Muhamed Sulejmanagic

AI Policy Researcher
Email | LinkedIn

Muhamed Sulejmanagic is an AI policy researcher working with the Normativity Lab at Johns Hopkins.