Research

Advancing AI alignment, multi-agent cooperation, and governance frameworks for frontier AI systems—bridging computer science, law, economics, and social theory.

Research Overview

My research addresses fundamental questions about how to build AI systems that can operate safely, ethically, and beneficially within human normative frameworks. This work spans three interconnected areas: normative AI alignment, multi-agent cooperation, and governance mechanisms for advanced AI.

I draw on legal theory, economics, game theory, and computer science to develop both theoretical frameworks and practical solutions. Central to this work is the insight that effective AI governance and alignment require understanding how humans construct and maintain normative order—from informal social norms to formal legal systems.

01

Normative AI & Alignment

Building AI that understands and operates within human normative systems

How do we build AI systems that can navigate the complex, context-dependent normative systems that govern human behavior? I explore how insights from economics and evolutionary theory can inform AI alignment—focusing on how AI can learn to recognize, reason about, and operate within human rules, norms, and values. This includes developing training environments and architectures that enable AI to become "normatively competent."

Key Questions

  • How can AI systems learn to understand and operate in human normative frameworks?
  • What role can legal reasoning and contract theory play in AI alignment?
  • How do we train AI to recognize normative infrastructure in its environment?
  • What makes rules "legible" to learning agents, and why do silly rules help?

02

Multi-Agent Systems & Cooperation

How to make AI agents that interact, cooperate, and coordinate

As AI systems become more numerous and autonomous, understanding how they interact becomes critical. I study cooperation, coordination, and conflict among AI agents. Drawing on law, game theory, and mechanism design, this work asks how to move beyond current approaches that produce misaligned agent behavior—preventing harmful collusion while enabling productive cooperation in multi-agent systems.

Key Questions

  • How can we build AI systems capable of grounded argumentation?
  • How can we build AI systems that cooperate effectively with each other and with humans?
  • What risks emerge from multi-agent interactions, and how can we mitigate them?

03

AI Governance & Safety

Regulatory frameworks and institutions for advanced AI

How do we govern AI systems that evolve faster than traditional regulatory frameworks can adapt? I develop novel governance mechanisms—including regulatory markets—designed to manage risk while keeping pace with rapid AI advancement. This work addresses the brittleness of current regulatory approaches, which cannot match the speed of frontier AI model development.

Key Questions

  • How can regulatory markets enable adaptive governance of rapidly evolving AI?
  • What international institutions are needed to coordinate AI safety efforts globally?
  • What technical infrastructure is necessary to enable safe, democratic AI development?

Explore Publications

View publications organized by research area, with full abstracts, citation information, and links to papers.

Browse Publications