Normativity and the AI Alignment Problem
The alignment problem in AI is currently framed in a variety of ways: as the challenge of building AI systems that do as their designers intend, or as their users prefer, or as would benefit society. In this talk I’ll connect the AI alignment problem to the far more general problem of how humans organize cooperative societies. From the perspective of an economist and legal scholar, alignment is *the* problem of how to organize society to maximize human well-being—however that is defined. I’ll argue that “solving” the AI alignment problem is better understood as the challenge of integrating AI systems, especially agentic systems, into our human normative systems. I’ll present results from work that begins the study of how to build normatively competent AI systems—AI that can read and participate in human normative systems—and the normative infrastructure that can support AI’s normative competence.