

Poster

Position Paper: Social Choice for AI Ethics and Safety

Vincent Conitzer · Rachel Freedman · Jobst Heitzig · Wesley H. Holliday · Bob Jacobs · Nathan Lambert · Milan Mosse · Eric Pacuit · Stuart Russell · Hailey Schoelkopf · Emanuel Tewolde · William Zwicker


Abstract:

Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, so that, for example, they refuse to comply with requests for help with committing crimes, or with producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans’ expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But which humans get to provide the feedback or principles? And how is their potentially diverging input aggregated into consistent data about “collective” preferences or otherwise used to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions, and we discuss ways forward for this agenda, drawing on discussions in a recent workshop on [redacted for anonymous review].
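To make the aggregation question concrete, here is a minimal sketch (not taken from the paper) of one classical social-choice rule, the Borda count, applied to hypothetical annotators' rankings of candidate model outputs. All data and names below are illustrative assumptions, not the authors' method.

# Minimal sketch: aggregating annotators' rankings of candidate model outputs
# with the Borda count, one classical social-choice rule. The rankings below
# are hypothetical illustrative data, not drawn from the paper.
from collections import defaultdict

def borda_aggregate(rankings):
    """Each ranking lists candidate outputs from most to least preferred.
    A candidate ranked i-th among m candidates scores m - 1 - i points."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for i, candidate in enumerate(ranking):
            scores[candidate] += m - 1 - i
    # Candidates sorted by total score, most preferred collectively first.
    return sorted(scores, key=scores.get, reverse=True)

annotator_rankings = [
    ["refuse politely", "refuse tersely", "comply"],  # annotator 1
    ["refuse tersely", "refuse politely", "comply"],  # annotator 2
    ["comply", "refuse politely", "refuse tersely"],  # annotator 3
]
print(borda_aggregate(annotator_rankings))
# ['refuse politely', 'refuse tersely', 'comply']

Other rules (majority margins, ranked pairs, approval voting) would aggregate the same diverging inputs differently, which is precisely the kind of design choice the paper argues social choice theory can inform.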
