
Poster

LIDAO: Towards Limited Interventions for Debiasing (Large) Language Models

Tianci Liu · Haoyu Wang · Shiyang Wang · Yu Cheng · Jing Gao


Abstract:

Large language models (LLMs) have achieved impressive performance on various natural language generation tasks. Nonetheless, they suffer from generating negative content that is biased against certain demographic groups (e.g., women), raising severe fairness concerns. As remedies, prior works intervened in the generation by removing attitude or demographic information, inevitably degrading generation quality and resulting in notable fairness-fluency trade-offs. However, it remains under-explored to what extent fluency has to be affected in order to achieve a desired level of fairness. In this work, we conduct the first formal study from an information-theoretic perspective. We show that previous approaches intervene more than necessary for debiasing, and we propose LIDAO, a universal framework that provably debiases a (L)LM at better fluency. We further robustify LIDAO in adversarial scenarios, where a carefully crafted prompt may stimulate LLMs with instruction-following abilities to generate texts whose fairness issues appear only when the prompt is taken into account. Experiments on three LMs ranging from 0.7B to 7B parameters demonstrate the superiority of our method.
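To make the "limited intervention" idea concrete, below is a minimal sketch of a decoding loop that applies a debiasing intervention only when the current prefix is judged risky, instead of intervening at every step. This is not LIDAO itself: the paper's criterion is information-theoretic, whereas the stubs here (bias_score, next_token_logits, debias_logits, and the threshold) are hypothetical stand-ins chosen purely for illustration.

import numpy as np

def bias_score(prefix_ids):
    """Hypothetical bias estimator in [0, 1] for the current prefix.
    In practice this would be a learned detector; here it is a stub."""
    return np.random.rand()

def next_token_logits(prefix_ids, vocab_size=100):
    """Stand-in for a language model's next-token logits."""
    return np.random.randn(vocab_size)

def debias_logits(logits, strength=0.5):
    """Crude intervention: flatten the distribution toward uniform,
    trading fluency for fairness. An actual method would instead
    remove learned attitude/demographic information."""
    return (1.0 - strength) * logits

def generate(max_len=20, threshold=0.8, vocab_size=100, seed=0):
    rng = np.random.default_rng(seed)
    prefix = []
    for _ in range(max_len):
        logits = next_token_logits(prefix, vocab_size)
        # Limited intervention: debias only when the prefix looks risky,
        # leaving fluent, unproblematic generation untouched.
        if bias_score(prefix) > threshold:
            logits = debias_logits(logits)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        prefix.append(int(rng.choice(vocab_size, p=probs)))
    return prefix

print(generate())

The design point this illustrates is the one the abstract argues: always-on interventions pay an unnecessary fluency cost, so gating the intervention on an estimate of when bias can actually arise preserves fluency while still meeting a fairness target.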
