

Poster

Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics

Debjani Saha · Candice Schumann · Duncan McElfresh · John P Dickerson · Michelle Mazurek · Michael Tschantz

Virtual

Keywords: [ Fairness, Equity, Justice, and Safety ] [ Social Good Applications ] [ Fairness, Equity and Justice ] [ Accountability, Transparency and Interpretability ]


Abstract:

Bias in machine learning has manifested injustice in several areas, such as medicine, hiring, and criminal justice. In response, computer scientists have developed myriad definitions of fairness to correct this bias in fielded algorithms. While some definitions are based on established legal and ethical norms, others are largely mathematical. It is unclear whether the general public agrees with these fairness definitions, and perhaps more importantly, whether they understand these definitions. We take initial steps toward bridging this gap between ML researchers and the public, by addressing the question: does a lay audience understand a basic definition of ML fairness? We develop a metric to measure comprehension of three such definitions: demographic parity, equal opportunity, and equalized odds. We evaluate this metric using an online survey, and investigate the relationship between comprehension and sentiment, demographics, and the definition itself.
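For readers unfamiliar with the three definitions named above, the following is a minimal sketch (not taken from the paper) of how they are conventionally computed for a binary classifier and a binary protected group. The function names and the toy data are illustrative assumptions, not the study's materials.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    groups = np.unique(group)
    rates = [y_pred[group == g].mean() for g in groups]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (among y_true == 1) between the two groups."""
    groups = np.unique(group)
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in groups]
    return abs(tprs[0] - tprs[1])

def equalized_odds_gaps(y_true, y_pred, group):
    """Equalized odds asks for equal TPR *and* equal FPR; report both gaps."""
    groups = np.unique(group)
    fprs = [y_pred[(group == g) & (y_true == 0)].mean() for g in groups]
    return equal_opportunity_gap(y_true, y_pred, group), abs(fprs[0] - fprs[1])

# Toy example (hypothetical data): binary labels, binary predictions, two groups.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
group = rng.choice(["a", "b"], size=1000)

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
print("Equalized odds gaps (TPR, FPR):", equalized_odds_gaps(y_true, y_pred, group))
```

A gap of zero for a given metric means the classifier satisfies that definition exactly; in practice one checks whether the gap is below a chosen tolerance.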
