A Harm Reduction Framework for Algorithmic Fairness
This article recognizes the profound effects that algorithmic decision-making can have on people’s lives and proposes a harm-reduction framework for algorithmic fairness. The authors argue that any evaluation of algorithmic fairness must account for the foreseeable effects that algorithmic design, implementation, and use have on the well-being of individuals. They then demonstrate how counterfactual frameworks for causal inference, developed in statistics and computer science, can serve as the basis for defining and estimating the foreseeable effects of algorithmic decisions. Finally, they argue that certain patterns of foreseeable harm are unfair: an algorithmic decision is unfair if it imposes predictable harms on sets of individuals that are unconscionably disproportionate to the benefits the same decision produces elsewhere. An algorithmic decision is likewise unfair when it is regressive, i.e., when members of disadvantaged groups pay a higher cost for the social benefits of that decision. Read more here.
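The counterfactual framing can be made concrete in potential-outcomes terms. The sketch below is a minimal illustration under assumed inputs, not the authors' implementation: it supposes that for each individual we have an estimated well-being outcome under the algorithmic decision (y1) and under the counterfactual of no decision (y0), plus a group label, and it checks for the two unfairness patterns described above. All names and data here are hypothetical.

```python
import pandas as pd

# Hypothetical individual-level potential outcomes:
# y1 = estimated well-being under the algorithmic decision,
# y0 = estimated well-being under the counterfactual (no decision),
# group = advantaged ("adv") or disadvantaged ("disadv") group membership.
df = pd.DataFrame({
    "y1":    [0.2, -0.5, 0.8, -0.3, 0.6, -0.7],
    "y0":    [0.1,  0.1, 0.4,  0.2, 0.3,  0.0],
    "group": ["adv", "disadv", "adv", "disadv", "adv", "disadv"],
})

# Foreseeable effect of the decision on each individual: effect_i = y1_i - y0_i.
df["effect"] = df["y1"] - df["y0"]

# Pattern 1: harms disproportionate to benefits.
# Sum the losses of those harmed against the gains of those who benefit.
total_harm = -df.loc[df["effect"] < 0, "effect"].sum()
total_benefit = df.loc[df["effect"] > 0, "effect"].sum()
harm_benefit_ratio = total_harm / total_benefit

# Pattern 2: regressive distribution. The decision is regressive when the
# disadvantaged group bears a worse average effect than the advantaged group.
mean_effect = df.groupby("group")["effect"].mean()
regressive = mean_effect["disadv"] < mean_effect["adv"]

print(f"harm/benefit ratio: {harm_benefit_ratio:.2f}")
print(f"mean effect by group:\n{mean_effect}")
print(f"regressive pattern: {regressive}")
```

In practice the potential outcomes are never both observed for the same individual, so y1 and y0 would be estimates from a causal inference procedure; the sketch only shows how such estimates, once obtained, map onto the two fairness criteria.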
You might also like
- New paper: Exploring Emerging AI as Subject and Object in Democracy-Focused Evaluation
- Podcast: Assessing evidence on the effectiveness of humanitarian AI use cases with Humanitarian AI Today
- New brief: Artificial intelligence in the humanitarian sector
- REvaluation week podcast episode: “Does AI really save work in evaluation?”