A Harm Reduction Framework for Algorithmic Fairness
This article recognizes the profound effects that algorithmic decision-making can have on people's lives and proposes a harm-reduction framework for algorithmic fairness. The authors argue that any evaluation of algorithmic fairness must take into account the foreseeable effects that algorithmic design, implementation, and use have on the well-being of individuals. They further demonstrate how counterfactual frameworks for causal inference, developed in statistics and computer science, can be used as the basis for defining and estimating the foreseeable effects of algorithmic decisions. Finally, they argue that certain patterns of foreseeable harms are unfair. An algorithmic decision is unfair if it imposes predictable harms on sets of individuals that are unconscionably disproportionate to the benefits those same decisions produce elsewhere. An algorithmic decision is also unfair when it is regressive, i.e., when members of disadvantaged groups pay a higher cost for the social benefits of that decision.
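The regressiveness criterion described above can be illustrated with a small sketch. The code below is not from the article; it is a hypothetical toy model in which each individual's "foreseeable effect" is the counterfactual difference between their well-being score with and without the algorithmic decision. A decision is flagged as regressive, in the article's sense, when a disadvantaged group bears a higher harm-to-benefit ratio than an advantaged group. All function names, scores, and groups here are illustrative assumptions.

```python
# Illustrative sketch (not from the article): estimating the foreseeable
# effects of an algorithmic decision via counterfactual outcomes, and
# checking whether the resulting harm pattern is regressive across groups.
# All data and group labels below are hypothetical.

def group_harm_benefit(outcomes_with, outcomes_without):
    """Per-individual net effect of the decision: the counterfactual
    difference between well-being with the decision and without it.
    Negative effects are tallied as harm, positive ones as benefit."""
    effects = [w - wo for w, wo in zip(outcomes_with, outcomes_without)]
    harm = sum(-e for e in effects if e < 0)
    benefit = sum(e for e in effects if e > 0)
    return harm, benefit

def harm_benefit_ratios(groups):
    """Compute each group's harm-to-benefit ratio. Under the article's
    regressiveness criterion, a decision is regressive when a
    disadvantaged group's ratio exceeds an advantaged group's."""
    ratios = {}
    for name, (with_decision, without_decision) in groups.items():
        harm, benefit = group_harm_benefit(with_decision, without_decision)
        ratios[name] = harm / benefit if benefit else float("inf")
    return ratios

# Hypothetical well-being scores (with decision, without decision)
groups = {
    "advantaged":    ([1.0, 0.8, 0.9], [0.5, 0.6, 0.7]),
    "disadvantaged": ([0.4, 0.2, 0.6], [0.5, 0.6, 0.5]),
}

ratios = harm_benefit_ratios(groups)
print(ratios)
```

In this toy example the advantaged group receives only benefits (ratio 0), while the disadvantaged group incurs harms several times larger than its benefits, so the decision would count as regressive under the framework's definition.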