Looking at tech-based solutions to problems of algorithmic fairness


Posted on July 11, 2017

Big data is a team sport. We have people with different skill sets on our team. I can’t code, but I sit in on meetings where arcane details of software are discussed. Our coders spend most of their time on analytics, but they also think about broader issues such as fairness. So here is a snippet that caught the eye of Lasantha Fernando:

If you’ve ever applied for a loan or checked your credit score, algorithms have played a role in your life. These mathematical models allow computers to use data to predict many things — who is likely to pay back a loan, who may be a suitable employee, or whether a person who has broken the law is likely to reoffend, to name just a few examples.

Yet while some may assume that computers remove human bias from decision-making, research has shown that is not true. Biases on the part of those designing algorithms, as well as biases in the data used by an algorithm, can introduce human prejudices into a situation. A seemingly neutral process becomes fraught with complications.

For the past year, University of Wisconsin–Madison faculty in the Department of Computer Sciences have been working on tools to address unfairness in algorithms. Now, a $1 million grant from the National Science Foundation will accelerate their efforts. Their project, “Formal Methods for Program Fairness,” is funded through NSF’s Software and Hardware Foundations program.
