Presented by Prof. Rohan Samarajiva at University of Moratuwa on 1st February 2019
The NYT piece suffers from the peculiar worldview of American and European journalists who think all good innovations come from their part of the world (Singapore pioneered congestion pricing for road use in 1975). But let's focus on the positive: the drawing out of lessons from Thaler and Springsteen about the need to address hardwired perceptions of fairness:

"Technology is making 'variable' or 'dynamic' pricing — the same strategies that ensure a seat on an airplane, a hotel room or an Uber car are almost always available if you're willing to pay the price — more plausible in areas with huge social consequences. Dynamic pricing of electricity could help bring down pollution, reduce energy costs and make renewable energy more viable. Constantly adjusting prices for access to highways and congested downtowns could make traffic jams, with all the resulting wasted time and excess emissions, a thing of the past. Any sector where supplies tend to be fixed but demand fluctuates — the water supply, health care — would seem like prime candidates for variable pricing."
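The core mechanism the quote describes, a price that rises as a fixed-capacity resource fills up, can be sketched in a few lines. This is a minimal illustration, not any operator's actual formula; the function name, the 4x cap, and the exponent are all assumptions chosen for clarity.

```python
# Minimal sketch of dynamic (congestion) pricing: the price rises
# smoothly as utilisation of a fixed-capacity resource (a road lane,
# a power grid) approaches its limit, nudging marginal users to shift
# their demand. The multiplier curve here is an illustrative assumption.

def dynamic_price(base_price: float, utilisation: float,
                  surge_exponent: float = 2.0) -> float:
    """Return a price that grows with utilisation (clamped to 0.0-1.0)."""
    utilisation = min(max(utilisation, 0.0), 1.0)
    # Near-empty: price stays close to base. Near capacity: the
    # multiplier climbs steeply, up to 4x the base price here.
    multiplier = 1.0 + 3.0 * utilisation ** surge_exponent
    return round(base_price * multiplier, 2)

print(dynamic_price(100.0, 0.2))  # off-peak: close to the base price
print(dynamic_price(100.0, 0.9))  # near capacity: several times the base
```

The convex curve is the point: small price changes at low demand, sharp ones near capacity, which is what distinguishes congestion pricing from a flat peak surcharge.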
Big data is a team sport. We have people with different skill sets on our team. I can't code, but I sit in on meetings where arcane details of software are discussed. Our coders spend most of their time on analytics, but also think about broader issues such as fairness. So here is a snippet that caught the eye of Lasantha Fernando:

"If you've ever applied for a loan or checked your credit score, algorithms have played a role in your life."
In the context of LIRNEasia's big data work, we intend to wrestle with these issues. If we are not getting our hands dirty with the data and the stories we extract from them, I fear the conversation will be sterile.

"First, students should learn that design choices in algorithms embody value judgments and therefore bias the way systems operate. They should also learn that these things are subtle: For example, designing an algorithm for targeted advertising that is gender-neutral is more complicated than simply ensuring that gender is ignored. They need to understand that classification rules obtained by machine learning are not immune from bias, especially when historical data incorporates bias."
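The subtlety flagged above, that ignoring gender is not enough, can be demonstrated with a toy experiment. Everything below is fabricated for illustration (it is not LIRNEasia's code or data): a "gender-blind" learner is fit on biased historical labels, yet because another feature acts as a proxy for gender, the learned rule reproduces the bias anyway.

```python
# Toy illustration of proxy bias: the learner never sees the gender
# column, but a correlated feature lets it recover the biased pattern
# in the historical labels. Data and the one-rule learner are invented.
import random

random.seed(0)

# Synthetic history: "proxy" (say, a browsing-pattern score) correlates
# strongly with gender; the historical label "shown_ad" is biased, with
# group 1 shown the ad far more often regardless of anything else.
rows = []
for _ in range(1000):
    gender = random.randint(0, 1)
    proxy = gender * 0.8 + random.random() * 0.2   # proxy tracks gender
    shown_ad = 1 if random.random() < (0.9 if gender == 1 else 0.1) else 0
    rows.append((gender, proxy, shown_ad))

# "Gender-blind" learner: fit a single threshold on the proxy feature
# alone (gender is dropped entirely) to best match historical labels.
best_t, best_acc = 0.0, 0.0
for t in (i / 100 for i in range(101)):
    acc = sum((p >= t) == bool(s) for _, p, s in rows) / len(rows)
    if acc > best_acc:
        best_t, best_acc = t, acc

# Despite never seeing gender, the rule's ad rates split by group.
def ad_rate(g):
    group = [r for r in rows if r[0] == g]
    return sum(r[1] >= best_t for r in group) / len(group)

print(f"ad rate, group 0: {ad_rate(0):.2f}")  # low
print(f"ad rate, group 1: {ad_rate(1):.2f}")  # high
```

Because the proxy separates the groups almost perfectly, dropping the gender column changes nothing: the threshold the learner picks is effectively a gender test. This is why fairness interventions have to examine outcomes, not just inputs.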