AI in governance

Posted on August 7, 2017  /  2 Comments

Our colleague Nalaka Gunawardene has written a Facebook post where he asks “Robots in politics? Why not?”

This provides a gateway for a substantive discussion on the role of technology in governance.

First, we have to rephrase the question. I understand politics to be the art of contributing in various ways to governance.

In a democracy, a government cannot satisfy the demands of just one stakeholder group, even in the unlikely event that it is 100 percent right. It definitely cannot satisfy the demands of a stakeholder group intent on destroying the very existence of another legitimate stakeholder. Its decisions must be based on the public interest: ensuring the best possible curative services for today's users of taxpayer-funded hospitals across the country, while taking into account the future curative and palliative care needs of a rapidly aging population.

I wrote the above in the context of a protracted struggle by the government doctors' trade union and university student organizations to shut down a private medical university in Sri Lanka.

We should not be disappointed by the actions of politicians, because they are driven by imperatives of gaining and retaining power. But governance is a different matter.

Now we come to robots. What politicians who reach positions of power in a democracy must do is work out compromise solutions. Can a robot do this? Yes, using either algorithms (pre-defined rules) or artificial intelligence (AI), where the rules are developed by the system itself. The other features of robots, such as the ability to move around, speak, and so on, are irrelevant. What matters is the AI in the robot, not the robot itself.
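The distinction between pre-defined rules and rules derived by the system can be sketched in a few lines of code. This is a toy illustration only; the stakeholder names, demands, and "settlement history" are all invented:

```python
# Toy sketch: an algorithm (a fixed, hand-written rule) versus a
# "learned" rule derived from data. All names and numbers invented.

def rule_based_allocation(demands):
    """Algorithm: a pre-defined rule -- split in proportion to
    each stakeholder's stated demand."""
    total = sum(demands.values())
    return {group: demand / total for group, demand in demands.items()}

def learn_weights(history):
    """Learned rule: derive weights from past settlements instead
    of writing the rule by hand -- here, just the average share
    each group received before."""
    weights = {}
    for settlement in history:
        for group, share in settlement.items():
            weights[group] = weights.get(group, 0.0) + share
    total = sum(weights.values())
    return {group: w / total for group, w in weights.items()}

demands = {"doctors_union": 60, "students": 30, "ministry": 10}
print(rule_based_allocation(demands))
# {'doctors_union': 0.6, 'students': 0.3, 'ministry': 0.1}

history = [
    {"doctors_union": 0.5, "students": 0.3, "ministry": 0.2},
    {"doctors_union": 0.4, "students": 0.4, "ministry": 0.2},
]
print(learn_weights(history))
# {'doctors_union': 0.45, 'students': 0.35, 'ministry': 0.2}
```

Either way, the decision logic lives in software; the robot body around it adds nothing.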

What is unlikely is that AI will succeed in developing compromises that are acceptable to all stakeholders. The emotional work of listening, addressing unstated concerns, and so on is not likely to be the strong suit of AI.

As one of the executives responsible for Apple’s streaming music service said to the Guardian in 2015:

“Algorithms are really great, of course, but they need a bit of a human touch in them, helping form the right sequence. Some algorithms wouldn’t know that Rock Steady could follow Start Me Up, y’know. That’s hard to do,” says Iovine.

“You have to humanise it a bit, because it’s a real art to telling you what song comes next. Algorithms can’t do it alone. They’re very handy, and you can’t do something of this scale without ‘em, but you need a strong human element.”


  1. GIGO – Garbage In, Garbage Out. AI reliability depends on how it learns (supervised or unsupervised). Training can bias AI systems in their reasoning, just as with humans; e.g. an algorithm may propose one governance method for dark-skinned people and another for white-skinned people because of the profiling data it is fed. After all, the data is generated by human instruments. AI neural weights are derived from happiness (positive) and sadness (negative) factors. Therefore, like politicians, AI is also inclined toward "waasi-peththata hoiya" (වාසි පැත්තට හොය්යා, roughly "cheering for the winning side").
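The commenter's GIGO point can be made concrete with a toy sketch. The groups and decision records below are entirely fabricated; the point is only that a model trained on skewed historical decisions reproduces the skew:

```python
# Toy GIGO illustration: a "model" that learns approval rates from
# historical (group, approved) records will reproduce whatever bias
# those records contained. All data here is fabricated.

from collections import defaultdict

def train_approval_model(records):
    """Learn a per-group approval rate from (group, approved) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

# Biased history: group B was approved far less often than group A.
history = ([("A", True)] * 8 + [("A", False)] * 2
           + [("B", True)] * 3 + [("B", False)] * 7)

model = train_approval_model(history)
print(model)  # {'A': 0.8, 'B': 0.3} -- bias in, bias out
```

Nothing in the training step injects the bias; it is carried in wholesale from the data, which is the commenter's point.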