Our colleague Nalaka Gunawardene has written a Facebook post where he asks “Robots in politics? Why not?”
This provides a gateway for a substantive discussion on the role of technology in governance.
First, we have to rephrase the question. I understand politics to be the art of contributing in various ways to governance.
In a democracy, a government cannot satisfy the demands of just one stakeholder group, even in the unlikely event that it is 100 percent right. It definitely cannot satisfy the demands of a stakeholder group that is intent on destroying the very existence of another legitimate stakeholder. Its decisions must be based on the public interest, in terms of ensuring the best possible curative services for today’s consumers of the services provided by the taxpayer-funded hospitals across the breadth of the country. Its decisions must take into account the future curative and palliative care needs of a rapidly aging population.
I wrote the above in the context of a protracted struggle by the government doctors’ trade union and university student organizations in an attempt to shut down a private medical university in Sri Lanka.
We should not be disappointed by the actions of politicians, because they are driven by imperatives of gaining and retaining power. But governance is a different matter.
Now we come to robots. What politicians who reach positions of power in a democratic government must do is work out compromise solutions. Can a robot do this? Yes, using algorithms (pre-defined rules) or artificial intelligence (AI), where the rules are developed by the system itself. The other features of robots, such as the ability to move around, speak, and so on, are irrelevant. What matters is the AI in the robot, not the robot itself.
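The distinction between pre-defined rules and learned rules can be sketched in a toy example. Everything below is invented for illustration: the stakeholder groups, the acceptability scores, and the weights are hypothetical, and in a real "AI" system the weights would be fitted from data on past outcomes rather than supplied by hand.

```python
# Toy illustration: two ways a machine could rank compromise options.
# Each option is scored per stakeholder group on a 0-1 acceptability scale.
# All names and numbers are invented for the example.
options = {
    "option_a": {"doctors": 0.9, "students": 0.2, "patients": 0.5},
    "option_b": {"doctors": 0.6, "students": 0.6, "patients": 0.7},
    "option_c": {"doctors": 0.3, "students": 0.9, "patients": 0.4},
}

def rule_based_pick(options):
    """Algorithm in the 'pre-defined rules' sense: a fixed, hand-written
    rule -- here, pick the option whose worst-off stakeholder fares best
    (a maximin rule)."""
    return max(options, key=lambda name: min(options[name].values()))

def learned_pick(options, weights):
    """The 'AI' variant: the per-stakeholder weights are not hand-written
    but would be learned from feedback on past decisions; here they are
    passed in directly to keep the sketch self-contained."""
    return max(
        options,
        key=lambda name: sum(weights[g] * s for g, s in options[name].items()),
    )

print(rule_based_pick(options))
print(learned_pick(options, {"doctors": 0.2, "students": 0.2, "patients": 0.6}))
```

Note what the sketch cannot capture: both functions only rank options that are already on the table. The listening and persuasion that produce acceptable options in the first place happen outside any scoring rule.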
What is unlikely is that AI will be successful in developing compromises that are acceptable to all stakeholders. The emotional work of listening, addressing unstated concerns, and so on is not likely to be the strong suit of AI.
As one of the executives responsible for Apple’s streaming music service said to the Guardian in 2015:
“Algorithms are really great, of course, but they need a bit of a human touch in them, helping form the right sequence. Some algorithms wouldn’t know that Rock Steady could follow Start Me Up, y’know. That’s hard to do,” says Iovine.
“You have to humanise it a bit, because it’s a real art to telling you what song comes next. Algorithms can’t do it alone. They’re very handy, and you can’t do something of this scale without ‘em, but you need a strong human element.”
2 Comments
Nuwan Waidyanatha
GIGO – garbage in, garbage out. AI reliability depends on how it learns (supervised or unsupervised). Influence and training can make AI systems biased in their reasoning, just like humans; e.g., an algorithm may propose one governance method for dark-skinned people and another for white-skinned people because of the profiling data it is fed. After all, the data is generated by human instruments. AI neural weights are derived from positive (happiness) and negative (sadness) factors. Therefore, like politicians, AI is also inclined to “waasi-peththata hoiya” (වාසි පැත්තට හොය්යා – cheering for whichever side brings advantage).
https://psmag.com/news/artificial-intelligence-will-be-as-biased-and-prejudiced-as-its-human-creators
Nalaka Gunawardene
Interesting discussion here:
https://theconversation.com/can-we-replace-politicians-with-robots-56683