On July 22, LIRNEasia, in collaboration with the United States–Sri Lanka Fulbright Commission, hosted a thought-provoking Roundtable Dialogue on the Ethics of Explainable AI at its premises.
The session featured Dr. Robert T. Pennock, University Distinguished Professor at Michigan State University, who is affiliated with Lyman Briggs College, the Department of Philosophy, the Department of Computer Science & Engineering, and the Ecology, Evolution, and Behavior program. With a PhD in the History and Philosophy of Science, Dr. Pennock’s interdisciplinary work focuses on the intersection of science, ethics, and education, particularly on how scientific values and virtues shape responsible research and innovation.

Dr. Robert T. Pennock sharing insights on the ethics of explainable AI during the Roundtable Dialogue
The discussion centered on the theme “Ethics of Explainable AI: Design and Policy Guidance from Philosophy of Science and Vocational Virtue Theory.” Many AI systems, especially large language models, are often described as “black boxes” because their inner workings are not fully understood, even by the people who build them. This raises a variety of concerns. Dr. Pennock brought up the popular Silicon Valley slogan “move fast and break things,” adding the question, “If we move too fast, what might we break?” During his presentation, he proposed two values that must be embedded in AI systems: 1) Explainability – the ability to trace how an AI model arrives at its predictions; and 2) an Escape Key – the ability to easily transfer control of a system from the AI to a human (e.g., a manual override for self-driving cars).
Dr. Pennock shared his work on vocational virtue theory, which holds that professionals should aspire 1) to be excellent in their vocation and 2) to practice their vocation with integrity. In this context, he suggested that an important virtuous responsibility for AI engineers is to be able to explain the cause of failure in an AI model, citing Canada’s Order of the Engineer as a good model to follow. The participants echoed the sentiment that explainability is essential for ensuring accountability, fairness, and public trust in AI. Dr. Pennock pointed to the example of the EU, which has recently proposed legal requirements for explainability. However, he noted that the term itself is still not clearly defined, especially in a legal and regulatory context. Dr. Pennock also argued that several of the existing guardrails for AI systems should be approached with caution and healthy skepticism.
The event brought together a small but engaged group of attendees from the private and non-profit sectors, with expertise in policy, law, data science, and technology, to reflect on this subject.
They considered who explainability is meant for (e.g., developers, policymakers, or end users), what should be explained, and in how much depth. A central concern was the need to protect users from being trapped in opaque automated systems without any recourse. Hence the call for an “escape route”: the ability to question, appeal, or opt out of AI-driven decisions.
There was lengthy debate on whether demanding full explainability of AI systems was unduly tech-phobic. In many use cases, AI-embedded systems may empirically lead to better outcomes than purely human ones; for example, they may exhibit less bias in hiring or in pre-trial release and sentencing decisions. Meanwhile, inherent technical constraints may preclude explainability for the foreseeable future. This raises a complicated ethical dilemma.
Should we forgo a system that may cause less harm simply because we cannot fully explain its reasoning? How is this different from relying on human agents? After all, given our limited knowledge of neurology, isn’t the human brain also a black box? In the legal context, attendees noted that the difference between an AI system and a human judge is that the latter must always provide stated legal reasoning. Even if we cannot map the thoughts and hidden motives inside a judge’s brain, what matters is the reasoning presented to the courtroom. Those affected by decisions thus have clearly recorded reasoning that they can scrutinize as required. It is not clear whether these standards can be transferred to an AI-embedded system.
The common sentiment was that the advancement of AI is valuable but needs to be paired with virtuous practices, responsible governance, and continuous assessment of its social impacts. There was a consensus on the value of embedding an escape key within AI systems or keeping a human in the loop.