Reflections on a panel discussion concerning AI and information disorder
Across the globe, digital media platforms have accelerated the spread of misinformation, whether shared intentionally or unintentionally. This has contributed to increased polarization, hateful rhetoric, and the deterioration of democratic systems. The development of AI systems may exacerbate these problems, while also creating opportunities to combat them.
Seasoned voices from the South Asian and African disinformation landscapes came together in Colombo on July 3, 2025, for a panel discussion organized by LIRNEasia, titled “Use of AI to Counter the Information Disorder”. They were united by a single question: can AI help solve the very problems it creates?
The session, moderated by Merl Chandana (Research Manager and Team Lead of Data, Algorithms, and Policy), focused on the intersection of AI and information integrity in the context of misinformation, especially during elections. The panelists were Aldu Cornelissen, PhD (Co-founder, Murmur Intelligence), Darshatha Gamage (Co-Founder, IMPACT VOICES), Dhara Mungra (Co-founder, SimPPL), and Uthayasanker Thayasivam, PhD (Senior Lecturer, Department of Computer Science & Engineering, University of Moratuwa).
AI is a double-edged sword
While generative AI makes disinformation realistic and scalable, it also opens new ways to identify and counter it. Whether through deepfake videos, synthetic voice messages, altered images, or AI-powered chatbots, public perception is now easier to manipulate than ever. In recent elections, AI made it possible to portray political figures as younger or healthier than they are, to influence voters with fabricated messages, and to reinforce extreme beliefs through bots.
However, as many speakers pointed out, traditional methods remained the most damaging in many places. During the 2024 election cycle in South Africa, misinformation spread predominantly through offline channels such as phone calls, printed leaflets, and rumor networks. Aldu Cornelissen put it succinctly: “everyone freaked out about AI, but it was the old-school tactics that moved the needle.”
Yet the panel agreed that this offers no long-term solace. As people’s ability to leverage AI tools grows, already sophisticated misinformation campaigns in fragile democracies are bound to escalate. How deeply AI’s effects take root in democratic societies will largely depend on how adeptly those societies manage its risks. “Truth has no place here. What matters is what people believe,” as Aldu Cornelissen grimly put it.
Using AI to fight back
The speakers agreed that AI can serve as a powerful tool against misinformation if designed and implemented responsibly. AI is already actively used to identify disinformation campaigns, moderate content, and help fact-checkers sift through and organize the dizzying volume of digital content.
For example, tools built to safeguard “information integrity” can track who is talking where, and whether the same actors are pushing the same narrative across different geographies. In Bangladesh, AI tools were used to investigate a coordinated political attack on social media, surfacing the narratives that gained traction and the networks that amplified them, and helping to explain why the attack succeeded.
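To make this concrete, below is a minimal sketch of one common approach to spotting coordinated amplification: flagging near-identical posts published by different accounts, especially across regions. The toy data, the similarity threshold, and the choice of Python with scikit-learn are our own illustrative assumptions, not a description of the tooling the panelists used.

```python
# Minimal sketch (illustrative only): flag near-duplicate posts from different
# accounts as a possible signal of coordinated amplification.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (account, region, text) records standing in for collected posts.
posts = [
    ("account_a", "region_1", "Candidate X secretly signed the deal last night"),
    ("account_b", "region_2", "Candidate X secretly signed the deal last night!!"),
    ("account_c", "region_1", "Weather update: heavy rain expected tomorrow"),
]

texts = [text for _, _, text in posts]
similarity = cosine_similarity(TfidfVectorizer().fit_transform(texts))

# Pairs of distinct accounts posting near-identical text warrant a closer look,
# particularly when they span regions. The 0.9 threshold is arbitrary.
for i, j in combinations(range(len(posts)), 2):
    if similarity[i, j] > 0.9 and posts[i][0] != posts[j][0]:
        print(f"possible coordination: {posts[i][0]} ({posts[i][1]}) "
              f"and {posts[j][0]} ({posts[j][1]})")
```

Real investigations layer many more signals on top of text similarity, such as posting times, follower graphs, and account-creation patterns, but the core idea of clustering near-duplicate narratives is the same.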
Attendees were concerned that smaller civil society organizations face a myriad of challenges: limited resources, a lack of technical know-how in advanced AI, and the rapid pace of change. Governance questions also kept cropping up: who determines whether a civil society tool is ethical? And what needs to be done so that civil society is not technologically outmatched by those who set out to spread disinformation?
Language and data: the missing pieces
One particular concern for the Global South, and for Sri Lanka specifically, is the lack of multilingual representation and the scarcity of suitable data. Low-resource languages such as Tamil and Sinhala largely fall outside global monitoring systems, leaving a void in which locally circulating misinformation spreads faster than it can be detected or checked.
Uthayasanker Thayasivam, who is working on Tamil NLP (Natural Language Processing), described the difficulty of building and benchmarking AI tools without sufficient training data. While the technical capabilities exist, there simply isn’t enough linguistic or contextual data to train accurate models. The lack of infrastructure and trust to support secure data-sharing further compounds the issue. While some efforts have begun to form research alliances for ethical data collaboration, these remain early-stage and difficult to scale.
As Dhara Mungra summed it up: “If you have data, you can build anything. If you don’t, you’re just guessing.”
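One common workaround when labeled data is scarce is cross-lingual transfer: a multilingual model trained mostly on high-resource languages is applied to Tamil or Sinhala text with no task-specific training at all. The sketch below, in Python, assumes the Hugging Face transformers library and a publicly available multilingual NLI checkpoint; it illustrates the general technique rather than any tool discussed on the panel, and zero-shot labels remain far less reliable than a model trained on real local data, which is exactly the gap the panel described.

```python
# Minimal sketch (illustrative only): zero-shot labeling via cross-lingual
# transfer, a common stopgap when a language lacks task-specific training data.
from transformers import pipeline

# XLM-RoBERTa fine-tuned on XNLI was pretrained on roughly 100 languages,
# including Tamil and Sinhala, so it can score candidate labels for text in
# those languages without any task-specific training.
classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",
)

# In practice the input would be a Tamil or Sinhala social media post; an
# English placeholder is used here for readability.
post = "Candidate X secretly signed the deal last night"
labels = ["political misinformation", "health misinformation", "ordinary news"]

result = classifier(post, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))
```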
Moving from panic to progress
AI alone cannot fix the information crisis. Governance, regulation, public education, and cross-sector partnerships all have essential roles to play. The panel made it clear that without supportive policy frameworks and well-resourced civil society actors, even the best tools may fail to make an impact.
Still, there was optimism. Researchers and practitioners alike pointed to promising collaborations, open-source initiatives, and locally grounded innovation that are slowly shifting the landscape. The panel identified several key priorities for the future:
- Investing in local language NLP
- Supporting civil society’s access to ethical AI tools
- Building cross-sector coalitions for data sharing and research
- Developing robust governance structures to guide responsible AI use
Final thoughts
This panel was a necessary reminder that AI’s role in the information ecosystem is complex. It is neither a villain nor a savior, but a tool whose impact depends entirely on how it is built, deployed, and governed.
For countries like Sri Lanka, the challenge is steep: multilingual populations, limited resources, and uneven platform accountability all create unique vulnerabilities. But there is also a real opportunity to shape AI in ways that are grounded in local context, responsive to public needs, and aligned with democratic values.
If the past few years have taught us anything, it’s that information disorder is not just about facts and fakes. It’s about power: who wields the narrative, who is heard, and how “truth” even begins to be determined in a digital universe.
The question isn’t simply “Can AI clean up the mess it helped create?” It’s “Whom do we trust to build the systems that shape the reality we experience?”
For those interested, the full panel discussion can be viewed below.
By Vishmila Fernando (Junior Researcher, LIRNEasia) and Ranushi Ediritillekege (Data Science Researcher, LIRNEasia)
Vishmila and Ranushi are members of the Data, Algorithms and Policy (DAP) team at LIRNEasia – which participates in the policy dialogue around our algorithmically inclined society by conducting research and developing data science solutions.
