Lessons from LIRNEasia’s Telecom Policy & Regulatory Environment work for other fields


Posted on April 8, 2016

I spent the past two days immersed in a new subject: elections management. We have been engaged in one of the hardest public policy puzzles, that of improving Sri Lanka’s electoral system, since early 2015. As a result of that engagement, I was invited to participate in a vulnerability assessment of Sri Lanka’s election management system.

Seventeen aspects of the election management system, ranging from the way counting was done to the legislation governing elections, were discussed in detail. External experts had interviewed various stakeholders (interlocutors, they were called) and prepared a report. For each aspect, a vulnerability score (10/10 being the highest, or worst) and an impact score had been assigned. The higher the vulnerability, the higher the urgency of remedial action. The purpose of the two-day meeting was to improve the text, remove errors, and possibly change the scores.
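To make that prioritisation rule concrete, here is a toy sketch that sorts aspects so that higher vulnerability (and, on ties, higher impact) comes first. The aspect names and scores are invented placeholders, not the assessment's actual data.

```python
# Toy illustration of the prioritisation rule described above:
# higher vulnerability means higher urgency. Aspect names and scores
# are invented placeholders, not the assessment's actual data.

aspects = [
    # (aspect, vulnerability score out of 10, impact score out of 10)
    ("counting procedures", 7, 8),
    ("election legislation", 5, 9),
    ("voter registration", 4, 6),
]

# Sort by vulnerability, descending; break ties on impact.
for name, vulnerability, impact in sorted(
    aspects, key=lambda a: (a[1], a[2]), reverse=True
):
    print(f"{name}: vulnerability={vulnerability}/10, impact={impact}/10")
```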

This is not different in essence from what we tried to do with the Telecom Policy and Regulatory Environment (TRE) assessment, but it is quite different in the details. In the TRE, the score was the result of a weighted average of the opinions of a relatively large and heterogeneous group of informed stakeholders. Here, it was the opinion of the author(s) of a particular chapter, based on who was interviewed. It appeared, for example, that many civil society activists from the Colombo Bubble had been interviewed, but few political-party representatives.
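For readers curious how such a composite might be computed, below is a minimal sketch of a weighted-average score of the TRE kind. The stakeholder categories, weights, and the 1–5 rating scale are illustrative assumptions, not LIRNEasia's actual methodology.

```python
# Minimal sketch of a TRE-style weighted-average score. The stakeholder
# categories, weights, and 1-5 rating scale are illustrative
# assumptions, not LIRNEasia's actual methodology.

from statistics import mean

# Hypothetical ratings for one aspect, grouped by stakeholder category.
responses = {
    "operators":     [2, 3, 2],
    "academics":     [4, 3],
    "media":         [3, 3, 4],
    "civil_society": [2, 4],
}

# Hypothetical weights for how much each category's view counts.
weights = {
    "operators": 0.4,
    "academics": 0.2,
    "media": 0.2,
    "civil_society": 0.2,
}

def weighted_score(responses, weights):
    """Average each category's ratings, then combine by category weight."""
    total = sum(weights[c] for c in responses)
    return sum(mean(r) * weights[c] for c, r in responses.items()) / total

print(f"Aspect score: {weighted_score(responses, weights):.2f}")
```

Whatever the actual weights, averaging within categories before combining them means that no single group, however heavily sampled, dominates the composite score; that is precisely the over-representation problem noted above with the interview-based approach.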

In the TRE, we placed the least value on inter-country comparisons, but the media latched on to them. Because we kept going back to more or less the same panel of respondents, we found value in comparing scores over time. Of course, that could not be done the first time we ran the assessment. But the greatest value was in comparing scores for different aspects within the same assessment. This was another area where I felt the election management system vulnerability assessment could be improved.

Despite these shortcomings, I felt that the approach of working up a draft report that included subjective scores and then running a workshop with enough time to extract the reactions of stakeholders was productive. The real test will be the revised text and scores.

