The Affordability Index: spurious results that can derail a worthy goal?


Posted on December 23, 2013

The Alliance for Affordable Internet (A4AI) launched its Affordability Report to showcase its newly developed Affordability Index. LIRNEasia’s CEO Helani Galpaya was invited by A4AI to participate in a panel at the launch of the report at the ICTD conference that took place in Cape Town, South Africa, in December 2013.

Composite indices that compare and rank countries have value because they get the attention of policy makers and regulators – those at the bottom of the table are indignant (and are hopefully moved to improve matters on the ground so they do better next year) and those at the top use it for publicity. In both cases, such indices give research organizations and others a good way to start a meaningful dialog with a government or regulatory agency, and to identify what actions are needed for the country to perform better (and rise in the rankings) in the future.

The value of these indices increases when they cover a large number of countries. But precisely because of such wide coverage, they are often imperfect – the data available across such a range of countries are rarely of even quality. The Affordability Index suffers from a serious data quality problem, primarily because, like everyone else, its authors fall back on poor-quality ITU data. But this is a minor issue compared to other problems we see in the design of the index. Below are some of the points highlighted by Helani Galpaya during the A4AI panel in Cape Town:

  1. (Unavoidable?) poor-quality objective data: The first type of data the Affordability Index uses is ‘objective’ data, such as ‘Broadband subscribers per 100 population’ and ‘Cost of fixed broadband per capita’, as reported by the ITU. We have commented in the past about the problems with the timeliness and accuracy of ITU data, but let’s accept that in the absence of an alternative comprehensive data set covering a majority of the world’s economies, ITU data are the best A4AI can get (although, as RIA has shown for African data, it is not inevitable that everyone falls back on ITU data; if A4AI is serious, it will put some money behind improving data quality). As a colleague who has spent his entire career working on indicators said to us:

    I have lost track of the number of alliances and initiatives saying they are going to benchmark and do this or that for developing countries. Sadly, I have not seen any of them collect their own data or even commission someone to do it, but rather rebrand ITU data and estimates. . . . I prefer the regional efforts (so far I have seen them for Africa, South America and the one you did for Asia) where academics take a low-user basket and populate it with data they collect. How much could be done if the members of such alliances actually funded some of that collection!

    But even then, the presentation of the report leaves much to be desired – for example, Annex C, which lists the indicators used, contains curious entries such as ‘Cluster of ITU indicators’, taken from the ‘ITU Eye’. Which ITU indicators? There were over 100 the last time we looked.

  2. Possible biases in the responses to the survey (i.e. overly subjective ‘subjective data’): The second type of data the Affordability Index uses comes from answers given to a previously conducted expert opinion survey that led to the Web Index. The Affordability Index picks and chooses some questions from the much longer Web Index questionnaire. For example, Q54, ‘To what extent does the government ICT regulator perform its functions according to public and transparent rules and precedents’, is one of the questions asked in the survey. The problem is that, in most cases, it appears there is just one respondent (at most two) per country. Depending on who you ask (i.e. the interests, incentives and experiences of the respondents), the answer to a question such as the one above could vary significantly. We know, because of our experience with the similar (though much shorter) opinion survey we use for the Telecom Policy and Regulatory Environment (TRE) tool we developed. To guard against such biases, we required a minimum of 45 respondents in each country, spread across 3 groups: service providers in the ICT sector, such as operators, who are directly impacted by the regulator’s actions; those with a broad interest in observing the regulatory environment, such as researchers, journalists and industry analysts; and those representing the public interest, such as consumer groups. We also required a minimum of 15 respondents per group, and allocated equal weight to the opinions of each group (a minimal sketch of this aggregation appears after this list). Without such measures, we have seen first-hand how varied (or biased) the responses can be. We realize this is a resource-intensive activity, but broader solicitation of survey responses than is done today for the Index is certainly warranted.
  3. Lack of clarity about the ‘experts’ reviewing regional responses: During the panel discussion in Cape Town, one of the authors mentioned ‘strict quality control procedures’, whereby regional experts reviewed all country-level responses to make sure they made sense. Presumably these experts then made corrections (i.e. changed the country experts’ responses, which would have changed the affected countries’ rankings). In theory, this is one (though imperfect) way to lessen the bias problem discussed above. But the report is unclear about how much the experts changed. For credibility, the before and after results should be published in an annex.
  4. Discrepancy between what’s on paper and what happens in practice: Another quality control method employed by the Index is to insist that respondents to the survey upload evidence – e.g. documents, policy statements and such – that justifies their responses to the survey questions. This is indeed an excellent practice. However, the problem is that regulation and policy making (at least in less developed economies) do not always follow what is written on paper. There is significant discretion in the system, allowing policy makers, regulators and even the private sector to act outside of what is written down. By insisting on a clinical set of ‘documents’, the survey is likely to miss the complex realities of the regulatory and policy environments faced by sector actors. In LIRNEasia’s TRE Survey, we therefore insisted that respondents evaluate the regulatory environment based on ‘what really happens’, irrespective of what is on paper. The experience of each respondent (and their response) can therefore vary significantly – which is why we went to so much trouble to ensure a minimum number of respondents per sector, as mentioned earlier.
  5. Combining outcomes with the factors contributing to those outcomes – is this an index of ‘Affordability’?: The index sounds as if it is a way to rank countries on some measure of broadband affordability. Yet instead of taking only affordability measures (or affordability, infrastructure and access measures, as is claimed), it combines them with data on actions that influence affordability, such as the availability of spectrum and how that spectrum is allocated. In other words, it combines the sector’s desired outcome (i.e. affordability) with factors that may or may not influence that outcome (i.e. availability of spectrum, quality of regulatory decision making and so on). The result is not really a picture of affordability, but of something else (what, we are not sure). We would therefore recommend that the Index separate these out (a sketch of one such separation appears after this list).
  6. Conclusions that do not seem to stem from the Index or the data behind it: We already mentioned that one of the biggest problems is that the Index conflates affordability (an output/outcome in the telecom sector) with factors that may or may not contribute to that outcome. But the problems go further. The report appears to imply that some of the ‘best practices’ or conclusions it recommends are derived from the data itself. For example, it concludes that ‘competition is not a silver bullet’. Yes, it is not a sufficient condition; we agree. But how is that conclusion derived from the data, as the report implies? Helani Galpaya asked whether at least simple regressions had been run between competition measures (e.g. HHIs, number of players or other measures of a competitive broadband market) and the level of affordability (or the ranking of countries within the Index), and the answer given by the authors was ‘no’ (a sketch of such a check appears after this list). This again points to the meaninglessness of attempting to combine outcomes (affordability) with actions that may or may not cause affordability (derived from the survey questionnaire). Other best practices/conclusions in the report are clearly drawn from in-depth case studies. This makes sense. But why then conflate ‘causes’ and outcomes in the first place within the Index?
  7. Imposition of a mental model of what regulatory/policy actions are necessary for the outcomes: Combining possible ‘causes’ of affordability with indicators of affordability itself is one problem. The selection of those ‘causes’ is another. It appears that the designers of the Index had a pre-conceived mental model of what policy and regulatory actions contribute to the affordability of broadband, and then asked experts to evaluate how well each country carries out such actions. For example, it assumes that a Universal Service Fund (USF) leads (in part) to affordability, and proceeds to ask 3 questions (Q71, Q72 and Q73) on the design of the USF. But why the bias towards a fund? Why not ask broader questions about the efficacy of a country’s Universal Service POLICY, which may include a USF but does not have to – e.g. Sri Lanka is doing well on affordability despite not spending USF money? The danger in this type of bias is that some governments may view a USF as an essential recommendation and create one, when there is evidence to show that such funds are often incapable of dispensing money effectively (à la India until recently). The problem of mental models is illustrated by Bangladesh, which has the lowest mobile prices in the world and almost no ‘good’ regulatory practices to speak of. There is something to be learned from Bangladesh if one approaches the question with an open mind.
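
To make the equal-weight aggregation described in point 2 concrete, here is a minimal sketch in Python. The group names, the scores and the `tre_score` function are our illustrative inventions for this post; the actual TRE instrument and its scoring are documented separately.

```python
from statistics import mean

# Likert-style responses (1-5) from the three respondent groups. Each list
# here has exactly the 15-respondent minimum described above (all invented).
responses = {
    "service_providers": [3, 4, 2, 3, 3, 4, 2, 3, 3, 4, 3, 2, 3, 4, 3],
    "broad_observers":   [2, 3, 3, 2, 4, 3, 3, 2, 3, 3, 4, 2, 3, 3, 2],
    "public_interest":   [2, 2, 3, 3, 2, 3, 2, 4, 3, 2, 3, 3, 2, 2, 3],
}

MIN_PER_GROUP = 15

def tre_score(groups):
    """Equal-weight mean of group means, enforcing the respondent minimum."""
    for name, scores in groups.items():
        if len(scores) < MIN_PER_GROUP:
            raise ValueError(f"group '{name}' needs {MIN_PER_GROUP}+ respondents")
    # Averaging the group means (rather than pooling all respondents) gives
    # each constituency equal weight, so no single group dominates the score.
    return mean(mean(scores) for scores in groups.values())

print(f"TRE score for this dimension: {tre_score(responses):.2f}")
```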
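
On point 5, one way to implement the separation we recommend is to score outcome indicators and driver indicators as two sub-indices instead of blending them into a single ‘affordability’ number. The indicator names and values below are invented for illustration.

```python
from statistics import mean

# Hypothetical normalised indicators (0-1, higher = better) for one country.
outcomes = {"price_as_share_of_gni": 0.62, "subscriptions_per_100": 0.48}  # what affordability IS
drivers  = {"spectrum_released": 0.70, "regulatory_transparency": 0.55}    # what may CAUSE it

outcome_index = mean(outcomes.values())
driver_index  = mean(drivers.values())

# Reporting the two sub-indices separately keeps each ranking interpretable,
# and lets the driver-outcome relationship be tested empirically (see the
# regression sketch below) rather than being baked into a single score.
print(f"outcome sub-index: {outcome_index:.2f}, driver sub-index: {driver_index:.2f}")
```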
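
And on point 6, the kind of simple check Helani Galpaya asked about could be as basic as the following sketch. The HHI computation is the standard one; every number below is a made-up placeholder, not a value from the report or any real market.

```python
import numpy as np

def hhi(market_shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in market_shares)

# Example: a market with shares of 40%, 35% and 25% across three operators.
print(f"example HHI: {hhi([40, 35, 25]):.0f}")  # 3450 -> fairly concentrated

# Hypothetical per-country pairs: (market HHI, broadband price as % of GNI per capita).
hhi_values    = np.array([2600.0, 3400.0, 1900.0, 5200.0, 2300.0, 4100.0])
price_pct_gni = np.array([4.1, 6.3, 2.8, 9.5, 3.9, 7.2])

# Ordinary least squares fit: price = slope * HHI + intercept.
slope, intercept = np.polyfit(hhi_values, price_pct_gni, deg=1)
r = np.corrcoef(hhi_values, price_pct_gni)[0, 1]
print(f"slope={slope:.5f}, intercept={intercept:.2f}, r={r:.2f}")
# A strong positive association would undercut 'competition is not a silver
# bullet'; a weak one would support it. Either way, the claim should rest on
# the data rather than on assertion, which is the point made above.
```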

LIRNEasia’s raison d’être is bringing evidence into the policy process. Our presence on the Cape Town panel is hopefully the first of many engagements with A4AI through which we contribute towards improving the credibility of the Index, so that it becomes a useful tool in spurring action towards affordable Internet for all.
