Reflection on systematic reviews after ICTD 2015


Posted on July 21, 2015

It’s been almost two months since ICTD. For some reason, this summary, which had been written immediately after we concluded our session on SRs, was never posted at the time.

There is a degree of discomfort among some academics about systematic reviews (SRs). Researchers worry about the limited resources available from funders being channeled to SRs and quantitative research, at the expense of alternative methods.

We believe that the debate will be more productive if people fully understand the subject under discussion. With that in mind, LIRNEasia organized this session to provide an evidentiary basis for an informed debate. Rather than battling phantoms, people can talk about the actual practice of research.

Following brief introductions from LIRNEasia and from IDRC and DFID, the two agencies that had funded the SRs, the four SRs conducted by LIRNEasia were presented by their respective team leaders. At the conclusion, Dr Alison Gillwald of Research ICT Africa presented a critique, followed by several rounds of questions from an audience of around 50 who had made it to a difficult-to-find room.

In each of the reviews, thousands of studies in the broadly defined area had been systematically located and then narrowed down to a handful using a number of filters. The end objective is to construct a single synthetic study, though this does not happen in all cases because of the heterogeneity of even the filtered studies. Understandably, many concerns were expressed about the filtering process, precisely because the filtering criteria are explicitly stated. For example, after seeing that one of the filters was that a study be in English, a question was raised about its fairness. It was pointed out that in much other research this is an implicit criterion; the difference with SRs is that the criteria are explicitly stated. Anyone who wants to undertake a new study that fills a gap can know exactly what has and has not been done. SRs thus meet the basic criterion of science, replicability.

The SR team leaders acknowledged that quantitative studies were privileged, principally because it was difficult to assess the quality of qualitative studies. However, Dr Sujata Gamage, who led the SR on ICTs in the classroom, emphasized the value of a meta-analysis of a subset of the qualitative studies being conducted in parallel. All the team leaders emphasized the need to develop a theory of change or a causal mechanism when looking at impacts, a clear move beyond simple filtering and aggregation.

Most of the discussion focused on the research method, as the organizers intended. However, the session also triggered several discussions on new research projects. It was noteworthy that some participants from non-academic contexts came away with the impression that the evidence on impacts was not strong, when in fact the evidence was very strong, especially with regard to mobile financial services and access to mobile networks in rural areas. This poses a challenge for the effective communication of SR findings to non-academic audiences acculturated to the unqualified claims characteristic of management consultants.
