Eight common problems with scientific literature reviews – and how to solve them

Researchers regularly review the literature generated by others in their field. It is an integral part of daily research: finding relevant studies, reading and digesting the most important findings, summarizing the various articles and drawing conclusions about the evidence base as a whole.

However, there is a significant difference between short, narrative approaches to summarizing a selection of studies and attempting to summarize an evidence base reliably and comprehensively to support policy-making and practice.

So-called "evidence-informed decision-making" relies on rigorously systematic approaches to synthesizing the evidence. Systematic review has become the gold standard of evidence synthesis. It is well established in the pipeline from research to practice in several fields, including health, the environment and social policy. Rigorous systematic reviews are vital to decision-making because they help provide the strongest evidence that a policy is likely to work (or not). They also help avoid expensive or dangerous mistakes in policy choices.

But systematic review has not yet replaced the traditional methods of literature review. These traditional reviews may be prone to bias and may ultimately lead to wrong conclusions. This is of particular concern when reviews discuss important policy and practice issues.

The good news is that the limitations of traditional approaches to literature review can be remedied relatively easily with a few key procedures. Some of these are not overly demanding in terms of skill, time or resources. This is particularly important in African contexts, where resource constraints are a daily reality, but should not jeopardize the continent's need for rigorous, systematic and transparent evidence to inform policy.

In our recent paper in Nature Ecology and Evolution, we highlighted eight common problems with traditional literature review methods. We illustrated each problem with examples from the field of environmental management and ecology. Finally, we set out practical solutions.


These are the eight problems we identified in our paper.

First, traditional literature reviews may lack relevance. This is because limited stakeholder engagement can produce a review that is of little use to decision-makers.

Second, reviews that do not publish their methods a priori (meaning they are published before the review work begins) may suffer from mission creep. In our paper we give the example of a 2019 review which initially stated that it would examine all population trends among insects. Instead, it ended up focusing only on studies that showed declines in insect populations. This could have been prevented by publishing, and adhering to, the methods set out in a protocol.

Third, a lack of transparency and repeatability in the review methods may mean that the review cannot be repeated. Repeatability is a core principle of the scientific method.

Selection bias is another common problem. Here, the studies included in a literature review are not representative of the evidence base. A lack of comprehensiveness, stemming from an inappropriate search method, can also mean reviews end up with the wrong evidence for the question at hand.

Traditional reviews may also exclude gray literature. This is defined as any document

produced at all levels of government, academia, business and industry in print and electronic formats, but not controlled by commercial publishers, i.e. where publishing is not the primary activity of the producing body.

This includes organizational reports and unpublished dissertations or other studies. Traditional reviews also often fail to test for evidence of publication bias; both of these issues can lead to wrong or misleading conclusions. Another common mistake is to treat all evidence as equally valid. In reality, some research studies are more valid than others, and this must be accounted for in the synthesis.

Inappropriate synthesis is another common issue. It involves methods such as vote counting, which refers to tallying studies based on the statistical significance of their results. Finally, a lack of consistency and error checking (as happens when a reviewer works alone) can introduce errors and biases, since a single reviewer makes decisions without consensus.

However, all of these common problems can be solved. Here’s how.


Stakeholders can be identified, mapped and contacted for feedback and inclusion without the need for extensive budgets. Best practice guidelines for this process already exist.

Researchers can carefully design and publish an a priori protocol that details planned methods of search, screening, data extraction, critical assessment, and synthesis. Organizations such as the Collaboration for Environmental Evidence have existing protocols from which people can draw.

Researchers must also be explicit and make use of high-quality guidance and standards for review conduct and reporting. Several such standards already exist.

Another useful approach is to carefully design a search strategy with an information specialist; to test the search strategy against a benchmark list of relevant studies; and to use multiple bibliographic databases, languages and sources of gray literature. Researchers should then publish their search methods in an a priori protocol for peer review.

Researchers should consider carefully planning and testing a critical assessment tool before embarking on the process in full; robust critical assessment tools already exist. Critical assessment is the carefully planned evaluation of all possible risks of bias and possible confounding in a research study. Researchers must also choose their synthesis method carefully, based on the data being analyzed. Vote counting should never be used instead of meta-analysis, and formal methods for narrative synthesis should be used to summarize and describe the evidence base.

Finally, at least two reviewers should examine a subset of the evidence base to check consistency and shared understanding of the methods before proceeding. Ideally, reviewers should make all decisions separately and then consolidate them.


Collaboration is crucial to addressing the issues with traditional review processes. Authors need to conduct reviews more carefully. Editors and peer reviewers need to be stricter. The community of methodologists needs to better support the broader research community.

By working together, the academic and research community can build and maintain a strong system of rigorous, evidence-informed decision-making in conservation and environmental management – and ultimately in other disciplines.

Neal Robert Haddaway, Research Fellow, African Center for Evidence, University of Johannesburg
