Study: Studying the studies, reviewing the reviews, and analyzing the analyses
The purpose of this study was to study the studies about the studies. (Got that?)
There are studies, and there are systematic reviews of studies, in which you apply set standards, cull through the studies, weed out the poor-quality ones, pull together the best ones, and then analyze and write about them. Eventually there are enough reviews that it’s time to review the reviews themselves rather than the individual studies. That’s especially important when a major professional body wants to update its standards, or “Preferred Practice Patterns,” for a disease.
In this study, the authors graded reviews to see whether they are in good enough shape for that process to take place.
Ouch. One third flunked: among 98 reviews of corneal diseases, including dry eye, one in three was deemed unreliable.
Conclusion (my translation): For best results, follow the damn rules when conducting systematic reviews!
Reliability of the Evidence Addressing Treatment of Corneal Diseases: A Summary of Systematic Reviews. Saldanha et al., JAMA Ophthalmol. 2019 May 9.
IMPORTANCE:
Patient care should be informed by clinical practice guidelines, which in turn should be informed by evidence from reliable systematic reviews. The American Academy of Ophthalmology is updating its Preferred Practice Patterns (PPPs) for the management of the following 6 corneal diseases: bacterial keratitis, blepharitis, conjunctivitis, corneal ectasia, corneal edema and opacification, and dry eye syndrome.
OBJECTIVE:
To summarize the reliability of the existing systematic reviews addressing interventions for corneal diseases.
DATA SOURCES:
The Cochrane Eyes and Vision US Satellite database.
STUDY SELECTION:
In this study of published systematic reviews from 1997 to 2017 (median, 2014), the Cochrane Eyes and Vision US Satellite database was searched for systematic reviews evaluating interventions for the management of any corneal disease, combining eyes and vision keywords and controlled vocabulary terms with a validated search filter.
DATA EXTRACTION AND SYNTHESIS:
The study classified systematic reviews as reliable when each of the following 5 criteria was met: the systematic review specified eligibility criteria for inclusion of studies, conducted a comprehensive literature search for studies, assessed risk of bias of the individual included studies, used appropriate methods for quantitative syntheses (meta-analysis) (assessed only if a meta-analysis was performed), and had conclusions that were supported by the results of the systematic review. Reviews were classified as unreliable if at least 1 criterion was not met.
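The classification is an all-or-nothing rule: a review is reliable only if every applicable criterion is met, and the meta-analysis criterion applies only when a meta-analysis was actually performed. A minimal sketch of that rule (the criterion names are paraphrased from the abstract; this is illustrative, not the authors' actual grading instrument):

```python
# The five reliability criteria described in the abstract (paraphrased).
CRITERIA = (
    "specified eligibility criteria",
    "comprehensive literature search",
    "assessed risk of bias of included studies",
    "appropriate meta-analysis methods",  # judged only if a meta-analysis was done
    "conclusions supported by results",
)

def classify(review: dict) -> str:
    """Return 'reliable' only if every applicable criterion is met.

    A criterion marked None is treated as not applicable
    (e.g., no meta-analysis was performed).
    """
    applicable = [met for met in review.values() if met is not None]
    return "reliable" if all(applicable) else "unreliable"

# A review missing even one applicable criterion flunks:
review = {c: True for c in CRITERIA}
review["comprehensive literature search"] = False  # the most common failure (22 of 33)
print(classify(review))  # unreliable
```

Note that the rule gives no partial credit, which is why a single lapse, most often an incomplete literature search, was enough to move a review into the unreliable column.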
MAIN OUTCOMES AND MEASURES:
The proportion of systematic reviews that were reliable and the reasons for unreliability.
RESULTS:
This study identified 98 systematic reviews that addressed interventions for 15 corneal diseases. Thirty-three of 98 systematic reviews (34%) were classified as unreliable. The most frequent reasons for unreliability were that the systematic review did not conduct a comprehensive literature search for studies (22 of 33 [67%]), did not assess risk of bias of the individual included studies (13 of 33 [39%]), and did not use appropriate methods for quantitative syntheses (meta-analysis) (12 of 17 systematic reviews that conducted a quantitative synthesis [71%]). Sixty-five of 98 systematic reviews (66%) were classified as reliable. Forty-two of the 65 reliable systematic reviews (65%) addressed corneal diseases relevant to the 2018 American Academy of Ophthalmology PPPs; 33 of these 42 systematic reviews (79%) are cited in the 2018 PPPs.
CONCLUSIONS AND RELEVANCE:
One in 3 systematic reviews addressing interventions for corneal diseases is unreliable and thus was not used to inform PPP recommendations. Careful adherence by systematic reviewers and journal editors to well-established best practices regarding systematic review conduct and reporting might help make future systematic reviews in eyes and vision more reliable.