Dear Editor,
I have read with great interest the editorial by Agostoni et al1 on the pros and cons of the recent exponential increase in published systematic reviews and meta-analyses focusing on interventional cardiology topics. Despite the important insights provided by the authors, we believe the ongoing debate2 on the role of systematic reviews and meta-analyses could benefit from taking into account a few other key issues.
First, meta-analyses and systematic reviews are not synonymous. It should be made clear that a meta-analysis should not be undertaken outside of a careful systematic review (and, indeed, meta-analyses as currently performed are simply a specific subset of systematic reviews). For a pertinent case study, please refer to the TIMI 11B-ESSENCE meta-analysis, which lacked a systematic review design.3
Second, no systematic review should be considered flawed per se just because it includes flawed primary studies, as long as it uses sound methods for evidence search, selection, abstraction and appraisal. One of the most important strengths of systematic reviews and meta-analyses lies, indeed, in their ability to disclose important underlying signals, sometimes stemming precisely from flawed primary studies. For a case study, please refer to the systematic review by Barnes et al,4 which exploited low-quality primary studies on passive smoking to demonstrate the biasing effect of funding and conflicts of interest with tobacco companies.
Third, the stance that systematic reviews and meta-analyses should be avoided when there is not yet a pertinent randomised trial is equivocal. Systematic reviews and meta-analyses cost 10 to 100 times less than clinical trials, can be conducted quite quickly, pose no major ethical issues, and can help in designing a study (especially for sample size computations).5 Thus, it could instead be recommended that a systematic review always be attempted before designing and conducting a costly randomised trial, as already suggested by several funding bodies.
Fourth, systematic reviews and meta-analyses are among the most cited and most read article types, even surpassing randomised clinical trials.6 Thus, editors themselves are very unlikely to discourage submission and publication of these research endeavours, as they offer a unique and quick recipe for increasing circulation, readership and citations.
Fifth and foremost, the diffusion of the skills and tools needed to perform a systematic review and meta-analysis (indeed the pathogenetic mechanism underlying the current «meta-analytic rage») should be seen as an opportunity, rather than a curse. Meta-analyses are no longer a secret weapon of obscure experts or international opinion leaders, but rather an almost everyday tool for the busy clinician with a good working knowledge of evidence-based research methods.
In conclusion, criticising the current abundant production and diffusion of systematic reviews and meta-analyses in the interventional cardiology arena is unlikely to be successful and risks missing the opportunity offered by the concurrent dissemination of expertise in evidence-based medicine methods. Rather, stronger emphasis should be placed on appropriate reporting of these articles by authors7 and appropriate use by health care practitioners,8 as is already recommended for other research study designs.
Reply to the Letter to the Editor by G. Biondi-Zoccai
Dear Editor,
We welcome the letter by Biondi-Zoccai1 commenting on our recent editorial on the epidemic outburst of meta-analyses in the field of interventional cardiology.2
We believe that many of the statements made by Biondi-Zoccai are not at odds with what we discussed in our editorial, but actually complement the issues that we raised. We certainly agree that meta-analyses and systematic reviews should be seen as instruments and not as «goals», that the methods behind these instruments should be rigorously applied, and that the statistical techniques should be appropriate and include testing for heterogeneity and inconsistency, sensitivity analyses, assessment of internal and external validity, and description of bias, including publication bias. It is well understood that meta-analyses rely on the data available at a given point in time, may actually indicate the need for larger trials, and can provide guidance on sample size or design issues.
We disagree, however, with Biondi-Zoccai when he suggests that one can build reliable evidence by pooling weak or equivocal studies. There are many examples of «positive» meta-analyses that were not confirmed by proper, adequately sized and designed studies, or by later meta-analyses on larger patient groups; to mention only a few: those on the value of magnesium for the treatment of acute infarction, on the results of carotid stenting, and on the safety and performance of drug-eluting coronary stents. The main issue is that many investigator-driven clinical studies included in meta-analyses, let alone registries, do not reach the level of quality required for high-level clinical research, owing to inappropriate randomisation processes, lack of blinding, investigator bias, lack of proper data control, in-house analyses, and so on. Including these studies in meta-analyses raises their status but degrades the value of the conclusions of the analysis. As stated by F. Messerli, meta-analyses are much like a bouillabaisse: no matter how much fresh seafood is added, one tainted fish will spoil the pot!3
The goal of our editorial was not to question the mission or the usefulness of organisations such as the one founded by our colleague; quite the opposite. Perhaps these analyses should only be performed by highly recognised groups, independent of the investigators involved in individual studies.
In conclusion, our editorial calls for a more critical appraisal of the meta-analyses and systematic reviews of all kinds that are submitted for publication. There is no question that too many meta-analyses of poor or questionable quality are presently passing the filter of peer review and are eventually published. There are many duplicate analyses, often contradicting each other. As already mentioned, presenting conclusions that will be proven erroneous by larger trials or subsequent analyses will not advance the field. Our first goal should be to build robust evidence through the design, execution, analysis and publication of adequate clinical trials. These are the endeavours that should command, as a priority, all the talent, energy and resources of young, bright investigators such as Dr. Biondi-Zoccai.