The first and last meta-analysis I did was published in the Journal of Invasive Cardiology in 1997. At the request of the pharmaceutical industry, I performed a meta-analysis on trapidil, at that time considered an effective PDGF inhibitor in the battle against restenosis, based on three Japanese trials, all of which were positive, with a p-value of 0.51. The results of this meta-analysis, demonstrating trapidil's efficacy, laid the foundation for the randomised TRAPIST study, in which 312 patients were randomised to either placebo or trapidil at 21 centres in nine countries. However, the results of the TRAPIST study were negative: administration of trapidil 600 mg daily for six months did not reduce in-stent hyperplasia2. Disappointed by the outcome of the randomised trial, I revisited my meta-analysis and concluded that its positive result was the consequence of removing protocol violators from the analysis. The experience showed me that a flawed meta-analysis can trigger a seriously incorrect assessment of a treatment in clinical practice.
Over the last 15 years, meta-analyses have become very common and are in many instances performed by junior physicians worldwide. The reason for this success is that the systematic procedure of a meta-analysis of study populations is by now well standardised and codified, naturally with the assistance of dedicated software. At its simplest, a single individual can take a screenshot of the results presented at a late-breaking session, rush to their laptop and immediately generate a meta-analysis incorporating the latest results of a trial. Let us be candid: a fast food analogy would not be inappropriate here. Irrespective of the Bayesian or frequentist viewpoint, the whole issue is the interpretation of the result. Such an analysis operates at the level of the overall study populations and not at the patient level; in other words, the results can at best be presented as a forest plot, without the Kaplan-Meier curves that would show the evolution of the outcome over time, as the sketch below makes concrete.
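To appreciate just how little effort this fast food requires, here is a minimal sketch, in Python with numpy only, of the study-level inverse-variance pooling (with a DerSimonian-Laird random-effects adjustment) that dedicated meta-analysis software automates. The trial counts are invented purely for illustration and bear no relation to the trapidil data.

```python
# A minimal sketch of standard study-level pooling: inverse-variance
# weighting of per-trial log odds ratios, with a DerSimonian-Laird
# random-effects adjustment. All numbers below are hypothetical.
import numpy as np

# Each trial: (events_treatment, n_treatment, events_control, n_control)
trials = {
    "Trial A": (12, 50, 20, 50),
    "Trial B": (15, 60, 25, 60),
    "Trial C": (10, 45, 18, 45),
}

log_or, var = [], []
for et, nt, ec, nc in trials.values():
    a, b = et, nt - et          # treatment arm: events / non-events
    c, d = ec, nc - ec          # control arm:   events / non-events
    log_or.append(np.log((a * d) / (b * c)))   # log odds ratio
    var.append(1 / a + 1 / b + 1 / c + 1 / d)  # its approximate variance

log_or, var = np.array(log_or), np.array(var)

# Fixed-effect weights, Cochran's Q, then DerSimonian-Laird tau^2
w = 1 / var
pooled_fe = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - pooled_fe) ** 2)
tau2 = max(0.0, (q - (len(w) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate and 95% confidence interval
w_re = 1 / (var + tau2)
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se

print(f"Pooled OR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```

Note what the output cannot contain: the pooled estimate is built entirely from summary counts, so neither a Kaplan-Meier curve nor any other patient-level insight can ever be recovered from it.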
So, perhaps the time has come for journal editors to establish together the rules of engagement for meta-analyses worldwide. Ideally, the principal investigator (PI) of each study should be involved in the report of a meta-analysis based on patient-level data, which implies a tremendous collaborative effort to create a database with common parameters and outcome metrics, as sketched below. Naturally, this process would not happen overnight. First, the collected data would have to be fully transparent to the group of PIs contributing data from their own trials, which would in itself automatically improve the quality of the data within the database. Second, knowing that a good trial can take 5-10 years of work, such material cannot be digested as fast food by juniors taking snapshots and running to their laptops the moment the first results of an RCT are presented.
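As a hedged illustration of what such common parameters might look like in practice, the following sketch (using pandas, with entirely hypothetical column names and trial exports) shows each contributing trial being mapped onto one shared patient-level schema before pooling; the deliberate failure on missing fields mirrors the transparency requirement described above.

```python
# A sketch of patient-level harmonisation: each PI maps their trial's
# export onto one shared schema before the combined analysis.
# Schema and trial data are hypothetical, for illustration only.
import pandas as pd

COMMON_SCHEMA = ["trial_id", "patient_id", "treatment",
                 "event", "time_to_event_days"]

def harmonise(raw: pd.DataFrame, trial_id: str,
              column_map: dict) -> pd.DataFrame:
    """Rename one trial's columns onto the shared schema, tag its origin."""
    df = raw.rename(columns=column_map)
    df["trial_id"] = trial_id
    missing = set(COMMON_SCHEMA) - set(df.columns)
    if missing:  # transparency: fail loudly rather than pool incomplete data
        raise ValueError(f"{trial_id} lacks required fields: {missing}")
    return df[COMMON_SCHEMA]

# Two invented trial exports with differing local conventions
trial_a = pd.DataFrame({"pid": [1, 2], "arm": ["drug", "placebo"],
                        "restenosis": [0, 1], "fu_days": [180, 150]})
trial_b = pd.DataFrame({"subject": [1, 2], "group": ["drug", "placebo"],
                        "outcome": [1, 0], "days": [120, 180]})

pooled = pd.concat([
    harmonise(trial_a, "A", {"pid": "patient_id", "arm": "treatment",
                             "restenosis": "event",
                             "fu_days": "time_to_event_days"}),
    harmonise(trial_b, "B", {"subject": "patient_id", "group": "treatment",
                             "outcome": "event",
                             "days": "time_to_event_days"}),
])
print(pooled)
```

Unlike the summary-level exercise above, a table of this shape would allow time-to-event analyses, and Kaplan-Meier curves, across the pooled trials.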
This basic concept should be the topic of an academic debate, and adopting it would put an end to the unbridled explosion of meta-analyses performed by people not involved in the trials themselves. The family of journal editors is growing today on both sides of the Atlantic as well as in the Far East; it should be their responsibility to create a codex defining how, and by whom, a meta-analysis should be performed. Some of us have even suggested a minimum period of reflection before planning a meta-analysis, since the final interpretation of a trial sometimes goes beyond the first reports of data, which can resemble a kind of scientific rush.