We are grateful for the insightful comments of Lozano et al[1], who suggest that a reasonable approach would be to combine the time-to-first-event method (primary endpoint) with additional appropriate analyses as sensitivity analyses. We agree with their opinion. However, only the time-to-first-event method is commonly applied to the analysis of composite endpoints in current clinical trials. We recently applied multiple statistical methods for composite endpoints (the time-to-first-event, negative binomial regression, Andersen-Gill, win ratio, and weighted composite endpoint methods) to the GLOBAL LEADERS trial[2,3]. The GLOBAL LEADERS trial investigated aspirin-free antiplatelet treatment (experimental arm: 1-month dual antiplatelet therapy [DAPT] followed by 23-month ticagrelor monotherapy; reference arm: 12-month DAPT followed by 12-month aspirin monotherapy) in an all-comers population. The results were consistent in that ticagrelor monotherapy reduced ischaemic and bleeding events by 5-8%. However, only the negative binomial regression and Andersen-Gill analyses demonstrated a statistically significant risk reduction (p-values less than 0.05), whereas the others (the time-to-first-event, win ratio, and weighted composite endpoint methods) did not.

We would propose prespecifying the details of any additional methodological analyses in a statistical analysis plan (SAP) to avoid arbitrariness. First, the method for counting repeated events should be clarified; in other words, how a sequence of adverse events is handled should be defined. For example, if a patient suffered a myocardial infarction and died the next day, should this sequence be counted as one event (cardiovascular death) or two events (non-fatal myocardial infarction and cardiovascular death)? Along the same lines, if a myocardial infarction caused heart failure on the same day, should these be counted as one event or two?
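As a purely hypothetical illustration (not taken from any trial SAP), the two counting rules discussed above can be sketched in a few lines of Python. The event names, the one-day absorption window, and the rule that a fatal event absorbs an immediately preceding non-fatal event are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Event:
    day: int           # study day on which the event occurred
    kind: str          # e.g. "MI", "HF", "CV death" (illustrative labels)
    fatal: bool = False

def count_all(events):
    """Rule A: every adjudicated event counts separately."""
    return len(events)

def count_collapsed(events, window_days=1):
    """Rule B (hypothetical): a non-fatal event followed by a fatal
    event within `window_days` is absorbed into the fatal event and
    not counted separately."""
    events = sorted(events, key=lambda e: e.day)
    kept = []
    for i, e in enumerate(events):
        absorbed = (not e.fatal) and any(
            later.fatal and 0 <= later.day - e.day <= window_days
            for later in events[i + 1:]
        )
        if not absorbed:
            kept.append(e)
    return len(kept)

# Myocardial infarction on day 10, cardiovascular death on day 11
history = [Event(10, "MI"), Event(11, "CV death", fatal=True)]
print(count_all(history))        # 2 events under Rule A
print(count_collapsed(history))  # 1 event under Rule B
```

The two rules disagree exactly on the MI-then-death sequence described above, which is why the choice should be fixed in the SAP before recurrent-event models are run.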
The method of event counting could influence the results, especially in negative binomial regression and Cox-based models for recurrent events (the Andersen-Gill and Wei-Lin-Weissfeld models). Second, the weights assigned to cardiovascular events in previous research are not consistent; no consensus on event severity and weighting has yet been achieved. Event severity and weight may depend on patient characteristics and perspectives. For instance, the impact of percutaneous coronary intervention could differ between patients with and without previous coronary stenting. Furthermore, an examination of patients’ perspectives on composite endpoints reported that disabling stroke was regarded as more severe than death[4], although death is treated as the most severe event in clinical trials. Therefore, event severity and weight should be discussed for each trial based on the patients’ backgrounds, and both should be prespecified.

As Lozano et al point out, sample size calculation for these novel methods is more complex than for the time-to-first-event analysis. However, dedicated code for sample size calculation with these methods has been developed[5,6], which would support their use in future clinical trials. Applying not only the time-to-first-event method but also other prespecified statistical methods could highlight the multiple facets of a trial and could result in more appropriate analyses.
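To make the hierarchy-based alternative concrete, the unmatched (Pocock-style) win ratio mentioned above can be sketched as an all-pairs comparison between arms. The patient records, the three-level hierarchy (death over stroke over myocardial infarction), and the tie-breaking by event time are illustrative assumptions, not a published weighting scheme.

```python
from itertools import product

# Hierarchy of events from most to least severe (an assumption for
# this sketch; real trials must prespecify their own ordering).
HIERARCHY = ["death", "stroke", "mi"]

def compare(treated, control):
    """Return 'win' if the treated patient fares better, 'loss' if
    worse, and 'tie' otherwise, walking down the event hierarchy.
    Each record maps an event name to its day of occurrence; a
    missing key or None means the event never occurred."""
    for event in HIERARCHY:
        t, c = treated.get(event), control.get(event)
        if t is None and c is not None:
            return "win"              # treated patient avoided the event
        if t is not None and c is None:
            return "loss"
        if t is not None and c is not None and t != c:
            return "win" if t > c else "loss"  # later event is better
    return "tie"

def win_ratio(treated_arm, control_arm):
    """Unmatched win ratio: total wins / total losses over all
    treated-vs-control pairs (ties contribute to neither count)."""
    results = [compare(t, c) for t, c in product(treated_arm, control_arm)]
    return results.count("win") / results.count("loss")

# Hypothetical two-patient arms
treated = [{"death": 400}, {"death": None, "mi": 200}]
control = [{"death": 300}, {"death": None}]
print(win_ratio(treated, control))  # 1.0 for these illustrative data
```

Because every pair is resolved on the most severe event first, the result changes if the hierarchy is reordered, which is precisely why the ordering belongs in the SAP.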
Conflict of interest statement
H. Hara reports a grant for studying overseas from the Japanese Circulation Society, a Grant-in-Aid for JSPS Fellows and a grant from the Fukuda Foundation for Medical Technology. The other authors have no conflicts of interest to declare.