A new study of e-cigarettes’ efficacy in smoking cessation has not only pitted some of vaping’s most outspoken scientific supporters against one of its fiercest academic critics, but also illustrates many of the pitfalls facing researchers on the topic and those – including policy-makers – who must interpret their work.
The furore has erupted over a paper published in the Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, and a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: in other words, to find out whether use of e-cigs is correlated with success in quitting, which might suggest that vaping can help smokers quit. To do this they performed a meta-analysis of 20 previously published papers. That is, they didn’t conduct any new research on actual smokers or vapers, but instead tried to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted approach to extracting truth from statistics in many fields, although – as we’ll see – it’s one fraught with challenges.
Their headline finding, promoted by Glantz himself online as well as by the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which would suggest that vaping is not only ineffective in smoking cessation, but actually counterproductive.
The result has, predictably, been uproar from supporters of e-cigarettes in the scientific and public health community, especially in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, who called the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the United States, who wrote “it is apparent that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of this study represents a major failure of the peer review system in this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the “conclusions are tentative and often incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies that I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics find in the Kalkhoran/Glantz paper? To answer that question, it’s necessary to look beneath the sensational 28% figure, and examine what was studied and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, whose results should be far less prone to any distortions that might have crept into an individual investigation?
(Such distortion might happen, for example, through inadvertently selecting participants with a greater or lesser propensity to stop smoking because of some factor not considered by the researchers – an instance of “selection bias”.)
Of course, the statistical side of a meta-analysis is rather more sophisticated than simply averaging out the totals, but that’s the general idea. And even from that simplistic outline, it’s immediately apparent where problems can arise.
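To give a flavour of what that “more sophisticated” combining involves, the sketch below pools the odds ratios of three entirely invented studies using fixed-effect inverse-variance weighting, one common way meta-analyses aggregate results. The numbers are illustrative only, not drawn from any of the studies discussed here.

```python
import math

# Hypothetical per-study results: (odds_ratio, 95% CI lower, 95% CI upper).
# These figures are made up for illustration, not taken from real studies.
studies = [
    (0.8, 0.5, 1.28),
    (1.1, 0.7, 1.73),
    (0.6, 0.4, 0.9),
]

# Fixed-effect inverse-variance pooling works on the log-odds scale:
# each study is weighted by 1 / variance of its log odds ratio, with the
# variance recovered from the width of the 95% confidence interval.
weights, weighted_logs = [], []
for or_, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of the log OR
    w = 1.0 / se**2
    weights.append(w)
    weighted_logs.append(w * math.log(or_))

pooled_log_or = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
pooled_or = math.exp(pooled_log_or)
ci = (math.exp(pooled_log_or - 1.96 * pooled_se),
      math.exp(pooled_log_or + 1.96 * pooled_se))
print(round(pooled_or, 2), [round(x, 2) for x in ci])
# → 0.79 [0.61, 1.02]
```

Note that the bigger, more precise studies dominate the pooled estimate – which is exactly why flaws in the underlying studies matter so much.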
If its results are to be meaningful, the meta-analysis has to somehow take account of variations in the design of the individual studies (they might define “smoking cessation” differently, for example). If it ignores those variations, and tries to shoehorn all the results into a model that some of them don’t fit, it introduces distortions of its own.
Moreover, if the studies it draws on are themselves flawed, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
This is a charge made by the Truth Initiative, a U.S. anti-smoking nonprofit which generally takes an unfriendly view of e-cigarettes, regarding a previous Glantz meta-analysis which came to similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the U.S. Food and Drug Administration (FDA), responding to that federal agency’s request for comments on its proposed e-cigarette regulation, the Truth Initiative noted that it had reviewed many studies of e-cigs’ role in cessation and concluded that they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of these have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to quit smoking than those who do not. This meta-analysis simply lumps together the errors of inference from the correlations.”
It added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly: don’t mix apples with oranges and expect to get an apple pie.
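The “apples and oranges” objection has a standard quantitative expression: Cochran’s Q statistic and the I² measure derived from it, which estimate how much of the spread between studies reflects genuine heterogeneity rather than chance. A minimal sketch, again using invented numbers rather than data from the studies under discussion:

```python
import math

# Illustrative log odds ratios and standard errors for three hypothetical
# studies (made-up values); heterogeneity is what Q and I-squared quantify.
log_ors = [math.log(0.8), math.log(1.1), math.log(0.6)]
ses = [0.24, 0.23, 0.21]

weights = [1 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
df = len(log_ors) - 1

# I-squared: share of between-study variability attributed to real
# heterogeneity rather than sampling error, expressed as a percentage.
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(round(q, 2), round(i2, 1))
# → 3.79 47.2
```

A high I² is a warning sign that the studies may be measuring different things – the statistical face of the apples-and-oranges problem.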
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in the Lancet Respiratory Medicine – the same journal that published this year’s Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often without a control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies just do not exist yet”.
So a meta-analysis can only be as good as the research it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to one another – or, at least, if any differences are carefully compensated for. Of course, such drawbacks also apply to meta-analyses that are favourable to e-cigarettes, such as the famous Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, focusing instead on the specific questions posed by the San Francisco researchers and the ways they attempted to answer them.
One frequently expressed concern is that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts began. Thus, the research by its nature excluded those who had started vaping and quickly given up smoking; if such people exist in large numbers, counting them would have made e-cigarettes seem a much more successful route to smoking cessation.
A different question was raised by Yale’s Bernstein, who observed that not all vapers who smoke are trying to quit combustibles. Naturally, those who aren’t trying to quit won’t quit, and Bernstein noted that when these individuals were excluded from the data, it suggested “no effect of e-cigarettes, not that e-cigarette users were less likely to quit”.
Excluding some people who did manage to quit – and then including others who had no intention of quitting anyway – would certainly seem likely to affect the outcome of research purporting to measure successful quit attempts, although Kalkhoran and Glantz argue that their “conclusion was insensitive to a wide range of study design factors, including whether the study population consisted only of smokers interested in smoking cessation, or all smokers”.
But there is a further, slightly cloudy area which affects much science – not just meta-analyses, and not just these researchers’ work – and which, importantly, is frequently overlooked in media reporting, as well as by institutions’ public relations departments.