For the average citizen, evaluating the claims made for cure-all – or even improve-all – health products and procedures has always been difficult. Not only is it an area in which we have minimal expertise, but most of us also have a vested interest in finding a miracle intervention that will solve our health problems.
Faced with extravagant or competing claims, the level-headed will look for some sort of evidence to support the claims made for that product or intervention. This will usually take the form of research – one or more trials that have ‘scientifically’ evaluated the product or intervention and come to a reliable conclusion. But sadly, such results can, for a multiplicity of reasons, be as misleading as the original claims.
An article in a recent Townsend Letter by Erik Peper, PhD, BCB, and Richard Harvey, PhD, of the Institute for Holistic Health Studies at San Francisco State University looks at this issue in some detail – and suggests a series of criteria that you should apply when evaluating ‘evidence’. To support their contention that ‘evidence’ should never be taken at face value, they quote the eminent editors of two leading medical journals. First, from the New England Journal of Medicine, Dr. Marcia Angell (2009):
“It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine.”
And the Editor-in-Chief of The Lancet, Richard Horton (2015):
“A lot of what is published is incorrect … much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”
Headline concerns
You should definitely read the full article and note its warnings, but here are just a few of the points they make:
Placebo interactions.
The placebo (and nocebo) effect is well recognised, but it is very difficult to assess how much of the apparent effect of a drug or procedure was triggered by the drug or procedure itself and how much by its attendant placebo response.
Clinical trials are prohibitively expensive.
The clinical trial evidence required by regulatory authorities to bring a drug to market costs millions, if not billions, of dollars. This puts such trials well beyond the pockets of any but the largest companies. As a result, smaller drug companies and proponents of non-drug approaches will not be able to afford the trials that constitute ‘proper scientific evidence’.
Human beings are not rats, mice, or monkeys.
Animals are not human beings, so what affects them may not affect a human – and vice versa. For example, Thalidomide was approved for use in Germany to treat morning sickness because it had been shown not to harm animals, but in humans it interfered with embryonic and fetal development.
Statistical significance does not necessarily mean clinical improvement.
‘A successful study that demonstrated lowering of patients’ systolic pressure by 5 mm from 175 mm/Hg to 170 mm/Hg may be statistically significant, but is not clinically meaningful, since a resting systolic blood pressure of 170 mm/Hg is still a cause for concern.’
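The reason such a tiny effect can still come out as ‘statistically significant’ is simply sample size. As a purely illustrative sketch (the group sizes and standard deviation below are assumptions, not figures from the article), a 5 mm/Hg difference becomes overwhelmingly ‘significant’ once a trial is large enough:

```latex
% Illustrative calculation only: n and SD are assumed, not taken from the article.
% Two groups of n = 10,000 patients each, assumed SD of 15 mm/Hg, observed difference 5 mm/Hg.
\[
SE_{\text{diff}} = \sqrt{\frac{15^2}{10\,000} + \frac{15^2}{10\,000}} \approx 0.21 \text{ mm/Hg},
\qquad
z = \frac{5}{0.21} \approx 24, \qquad p \ll 0.001
\]
% A vanishingly small p-value, yet every treated patient still has a resting
% systolic pressure of around 170 mm/Hg.
```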
Number of people who need to be treated for one person to benefit.
If you have to treat 100 people over a 5-year period with a cholesterol-lowering drug, with its attendant side effects, for just one person to benefit by avoiding a heart attack, does that make sense?
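This ‘number needed to treat’ (NNT) follows from simple arithmetic. The sketch below works back from the article’s figure of one person in 100 benefiting; the individual event rates are assumptions chosen purely for illustration:

```latex
% Illustrative only: the event rates are assumed; only the final NNT of 100 comes from the text.
% Suppose that over 5 years, 3% of untreated people have a heart attack versus 2% of treated people.
\[
ARR = 3\% - 2\% = 1\% = 0.01,
\qquad
NNT = \frac{1}{ARR} = \frac{1}{0.01} = 100
\]
% That is, 100 people must take the drug for 5 years for one of them to avoid a heart attack;
% the other 99 are exposed to the side effects without that benefit.
```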
Funding
Ask who would benefit financially from the product or service. Do the researchers have a financial interest in the product? Who has funded the research, and do they have an interest in the outcome? Do ‘independent’ research bodies actually have financial links to the funding organisations?
Research reviews are highly selective
Meta-reviews often exclude large numbers of studies that do not meet their quite narrow criteria but which might nonetheless be relevant.
For much more detailed advice and guidance, do read the full article in the Townsend Letter.