Evidence Based Medicine (EBM) is the current calling card of conventional medical treatment. Just the sound of it carries the cachet of high quality medicine. Who wouldn’t want health treatments to be based on evidence?
The reality, though, is that EBM is fraudulent. It gives the impression of proof for efficacy of medical treatment, but is largely a smokescreen designed to sell medical products.
We should first take a hard look at the reality behind so many of the studies invoked to support EBM treatments. It’s becoming a well-known scandal that much of what passes for science in medical journals is at best useless, and at worst devastatingly harmful—as demonstrated in the mass drugging of women going through the natural process of menopause.
Marcia Angell, a former editor-in-chief of the New England Journal of Medicine (NEJM), has written extensively on the topic. In “Big Pharma, Bad Medicine”, she wrote:
The authors of that paper had so many financial ties to drug companies … that a full-disclosure statement would have been about as long as the article itself … The lead author, who was chairman of the department of psychiatry at Brown University (presumably a full-time job), was paid more than half a million dollars in drug-company consulting fees in just one year. Although that particular paper was the immediate reason for the editorial, I wouldn’t have bothered to write it if it weren’t for the fact that the situation, while extreme, was hardly unique.
Her point is that it comes down to money. As she quoted one person in that article:
Is academic medicine for sale? No. The current owner is very happy with it.
That, of course, is the bottom line. It’s why the term EBM is invoked—to give the impression that medical treatments are based on meaningful research. The purpose isn’t to produce research that benefits patients. The purpose is to produce research that benefits the pockets of Big Pharma and Big Medicine.
Statistics can be a great tool when properly applied. But applying them in a wholesale manner to define the treatment used on individuals is thoroughly inappropriate. In “Evidence Based Medicine: Neither Good Evidence Nor Good Medicine”, the authors Steve Hickey, PhD, and Hilary Roberts, PhD, offer a clever analogy to demonstrate the inherent flaw, paraphrased here:
The state of New York decides to measure the foot size of every person in the state. From those figures, they calculate the average shoe size. Then, they manufacture enough shoes for every person in the state, making every shoe the average size.
It’s obvious that only a tiny fraction of the people would receive shoes of the proper size. This, though, is exactly how conventional medicine operates. The differences in individuals are ignored in a one-size-fits-all paradigm.
Statistics are routinely trotted out in studies that purport to provide the evidence for medical treatments. That “evidence” is then used to push a treatment, most often a drug, on everyone who can possibly be given the diagnosis for which the drug was developed. Once the FDA has approved it, the pharmaceutical company sets about finding other potential uses for its drug, using the same statistical sleight of hand.
Medical trials routinely exclude evidence. This is done either quietly or under cover of justifications designed to disguise its real purpose: skewing the results.
Imagine that you are one of the people a trial’s results could affect, but you find yourself excluded: perhaps because you responded to the initial dose of placebo (a routine ground for automatic exclusion), or because you’re outside the age range, or because you have a particular health problem. Then the one person whose results would matter most to you (yourself!) wouldn’t be counted.
Good data means including all data. Yet that’s precisely what most trials don’t do. Even meta-analyses routinely exclude data by excluding entire studies. A study may be skewed, but that doesn’t mean its data is.
Very large-scale trials, which gather subjects from many clinics across a country or even around the world, are becoming more and more common. They often find small and frankly unimportant results, which are then inflated by the application of relative statistics.
We often hear that a study has shown a particular drug extends life in, for example, 50% more of the subjects. That sounds truly impressive. Reviewing the actual results, however, would more likely show something akin to life lengthened by 2 days in 2 out of 10,000 people taking one drug, as opposed to 1 out of 10,000 people taking a different drug.
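The arithmetic behind this inflation is easy to check. Here is a minimal sketch using the hypothetical figures from the example above (2 versus 1 beneficiaries per 10,000 subjects):

```python
# Relative vs. absolute effect, using the hypothetical figures above:
# 2 out of 10,000 on drug A benefit, vs. 1 out of 10,000 on drug B.
drug_a = 2
drug_b = 1
group_size = 10_000

# Relative improvement: 2 vs. 1 is a 100% improvement -- the headline figure.
relative = (drug_a - drug_b) / drug_b

# Absolute improvement: 1 extra beneficiary per 10,000 people, i.e. 0.01%.
absolute = (drug_a - drug_b) / group_size

print(f"Relative improvement: {relative:.0%}")
print(f"Absolute improvement: {absolute:.2%}")
```

A 100% relative improvement and a 0.01% absolute improvement describe exactly the same data; only the first makes headlines.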
The large scale manages to hide significant factors:
Even if the reported result is real, the chance that any individual will have the same result is obviously very limited. When adverse effects are also considered, the benefit may actually be less than the minuscule amount found.
Most huge trials are performed by huge pharmaceutical corporations. They’re extremely expensive, running into millions of dollars, euros, or pounds. Such trials are virtually never repeated: the sponsoring corporation has no reason to repeat them, and the cost deters anyone else from doing so.
Repeatability is one of the primary tenets of real science. A study should be easily repeatable; if the cost is too great, that’s highly unlikely to happen. Thus, much of the evidence for EBM comes from trials that will never be repeated. That alone should raise eyebrows, and should keep such trials from being used to justify approval of the drug or product being tested.
P-values are often trotted out to demonstrate that a study’s results are significant, giving the impression that the results are therefore important. In the context of medical studies, however, the two terms, significant and important, have little in common.
The purpose of a p-value is to estimate the likelihood that a particular result would occur by chance alone, if the treatment had no real effect. The value always lies between 0 and 1. A value of p < .01 (p is less than .01) means that there is less than 1 chance in 100 that a result at least that large would occur by chance. P < .05 (p is less than .05) means that such a result would be obtained by chance less than 5 times out of a hundred.
The lower the number, the less likely it is that the result would be achieved by chance. Therefore the lower the p-value, the greater the significance of the study’s results.
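What “by chance” means can be made concrete with a small simulation. The scenario and numbers below are purely illustrative: we simulate a treatment with no effect whatsoever and count how often chance alone still produces an apparently notable difference between two groups.

```python
import random

random.seed(0)

# Simulate the null hypothesis: a "drug" with no effect at all.
# Each run compares two groups of 50 on a 50/50 (coin-flip) outcome
# and records the difference in success counts between the groups.
def null_difference(n=50):
    a = sum(random.random() < 0.5 for _ in range(n))
    b = sum(random.random() < 0.5 for _ in range(n))
    return a - b

diffs = [null_difference() for _ in range(10_000)]

# How often does pure chance produce a difference of 10 or more?
extreme = sum(abs(d) >= 10 for d in diffs) / len(diffs)
print(f"Chance of a difference of 10+ with no real effect: {extreme:.3f}")
```

Chance alone produces a difference that large a few percent of the time, which is exactly the kind of fluctuation a p-value is meant to quantify.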
That certainly sounds important, and you’d think it would mean that a low p-value means that a study’s results are important. That, though, is not necessarily the case.
Imagine, for example, a study on a blood pressure drug. It’s trialed in subjects taken from a population of one million people, and the results are compared with a placebo group. The identical study is done twice. One version includes 100 subjects in each of the drug and placebo groups; the other includes 10,000 subjects in each group.
If all else were equal (such as sex, race, age, or health factors), the main thing affecting the p-value would be the size of the groups being sampled. The smaller study of 100 subjects would therefore produce a higher p-value, let’s say just under .05, and the larger study of 10,000 subjects would produce a much lower one, let’s say under .01.
Now, let’s say that the smaller study shows systolic blood pressure being lowered by 10 points, and the larger study shows it lowered by 1.5 points. Which is more important? The study showing that blood pressure is lowered by 10 points would likely seem more important to you, since 1.5 points wouldn’t count for much; yet it’s the 1.5-point result that carries the lower p-value. Thus, the importance of a result is not the same as its significance.
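This can be sketched numerically with a plain two-sample z-test. The standard deviation of 35 mmHg below is my own illustrative assumption, not a figure from any study; under that assumption, the tiny 1.5-point drop in the huge trial comes out more “significant” than the 10-point drop in the small one.

```python
import math

def two_sided_p(mean_diff, sd, n_per_group):
    """Two-sided p-value from a normal-approximation (z) test for the
    difference in means between two equal-sized groups."""
    se = sd * math.sqrt(2 / n_per_group)  # standard error of the difference
    z = abs(mean_diff) / se
    return math.erfc(z / math.sqrt(2))

SD = 35  # assumed SD of systolic blood pressure change (illustrative only)

p_small = two_sided_p(10, SD, 100)      # 10-point drop, 100 per group
p_large = two_sided_p(1.5, SD, 10_000)  # 1.5-point drop, 10,000 per group

print(f"10-point drop, n=100 per group:     p = {p_small:.4f}")
print(f"1.5-point drop, n=10,000 per group: p = {p_large:.4f}")
```

The small trial’s 10-point drop lands just under p = .05, while the large trial’s clinically trivial 1.5-point drop lands well under p = .01: sheer sample size has manufactured the stronger “significance”.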
In large trials, highly significant results are often of no clinical importance. Keep that in mind whenever you see the term significance trotted out to give the impression that a study’s results are important.
As demonstrated, Evidence Based Medicine is predicated on several fatal flaws: research shaped by financial conflicts of interest; population averages applied wholesale to individuals; routine exclusion of data from trials and meta-analyses; enormous trials too expensive ever to be repeated; trivial results inflated by relative statistics; and statistical significance conflated with clinical importance.
Ultimately, the real question is what medicine is about—and that isn’t statistics. Medicine is about people. Individuals. Single cases.
A person is not simply one of a class of one hundred, ten thousand, or a million. A person isn’t simply a body part or health marker. A person is a conglomeration of interrelated parts, each of which is composed of millions and trillions of interrelated cells, ions, hormones, and a host of other substances. All these things operate as a unit, and that unit is informed by the individual’s history and life experiences. And all of that is affected by whatever’s going on in the person’s life at that point. None of that is, and none of that can be, accounted for in the current holy grail of EBM.
At best, Evidence Based Medicine is pseudoscience or junk science. It’s a fraud designed to give the impression that statistics derived from studies can possibly tell us much of value about how to deal with or treat an individual human.
With great appreciation to Orthomolecular Medicine’s article, “Evidence Based Medicine: Neither Good Evidence Nor Good Medicine”, by Steve Hickey, PhD, and Hilary Roberts, PhD.