Vaccines and Infant Mortality Rates: Fallacies in a Critique


Two compelling studies reported a correlation between vaccination and both infant mortality and hospitalization. David Gorski attacked them in 2011, but his argument is riddled with fallacies and offers very little substance:

  1. Gorski notes that one of the authors of the studies is described as an “independent researcher,” and the second author, Gary S. Goldman, as an “independent computer scientist,” and that “this is not a promising start.” He’s right that it’s not a promising start, but only for his own critique: this is ad hominem. If a doctor doesn’t say it, it can’t be true, evidently. It has nothing to do with the legitimacy of the paper.
  2. Gorski claims there is a conflict of interest by the authors because the “National Vaccine Information Center (NVIC) donated $2500 and Michael Belkin donated $500 (in memory of his daughter, Lyla) for open access to the journal article (making it freely available to all researchers).” They didn’t pay for the study—they paid for it to be public, yet Gorski calls this a conflict of interest. Of course, most studies that demonstrate the efficacy of any given vaccine are funded by the vaccine’s producer. And Gorski himself is funded by the Department of Defense, the National Institutes of Health, and Bayer HealthCare, all of which benefit from vaccine usage. Alleging a loose conflict of interest while suffering from one himself is a blatant double standard.
  3. “Basically, Miller and Goldman went to The World Factbook maintained by, of all organizations, the Central Intelligence Agency.” Gorski implies that information obtained from the CIA is somehow untrustworthy. Perhaps the authors should have taken their data from one of Gorski’s own funders, the Department of Defense. This is the genetic fallacy.
  4. Gorski notes the use of 2009 data alone, which is certainly curious, especially since data were available for 1990–2010. This in and of itself doesn’t constitute a legitimate critique, but it does make one wonder. Fortunately, Goldman and Miller’s subsequent paper included the entire data set; Gorski ignored it.
  5. Gorski claims the authors “here seem to have pulled the same trick that J.B. Handley and crew like to pull when trying to convince people that U.S. infants are “overvaccinated” by artificially pumping up the apparent number of vaccine doses by counting multivalent vaccines as more than one. For instance, the MMR and DTaP are counted as three each because each vaccine is trivalent; i.e., containing vaccines against three different diseases.” Why isn’t counting trivalent vaccines as three doses legitimate? Gorski never explains. It makes sense to me.
  6. Gorski again asks why they used this specific data set: why not include data from countries with worse IMRs? This is a good question. But instead of assuming the omission disproves the study, Gorski should compile the data from those countries himself.
  7. Gorski quotes Bernadine Healy on the problem with comparing IMRs across countries: “The United States counts all births as live if they show any sign of life, regardless of prematurity or size. This includes what many other countries report as stillbirths.” This could indeed affect the study, but it needs to be quantified: how many such births actually occur? It’s not enough to say there are discrepancies without providing specific data on them (a toy calculation after this list shows why the registration rules matter).
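To make that last point concrete, here is a toy calculation (the numbers are invented for illustration, not drawn from either study or from real national statistics) of how much birth-registration rules alone can move a reported IMR:

```python
# Toy arithmetic (invented numbers) showing how birth-registration rules
# alone can shift a reported infant mortality rate (IMR).
live_births = 1000
deaths = 6          # IMR = 6.0 per 1000 under US-style counting
borderline = 3      # extremely premature infants who died shortly after birth

# US-style counting: any birth showing signs of life is a live birth.
imr_us_style = deaths / live_births * 1000

# If those 3 borderline cases had been registered as stillbirths instead
# (as some European rules would require), both the numerator and the
# denominator shrink.
imr_alternative = (deaths - borderline) / (live_births - borderline) * 1000

print(f"{imr_us_style:.1f} vs {imr_alternative:.2f} per 1000")  # 6.0 vs 3.01
```

Whether real registration differences are anywhere near this large is precisely the empirical question that neither Healy’s quote nor Gorski’s critique answers.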

Conclusion

Gorski raises some good questions about the studies showing a correlation between vaccination rate and infant mortality, but they are not enough to summarily discount the studies, and his valid concerns are overshadowed by numerous fallacies.

Original article:

The anti-vaccine movement is a frequent topic on the Science-Based Medicine blog. There are a number of reasons for this, not the least of which being that the anti-vaccine movement is one of the most dangerous forms of pseudoscience, a form of quackery that, unlike most forms of quackery, endangers those who do not partake of it by breaking down herd immunity and paving the way for the resurgence of previously vanquished diseases. However, anti-vaccine beliefs share many other aspects with other forms of quackery, including the reliance on testimonials rather than data. Even so, although the intelligentsia (and I do use the term loosely) of the anti-vaccine movement realizes and exploits the power of anecdotes and testimonials and how human beings tend to value such stories over dry scientific data, leaders of the anti-vaccine movement realize that science is overwhelmingly against them and that testimonials alone are not adequate to counter that science in the realm of public policy and relations.

That’s why, over the years, various anti-vaccine “scientists” (and I use that term very loosely as well) have produced poor quality, sometimes even fraudulent studies, which are then touted as evidence that vaccines cause autism or at least as evidence that there is actually still a scientific controversy when in fact from a scientific standpoint the vaccine-autism hypothesis is pining for the fjords. Examples abound, including the work of Mark and David Geier, whose studies led them to use chemical castration to treat autistic children; Andrew Wakefield, whose small case series almost certainly included fraudulent data; a truly incompetent “phone survey” commissioned by Generation Rescue designed to compare “vaxed versus unvaxed” children; and an even more incompetent “study” in which Generation Rescue used a cherry picked group of nations to try to argue that nations that require more vaccines have higher rates of infant mortality. These efforts continue. For example, last year Generation Rescue requested $809,721 from the Airborne settlement to set up a “vaxed versus unvaxed” study, despite the known difficulties with such a study and the low likelihood of finding anything without huge numbers of children.

Last week, they were at it again.

The return of the revenge of the claim that more vaccines equal more infant mortality

Over the last week or so, anti-vaccine activists have been busy touting two “studies” or “reports,” one I can write about now, one that will have to wait. I’ll start with the one that has to wait first:

Investigators and Families of Vaccine-Injured Children to Unveil Report Detailing Clear Vaccine-Autism Link Based on Government’s Own Data

Report Demands Immediate Congressional Action

Directors of the Elizabeth Birt Center for Autism Law and Advocacy (EBCALA), parents and vaccine-injured children will hold a press conference on the steps of the U.S. Court of Federal Claims (717 Madison Place, NW in Washington, DC) on Tuesday, May 10 at 12:00 PM to unveil an investigation linking vaccine injury to autism. For over 20 years, the federal government has publicly denied a vaccine-autism link, while at the same time its Vaccine Injury Compensation Program (VICP) has been awarding damages for vaccine injury to children with brain damage, seizures and autism. This investigation, based on public, verifiable government data, breaks new ground in the controversial vaccine-autism debate.

The investigation found that a substantial number of children compensated for vaccine injury also have autism. The government has asserted that it “does not track” autism among the vaccine-injured. Based on this preliminary investigation, the evidence suggests that autism is at least three times more prevalent among vaccine-injured children than among children in the general population.

Stay tuned. This appears to be the same “study” that anti-vaccine activist Robert F. Kennedy, Jr. was originally going to announce in front of the White House back in April, but his press conference was ultimately canceled. Apparently, this publication was to appear in the Pace University Law School journal, which, of course, is the sort of venue that is always preferable to the peer-reviewed scientific literature, at least to cranks.

Whatever the announcement turns out to be, the second example is indeed a study that somehow made it into the peer-reviewed literature. I found out about it from two sources: first, you, our readers, several of whom have sent me links to the study; and, second, the ever-popular all-purpose quackery website, NaturalNews.com, which announced triumphantly last week that nations requiring the most vaccines tend to have the worst infant mortality rates:

A new study, published in Human and Experimental Toxicology (http://het.sagepub.com/content/earl…), a peer-reviewed journal indexed by the National Library of Medicine, found that nations with higher (worse) infant mortality rates tend to give their infants more vaccine doses. For example, the United States requires infants to receive 26 vaccines — the most in the world — yet more than six U.S. infants die per every 1000 live births. In contrast, Sweden and Japan administer 12 vaccines to infants, the least amount, and report less than three deaths per 1000 live births.

Before we get to the study itself—which, as you might imagine, has…flaws—let’s take a look at the authors. The first author, Neil Z. Miller, is described as an “independent researcher,” and the second author, Gary S. Goldman, is described as an “independent computer scientist.” This is not a promising start, as neither of them appears to have any qualifications that would lead a reader to think that they have any special expertise in epidemiology, vaccines, or science. Still, I suppose one could look at the fact that these two somehow managed to get a paper published in a peer-reviewed journal as being pretty strong evidence for the democratic nature of science, where you don’t necessarily have to be affiliated with a university or a biotech or pharmaceutical company in order to publish in the scientific literature. On the other hand, even though it is stated that this was not funded by any grants or companies, I still see a conflict of interest. Specifically, the NaturalNews.com article points out that the “National Vaccine Information Center (NVIC) donated $2500 and Michael Belkin donated $500 (in memory of his daughter, Lyla) for open access to the journal article (making it freely available to all researchers).” The NVIC, as you recall, was founded by Barbara Loe Fisher and is one of the oldest and most influential anti-vaccine groups in the U.S., having recently teamed up with Joe Mercola to promote anti-vaccine views.

No, most definitely not a promising start.

It’s also not surprising. I did a bit of Googling, as is my wont whenever I encounter someone whose name I don’t recognize, and I found abundant evidence in his Wikipedia entry that Miller has a long history of anti-vaccine activism, having written books with titles like Vaccine Roulette: Gambling With Your Child’s Life, Immunization Theory vs Reality: Expose on Vaccinations, and Vaccines: Are They Really Safe and Effective?, among others. But that’s not all; he’s also the director of the ThinkTwice Global Vaccine Institute and in fact is hosting a copy of this study on his website. Gary S. Goldman is even more interesting. It turns out that he is the President and Founder of Medical Veritas, a rabidly anti-vaccine “journal” that is into HIV/AIDS denialism, having published dubious “reanalyses” of autopsy results of victims of AIDS, such as Eliza Jane Scovill. He also notes at his website that he’s written books entitled The Chickenpox Vaccine: A New Epidemic of Disease and Corruption.

Even less promising.

Still, one might wonder why I pointed this out. Isn’t that an ad hominem attack? Not at all. I’m not arguing that this latest paper is wrong because its authors are clearly members of the anti-vaccine fringe. Who knows? They might be on to something. I’m merely pointing out that what’s good for the goose is good for the gander when it comes to pointing out conflicts of interest (COIs) and, as Harriet has recently discussed, COIs do not necessarily have to be financial. As I’ve pointed out time and time again, COIs do not necessarily mean that a study is in error, poorly done, or out-and-out wrong. They merely demand a bit more skepticism, particularly when they are not disclosed, as they are not in the actual paper, which fails to list the connections to NVIC, Medical Veritas, and ThinkTwice. Why didn’t Miller list himself as editor or founder of ThinkTwice or Goldman as founder and editor of Medical Veritas? One wonders, one does. Knowing that these two hold those positions is every bit as relevant as knowing when a pharmaceutical company publishes a study about its latest blockbuster drug.

But who knows? Maybe I’m wrong. Well, actually, I don’t think I am, but it will take delving into the actual paper to show why.

Infant mortality as a function of number of vaccines

The first thing you need to know is that this is a really, really simple paper. In fact, I’d go so far as to say it is simple-minded more than just simple. Basically, Miller and Goldman went to The World Factbook maintained by, of all organizations, the Central Intelligence Agency. Noting that in 2009 the U.S. ranked 34th in infant mortality, they looked up the infant mortality rates for the U.S. and all the nations that have lower infant mortality rates than the U.S. and then compared them to the number of vaccine doses each nation requires. They then graphed the infant mortality rate as a function of vaccine dose, and this resulted in Figure 1:

That’s it. That really is the “meat” of the paper, such as it is.

Whenever I see a paper like this, I ask myself: What would I say about it if it had been sent to me as a peer reviewer? This graph leads to a number of questions. First, why did the authors use 2009 data? The cited reference notes that the data were accessed back in April 2010. That’s over a year ago. Did it really take over a year between submission and publication? Be that as it may, whenever I see investigators trying to correlate two variables like infant mortality and the number of vaccines, I ask: What is the rationale? It’s the “storks deliver babies” fallacy all over again. In fact, it’s amazingly simple to find spurious “correlations,” as has been demonstrated time and time again as bloggers find correlations between unrelated things, such as vaccines and automobile deaths, or the rise in global temperatures and the number of pirates.
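The “storks deliver babies” point is easy to demonstrate with a few lines of code. Here is a minimal sketch (entirely synthetic data, with hypothetical series names): any two quantities that merely trend in the same direction over time will show a strong “correlation”:

```python
# Two unrelated series that both happen to rise over time will "correlate".
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1990, 2010)

# Hypothetical series: both trend upward for entirely unrelated reasons.
vaccine_doses = 10 + 0.8 * (years - 1990) + rng.normal(0, 1.0, years.size)
cable_tv_subs = 50 + 2.5 * (years - 1990) + rng.normal(0, 3.0, years.size)

r = np.corrcoef(vaccine_doses, cable_tv_subs)[0, 1]
print(f"Pearson r = {r:.2f}")  # close to 1.0 despite no causal link
```

This is why a raw correlation, offered without a mechanism and without controlling for time trends and confounders, carries so little evidential weight.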

I also note that the authors here seem to have pulled the same trick that J.B. Handley and crew like to pull when trying to convince people that U.S. infants are “overvaccinated” by artificially pumping up the apparent number of vaccine doses by counting multivalent vaccines as more than one. For instance, the MMR and DTaP are counted as three each because each vaccine is trivalent; i.e., containing vaccines against three different diseases. In fact, the authors of this gem do this very thing in spades, as Catherina explains:

There are a number of things wrong with this procedure – first of all, the way Miller and Goldman are counting vaccines is completely arbitrary and riddled with mistakes.

Arbitrary: they count number of vaccines in US bins (DTaP is one, hib is separate) and non-specific designations (some “polio” is still given as OPV in Singapore), rather than antigens. If they did that, Japan, still giving the live bacterial vaccine BCG, would immediately go to the top of the list. That wouldn’t fit the agenda, of course. But if you go by “shot” rather than by antigen, why are DTaP, IPV, hepB and hib counted as 4 shots for example in Austria, when they are given as Infanrix hexa, in one syringe?

Mistakes: The German childhood vaccination schedule recommends DTaP, hib, IPV AND hepB, as well as PCV at 2, 3 and 4 months, putting them squarely into the 21 – 23 bin. The fourth round of shots is recommended at 11 to 14 months, and MenC, MMR and Varicella are recommended with a lower age limit of 11 months, too, which means that a number of German kids will fall into the highest bin, at least as long as you count the Miller/Goldman way.
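A toy illustration of the counting problem Catherina describes (the schedule fragment is simplified and hypothetical, not a real national schedule): the same single syringe yields three different “dose” totals depending on the convention chosen.

```python
# One hypothetical clinic visit: a single hexavalent syringe (Infanrix hexa).
visit = ["Infanrix hexa"]            # DTaP + IPV + HepB + Hib in one shot
antigens = {"Infanrix hexa": 6}      # six antigens in that one product

by_shot = len(visit)                             # 1: count syringes
by_us_bins = 4                                   # Miller/Goldman-style US bins:
                                                 # DTaP, IPV, HepB, Hib separately
by_antigen = sum(antigens[v] for v in visit)     # 6: count antigens

print(by_shot, by_us_bins, by_antigen)  # 1 4 6: same visit, three totals
```

Whichever convention one prefers, an analysis has to apply it consistently across all 34 nations; mixing conventions makes the x-axis of the regression essentially meaningless.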

Having used dubious and error-ridden methods for counting the required vaccines and correlated those numbers to infant mortality rates, the authors then move on. After pointing out that the U.S. has a poor infant mortality rate (IMR) relative to its wealth and what it spends on health care, the authors state:

There are many factors that affect the IMR of any given country. For example, premature births in the United States have increased by more than 20% between 1990 and 2006. Preterm babies have a higher risk of complications that could lead to death within the first year of life.6 However, this does not fully explain why the United States has seen little improvement in its IMR since 2000.7

Nations differ in their immunization requirements for infants aged less than 1 year. In 2009, five of the 34 nations with the best IMRs required 12 vaccine doses, the least amount, while the United States required 26 vaccine doses, the most of any nation. To explore the correlation between vaccine doses that nations routinely give to their infants and their infant mortality rates, a linear regression analysis was performed.

This is known as starting with a reasonable observation and then switching to a hypothesis with little or no scientific justification, in essence pulling it out of thin air. The second question I would have is: Why a linear relationship? No justification is given for performing a linear regression analysis. My third question would be: Why this data set?

Actually, this third question is probably the most interesting of all. Miller and Goldman only looked at one year’s data. There are many years worth of data available; if such a relationship between IMR and vaccine doses is real, it will be robust, showing up in multiple analyses from multiple years’ data. Moreover, the authors took great pains to look at only the United States and the 33 nations with better infant mortality rates than the U.S. There is no statistical rationale for doing this, nor is there a scientific rationale. Again, if this is a true correlation, it will be robust enough to show up in comparisons of more nations than just the U.S. and nations with more favorable infant mortality rates. Basically, the choice of data analyzed leaves a strong suspicion of cherry picking. Were I reviewing this paper, I would insist on the use of one or two other data sets. For example, I would ask for different years and/or perhaps the use of the rankings by the United Nations Population Division, which can be found in the Wikipedia entry containing the list of countries by infant mortality rate. And I would insist on doing the analysis so that it includes several nations with worse IMRs than the U.S. Indeed, since the focal point of the analysis seems to be the U.S., which, according to Miller and Goldman, requires more vaccine doses than any other nation, then it would make sense to look at the 33 nations with worse IMRs than the U.S.
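One way to see why robustness across data sets matters: even when two variables are completely unrelated, a hand-picked subset of nations can show a respectable-looking fit. A quick simulation with pure noise (no real data):

```python
# Among many possible 34-nation subsets of uncorrelated data, some will
# show a sizable r^2 by chance alone.
import numpy as np

rng = np.random.default_rng(1)
doses = rng.uniform(12, 26, 200)   # hypothetical dose counts, 200 "nations"
imr = rng.uniform(2, 7, 200)       # hypothetical IMRs, independent of doses

best_r2 = 0.0
for _ in range(1000):              # try 1000 random 34-nation subsets
    idx = rng.choice(200, size=34, replace=False)
    r = np.corrcoef(doses[idx], imr[idx])[0, 1]
    best_r2 = max(best_r2, r * r)

print(f"best r^2 over 1000 subsets = {best_r2:.2f}")  # nonzero by chance alone
```

A real effect would not need a favorable subset: it would show up in any reasonable slice of the data, in any year.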

Be that as it may, I looked at the data myself and played around with it. One thing I noticed immediately is that the authors removed four nations, Andorra, Liechtenstein, Monaco, and San Marino, the justification being that, because they are all so small, each nation recorded fewer than five infant deaths. Coincidentally, or not, when all the data are used, r² = 0.426, whereas when those four nations are excluded, r² increases to 0.494, meaning that the goodness of fit improved. Even so, it’s not that fantastic, certainly not enough to be particularly convincing as a linear relationship. More dubiously, the authors, not content with a weak and not particularly convincing linear relationship in the raw data, decided to do a little creative data manipulation: they divided the nations into five groups based on the number of vaccine doses, took the mean of each group, and then regraphed the data. Not surprisingly, the data look a lot cleaner, which was no doubt why this was done, as it was a completely extraneous analysis. As a rule of thumb, this sort of analysis will almost always produce a much nicer-looking linear graph, as opposed to the “star chart” in Figure 1. Usually, this sort of data massaging is done when a raw scatterplot doesn’t produce the desired relationship.
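The grouping trick is easy to reproduce. A minimal simulation (synthetic numbers, not the paper’s actual data) of why regressing bin means almost always looks cleaner than the raw scatterplot:

```python
# Binning noisy data and regressing the bin means inflates r^2, because
# averaging removes the within-bin scatter without adding any information.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(12, 26, 34)                  # e.g. dose counts for 34 nations
y = 2 + 0.15 * x + rng.normal(0, 1.2, 34)    # weak trend buried in noise

r2_raw = np.corrcoef(x, y)[0, 1] ** 2

# Sort by x, split into 5 equal groups, and correlate the group means.
order = np.argsort(x)
xm = [x[g].mean() for g in np.array_split(order, 5)]
ym = [y[g].mean() for g in np.array_split(order, 5)]
r2_binned = np.corrcoef(xm, ym)[0, 1] ** 2

print(f"raw r^2 = {r2_raw:.2f}, binned r^2 = {r2_binned:.2f}")
```

The binned r² typically comes out far higher than the raw one, which is exactly why this kind of data massaging makes a weak relationship look convincing.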

Finally, it’s important to remember that IMRs are very difficult to compare across nations. In fact, the source I most like to cite to illustrate this is, believe it or not, an article by Bernadine Healy, the former director of the NIH who has over the last three or four years flirted with the anti-vaccine movement:

First, it’s shaky ground to compare U.S. infant mortality with reports from other countries. The United States counts all births as live if they show any sign of life, regardless of prematurity or size. This includes what many other countries report as stillbirths. In Austria and Germany, fetal weight must be at least 500 grams (1 pound) to count as a live birth; in other parts of Europe, such as Switzerland, the fetus must be at least 30 centimeters (12 inches) long. In Belgium and France, births at less than 26 weeks of pregnancy are registered as lifeless. And some countries don’t reliably register babies who die within the first 24 hours of birth. Thus, the United States is sure to report higher infant mortality rates. For this very reason, the Organization for Economic Cooperation and Development, which collects the European numbers, warns of head-to-head comparisons by country.

Infant mortality in developed countries is not about healthy babies dying of treatable conditions as in the past. Most of the infants we lose today are born critically ill, and 40 percent die within the first day of life. The major causes are low birth weight and prematurity, and congenital malformations. As Nicholas Eberstadt, a scholar at the American Enterprise Institute, points out, Norway, which has one of the lowest infant mortality rates, shows no better infant survival than the United States when you factor in weight at birth.

It’s ironic that Bernadine Healy, who’s associated herself so heavily with the anti-vaccine movement, to the point of having been named Age of Autism’s Person of the Year in 2008, provided such a nice, concise explanation of why it’s so problematic to compare infant mortality rates between nations. Miller and Goldman claim that they tried to correct for these differences in reporting for some of the nations that do not use reporting methods consistent with WHO guidelines, but they do not say how they did so or what data source they used to do so. Note that children who die within the first day of life also tend to be the ones who have received either no vaccines or only the birth dose of the hepatitis B vaccine (here in the U.S.). Given that infant mortality is defined as the fraction of children who die before one year of age, and that the bulk of the U.S. vaccine schedule does not really start until two months of age, many of the infants who die have had few or no vaccines. In other words, no effort was made to determine whether the infants who died actually died at an age at which they would be expected to have received most of the vaccines required within the first year. Worse, no real attempt was made to correct for the many potential confounding factors. Not that that stops the authors from asking:

Among the 34 nations analyzed, those that require the most vaccines tend to have the worst IMRs. Thus, we must ask important questions: is it possible that some nations are requiring too many vaccines for their infants and the additional vaccines are a toxic burden on their health? Are some deaths that are listed within the 130 infant mortality death categories really deaths that are associated with over-vaccination? Are some vaccine-related deaths hidden within the death tables?

Never mind that the authors present no real data to justify such speculation. They do speculate, however. Oh, how they speculate! They spend two whole pages trying to link vaccines to sudden infant death syndrome, hinting at some sort of conspiracy to cover up the number of SIDS deaths by reclassifying them, and they cite old studies that suggested a correlation between vaccination and SIDS while neglecting the more recent data showing that the risk of SIDS is not increased after immunization and that, if anything, vaccination is probably protective against SIDS. Indeed, one of the studies the authors discuss is an abstract presented in 1982, not even a paper published in a peer-reviewed journal.

Finally, there is the issue of the ecological fallacy. The ecological fallacy can occur when an epidemiological analysis is carried out on group-level data rather than individual-level data. In other words, the group is the unit of analysis. Clearly, comparing vaccination schedules to nation-level infant mortality rates is the very definition of an ecological analysis. Such analyses have a tendency to magnify any differences observed, as Epiwonk once described while analyzing (surprise, surprise!) a paper by Mark and David Geier:

To make this jump from group-level to individual-level data is The Ecological Fallacy, which can be defined simply as thinking that relationships observed for groups necessarily hold for individuals.

The ecological fallacy was first described by the psychologist Edward Thorndike in 1938 in a paper entitled “On the fallacy of imputing the correlations found for groups to the individuals or smaller groups composing them.” (Kind of says it all, doesn’t it?) The concept was introduced into sociology in 1950 by W.S. Robinson in a paper entitled “Ecological correlations and the behavior of individuals,” and the term ecological fallacy was coined by the sociologist H.C. Selvin in 1958. The concept of the ecological fallacy was formally introduced into epidemiology by Mervyn Susser in his 1973 text, Causal Thinking in the Health Sciences, although group-level analyses had been published in public health and epidemiology for decades.

To show you one example of the ecological fallacy, let’s take a brief look at H.C. Selvin’s 1958 paper. Selvin re-analyzed the 1897 study of Emile Durkheim (the “father of sociology”), Suicide, which investigated the association between religion and suicide. Although it’s difficult to find Selvin’s 1958 paper, the analyses are duplicated in a review by Professor Hal Morgenstern of the University of Michigan. Durkheim had data on four groups of Prussian provinces between 1883 and 1890. When the suicide rate is regressed on the percent of each group that was Protestant, an ecologic regression reveals a relative risk of 7.57, “i.e. it appears that Protestants were 7½ times as likely to commit suicide as were other residents (most of whom were Catholic)….In fact, Durkheim actually compared suicide rates for Protestants and Catholics living in Prussia. From his data, we find that the rate was about twice as great among Protestants as among other religious groups, suggesting a substantial difference between the results obtained at the ecologic level (RR = 7.57) and those obtained at the individual level (RR = 2).” Thus, in Durkheim’s data, the effect estimate (the relative risk) is magnified roughly fourfold by ecologic bias. In a recent methodological investigation of bias magnification in ecologic studies, Dr. Tom Webster of Boston University shows that effect measures can be biased upwards by as much as 25 times or more in ecologic analyses in which confounding is not controlled.
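A toy reconstruction of the mechanism (the rates below are invented for illustration; they are not Durkheim’s actual figures): even when the individual-level relative risk is exactly 2 in every province, an ecologic regression can report something far larger whenever the baseline rate also varies with group composition.

```python
# Ecological fallacy in miniature: a group-level regression inflates an
# individual-level relative risk of 2 when a contextual confounder is present.
import numpy as np

p = np.array([0.3, 0.5, 0.7, 0.9])   # fraction Protestant in 4 provinces

# Within every province, Protestants have exactly twice the suicide rate
# of others (true individual RR = 2), but the baseline rate also rises
# with p (an uncontrolled contextual effect).
other_rate = 8 + 8 * p               # per 100k, invented numbers
protestant_rate = 2 * other_rate
province_rate = p * protestant_rate + (1 - p) * other_rate

# The ecologic regression extrapolates province rates to p = 1 vs p = 0.
slope, intercept = np.polyfit(p, province_rate, 1)
rr_ecologic = (intercept + slope) / intercept

print(f"individual RR = 2.0, ecologic RR = {rr_ecologic:.1f}")  # about 5.6
```

Change the invented baseline trend and the ecologic RR moves anywhere you like, which is the whole point: the group-level slope simply does not pin down the individual-level risk.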

The bottom line is that Miller and Goldman’s ecological analysis virtually guaranteed overestimating any relationship found, the way some studies of radiation hormesis have done. Given that the difference between the highest and lowest IMR is only around two-fold, it is highly unlikely that there is any real relationship in this data set, particularly since the authors cannot possibly have controlled for the major confounders. Add to that the fact that they only used one data set and didn’t even include nations with higher IMRs than that of the U.S., and I declare this paper to be utterly worthless. It’s an embarrassment to Human and Experimental Toxicology that its peer reviewers didn’t catch all these problems and that an editor let this paper see print. The Editor-in-Chief Kai Savolainen and the Editor for the Americas A. Wallace Hayes ought to be ashamed of themselves.

Conclusion

The current study joins a long list of poorly planned, poorly executed, poorly analyzed studies that purport to show that vaccines cause autism, neurological disease, or even death. It is not the first, nor will it be the last. The question is: How do we respond to such studies? First off, we as skeptics have to be very careful not to become so jaded that knee-jerk hostility predominates. As unlikely as it is, there is always the possibility that there might be something worth taking seriously there. Next off, we have to be prepared to analyze these studies and explain to parents, when appropriate (which is the vast majority of the time), exactly why they are bad science or why their conclusions are not supported by the data presented. Finally, we have to be prepared to provide these analyses fast. The Internet is speed. Already, if you Google the terms “infant mortality” and “vaccine,” anti-vaccine blogs gloating over Miller and Goldman’s study, and the study itself, appear on the very first page of search results.

Such is the power of a bad study coupled with the reach of the Internet and the naivete of peer reviewers and journal editors who don’t realize when they’re being played.