Here's why.

There was something particularly irksome about a USA Today article from a few days ago — it prompted 3, count them, three tweets (posts? Xs?) from me — and I wanted to figure out what bothered me so. Here is the headline:

Left or right arm: Choosing where to get vaccinated matters, study suggests. Here’s why

No, it’s not the typography, although they should either not have had a full sentence in their headline, or else should have finished it with a full stop. But then they would have lost the chance for the click-baity Here’s why as a prelude to an article (OK, this can get real confusing real fast, since there are two articles I am writing about: USA Today’s newspaper article and the research article to which it refers. So, let’s use article for the newspaper and manuscript for the research article. Because why not?) about Real Science™ which — color me astonished — takes a hypothesis-generating study and presents the hypotheses it generated as the final results.

To its credit, the article starts off with a link to the manuscript and the name of the journal where it was published, which is eBioMedicine, part of the proliferating Lancet family, impact factor 11.1 (although, you know what they say about impact factors). Good! They also invited an independent researcher to comment. And I am sure that his comments were similar to mine, although of course most of what he said (or, more likely, wrote in an email) didn’t make it. What ended up on the page were two blurbs about precise vaccination from the director of a Precision Vaccines program. Gasp.

But these are all side attractions. The biggest problem is this: scientists want to compare people who got both doses of a two-dose vaccine in the same arm to those who got them in different arms; in the manuscript, these were called the ipsilateral and contralateral groups. They aren’t randomizing people to one versus the other (what they describe as randomization isn’t really so, but that’s a rabbit hole we had better not get into), but with these being generally healthy people, and with the participants not having a choice as to where they will get the vaccine, that is not too much of an issue. Then they ask them some questions and draw some blood. The questions are about side effects, and the blood is to check for “the strength of the immune response”.

Note that they don’t say at the outset whether the groups would be different, and how. Would the opposite arm have fewer side effects? Better immune response? If so, in what way? More antibody? Stronger antibody? A different subtype of antibody? Better or worse cellular immunity? Which cell (among dozens)? More cells, stronger cells, or different cells? Or maybe the same side would be better?

The beauty of hypothesis-generating research (for the researcher) is that it doesn’t matter. Whatever you get, you will get it published, sometimes in a double-digit impact factor publication. I’ve sat in on many a lab meeting where things like this were proposed, and always, always, the comment is that “the results will be interesting whatever they are”. And they are right! But you will not know — cannot know — whether the results you got are based on an underlying physiology, or occurred purely by chance. That is where confirmatory studies come in.
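To see why chance alone is a live possibility, here is a minimal sketch in Python. The cohort size, the number of immune readouts, and the 0.05 threshold are all my own made-up assumptions, not numbers from the manuscript; it simulates comparing two groups that are, by construction, identical on twenty markers at once:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_per_group = 150   # hypothetical cohort size per arm-assignment group
n_markers = 20      # hypothetical number of immune readouts compared
n_sims = 10_000     # simulated "studies"

hits = 0
for _ in range(n_sims):
    # Both groups are drawn from the same distribution: no real effect.
    ipsi = rng.normal(size=(n_per_group, n_markers))
    contra = rng.normal(size=(n_per_group, n_markers))
    _, pvals = stats.ttest_ind(ipsi, contra, axis=0)
    if (pvals < 0.05).any():    # at least one marker "differs"
        hits += 1

print(f"Simulated studies reporting a difference: {hits / n_sims:.0%}")
# Expected: about 1 - 0.95**20, i.e. roughly 64%, despite zero true effects.
```

With twenty independent looks, a “significant” difference somewhere is the expected outcome, not a surprise — which is exactly why a finding dredged out of many comparisons needs its own confirmatory study before anyone explains it.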

Neither the manuscript nor the article recognizes this. Among the many things they looked at, the researchers found two things that were different between the two groups: those who had the vaccine in the same arm had “more” of a certain type of immune cell than the other group, and the opposite-arm group had increased expression of a certain marker on yet another type of immune cell. “More” is in quotes because even that is more subjective than it appears — another rabbit hole — but even if true in this sample, it is at best a hypothesis that should lead to another, possibly smaller, study where you focus on these cells, with different operators counting them, while doing additional hypothesis-generating analyses on the side to figure out the why of it, which would lead to yet another confirmatory study… You get the idea.

This is not what the manuscript authors propose. Instead, they take their result at face value and concoct a mechanism out of thin air that would explain it. The journalist then takes the mechanism and presents it as the main research result, the Here’s why of that clickbait headline. There is a high bar for calling anything in science conclusive, and the article does have the usual disclaimer that “more research and data is needed”. But the phrase has been repeated so much that it has lost all meaning; it is something you say to mark yourself as a “believer in science” while, with a wink and a nudge, you act as if the results were indisputable.

Fortunately, science is a strong-link problem: those who know what they are doing will adjust their beliefs accordingly and, down the line, confirm or falsify these preliminary findings. Unfortunately, science doesn’t operate in a vacuum. If its coverage of science is any indication, journalism, the fourth estate, is in a hole and digging deeper, taking others with it.
