
Here's why.

There was something particularly irksome about a USA Today article from a few days ago — it prompted 3, count them, three tweets posts Xs from me — and I wanted to figure out what bothered me so. Here is the headline:

Left or right arm: Choosing where to get vaccinated matters, study suggests. Here’s why

No, it’s not the typography, although they should either not have had a full sentence in their headline, or else should have finished it with a full stop. But then they would have lost the chance for the click-baity Here’s why as a prelude to an article (OK, this can get real confusing real fast, since there are two articles I am writing about: the USA Today newspaper article, and the research article to which it refers. So, let’s use article for the newspaper and manuscript for the research article. Because why not?) about Real Science™ which — color me astonished — takes a hypothesis-generating study and presents the hypotheses it generated as the final results.

To its credit, the article starts off with a link to the manuscript and the name of the journal where it was published, which is eBioMedicine, part of the proliferating Lancet family, impact factor 11.1 (although, you know what they say about impact factors). Good! They also invited an independent researcher to comment. And I am sure that his comments were similar to mine, although of course most of what he said (or more likely wrote in an email) didn’t make it. What ended up on the page were two blurbs about precise vaccination from the director of a Precision Vaccines program. Gasp.

But these are all side attractions. The biggest problem is this: scientists want to compare people who had a two-dose vaccine shot in the same arm to those who had it in different arms; in the manuscript, these were called the ipsilateral and contralateral groups. They aren’t randomizing people to one versus the other (what they describe as randomization isn’t really so, but that’s a rabbit hole we’d better not get into), but with these being generally healthy people, and with the participants not having a choice as to where they will get a vaccine, that is not too much of an issue. Then they ask them some questions about vaccine side effects and draw some blood. The questions are about side effects and the blood is to check for “the strength of the immune response”.

Note that they don’t say at the outset that the groups would be different, and how. Would the opposite arm have fewer side effects? Better immune response? If so, in what way? More antibody? Stronger antibody? A different subtype of antibody? Better or worse cellular immunity? Which cell (among dozens)? More cells, stronger cells, or different cells? Or maybe the same side would be better?

The beauty of hypothesis-generating research (for the researcher) is that it doesn’t matter. Whatever you get, you will get it published, sometimes in a double-digit impact factor publication. I’ve sat in on many a lab meeting where things like this were proposed and always, always, the comment is that “the results will be interesting whatever they are”. And they are right! But you will not know — cannot know — whether the results you got are based on an underlying physiology, or occurred purely by chance. That is where confirmatory studies come in.
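The “purely by chance” point is easy to show with a toy simulation (all numbers here are my own hypothetical choices: 20 immune markers, 30 people per group, |t| > 2 as the “significance” bar). Even when the two groups are identical by construction, a majority of simulated studies turn up at least one “finding”:

```python
import random
import statistics

def t_stat(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

random.seed(1)
n_sims, n_endpoints, n_per_group = 1000, 20, 30
hits = 0
for _ in range(n_sims):
    # Both "arms" are drawn from the SAME distribution: any difference is noise.
    if any(abs(t_stat([random.gauss(0, 1) for _ in range(n_per_group)],
                      [random.gauss(0, 1) for _ in range(n_per_group)])) > 2.0
           for _ in range(n_endpoints)):
        hits += 1
print(f"Simulated studies with >=1 'significant' endpoint: {hits / n_sims:.0%}")
```

With 20 independent endpoints and a 5% false-positive rate per endpoint, roughly 1 − 0.95²⁰ ≈ 64% of these null studies produce something “interesting” to write up.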

Neither the manuscript nor the article recognizes this. Among the many things they looked at, the researchers found two things that were different between the two groups: those who had the vaccine in the same arm had “more” of a certain type of immune cell than the other, and the opposite-arm group had increased expression of a certain marker on yet another type of immune cell. “More” is in quotes because even that is more subjective than it appears — another rabbit hole — but even if true in this sample, it is at best a hypothesis that should lead to another, possibly smaller study, where you focus on these cells, with different operators counting them, and doing additional hypothesis-generating analyses on the side to figure out the why of it, which would lead into yet another confirmatory study… You get the idea.

This is not what the manuscript authors propose. Instead they take their result at face value and concoct a mechanism out of thin air that would explain the result. The journalist then takes the mechanism and presents it as the main research result, the Here’s why of that clickbait headline. There is a high bar for calling anything in science conclusive and the article does have the usual disclaimer that “more research and data is needed”. But the phrase has been repeated so much that it has lost all meaning, something you say to mark yourself as a “believer in science” while with a wink and a nudge you act as if the results were indisputable.

Fortunately, science is a strong-link problem: those who know what they are doing will adjust their beliefs accordingly, and down the line confirm or falsify these preliminary findings. Unfortunately, science doesn’t operate in a vacuum. If its coverage of science is any indication, journalism, the fourth estate, is in a hole and digging deeper, taking others down with it.


Not a day after his EconTalk episode, Adam Mastroianni wrote a most delightful essay about why people just can’t get each other: “Sorry, pal, this woo is irreducible”.

Well, most of it is delightful. The fifth paragraph is absolutely horrifying (you have been warned).


Nassim Taleb has updated his essay against IQ, and I don’t know if Figure 1 there is new or I haven’t been paying attention before, but it is a true eye-opener. It shows how meaningless correlation is in the absence of symmetry, and medicine is full of asymmetries. I shudder to think how much medical literature consists entirely of physicians-cum-naïve statisticians poring over medical charts gathering data to calculate such correlations. Counting the official and semi-official guidelines based on such flawed papers would be a nice side project.
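A toy illustration of the asymmetry point (my own sketch, not Taleb’s actual figure): make the outcome track the score only on the deficit side and be pure noise elsewhere. The headline correlation looks respectable, yet among high scorers it is essentially zero:

```python
import random
import statistics

def corr(u, v):
    """Pearson correlation, stdlib only."""
    mu, mv = statistics.mean(u), statistics.mean(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)
    return cov / (statistics.pstdev(u) * statistics.pstdev(v))

random.seed(7)
x = [random.gauss(0, 1) for _ in range(100_000)]
# Asymmetric world: a deficit drags the outcome down,
# but a surplus adds nothing beyond noise.
y = [xi + random.gauss(0, 0.5) if xi < 0 else random.gauss(0, 0.5) for xi in x]

r_all = corr(x, y)
hi = [(a, b) for a, b in zip(x, y) if a > 0]
r_hi = corr([a for a, _ in hi], [b for _, b in hi])
print(f"overall r: {r_all:.2f}, r among high scorers: {r_hi:.2f}")
```

The single overall r (about 0.65 under these assumptions) is exactly the kind of number a chart-mining correlation study would report, and exactly the kind of number that says nothing about the half of the population you might actually be making decisions about.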


"Freakonomics and global warming: What happens to a team of 'rogues' when there is no longer a stable center to push against?"

Andrew Gelman writes, under a typographical nightmare of a headline:

Back in the day, Steven Levitt was a “rogue economist,” a genial rebel who held a mix of political opinions (for example, in 2008 thinking Obama would be “the greatest president in history” while pooh-poohing concerns about recession at the time), along with some soft contrarianism (most notoriously claiming that drunk walking was worse than drunk driving, but also various little things like saying that voting in a presidential election is not so smart). Basically, he was positioning himself as being a little more playful and creative than the usual economics professor. A rogue relative to a stable norm.

I wonder how the Freakonomics team feels now, in an era of quasi-academic celebrities such as Dr. Oz and Jordan Peterson, and podcasters like Joe Rogan who push all sorts of conspiracy theories, and not just nutty but hey-why-not ideas such as UFO’s and space aliens but more dangerous positions such as vaccine denial.

Being a contrarian’s all fun and games when you’re defining yourself relative to a reasonable center, maybe not so much when you’re surrounded by crazies.

The parallel in medicine is John Ioannidis, who went from a praised debunker of bad science to a controversial “covid minimizer” without really changing his M.O. So, have their ways always been suspect, have their methods become faulty with the changing circumstances, or are they in the right even now, and it is the hyper-polarized environment to blame for our skewed perspective of their current work? I haven’t a clue.

Or you can look at it through the lens of Venkatesh Rao’s field guide to the new culture wars, which is relevant even without a post-covid revision. The academic iconoclasts like Ioannidis, Levitt, et al. are useful in peacetime to serve as an internal control, strengthening the ranks or whatnot — I am not familiar enough with military culture to pick a better analogy — but become useful fools at best and traitorous collaborators at worst when the opposing sides gain in strength and start an offensive. But then, you still need some contrarians around to keep you in check. So how do you square that circle?

Well, Elon Musk helped when he destroyed Twitter. Criticizing one’s academic elders is a centuries-long tradition, tolerated and at times promoted, provided one didn’t do it out in the open. But saying anything to “civilians” — only there are no civilians in the culture wars, just potential allies and foes — is… was… is unseemly.

For better or worse, the barricades are on their way back, the indiscriminate co-mingling of ideologies is waning, and (a different set of) iconoclasts will again find their groove. So it goes…


An update on room-temperature superconductivity from Derek Lowe:

I am guardedly optimistic at this point. […] This is by far the most believable shot at room-temperature-and-pressure superconductivity the world has seen so far, and the coming days and weeks are going to be extremely damned interesting.

Hurray for interesting times.


It isn’t every day that a podcast goes from my Testing to the Regular playlist, so I have to mark the moment. “Reason is Fun” by Lulie Tanett and David Deutsch is, well, fun and thought-provoking throughout, even if (because?) I often disagree with either or both of the hosts.


This room-temperature superconductor news has the potential to be either really big, or just another footnote in the history of physics, but either way the number of hits I got about it from different sources was interesting:

  • RSS feeds: 3
  • Everything else: 0

RSS wins! Again.


🧟 Beware of the zombie trials:

More than one-quarter of a subset of manuscripts describing randomized clinical trials submitted to the journal Anaesthesia between 2017 and 2020 seemed to be faked or fatally flawed when their raw data could be examined, editor John Carlisle reported. He called these ‘zombies’. But when their raw data could not be obtained, Carlisle could label only 1% as zombies.

Good thing that science is a strong-link problem, because too many of the links are just sawdust and dreams. (via Derek Lowe)


Sometimes, that small print does matter

There is predatory, and then there is predatory:

When Björn Johansson received an email in July 2020 inviting him to speak at an online debate on COVID-19 modeling, he didn’t think twice. “I was interested in the topic and I agreed to participate,” says Johansson, a medical doctor and researcher at the Karolinska Institute. “I thought it was going to be an ordinary academic seminar. It was an easy decision for me.”

All the scientists interviewed by Science say Ferensby’s initial messages never mentioned conference fees. When one speaker, Francesco Piazza, a physicist now at the University of Florence, directly asked Ferensby whether the organizers would request a fee, Ferensby replied, “No, we are talking about science and COVID-19.”

But after the events, the speakers were approached by a conference secretary, who asked them to sign and return a license agreement that would give Villa Europa—named in the document as the conference organizer—permission to publish the webinar recordings. Most of the contracts Science has seen state that the researcher must pay the company €790 “for webinar debate fees and open access publication required for the debate proceedings” plus €2785 “to cover editorial work.” These fees are mentioned in a long clause in the last page of the contract, and are written out in words rather than numbers, without any highlighting.

What an absolute nightmare. Predatory journals at least have the decency to ask you for the money up front.

And let’s take a moment to contemplate the ridiculousness of the current academic publishing and conference model. Note that there is nothing unusual in academic conferences requiring attendance fees from speakers. If you have a scientific abstract accepted for oral or poster presentation at ASCO, let’s say, you will still have to pony up for the registration fee. And publication fees for a legitimate open access journal can be north of $3,000. So how is a judge to know whether the organizer’s claims are legitimate?

The difference, of course, is that the good ones — both journals and conferences — don’t solicit submissions; you have to beg them to take your money. Which only makes the situation more ridiculous, not less.


The definition of cancer, with a few side notes on impact factor

A group of cancer researchers proposes an updated definition of cancer:

While reflecting past insights, current definitions have not kept pace with the understanding that the cancer cell is itself transformed and evolving. We propose a revised definition of cancer: Cancer is a disease of uncontrolled proliferation by transformed cells subject to evolution by natural selection.

I like it!

Side note: the opinion came out in Molecular Cancer Research. It is a journal published by a reputable organization with an impact factor of 5.2. No shame in that, but… many predatory journals now have IFs that are the same or even higher (see also: Goodhart’s law; this is also a good example of how a metric becomes meaningless over time without context, or at least a denominator), so unless the impact factor is mid-to-high double digits, it no longer carries much information on the journal’s credibility or readership.

A side note to the side note: a paper published in a journal listed as predatory is the second-most-cited of any I co-authored: 100 citations and counting. I also think it is a very good paper, although a review article getting that many citations is a sign that too many people are not citing primary literature, which is bad! And my most highly cited paper is also a review! This is embarrassing for me, but speaks even worse of the people doing all that review-citing. But maybe having a journal listed as predatory no longer carries much information on the articles there not being worth a read?