
To increase trust in science, button it up

1

Two academics discuss science communication over BBQ and reach the wholly unoriginal conclusion that for increased trust in science, the American research community needs to:

  • acknowledge uncertainty
  • create meaningful participation
  • increase transparency
  • recognize broader concerns

These seem redundant, as we have been marching towards more openness in science of every kind since at least the early 2000s. Would any scientist be able to say, with a straight face, that their average peer projects more certainty, advocates for more gatekeeping, promotes less transparency, and acknowledges controversy less now than in the 1950s?

A less charitable observer could, in fact, blame this newfound openness for the collapse in trust. On one hand you see scientists fighting for clout on social networks, calling each other names, and blowing up small arguments — on the level of angels dancing on the head of a pin — into debates of the century. On the other, anyone and everyone, homeschooled child geniuses and crackpots alike, now has open access to much of the specialized scientific literature, and to preprint servers for some samizdat science.

So maybe it is time to own it: yes, opening the kimono has led to decreased trust in the establishment. But was that not the widely understood part of the bargain? I imagine Paul Feyerabend would have been proud of these recent developments.

2

How did the fellows above come up with the idea that more of the same would help shore up trust? Being academics, they have a reference — to Sheila Jasanoff, whose work on “civic epistemology” is described thusly:

Jasanoff’s research identifies distinctive features of how Americans evaluate scientific claims:

Public Challenge: Americans tend to trust knowledge that has withstood open debate and questioning. This reflects legal traditions where competing arguments help reveal the truth.

Community Voice: There’s a strong expectation that affected groups should participate in discussions about scientific evidence that impacts them, particularly in policy contexts.

Open Access: Citizens expect transparency in how conclusions are reached, including access to underlying data and reasoning processes.

Multiple Perspectives: Rather than relying on single authoritative sources, Americans prefer hearing from various independent institutions and experts.

But of course this is hopelessly outdated, if it were ever true to begin with. Jasanoff herself cautions, in the chapter of her book “Designs on Nature” where she describes the concept, that the framework offers conceptual clarity at enormous risk of reductionism, as it does not account for differences across social strata, through time, and so on. The book is from 2005 and the research it is based on is even older. The “Americans” described above no longer exist.

Jasanoff’s civic epistemologies were tied to countries. In the last twenty years these countries have lost ground as unifying social forces to a variety of cultures and subcultures. Her description of 2005 America may today apply better to the upper middle class across a subset of countries than to any single nation. Within each country, the different epistemologies are becoming more and more opposed. How could we possibly trust each other?

3

There may be no way to return trust in science to 20th-century levels. But if we were to try, the most obvious method would be a return to gatekeeping. Leave the science to the scientists and let the outcomes speak for themselves. Keep all discord inside conference halls and university cafeterias. Show more decorum and respect, if grudging, to every scientist colleague while being more discriminating about who counts as “a scientist”: PhDs from recognized universities only, please.

This would, of course, be a step back and I in no way, shape or form condone a turn of events quite like this — least of all because it would exclude me from the conversation.

4

Is there a way to stick to the “open science” principles while keeping some modicum of community trust? As a fan of Costco, I find their sort of low but effective barrier to entry appealing. For the uninitiated: Costco charges a modest annual membership fee ($65, or $130 for their “executive” tier) for the privilege of shopping for premium and premium-mediocre products at incredibly discounted prices. Their only profit comes from the membership, as there is little to no margin on sales. But then they also don’t need to spend money on things like advertising, keeping the shelves pretty, or monitoring for shoplifters.

The space between paying $65 per year and earning a PhD is vast. Whatever the new gate is, it should probably not be degree-based. Maybe have it be a professional society that also takes in interested laypeople using its own criteria. Or a verified subscription to Experimental History. Whatever it is, make it official, make it public, and make it stick. Then keep most of the conversation inside the circle. Keep all ambiguity inside the tower, please; just make the tower entrance bigger and charge for entry.

5

Is this the way? I am not sure. Maybe science doesn’t deserve the public’s trust, and attempts to increase it are like plugging tiny holes in a massive dam about to burst. But to those who care, let this be some food for thought.


Labor day links, and there are many of them

Happy grilling!


If you constantly cry corruption, could it be because you yourself are corrupt?

There are few pieces of advice as misguided as the one to “follow the science”. The most recent example of why comes from Tim Nguyen, who describes a remarkable set of physics grifters (while giving a shout-out to the podcasting grifter Lex Friedman, but I won’t hold that against him):

We thus have a disturbing truth. Eric Weinstein, the man who waxes poetic about a Distributed Idea Suppression Complex, is a hypocrite willing to use his own influence to squash criticism. Weinstein’s grievances and tale of persecution are frequently invoked to serve his narrative, yet when he receives opposition, he is willing to use his own power to suppress others.

In that way Weinstein seems remarkably similar to a certain other grifter who — setting everything he can control in his own favor — sees everything not in his favor as rigged. This very phenomenon was discussed recently on the Dithering podcast, but was recognized a long long time ago.


Andrew Gelman writes:

One reason why these celebrity scientists have such great stories to tell is that they’re not bound by the rules of evidence. Unlike you or me, they’re willing to make strong scientific claims that aren’t backed up by data.

So it’s not just that Sapolsky and Langer are compelling figures with great stories who just happen to be sloppy with the evidence. It’s more that they are compelling figures with great stories in large part because they are willing to be sloppy with the evidence.

An under-appreciated fact which reminded me of this old post of mine.


Out today in Annals of Clinical and Translational Neurology: Durability of Response to B-Cell Maturation Antigen-Directed mRNA Cell Therapy in Myasthenia Gravis. It only took 18 months to get here from the pre-print but hey, we were able to get longer follow-up!


A wonderful example of why you should always check the primary sources from Andrew Gelman: When fiction is presented as real: The case of the burly boatmen. Caveat lector. Yes this applies to peer-reviewed literature as well. (ᔥAndrew Gelman, who self-cited)


For your weekend reading pleasure

Happy Friday, etc.


A brief note on AI peer review, education and bullshit

When I wrote about formalizing AI “peer” review I meant it as a tongue-in-cheek comment on the shoddy human peer review we are getting anyway. As Nassim Taleb put it in Fooled by Randomness: “Wittgenstein’s ruler: Unless you have confidence in the ruler’s reliability, if you use a ruler to measure a table you may also be using the table to measure the ruler. The less you trust a ruler’s reliability (in probability called the prior), the more information you are getting about the ruler and the less about the table.” Peer reviewers are the ruler, the articles are the table, and there is zero trust in the ruler’s reliability. It was also (1) a bet that the median AI review would soon be better than the median human review (and remember, the median journal article is not submitted to Nature or Cell but to a journal that’s teetering on being predatory), and (2) a prediction that the median journal is already getting “peer” reviews mostly or totally “written” by LLMs.
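To make Taleb’s ruler concrete, here is a toy Bayesian sketch, with all numbers invented for illustration: a paper plays the table, a reviewer plays the ruler, and we update both beliefs after observing a single “reject”.

```python
def posteriors(trust, p_good=0.8, accuracy=0.9):
    """Posterior beliefs after a reviewer (the ruler) reports "reject"
    on a paper (the table). All parameters are made-up illustrations.

    trust    -- prior probability the reviewer is reliable
    p_good   -- prior probability the paper is good
    accuracy -- a reliable reviewer's chance of judging correctly;
                an unreliable reviewer is a coin flip (0.5)
    """
    # P(reject | reviewer is reliable): wrong on a good paper,
    # or right on a bad one
    p_reject_reliable = p_good * (1 - accuracy) + (1 - p_good) * accuracy
    # Total probability of observing a "reject"
    p_reject = trust * p_reject_reliable + (1 - trust) * 0.5

    # Posterior that the paper is good, given the "reject"
    joint_good = p_good * (trust * (1 - accuracy) + (1 - trust) * 0.5)
    post_good = joint_good / p_reject

    # Posterior that the reviewer is reliable, given the "reject"
    post_reliable = trust * p_reject_reliable / p_reject
    return post_good, post_reliable

for trust in (0.9, 0.3):
    post_good, post_reliable = posteriors(trust)
    print(f"trust={trust}: P(paper good) 0.80 -> {post_good:.2f}, "
          f"P(reviewer reliable) {trust} -> {post_reliable:.2f}")
```

With high trust in the ruler (0.9), a “reject” drops the paper’s posterior from 0.80 to roughly 0.39; with low trust (0.3), it barely moves, to roughly 0.71 — the measurement told us mostly about the ruler, not the table.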

Things have progressed since January on both of these fronts. In a textbook example of the left hand not knowing what the right hand is doing, some journals are (unintentionally?) steering their reviewers towards using AI while at the same time prohibiting AI from being used. And some unscrupulous authors are using hidden prompts to steer LLM review their way (↬Andrew Gelman). On the other hand, I have just spent around 4 hours reviewing a paper without using any AI help whatsoever, and it was fun. More generally, despite occasionally writing about how useful LLMs can be, my use of ChatGPT has significantly decreased since I fawned over deep research.

Maybe I should be using it more. Doc Searls just wrote about LLM-driven “Education 3.0”, with some help from a sycophantic ChatGPT which framed Education 1.0 as “deeply human, slow, and intimate” (think ancient Greeks, the Socratic method, and the medieval universities), 2.0 as “mechanized, fast, and impersonal” (from the industrial revolution until now), and 3.0 as “fast and personal”. Should I then just let my kids use LLMs whenever, unsupervised, like Neal Stephenson’s Primer (“an interactive book that will adapt as the user grows and learns”)? But then, would I want my kids hanging out with a professional bullshitter? Helen Beetham has a completely contrarian stance — that AI is the opposite of education — and her argument is more salient, at least if we take AI to mean only LLMs. Hope springs eternal that somebody somewhere is developing actual artificial intelligence which could one day lead to such wonderful things as the “Young Lady’s Illustrated Primer”.

Note the emphasis on speed in the framing of Education 3.0. I am less concerned about LLM bullshit outside of education, in a professional setting, since part of becoming a professional is learning how to identify bullshitters in your area of expertise. But bullshit is an obstacle to learning: this is why during medical school in Serbia I opted for reading textbooks in English rather than inept translations into Serbian made by professors with an aptitude for bullshitting around ambiguity. This, I suppose, raises the key question of why we need LLMs in education in the first place, for there is nothing stopping a motivated learner from browsing Wikipedia, reading any number of freely available masterworks online, watching university lectures on YouTube, and interacting with professionals and fellow learners via email, social networks, Reddit, and whatnot. But you need to be motivated either way: to be able to wait and learn without immediate feedback in a world without LLMs, or to be able to wade through the hallucinations and bullshit that LLMs can generate immediately. Education faces a bootstrapping problem here, for how can you recognize LLM hallucinations in a field you yourself are just learning?

The through-line for all this is motivation. If you review papers in order to check a career development box, to get O1 visa/EB1 green card status, and/or get brownie points from a journal I suspect you would see it as a waste of time and take any possible shortcut. But if you review papers because of a sense of duty, for fun, or to satisfy a sadistic streak — perhaps all three! — why would you want to deprive yourself of the work? Education is the same: if you are learning for the sake of learning, why would you want to speed it up? Do you also listen to podcasts and watch YouTube lectures at 2x? Of course, many people are not into scientia gratia scientiae and are doing it to get somewhere or become something, in which case Education 2.0 should be right up their alley, along with the playback speed throttle.


A tale of two graphs

The FT and NYT both have stories about the dollar’s poor start to the year, which sounds alarming. But then the NYT shows this graph to back up the claim and, you know what, it really doesn’t seem all that dramatic. In fact, the very beginning of the year was quite average, as have been the last two months. It is only the period from March until mid-April that saw two unusual slumps, but does that count as the “dollar having its worst start to a year since 1973”, as the NYT put it? It might, depending on your definitions of “worst” and “start”, but it is hardly a foregone conclusion. I know that newspapers need to prepare for the slow news week with the holiday coming up, but come on. “Worst start to a year in more than 50 years” is a bit too dramatic for what the chart shows us.

What kind of data would deserve some drama? Well, again the NYT provides the perfect example with their front-page news on April 2020 US unemployment data. The headline, in much-deserved all-caps, says “U.S. UNEMPLOYMENT IS WORST SINCE DEPRESSION” and has the unemployment bar dip so far below anything in the past 50 years that it falls all the way down to the bottom of the front page. A true extreme value.

As an aside, if you thought you could call either of these “an outlier”, think again. Here is a 12-minute explainer on the difference from Pasquale Cirillo’s Log of Risk podcast, but in short: outliers are impossible values; extreme values are, well, extreme, but still in the realm of the possible. The dollar’s decline this year is neither, but you wouldn’t know it if you just read the headlines.
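The distinction can be seen in a minimal simulation sketch (hypothetical numbers, Python standard library only): a heavy-tailed distribution routinely produces values many “sigmas” out that are nevertheless perfectly possible under the model — extreme values, not outliers.

```python
import math
import random

random.seed(42)

def student_t(df):
    """One draw from a heavy-tailed Student's t distribution,
    built from a normal draw over a chi-square draw."""
    z = random.gauss(0, 1)
    v = sum(random.gauss(0, 1) ** 2 for _ in range(df))  # chi-square, df dof
    return z / math.sqrt(v / df)

draws = [student_t(3) for _ in range(100_000)]

# Extreme values: rare and surprising, but entirely possible
# under the model -- the maximum is typically far beyond 5.
print(max(abs(x) for x in draws))

# An outlier, by contrast, is an impossible value: a negative count,
# a 125% humidity reading, a -300% daily return. No amount of sampling
# from the model above produces "impossible".
```

The design point: before crying “outlier”, ask whether the value is merely far out in a fat tail (expected now and then) or outside what the data-generating process can produce at all.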


Some of the best blog posts are rants, and Andrew Gelman just published one, about reckless disregard for the truth. Here is why he thinks the term “bullshit” does not apply:

In my post, I asked what do you call it when someone is lying but they’re doing it in such a socially-acceptable way that nobody ever calls them on it? Some commenters suggested the term “bullshit,” but that didn’t quite seem right to me, because these people seemed pretty deliberate in their factual misstatements.

I disagree. Whether the bullshitter is deliberate should not matter, and many do indeed BS with a specific goal in mind. In the examples he lists, those goals are inflating the impact of a paper and getting paid for expert testimony in favor of big tobacco. Indeed, dig deep enough and you will find hunger for money and prestige at the root of much bullshit.