One very good use case for ChatGPT is… actually chatting. Who knew?
I was waiting for someone at a restaurant and had 10 minutes to myself — a rarity these days. Instead of scrolling through one social network timeline or another, I opened the ChatGPT iPhone app and learned that the only Leonardo painting in the Western Hemisphere was right at my doorstep. I also learned that there are only 20 paintings by Leonardo and more than 300 by Rembrandt, and a few other tidbits.
Are any of them true? Well, trust but verify, as they say. But why should I trust it any less than a random X account? Or, worse yet, a non-random threadboy — now there’s a useful word I saw just recently — who posts unsourced graphs and opinions-as-facts.
Now sure, X and other social networks have upsides, like bonding with fellow humans over topics of joint interest. But these are for the most part shallow connections, empty calories for our socialite stomachs. A more stable and sustainable equilibrium for most people, certainly for me, could be real-life interactions for socializing and algorithms for tickling the mind without any pretense that we would be buddies. Social networks try to be both but are not great at either, the same way pickup trucks try to be both a car and a truck but are mostly gas-guzzling, parking-spot-hogging behemoths, and at the same time the most popular vehicles in the US. So, a perfect analogy.
From this perspective, ChatGPT’s forgetfulness is an excellent feature. Having it remember prior conversations would bring it a step closer to parasocializing, making it even worse than human social networks. I have no doubt that the feature is coming any month now, if it’s not already here. If and when it arrives will be the point when X/Threads/Bluesky et al. should sound the alarm — or introduce friendly algorithms of their own.
Three or so years ago I started a blog post draft titled “Twitter as a dark forest”. Unsurprisingly, I never finished it — and now I can delete it knowing that someone has formulated the issue better than I ever could have, and earlier. There is even a book out which, yes, I’ve ordered. (↬Waxy.org)
This morning I learned that one of the many plot lines in the British TV show Bodies has been lifted from real life:
The Tichborne case was a legal cause célèbre that captivated Victorian England in the 1860s and 1870s. It concerned the claims by a man sometimes referred to as Thomas Castro or as Arthur Orton, but usually termed “the Claimant”, to be the missing heir to the Tichborne baronetcy.
It goes without saying that the real-life Claimant was not as successful as the fictional one — this is why Wikipedia makes for better reading than most fiction.
“Maybe that’s why young people make success. They don’t know enough. Because when you know enough it’s obvious that every idea that you have is no good.”
This was Richard Feynman per the James Gleick biography, and he was correct! Biomedicine is now in this position, as I wrote yesterday.
Glenn Fleishman has a new book project on Kickstarter, and it is the easiest backing decision I’ve made in years. I mean, just look at that sample chapter. It’s called How Comics Were Made: A Visual History of Printing Cartoons, and here’s hoping it makes it.
The Medical Journal of Record (The New York Times — the link is to a gift article) has an excellent story on Kawasaki disease out today, which reminded me of Balkan endemic nephropathy, another rare disease with an unusual and infectious disease-like distribution. Note that prevalence and distribution are where the similarities end. Kawasaki disease is a vasculitis (inflammation of the blood vessels) that affects children and young adults. Balkan endemic nephropathy caused your kidneys to shrivel up and stop working, and affected the middle-aged and the elderly. It had no known cause back when I was in medical school (which was — gasp — 20 years ago), but it has since been tied to accidental consumption of a certain plant. Well, accidental in the Balkans but intentional in China, where it was used in some traditional medicines and could cause “Chinese herbs nephropathy”, which was like the Balkan version on speed. Note that I am referring to both BEN and CHN in the past tense, but should probably temper my enthusiasm: even though we know what’s causing them and how to prevent them, their prevalence has decreased but is not zero.
The genetic revolution has been great for many aspects of medicine, but it has also made us a bit lazy. The promise of the late 1990s and the early 2000s was that we would find the genetic cause of most diseases, and would then be only a step away from solving them. While we certainly found many genetic disorders, most of them are in the ultra-rare category and tied to newly established diseases that were previously described only as syndromes. The “big” diseases — hypertension, type 2 diabetes mellitus, major depression and the like — are as unknown as ever, their cause described as “multifactorial”, which is code for “we don’t know”. Stomach ulcers were also thought to be multifactorial until we found out they were mostly caused by a bacterium. A similar thing seems to be happening with multiple sclerosis, which appears to be caused by a virus, one that was mostly thought to be an unavoidable nuisance but is now a vaccine target.
But haven’t we already discovered all the big bad bugs? I sincerely doubt it. We have trouble identifying even larger organisms (also a gift link, this time to The Washington Post; I’m on a roll today) — there could be hundreds of disease-causing creatures and substances that we don’t yet know about because we can’t see them, can’t grow them, and/or don’t know where to look. And we are terrifyingly bad at looking for anything but the obvious — there are parts of our own anatomy that we’ve discovered just recently.
So, I know that we will find the cause of Kawasaki disease and can only hope that it will be soon. I also hope that we will find the one main cause of the obesity epidemic; add essential hypertension and psychiatric disorders to the list. Much money has been spent on discovering the genetic factors of these diseases. Now that we know genes play but a small part in most of them, maybe it’s time to reallocate the funds.
After a few weeks of owning an Apple Vision Pro, my use has settled into four categories:
Apple should be giving me a commission.
Among the many consequences of covid-19, the rise and fall of Citizen Science has been one of the more amusing ones to watch. It has fallen from grace significantly since its 2020 peak, when everyone was an expert on cloth versus surgical versus N95 masks and “did their own research” on which one was best for them. As with any progressive idea, it was soon adopted by the other end of the American political spectrum, who “did their own research” on vaccines and genomic integration of mRNA. So it goes.
Another type of science also thrived during covid-19, but unlike the Citizen sort it is now stronger than ever. It began with daily updates on covid-19 incidence and mortality, usually from the same source, when every Twitter user became an expert on data visualization and every media outlet had a “data journalist” job posting. It now continues with a daily stream of (predominantly) opinions presented as hard facts, backed by pretty graphs above and a list of sources below. I speak, of course, of Journalist Science.
Before I go into why — spoiler alert — I don’t think Journalist Science is a net positive, a disclaimer: I like and respect many “data reporter”-type people, the Financial Times’s John Burn-Murdoch probably most of all. The FT has a clear and known bias toward capitalism and markets, which makes it one of the most legible sources of news around; it is in fact the only newspaper I subscribe to. And I’ve linked to Burn-Murdoch’s reports many times, even on this blog. His covid-19 charts, small multiples in particular, were the peak of data visualization and deserved to be included in Edward Tufte’s latest book. So when looking for examples of why Journalist Science is bad, I will use the Financial Times as the prime example: not because they are particularly bad but because they are one of the best newspapers around, and even they don’t get it right.
This article and the reaction to it are what got me to question the concept. It was about the diverging political paths of kids these days: girls becoming ever more liberal, boys turning more and more conservative, in four “developed” countries (South Korea, the US, Germany, and the UK). On the day it came out, two of my friends who don’t know each other thought something was fishy, and both linked to an act of Citizen Science attempting to debunk it. The attempt failed, not through any fault of the citizen doing it but because the original sources were opaque, and the ways in which they were combined were unclear.
Before the pandemic, most data visualization exercises had a single source: a Gallup poll, a think tank’s projection, or just a CIA Factbook piece of miscellany. Covid-19 seems to have eased data journalists into looking at more than one source, combining them into the same graph, correcting for this or that anomaly that is bound to occur in any large data gathering exercise, all in order to perform a feat that is impossible to distinguish from actual science and in fact is proper science by any reasonable definition. This drift from painting pretty pictures to doing research proceeded over months and years, with each change explained in a footnote, and with the public familiar enough with the numbers that any particular graph did not need special introduction.
By the time covid became old news, the drift was complete: data journalists became data scientists in everything but name, with an editor instead of a PI and retweets, likes, and view counts instead of citations. As problematic as academic research is — and look no further than this very blog for a few thoughts on the rot — Journalist Science is even worse. Let me count a few of the ways:
Two things absent from my list of problems are lack of peer review and lack of statistician input: the first because we don’t really need it, the second because we do, but I see no evidence that journalists are any worse at statistics than doctors, molecular biologists, psychologists, or anyone else writing research papers who isn’t an actual statistician.
So how to fix this? A good start would be to publish the actual work of science you did, either as a preprint or as a full-blown academic publication — though that would be overkill. This is the approach taken by Nassim Taleb: if you have something scientific to say, write a paper instead of a Twitter thread. bioRxiv and medRxiv are two well-respected repositories accepting papers of any length, but outlets like the FT produce enough research that they may as well have a preprint server of their own. This would ensure that at least some amount of rigor went into the methodology, and would enable dialogue.
Less likely would be for newspapers to pre-register their studies, or at least name the Journalist Science articles that were considered but turned down. Least likely? Have the data journalists dedicate their considerable knowledge and talent to describing, enlightening, and criticizing the body of scientific work that’s already there, or alternatively dedicate themselves fully to science and call themselves scientists. The uncomfortable middle ground that Journalist Science straddles is contributing greatly to the reputational decline of both of its components.
📺 Beckham (2023) seems to have had full access to David and Victoria. It may have paid for that access by painting too rosy a picture of the couple. But that’s OK! I have new respect for both, realizing how young they were when their family photos were plastered all over the tabloids, and how dedicated David Beckham was to football and family — in that order. Most of all, I respect how dedicated his parents were to his career, which is all the more poignant when you realize, in the last few moments of the last episode, that he could not show the same dedication to his own children.
🍿 Kingsman: The Secret Service (2014) will, I hope, be the worst film I will have seen this year. Words fail to describe how utterly bad everything in it is, from the soulless story through the already-dated CGI to the then-edgy (if you squinted) now-cringe humor. The creator’s name is plastered all over the promos for his follow-up franchise. It serves as an excellent deterrent.