January 23, 2023

Some phrases in Latin, from profound to trite

The Serbian educational system in the 1990s and early 2000s did not get many things right, but one thing it did was to introduce Latin in high school (the gymnasium, to be more precise, or what is a lycée in France and, I guess, prep school in the US; and while I don't think it has the same negative connotations in Serbia and France as it does in America, lycées and gymnasiums being as public as the other high schools, that may just be cluelessness on my part) and continue it in medical school. In retrospect not nearly enough, but what little of it we had seems to have stuck. I am therefore always surprised when my American colleagues don't have a clue what some, or any, of the below mean.

Some of these have been repeated so often that they are part of popular culture. I would expect gamers and fans of sci-fi to be familiar with Deus ex machina, and connoisseurs of expensive watches should have heard of Festina lente. To be clear, I’ve maybe heard of… 30% of what’s on this Wikipedia list. Looking at it, American lawyers should know more Latin than the doctors, but is that actually the case?

My first Covid-19 paper

The beginning of the year was busy enough for a short commentary I co-authored to come out without my noticing.

Briefly, the US government spent $10 billion procuring the anti-Covid drug Paxlovid after a study confirmed its efficacy in unvaccinated people exposed to the delta strain. It then proceeded to hand it out to everyone, including the vaccinated and boosted during the omicron wave, with no data on whether it is actually needed in that setting. A similar drug, molnupiravir, ended up not having any meaningful effect in those who received the vaccine despite preventing hospitalization and death in the unvaccinated.

Could those $10 billion have been better spent? We believe the answer is: yes. For a fraction of the cost, using the same network of local pharmacies as in the Test-to-Treat initiative, the federal government could have randomized the first 100,000–250,000 patients to Paxlovid, molnupiravir, or usual care (an order of magnitude more than PANORAMIC, since many in the American health care system would have been lost to follow-up). The study would have taken mere months to accrue and would have provided valuable information on the efficacy of these treatments in the U.S. population. As importantly, it would have provided an important precedent and infrastructure for more federally funded pragmatic randomized controlled trials of agents under EUA or accelerated approval. The precedent set instead was for the government’s full support for use of drugs far outside of the tested indication.

You can read the whole thing here, without a paywall.

January 22, 2023

Finding an article about AI in a major news publication that sticks to facts and makes sense has become an event worth celebrating, so here is a recent one by Tatum Hunter of the Washington Post.

Finished reading: 1177 B.C. by Eric H. Cline 📚

Come for the meticulously documented story of the Bronze Age collapse, stay for what preceded it: alliances, feuds and intrigue to rival anything you’d find in Game of Thrones.

January 21, 2023

With Tweetbot and Twitterrific gone, and both the website and the official app insistent on the algorithmic timeline as the default, it is time to say goodbye.

Well, almost. For the few accounts that haven’t yet migrated and still have interesting things to say, there is NetNewsWire.

My position in regards to ChatGPT

Unmodified ChatGPT output, if it were produced by a human, would precisely fit the definition of bullshit (BS) from Harry Frankfurt’s essay: words meant to persuade without regard for truth. We can debate whether an algorithm can have intent or not (I’d say not), so on its own the output would not qualify as BS, but it definitely has no regard for truth, because predictive AIs have no concept of anything other than the probability of one word coming after another.

So, if people are worried about ChatGPT or any other predictive AI replacing them, or passing the Turing test, that is only to the extent that their work was BS anyway and that, as Frankfurt predicted, we are awash with BS and have become desensitized to it, almost expecting it.

With that in mind, I find it amusing that reporting on ChatGPT — some of which I commented on — misses the BS-ness of predictive AIs while itself being BS. Well, amusing and terrifying at the same time.

This is in response to a question.

January 20, 2023

Feeling sad for Twitter app developers and, considering there will be some delay between the fit hitting the shan and it spreading all around, even sadder for the inevitable hardship of anyone who depended on a Twitter audience for their livelihood. Castles out of sand…

January 19, 2023

📚 An unpopular opinion: nonfiction audiobooks are an oxymoron. Those which are better heard than read (see: Gladwell) are entertainment disguised as education, giving only an illusion of understanding.

The very best works of fiction, however, work equally well as either.

Why are people losing their minds over ChatGPT?

Reporter Holly Else in a news article for Nature:

An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December.

So far so good. Per the preprint, researchers collected 50 real abstracts, 10 each from JAMA, NEJM, BMJ, Lancet, and Nature Medicine, then asked ChatGPT to generate a new abstract from each article’s title and journal name. They ended up with 100 abstracts, half of them AI-generated, that they analyzed using 3 methods: a plagiarism detector, an AI detector (or, to be more precise, the GPT-2 Output Detector; note that ChatGPT is based on GPT-3), and blinded human reviewers (the preferred term nowadays seems to be masked over blinded, but either way you are bound to have funny-slash-disturbing associations pop into your head).

You can click through the link to read the outcomes, but per the pre-print’s own conclusion:

The generated abstracts do not alarm plagiarism-detection models, as the text is generated anew [emphasis mine], but can often be detected using AI detection models, and identified by a blinded human reviewer.

So the preprint’s own “can often be detected” becomes “scientists are often unable to spot them” in the hands of a crafty human reporter. Gotcha.

Of course, no alarmist article is complete without some color commentary:

“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds.

We have always been, and forever will be, in a situation where everyone — expert or not — has to engage their critical thinking to determine whether the data presented are true and important, true but unimportant, true but misinterpreted, fragile, exaggerated, overblown, or just plain fake. AI making it easier for the unscrupulous to do what they would have done anyway does not change the equation by an Earth-shattering amount.

Look, some people can’t handle a blank page but are good at editing, even if it means completely replacing the original text. In the olden days of 6 months ago, trainees had no recourse but to grit their teeth and just get on with it, hoping that at some point in their careers they would have trainees of their own writing those pesky first drafts. ChatGPT seems like a godsend for them. Whether what’s sent to journals for publication or posted on a preprint server is real, fake, nonsense or profound still depends on the person doing the submitting.

Some side observations in no particular order:

Further evidence that T cells are the best cells.