Invention versus discovery, medical treatment edition
Google Scholar alerts are a quick if crude way to stay up to date with the literature. In addition to journal articles and conference abstracts, Google Scholar also indexes U.S. patent applications, and despite the impenetrable legalese something will occasionally turn up that is at least amusing, if not informative.
Today was one such occasion: a patent for a combination of two already-approved drugs to treat toxicity of CAR T-cell therapy, from the group that, admittedly, was the first to give CAR T-cells to humans and the first to treat their side effects.
I may be showing my ignorance of U.S. patent law here, but how is this a thing? These drugs are already commercially available and widely used for exactly this indication. How would they enforce this patent, and how exactly would the patent help with development and commercialization of two drugs that are already on the market?
After reading Steven Johnson’s Where Good Ideas Come From I realized that not everyone makes the distinction between discoveries and inventions, and this may be an example of a discovery masquerading as an invention. (The page on the distinction is the first website that DuckDuckGo returned, and it is serviceable, but I was flabbergasted by the long list of nearly identical websites with domain names all some variant of “difference between”. This is how ChatGPT destroys Google.) Nothing was created — the drugs were already there — the team merely discovered that those two drugs work in a specific indication. If this is deserving of a patent, should every drug combination be patented?
To be clear, I am not a lawyer — caveat lector — but the whole patent system needs an overhaul, and making a clearer distinction between discoveries and inventions should be one of the items on the long list of things that need attention.
A lengthy overview of the implications of ML/AI for biology and drug discovery came out yesterday, and while I appreciate its enthusiasm and breadth, the answer to the question posed in the summary — What if this time is different? — is, sadly, no, probably not.
If you are giving a pre-recorded talk at a “hybrid” scientific conference, you can count on the number of people listening to you being functionally zero. Some may take photos of your slides, your face included.
Among the few Latin phrases I listed yesterday, I’ve somehow managed to miss my favorite: Ars longa, vita brevis.
Comes to mind each time I glance at my bookshelf.
My first Covid-19 paper
The beginning of the year was busy enough for a short commentary I co-authored to come out without my noticing.
Briefly, the US government spent $10 billion procuring the anti-Covid drug Paxlovid after a study confirmed its efficacy in unvaccinated people exposed to the delta strain. It then proceeded to hand it out to everyone, including the vaccinated and boosted during the omicron wave, with no data on whether it is actually needed in that setting. A similar drug, molnupiravir, ended up not having any meaningful effect in those who received the vaccine despite preventing hospitalization and death in the unvaccinated.
Could those $10 billion have been better spent? We believe the answer is yes. For a fraction of the cost, using the same network of local pharmacies as in the Test-to-Treat initiative, the federal government could have randomized the first 100,000–250,000 patients to Paxlovid, molnupiravir, or usual care — an order of magnitude more than PANORAMIC, since many patients in the American health care system would have been lost to follow-up. The study would have taken mere months to accrue and would have provided valuable information on the efficacy of these treatments in the U.S. population. As importantly, it would have provided an important precedent and infrastructure for more federally funded pragmatic randomized controlled trials of agents under EUA or accelerated approval. The precedent set instead was the government’s full support for use of drugs far outside of the tested indication.
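For illustration only — none of this appears in the commentary, and the arm names and block size are my assumptions — a 1:1:1 allocation of the kind described is typically done with permuted-block randomization, which keeps the three arms balanced even if accrual stops early:

```python
import random

# Hypothetical three arms for the pragmatic trial sketched above
ARMS = ["Paxlovid", "molnupiravir", "usual care"]

def permuted_block_randomization(n_patients, block_size=6, seed=2022):
    """Assign patients 1:1:1 to ARMS using shuffled blocks.

    Each block of 6 contains exactly 2 of each arm, so the
    allocation never drifts more than a few patients out of balance.
    """
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_patients:
        block = ARMS * (block_size // len(ARMS))  # 2 copies of each arm
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_patients]

allocation = permuted_block_randomization(100_002)
```

A real trial would of course stratify (by vaccination status, say, given the molnupiravir result above) and conceal the sequence from the pharmacies doing the enrolling; this is just the skeleton of the idea.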
You can read the whole thing here, without a paywall.
Further evidence that T cells are the best cells.
Why are people losing their minds over ChatGPT?
Reporter Holly Else in a news article for Nature:
An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December.
So far so good. Per the preprint, researchers collected 50 real abstracts, 10 each from JAMA, NEJM, BMJ, Lancet, and Nature Medicine, then asked ChatGPT to generate a new abstract from each article’s title and journal name. They ended up with 100 abstracts, half of them AI-generated, which they analyzed using 3 methods: a plagiarism detector; an AI detector (or, to be more precise, the GPT-2 Output Detector; note that ChatGPT is based on GPT-3); and blinded human reviewers (the preferred term nowadays seems to be masked over blinded, but either way you are bound to have funny-slash-disturbing associations pop into your head).
You can click through the link to read the outcomes, but per the pre-print’s own conclusion:
The generated abstracts do not alarm plagiarism-detection models, as the text is generated anew, but can often be detected using AI detection models, and identified by a blinded human reviewer. (Emphasis mine.)
So the “can often be detected” from the preprint itself becomes “often unable to be spotted” in the hands of a crafty human reporter. Gotcha.
Of course, no alarmist article is complete without some color commentary:
“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds.
We have always been, and forever will be, in a situation where everyone — expert or not — has to engage their critical thinking to determine whether data presented are true and important, true but unimportant, true but misinterpreted, fragile, exaggerated, overblown, or just plain fake. AI making it easier for the unscrupulous to do what they would have done anyway does not change the equation by an Earth-shattering amount.
Look, some people can’t handle a blank page but are good at editing, even if it means completely replacing the original text. In the olden days of 6 months ago trainees had no other recourse but to grind their teeth and just get on with it, hoping that at some point in their careers they would have trainees of their own writing those pesky first drafts. ChatGPT seems like a godsend for them. Whether what’s sent to journals for publication or posted on a pre-print server is real, fake, nonsense, or profound still depends on the person doing the submitting.
Some side observations in no particular order:
- I have no issue with the pre-print itself, which I hope and trust will find a good home.
- Why does Nature deem the work important enough to cover in a news article, but not important enough to publish in one of its own journals?
- For an online news article, it is sadly lacking in that great breakthrough from six decades ago, the hyperlink. Even the URL for the pre-print itself is given as an un-clickable footnote. And no mention of the online and freely accessible plagiarism and AI detection tools.
- Nature’s news department is on a roll.
A yearly theme, of sorts
Instead of setting a Yearly Theme right at the outset (a CGP Grey video is where I first heard the term used as a replacement for New Year’s resolutions, but I’m not entirely sure if he’s the originator), I let it crystallize on its own in the first few months of the year. The theme of 2022 was shelter-building — guess where that came from — and as a result we now have a whistle-clean basement ready to serve as a home gym until a nuclear strike annihilates us all.
Odds are that this year’s theme will end up being statistical shenanigans. First, a brief letter to JAMA Internal Medicine we wrote received a confused commentary from a giant of cancer care, showing that even oncology giants are not immune to errors of statistical reasoning. (Finding the error I will leave as an exercise for the reader; I do, however, plan to address it in a follow-up letter. Never pass up an opportunity to increase your publication count!) Soon after that, working on a different — still top-secret — paper got me down a rabbit hole of the many ways we present clinical data. I thought these were lacking in oncology; other fields of medicine showed me that there was room for further deterioration. Not to be so secretive about everything, but clinical data representation in this particular field will also be the subject of a commentary. And yet, the US FDA still thinks statistically illiterate doctors — present company included — are important gatekeepers of diagnostic tests, essentially banning home test kits available in other parts of the world because they are worried people are too innumerate to correctly interpret their own results.
Humans being pattern-recognition machines, I don’t doubt I will continue seeing mathematical malpractice, malfeasance, and just plain stupidity everywhere I look. It is pretty much guaranteed I will inadvertently commit some myself! I hope this yearly theme results in a few papers, at least.
Adam Mastroianni’s Experimental History newsletter has enabled paid subscriptions today, and if there is one science-oriented Substack worth paying for, it’s Adam’s. I’m sold.
I have seen many people sharing links to Maciej Cegłowski’s (excellent!) case against colonizing Mars.
Of course, Werner Herzog said it first, and more succinctly. Good luck with that, indeed.