Saturday links, finance and economics
- John Burn-Murdoch for the Financial Times: Why birth rates are falling everywhere all at once. This is a gift link but has only 3 uses, so I will reveal the punchline of this beautifully illustrated exploration of data here: “In country after country the birth rate plunged after the introduction of smartphones, no matter what the previous trend was. The younger the age group, the more pronounced the downturn — a mirror image of smartphone usage patterns.” Note that my thoughts on journalist science still apply: caveat lector. But since the article matches my own bias I link to it without hesitation.
- Harry Law for Works in Progress: Why Spain has the world’s greatest cities. Having recently been to Spain, I agree with their assessment that it does indeed have the best cities. Though with 65% of people living in apartments, and not of the luxury kind, I imagine it can get claustrophobic for introverts.
- Scott Lincicome for The Dispatch: GDP Is Good, Actually. Sure is, as long as you remember Goodhart’s law. Otherwise you get into all sorts of moral conundrums, such as whether it is OK to produce and sell stuff that causes cancer because hey, cancer drug research, manufacture and sale will also make GDP go up, amiright?
- Melissa Naschek for Jacobin: Socialism Has a Future. Central Planning Doesn’t. This is an interview with Vivek Chibber, professor of sociology at NYU. I will emphasize the same part Alex Tabarrok did, and for the same reason: “If we’re actually serious about changing the world, people on the Left … should be the most remorseless and the most merciless when it comes to facts.” Being merciless about facts used to be the defining characteristic of the scientific way of thinking, until people started wielding phrases like “settled science” as a linguistic bludgeon.
The altruist bait-and-switch
After dissecting the minutiae of the ongoing battle of the bozos [Note: To save you a click: it is about the Musk-Altman trial. ], Andrew Sharp’s weekly column ends with this paragraph:
The reality is knottier. Had the OpenAI founders not launched with a nonprofit structure in 2015, they probably never recruit the talent required to compete with Google. And had they done anything else other than exactly what they did in 2018 and 2019, all of computing would be less interesting today, and the company probably wouldn’t exist eight years later. Musk’s trial has been clarifying on that point, at least for me.
The AI side of technology is one of those rare occasions where biotech may indeed be like tech: people with the knowledge, skills and ambition to take the early steps toward creating something new generally don’t do it for the money. Accolades, titles, a few more increments on their h-indices, sure, but unless they are seriously delusional, a lab postdoc coming in on a weekend to split the cell culture generally has no hope of getting into the top percentile in income. Up until a few years ago AI research was much like that, until it wasn’t.
Sharp writes that OpenAI had to flip the switch if it were to survive in these shark-infested, Google-patrolled waters once they smelled blood, which is to say profit, which is to say an opportunity to tell a new story to investors. The same can be said about any biotech: become successful enough, and there will come a time when the academic founders are asked to step away and let someone with different motivations run the show, lest they be lost in a sea of copycats, smoke-peddlers and competitive intelligence officers. The whole business has just become too expensive for some Jonas Salk wannabe to dabble in.
A person of bad intent may propose that the adults coming to run the show once it becomes too expensive are the ones making it expensive in the first place to justify their existence, contributing to the health care cost ouroboros along the way. But that is of course nonsense. The proof is in the pudding, what with the famously efficient drug development pipelines, low health care costs and ever-improving lifespans.
So let’s do what a genuine financial scion once proposed: invert. Instead of asking ourselves how to make drug development more efficient and cost-effective, let’s see how we could make it more expensive. The number one thing to do would be making it all about the money: let’s portray people who don’t capitalize on their inventions as losers, not heroes, make Nobel Prize winners notable only if they are billionaires (who won the Nobel Prize in Physiology or Medicine last year, again?), measure the success of drugs in dollars earned, not lives improved, extended or saved, have everyone skim a percent or five of the money swishing around in the ecosystem as their primary source of income without any penalty for ultimate failure [Note: For more on this, do read Nassim Taleb’s Skin in the Game, which is about much more than the titular phrase, which has become — much like his The Black Swan — a phrase people throw around without having any idea of the underlying concepts. ] guaranteeing that they will have every incentive possible to grow the pie, and I think you see where this is going, because the system functions as designed, so why should you complain? After all, there is no alternative.
Except that, of course, there is. It would be a big lift: removing the skimmers’ incentives to inflate the balloon, stopping various influencer platforms from inducing FOMO in anyone and everyone, recalibrating the median science journalist’s value system from Mr. Market to something more reality-based. Big, but not impossible, provided there is a will.
Therein lies the problem: that kind of thinking is somewhat at odds with the shared American culture, at least as recently described by Chris Arnade, that “you can live how you want, eat what you want, live (up to a point) how you want at a thin level, as long as you ultimately believe in making big money through hard work and playing by the rules.” Determining if the other two legs of the three-legged money/work/rules American stool are performing as intended I will leave as an exercise for the reader.
Phrase of the day: "positional ambition"
Dave Winer posted an important piece of text yesterday under the title Transcript of AOC’s answer. This is the American politician and congresswoman from New York Alexandria Ocasio-Cortez’s response to an interviewer’s question of whether she would run for president in 2028. [Note: Not yet being a US citizen I will refrain from commenting on her politics. Though, provided the federal government is still functioning, 2028 may be the year I actually get to vote! ] It is short and to the point and you should read or listen to the whole thing, but here is the meat of it:
So the elite think: if you want this job, you just stepped out of line. And we want you to know where the real power is. And it’s in the modern-day barons who own the Post and own the algorithms. And we’re gonna — we’ll make an example out of you.
And what’s funny about that is that they assume that my ambition is positional. They assume that my ambition is a title or a seat. But my ambition is way bigger than that. My ambition is to change this country.
“Positional ambition” is the perfect way to describe much of the American — and indeed the world’s — malaise. Many heads of various institutions, from state to corporate, are there because they imagined themselves at some point sitting in the chair, or being in the room, or having some letters next to their name, without much thought of what they would do once they reached the position except whatever it took to keep it. In fact, I can think of only a single US president in living memory whose ambition wasn’t primarily positional — and he was kicked out after 4 years in a landslide. But of course that is by design: the system is made to produce the exact results that it does (see also: the American business).
So that is an important lesson for any young person: think in terms of actions, not positions. It is a spectrum, sure, and you cannot completely separate what you want to do from what it would take to do it and how to get there, but you shouldn’t dream about having a rock star lifestyle unless you also want to make music. And if we dialed down our collective positional ambition I suspect there wouldn’t be as many aspiring influencers around, most “influencers” being all about the position without even a pretense of substance.
This week in hubris
What possessed me to type x.com into the address bar I can tell you not, but there I was, staring for the first time in weeks at the “For you” tab. And there it was, in all capital letters: “THIS IS HOW WE CURE PANCREATIC CANCER”, staring back.
That was the X-crement of one Derek Thompson, writer for The Atlantic, podcaster, abundance enthusiast. It was promoting his most recent blog post which, being on Substack rather than X, had a more subdued title: “How AI Could Help Cure Pancreatic Cancer”. It is, supposedly, an interview with a co-author of a paper with an even less boastful name: “Next-generation AI for visually occult pancreatic cancer detection in a low-prevalence setting with longitudinal stability and multi-institutional generalisability”. Most of the interview, however, is behind a paywall which I shall not climb.
Above the fold is Thompson’s exuberant, hyperoptimistic speculation. He approaches the problem from the perspective of three recent developments (one from the paper above, the other two previously discussed) and presents the areas they are “solving” (targeting KRAS mutations, pancreatic cancer’s immune evasiveness, difficulties with early detection) as the sole reasons why the disease is so difficult to treat.
But that is disingenuous. There are so many more reasons why it is hard: the uniquely hostile, acidic, high-pressure environment of the tumor, which makes drug delivery nigh-impossible. Its propensity to metastasize — spread to distant organs — no matter the size of the original tumor. The biochemical storm it stirs up in the body, leading to rapid weight loss, blood clots and horrendous pain that are distinct even among cancers. Why not highlight those three as the “3 broad reasons why pancreatic cancer is so hard to treat”, to use Thompson’s terminology? Well, no recent high-profile studies for those, are there?
I understand that he has some personal reasons to be interested in pancreatic cancer, and I am sure it is coming from the best of intentions, but please.
This letter to Ted Turner from his dad on the choice of college major could be the best thing you will read today. Horribly misguided and against everything I stand for, but oh how much fun. This is how it starts:
My dear son,
I am appalled, even horrified, that you have adopted Classics as a Major. As a matter of fact, I almost puked on the way home today.
And it gets better! (ᔥNY Times Pitchbot on Bluesky)
Topic of the essay aside — and it’s a good one — NYT editors didn’t use to let this kind of grammatical malfeasance fly.
Behind every human success story lies a billionaire with a heart of gold
I tend to avoid podcasts in the style of Joe Rogan, those that begin with a 15-minute-long ad block selling mushroom supplements followed by hours of meandering conversation between two people who may or may not be under the influence. Who in the world has the time?
So for that reason I avoided the podcast of one Dwarkesh Patel even as I occasionally linked to an article of his. I filed him mentally in the same “Avoid!” bucket as Lex Fridman — probably unfairly, as no one in the world can be as big of a mental bore as Fridman — without giving his podcast a chance. Although, judging by his writing on AI, I would not have liked the tone even if I had heard it. I remember, in fact, resisting the temptation to pan some of his more outlandish texts prophesying the rise of our LLM overlords with a tone which was as matter-of-fact as it was uncaring about human culture and society. My headphones are a direct link to my brain and I did not want that kind of world view to influence it.
Well a whole bunch of people are about to get influenc’d, because the New York Times has just published a glowing profile of Patel and his podcast, framing the show as a way to “eavesdrop on the A.I. elite” while burying an important fact — the one that kept me from listening in the first place — in the fourth-to-last paragraph:
Mr. Patel doesn’t see himself as a journalist, and he will do things that news organizations’ ethics rules generally prohibit, such as signing onto an amicus brief on behalf of Anthropic in its recent lawsuit against the Department of Defense, and angel-investing in companies whose founders he has interviewed (he disclosed the stakes). He believes in a “glorious transhumanist future,” and his tone isn’t adversarial. But his admirers say that his technical fluency and extensive preparation enable him to follow up or push back on superficial answers that most interviewers would simply accept. The Jensen Huang episode became heated as Mr. Patel repeatedly challenged the world’s most valuable company’s chief executive on the national-security implications of selling chips to China. “If I do cover a topic,” Mr. Patel says. “I think my reputation would suffer a lot if I don’t ask tough questions or don’t do it in a deep way.”
Of course, praising this kind of pushback on a transhumanist podcast is like praising the host of “The Ultimate Potato Chip Podcast” for pushing back against Frito-Lay’s most recent price hike: it goes without saying that you like junk food.
But it was not this small bit of confirmation bias which made me link to the NYT. Rather, it was the same revelation that piqued Tyler Cowen’s interest, if for a different reason. Rather than paste the whole excerpt, let me provide a (human) summary: bored during the covid pandemic, a 19-year-old Patel asks the libertarian George Mason economist Bryan Caplan to be a guest on his brand-new podcast; Caplan agrees. They continue the exchange, online and in person, while Caplan is spending months in Austin, TX at the home of his billionaire friend Steve Kuhn. [Note: This wasn’t the only good billionaire-themed article in the NYT. For more reasons why Americans should probably do a bit more to clip their wings see the travails of one Sergey Brin and the series of hardships he endured that pushed him to the right. ] Kuhn also meets Patel and, liking the cut of his jib, offers to invest in return for equity. So do other people in the Caplan-Kuhn circle which inevitably expands all the way to your friendly neighborhood founder of Amazon. Cue NYT’s signature glazing.
Crikey. Fans of C.S. Lewis should recognize immediately the themes he raised in The Inner Ring, The Abolition of Man and That Hideous Strength, essays and books which were most likely not on Patel’s reading list during his formative years. One can only wonder whether his belief in “the glorious transhumanist future” came before or after the Silicon Valley billionaires made landfall in his young mind.
Yes there has been a breakthrough in treatment of pancreatic cancer and no AI was not instrumental in its development (as far as we know)
Apart from looking like he has just been on the losing end of a fistfight, and having occasional bouts of nausea, Ben Sasse seems to be doing as well as someone recently diagnosed with metastatic pancreatic cancer possibly could. Both the nausea and his face peeling off are because of daraxonrasib, a new drug which targets the KRAS G12 mutations that are common in many cancers and found in most pancreatic ductal adenocarcinomas (PDAC). As a reminder, PDAC is the one that Steve Jobs did not have, the one that has the dubious distinction of being both the most common and the most lethal cancer of the pancreas.
Well, daraxonrasib seems to be doing its job and doing it well, based on a company press release. Remember, most press releases should not count as evidence for anything. This particular one, however, is worth reading because it is (1) for a randomized controlled trial with (2) a “hard” endpoint of overall survival [Note: OK, putting my pedant hat on, the pre-specified co-primary endpoints are progression-free survival (PFS) and overall survival (OS) in the RAS G12-mutant population. What is reported in the press release is only OS in the “intent-to-treat” population, which is to say both the G12-mutant and wild-type populations, which was a secondary endpoint. A bullet point at the beginning says that all primary and key secondary endpoints were met, so why not report both? Probably because one looked better than the other, but would it not be a tad suspicious if the less targeted population did better than the more targeted one? This is just speculation; let’s review the actual data once they come out. ] which will (3) be presented at the ASCO annual meeting, I imagine as a plenary talk, in early June of this year. The thing to look for there will be informative censoring, in particular early censoring of frail participants — the ones more likely to die early of their disease — who were randomized to receive daraxonrasib but then withdrew due to the “manageable” toxicity of a melting face. That the release reports no participant numbers at all makes me suspicious, though the total enrollment is readily available elsewhere: 501. That’s a lot of patients!
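If informative censoring sounds abstract, here is a toy Kaplan-Meier calculation showing why it matters. The numbers are entirely made up and have nothing to do with the actual trial; the point is only the mechanism: if the frailest participants are censored early instead of being followed to their deaths, the estimated survival curve drifts upward.

```python
def km_survival(data, t_query):
    """Kaplan-Meier estimate of S(t_query).
    data: (time, observed) pairs; observed=True marks a death,
    observed=False a censored (e.g. withdrawn) participant."""
    n_at_risk = len(data)
    s = 1.0
    for t in sorted({time for time, _ in data}):
        if t > t_query:
            break
        deaths = sum(1 for time, event in data if time == t and event)
        leaving = sum(1 for time, _ in data if time == t)
        if deaths:
            s *= 1 - deaths / n_at_risk
        n_at_risk -= leaving  # both the dead and the censored leave the risk set
    return s

# Ten hypothetical participants whose true survival times are 1..10 months.
truth = [(t, True) for t in range(1, 11)]
# Informative censoring: the two frailest withdraw at 0.5 months because of
# toxicity, so their early deaths never register as events.
biased = [(0.5, False), (0.5, False)] + [(t, True) for t in range(3, 11)]

print(round(km_survival(truth, 5), 3))   # 0.5   — true probability of surviving past 5 months
print(round(km_survival(biased, 5), 3))  # 0.625 — the curve looks rosier than reality
```

Kaplan-Meier handles censoring honestly only when dropping out is unrelated to prognosis; when the dropouts are precisely the sickest patients, the estimate is biased upward, which is why the participant flow diagram at ASCO will be worth a close look.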
The company is certainly feeling optimistic: they have already received a National Priority Voucher from the US FDA and will now submit a New Drug Application. Kudos and congrats for designing and testing a working drug without using AI because, to read both professional and lay media these past two years, you would think it a miracle that any drugs were discovered at all before Large Language Models came along.
Yes, I had to invoke AI, because it is becoming exceedingly common for people to give algorithms credit where it is not due. This is what Tyler Cowen wrote yesterday about pancreatic cancer research:
AI and the pancreatic vaccine. More testing is needed, but there is a reasonable chance that we have a good treatment for pancreatic cancer, and AI was instrumental in that. It is mRNA as well, so a double burn on the haters.
The link is to a post on X by one Rotimi Adeoye, a “contributing opinion writer @nytimes” (one guest essay as of today which is one more than I have so congratulations, I guess?) who in true X fashion superimposed a screenshot from an uncredited journal abstract over someone posting a link to an NBC news article about the updated results of a phase 1 trial of an mRNA vaccine for pancreatic cancer. [Note: For those not keeping track, you are right now reading a blog post about a blog post about a retweet of a tweet about a news article based on a press release. You’re welcome. ] These were presented yesterday at the annual meeting of the American Association for Cancer Research but were hinted at in a press release (?) from Memorial Sloan Kettering, where the vaccine — generic name autogene cevumeran which rolls right off the tongue doesn’t it? — was being tested.
Remember how a few paragraphs above I implied that you should ignore most press releases? Well, news on academic websites should rank even lower, as no one there has to answer to the SEC. The primary study was great for what it was: a first-in-human trial with laboratory endpoints meant to test whether the participants’ immune systems responded at all to the vaccine. And it seems that they did, as shown in not one but two papers in Nature published two years apart. The number of original participants, all of whom had early-stage, freshly resected and otherwise untreated PDAC upon enrollment, was 19. Three of these did not make it to the vaccine, as they had progression, died, or had toxicity from adjuvant chemotherapy before being dosed. Chemotherapy? Yes, in addition to the vaccine everyone also received “adjuvant” (meaning: there to “clean up” any residual cancer after surgery) chemotherapy (FOLFIRINOX, not for the faint of heart) and immunotherapy (atezolizumab, a walk in the park compared to the chemo, but even it has side effects). There was no control group.
Of the 16 remaining participants, 8 were “responders” to the vaccine as measured by some highly sophisticated laboratory tests — not that the patients would care what their blood work showed — and in 7 of those the cancer hadn’t come back at 3 years, as noted in the follow-up Nature paper, or at 4-6 years, as noted in yesterday’s update. This compares to 2 of the 8 “non-responders”.
If you don’t have your calculator handy, let me do the math for you: 9 of 16 patients, or 56.25%, with newly resected PDAC who received chemotherapy, immunotherapy and the vaccine were still alive more than 3 years after treatment. You may not know this, and I didn’t until I looked it up just now as it has been a while since I last treated patients with newly diagnosed early-stage pancreatic cancer, but the median OS after (modified) FOLFIRINOX alone in a recent large, randomized Phase 3 trial was 53.5 months, with 43.2% of patients still alive at 5 or more years. Did the addition of atezolizumab and the vaccine change anything? I can’t tell, and neither can anyone else until there is a randomized controlled trial. That isn’t meant to throw shade on the investigators — kudos to them as well for a successful first-in-human study — but let’s curb our enthusiasm.
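Since we are doing arithmetic anyway, here is a back-of-the-envelope check, mine and not anything the investigators reported, that confirms the headline number and shows just how thin a 16-patient comparison is. The one-sided Fisher exact p-value is computed from the hypergeometric distribution by hand, no statistics library needed:

```python
from math import comb

# Counts from the trial update (recurrence-free at last follow-up).
responders, nonresponders = 8, 8
rf_resp, rf_nonresp = 7, 2
total = responders + nonresponders
rf_total = rf_resp + rf_nonresp

print(f"{rf_total}/{total} = {rf_total / total:.2%}")  # 9/16 = 56.25%

# One-sided Fisher exact p-value for the 7/8 vs 2/8 split: probability of a
# table at least this extreme, given fixed margins (hypergeometric tail).
p = sum(comb(responders, k) * comb(nonresponders, rf_total - k)
        for k in range(rf_resp, min(responders, rf_total) + 1)) / comb(total, rf_total)
print(f"one-sided p = {p:.3f}")
```

A nominally small p-value from an uncontrolled, post-hoc split of 16 patients is exactly the kind of number that tends to evaporate in a randomized trial, which is the whole point.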
So we have some updated results from a tiny trial that didn’t really move the needle one way or another, and yet Cowen et al. feel the need to push AI into the narrative. To be clear, there is absolutely no mention of LLMs, machine learning, algorithms or artificial intelligence of any kind anywhere in the autogene cevumeran literature. Granted, it is a “personalized” vaccine, meaning that every potential participant had their tumor sequenced and up to 20 vaccine targets identified among the newly mutated proteins. I am sure there was a lot of computation involved. But not every sophisticated computer analysis is AI, let alone an LLM, so I truly don’t see how they could legitimately be brought into the conversation.
And in case you were wondering, no, the screenshotted abstract did not in fact back up Adeoye’s claim. Best as I can tell this was the paper in question: a speculative review article in an obscure journal, written by a Shanghai-affiliated group of authors who had nothing to do with BioNTech, whose whole purpose was to serve as a never-looked-at reference for a false claim, that “AI played a critical role in advancing the vaccine”. Anything for the clicks, am I right?
Adeoye’s behavior was regrettable but Cowen’s is detestable, especially when paired with his look-at-the-sheeple attitude towards humans. [Note: The article Cowen links to is particularly wrongheaded if you realize who the Luddites really were, and that the label should in fact be a positive one. ] Cory Doctorow had warned about AI companies over-promising their capabilities for short-term gain. But they don’t really need to: there are plenty of useful fools willing to make promises on their behalf, giving AI credit even where none is due.
Infinite Regress is not a professional outlet yet even this here half-brained bozo knows better than to mix fonts in a single word. They should have shaved off the “ć” to a “c” if their house headline typeface didn’t have the right diacritic, but then why doesn’t it?
And hey, congrats to Denver!
The shameless style in American business
Cory Doctorow wrote this morning about a short-lived business venture of his from the late 1990s that, during a brainstorming session, invented SEO slop years before either of those two terms became widely known. That train of thought didn’t go anywhere — they weren’t sociopaths — but it made him realize an important life fact:
The point of this is that there were lots of people back then who had the capacity to imagine the kind of gross stuff that Zuckerberg, Musk, and innumerable other scammers, hustlers and creeps got up to on the web. The thing that distinguished these monsters wasn’t their genius – it was their callousness. When we brainstormed ways to break the internet, we felt scared and were inspired to try to save it. When they brainstormed ways to break the internet, they created pitch-decks.
Apple is another clear example. The book Apple in China opened my eyes to the ruthlessness with which its operations team worked throughout the company’s history. Small wonder, then, that elevating the Chief Operating Officer to the CEO role would lead the company’s valuation to skyrocket and its culture to decay so much that it got an introverted nerd to write an open letter to the presumptive CEO futurus.
And of course we have the modern-day King of the Sociopaths in Sam Altman. I have decided not to read anything that is longer than 10,000 words this week unless written by Philip K. Dick so I did not delve into The New Yorker account of Altman’s adventures in bullshitting, but John Gruber has helpfully provided some excerpts. Behold a quote from an OpenAI board member:
“He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”
Point number one is on display in any of his interviews. One of the last episodes of Conversations with Tyler I listened to was with Sam Altman, and the extent to which he reflexively and unthinkingly agreed with every possible hypothesis and conjecture Cowen put out was comical. Point number two makes him exceedingly dangerous. That so many luminaries of big tech are willing to hold hands with the man and continue doing business with him is the Wittgenstein’s ruler of Silicon Valley sociopathy.
The problem isn’t that sociopaths exist — they always have — but that the casinofication of the American economy has created outsized rewards for those particular personality traits while pushing away people with stronger ties to reality. Once a field attracts a critical mass of sociopaths [Note: What should be the collective term for a group of sociopaths? You know, like “a conspiracy of ravens” or “a murmuration of starlings”. One comes to mind immediately but I will leave figuring out which as an exercise for the reader. ] the minority rule kicks in. Soon enough, everyone must exhibit sociopath-like behavior just to stay in the game. As Venkatesh Rao recently wrote: “I’m a good person, but everyone is out to get me, so I’d better try to get them first. I’m still a good person.”
Those who don’t adapt, retreat. Sometimes, if we are lucky, they even write about it. And there we have a paradox, in that the same technology supercharging sociopaths in their quest for bullshittification is enabling more and more people to retreat to a life of quiet content. For now.