Posts in: tech

My preferred m.o. on this blog is to write half-baked posts and never look back, but I left an important piece out of yesterday’s comment on The Techno-Optimist Manifesto, so there is now an update, along with a few corrections to spelling and style.


Technology as the last refuge of a scoundrel

Marc Andreessen, the billionaire venture capitalist, co-founder of Netscape, and occasional podcaster and blogger, wrote a bizarre post today titled The Techno-Optimist Manifesto, in which he first builds a straw man of the present day’s luddite atmosphere using his best angsty adolescent voice (“We are being lied to… We are told to be angry, bitter, and resentful about technology… We are told to be miserable about the future…” all as separate paragraphs; you get the idea), then presents a series of increasingly ludicrous statements about our bright technological future that at the same time glorify the past, creating a Golden Age fallacy Möbius strip.

This is not Andreessen’s first act of incoherence: read or listen to last year’s Conversation with Tyler (Cowen) and his answer to Tyler’s question about the concrete advantages of Web 3.0 for podcasts (spoiler: he couldn’t name any). But that was an impromptu — if easily anticipated — question. Today’s Manifesto should be more baked, one would hope. But one would then be disappointed, as the entire article reads more like a cry for help than a well-reasoned essay. Here are some of the more flagrantly foul bullet points, with my comments below.

We believe that since human wants and needs are infinite, economic demand is infinite, and job growth can continue forever.

This is particularly salient for me after reading Burgis and Girard, and in short: no. Just no. Human desires are infinite, but not all desires are created equal. If your goal is to fulfill every human desire, you are not going to Hell with good intentions — you are intent on going to Hell.

We believe Artificial Intelligence can save lives – if we let it. Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence working on new cures. There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly fire.

We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.

A particularly pernicious pair of paragraphs that talks about AI as if it is currently able to save lives (it isn’t), and about people urging caution as if they are murderers (they aren’t). Doctors and biomedical researchers will be the first to welcome AI wholeheartedly into their professions, but that is mostly because too much of their professional time is spent fighting the bullshit that their technocratic overlords — say, IT companies funded by billionaire investors — have wrought upon them.

We believe that we are, have been, and will always be the masters of technology, not mastered by technology. Victim mentality is a curse in every domain of life, including in our relationship with technology – both unnecessary and self-defeating. We are not victims, we are conquerors (emphasis his).

The dichotomy is not master/victim, it is master/slave, and the only reason Andreessen would think that 21st century humans are not slaves to technology is that he doesn’t get around much. We can agree that humans are not victims, but then again, no one is arguing that humans are committing crimes against technology.

We believe in nature, but we also believe in overcoming nature. We are not primitives, cowering in fear of the lightning bolt. We are the apex predator; the lightning works for us.

And yet we don’t go around randomly setting stuff on fire. The tribes whose members did that either got rid of those members or went extinct.

We believe in risk, in leaps into the unknown.

Good for you. I believe in managing risk and exploring the unknown before leaping into it.

We believe in radical competence.

All I see is radical stupidity. See: you can put “radical” in front of anything and it makes you seem profound!

We believe technology is liberatory. Liberatory of human potential. Liberatory of the human soul, the human spirit. Expanding what it can mean to be free, to be fulfilled, to be alive.

It is! I was at an airport a few days ago and saw several double below-the-knee amputees who a few decades ago would have had a miserable time but can now walk around like nobody’s business. However, technology making up the difference to something that was there before is one thing — creating something completely new is a different beast altogether. The probability space is vast and full of landmines, and a Manifesto which praises leaps into the unknown without mentioning a single externality is foolish at best, dangerous at worst.

Baldur Bjarnason likened the philosophy espoused here to fascism. It made me think of nationalism of the Serbian kind, and of a saying from a (far from perfect) Serbian politician that, whenever he heard the word “patriotism”, he’d start looking for his wallet. Well, “technology” is the patriotism of Silicon Valley bros, and we’d better start paying attention to our wallets.

Update: Typos fixed and style cleared up. I also forgot to note one of Andreessen’s more heinous acts: naming the dead as Patron Saints of his disastrous cause. I am sure Nietzsche wouldn’t have minded — nihilism masquerading as materialism is right up his alley — but I am not sure how Feynman and von Neumann would have felt, the former having explicitly refused to work on the hydrogen bomb. Edward Teller would have been a much better ideological fit — nuking Alaska seems to be right up the Techno-Optimists’ alley — but then again I doubt they are self-aware enough to make the likely inspiration for Dr. Strangelove the face of their party.


Surprised that some airlines still ask us to stow away “large electronic devices” while letting us keep using iPhones and tablets in airplane mode. So it’s OK to use the 13 — sorry, 12.9” — iPad Pro, but not the 11” MacBook Air? Or even smaller Chromebooks? Someone hasn’t thought this through.


October lectures of note

The first one is tomorrow, and it’s a good one!


Everything is hi-tech and no one is happy

Emily Fridenmaker, who is a pulmonary disease and critical care physician, writes on X:

Everything is so complex.

Logging into things is complex, placing orders is complex, figuring out who to page is complex, getting notes sent to other doctors is complex, insurance is complex, etc etc. But we just keep doing it.

At what point is all this just too much to ask?

There are a few more posts in that thread, and I encourage you to read all of it to get a sampling of why doctors feel burnt out. Whether you are in medicine, science, or education, your professional interactions have slowly — They Live-style — been replaced by a series of fragile Rube Goldberg machines that worked great in the minds of their technocratic developers, but break, stutter, stammer, and grind to a halt as soon as they encounter another one of their brethren. Which is all the time!

Too much of our professional lives has been spent playing with a series of Rube Goldberg nesting dolls: a 2FA inside a 2FA. (Before reading I Am a Strange Loop I would have apologized for mixing metaphors, but this is how our brains think, and it doesn’t have to make sense in the physical world to be useful, so apology rescinded.) And if Apple is wondering why people are taking more and more time to replace their aging iPhones, I bet a good chunk of them dread doing it because they don’t even know how many different authenticating services, email clients, education portals, virtual machines — and all the other needless detritus sold to management by professional salespeople — they would need to log back into.

Don’t get me wrong: Rube Goldberg machines are fun to play with — The Incredible Machine was one of my first gaming memories — and they can even be useful for individual workflows. But mandating that others use your string-and-pulley concoction that will break at the first unexpected interaction is sadistic. Just this Monday we had yet another AV failure at a weekly lecture held at a high-tech, newly opened campus. I knew there would be trouble the moment I saw that the only way to interact with any AV equipment was via a touchscreen that had no physical buttons and no way to remove the power cord, which was welded to the screen on one end and went into a closed cabinet on the other. Lo and behold, the trouble came not two weeks later: we couldn’t get past the screensaver logo. We ended up asking students to look at their own screens while guest lecturers were speaking — and nowadays everyone carries at least two screens with them to school — which was too bad, because I was looking forward to using the whiteboard, which is as far from Rube Goldberg as it gets.

Me from 20 years ago would have salivated over that much technology in my everyday life, but I’m hoping that was a function of the time, not of my age, and that kids these days know better. My own kids’ experience with the great remote un-learning of 2020–2021 makes me hopeful that they will be more cautious about introducing technological complexity into their lives.


The Wondermark Calendar was great while it lasted. The 2018 edition had some strange ideas about what constituted a workout in Serbia.

The Wondermark Calendar for January 2018. Note the text on the bottom right.

Photo of a printed calendar with 19th century-style illustrations. One depicts a man balancing on two asymmetric, misshapen wheels. Explanatory text says the machine was made in Serbia.


Things I got wrong: in-person versus online

We are three weeks into our clinical trials course for the UMBC graduate program. This remark will surprise absolutely no one, but: it was refreshing to see a classroom full of attentive, engaged students, interrupting, asking questions, having a dialogue, others jumping in, etc. Alas, I cannot take any credit for the interactions, as we have guest lecturers on for most weeks.

Regardless, the course has reminded me of how absolutely wrong I was back in December 2019, when during a post-conference dinner I stood on my soapbox and wondered in amazement why we were still wasting our time meeting in person once per year like barbarians, when we could in fact be having continuous virtual conversations on Zoom. Timely spread of scientific information and all that. (The link is for the 2019 ASH Annual Meeting abstract book, ASH being the American Society of Hematology, and I am only now realizing that — in what could be described as our profession’s version of Burning Man, which is actually quite fitting for an organization called ASH — they tear down the conference website each year to make a new one. So, the 2019 version is no more, but here is one for the 65th annual meeting in December 2023, although if you go to the website after December 2023 I am sure it will point you to the 66th and beyond. I could have linked to the Internet Archive version of the website from 2019 instead — in fact, here you go — but why deprive myself of the opportunity for that Burning Man’s ash pun?)

Well, I have apologized — in person! — to everyone who had to suffer through my Orlando diatribe, because the last 3 years have shown that while video conferencing may rightfully replace most business calls and other transactional meetings, it is absolutely abysmal for education for all parties involved. And it’s not just that any kind of non-verbal communication is lost, it is that even spoken language is stilted, muted, suppressed. With the slide-up-front, speaker-in-the-corner layout that is so common for lectures, you may as well be pre-recording it. It makes absolutely no difference.

And, attending a few online lectures every month myself, it is hard to decide what is worse: having it be completely online and trading off the quality of the lecture for more opportunity for interaction, or watching a live/hybrid lecture and being completely shut out from the discussion (because the in-person attendees take priority when it comes time to ask questions, and rightfully so).

Even at the most basic practical level: technology failure with an online lecture means no lecture; technology failure in-person means, at worst, using the whiteboard and interacting more, which actually makes me wish for more technology to fail. And if you absolutely need to have the slides to make your points, just print them out for your own reference, and share them beforehand for the students to view on their own screens, of which they will have many.

Craig Mod had similar thoughts this week about work in general:

After the last couple of weeks of in-person work, I have to say: Some things simply can’t be done as efficiently — or at all — unless done in person. The bandwidth of, and fidelity of, being in the same room — even, maybe especially, during breaks and downtime, but during work periods, too, of course — of being able to pass objects back and forth, to have zero latency in conversation between multiple people. To not fuss with connections or broken software (Zoom, a scourge of computing, abjectly terrible software (I prefer … Google Meet (!) by a mile)) or cameras that don’t allow for true eye contact — where everyone’s gaze is broken and off and distracted. To dispatch with all of those jittery half-measures of remote collaboration and swim in the warm waters of in-person mind-melding is a privilege for sure, and a gift.

Education is one of those types of work that cannot easily be replicated online. AR/VR is a big and obvious caveat here, but that will take a while. For evidence, look no further than the persistence of the physical university campus despite the plethora of free and paid online options. Were it not for the in-person factor, the higher-education equivalent of The New York Times would have gobbled up the market by now. I’ll know the tide has turned once Harvard and Yale — which would be my first proxies for the NYT and WaPo of higher ed — start investing more in their online offerings to the detriment of the in-person experience.


Disruption as a concept is Lindy. Each particular manifestation of disruption, on the other hand…

From The Car and Carriage Caravan Museum at the Luray Caverns.

Photo of an 1898 Benz horseless carriage, as a museum exhibit.


Re-capitalizing the i̶Internet

Laments about the glory days of the internet are popping up in my field of view with increasing frequency. This is mostly a sign that people of a certain generation are reaching middle age, but since that generation is my own, I am in full agreement!

Just this weekend, Trishank was praising Geocities, and Rachel Kwon wanted to make the internet fun again. Not only that: she started a collection of like-minded articles which doubles as a most excellent blogroll.

On the margin of a book review I noted that, sometime in 2016 — Year Zero of the New Era — most house style manuals dropped the capital “I” from the internet, acknowledging something that people had been doing in their minds for at least a decade prior. The lower-case “i” internet has become a fish stew that we can’t unboil, but any quixotic attempt to fight the second law of thermodynamics in this regard has my full support.


Why AI can't replace health care workers just yet

To convince myself that I am not completely clueless in the ways of medicine, I occasionally turn to my few diagnostic successes. To be clear: this is cherry-picking, and I make no claim to being a master diagnostician. Yes, a bunch of my colleagues had missed the first patient’s friction rub that was to me so evident; but say “friction rub” to a third-year medical student and they will immediately know the differential diagnosis and the treatment. How many friction rubs have I myself missed hearing? Plenty, I am sure! Like that one time a 20-something-year-old man languished in the hospital for days with severe but mysterious chest pain. Our first encounter was on a Saturday, when I saw him as the covering weekend resident; he was discharged Sunday, 24 hours after I started treatment for the acute pericarditis he so obviously had.

Once, during a mandatory ER rotation, I figured out that a patient who came in complaining of nausea and vomiting actually had an eye problem: bilateral acute angle closure glaucoma. I pestered the skeptical ophthalmology resident to come in on a Sunday afternoon, confirm the diagnosis, treat the glaucoma, likely save the patient’s vision, and get a case report for a conference out of it.

And I will never forget the case of the patient who was in the steaming hospital room shower whenever I saw him; he had come in for kidney failure from severe vomiting and insisted he never used drugs, illicit or otherwise. Still, it was obvious to anyone with a sense of smell that he had cannabinoid hyperemesis syndrome and would have to quit.

Superficial commonalities aside — all three were men with an acute health problem — what ties these cases together is that I had to use senses other than sight to figure them out: hearing the friction rub, feeling the rock-hard eyeballs, smelling the pungent aroma of cannabis. (This being the 21st century, taste is no longer allowed, but I will leave it to your imagination how doctors of old could tell apart the “sweet” diabetes (mellitus) from the “flavorless” one (insipidus).) All three came to mind when I read a tweet (sorry, an X) about ChatGPT’s great diagnostic acumen.

I can’t embed it — and wouldn’t even if I could — but the gist of Luca Dellanna’s extended post is that he:

  1. Had a “bump” on the inside of his eyelid that was misdiagnosed by three different doctors.
  2. Saw the fourth doctor, who made the correct diagnosis of conjunctival lymphoma.
  3. Got the same, correct diagnosis from ChatGPT on his/its first try.

A slam-dunk case for LLMs replacing doctors, right? Well, not quite: the words Luca used to describe the lesion, “a salmon-pink mass on the conjunctiva”, will give you the correct response even when using a plain old search engine. And he only got those words from the fourth doctor, who was able to convert what they saw into something they could search for, whether in their own mind palace or online.

Our mind’s ability to have seamless two-way interactions with the environment is taken for granted so much that it has become our water. (This is the link to the complete audio and full text of David Foster Wallace’s commencement speech that became the “This is Water” essay, and if you haven’t read it yet, please do so now.) But that ability is an incredibly high hurdle to clear, and one that is in no danger of being cleared just yet. It is the biggest reason I am skeptical of any high proclamations that “AI” will replace doctors, and why I question the critical reasoning skills and/or medical knowledge of the people who make them.

In fact, the last two years of American medical education could be seen as simply a way of honing this skill: to convert the physical exam findings into a recognizable pattern. A course in shark tooth-finding, if you will. This is, alarmingly, also the part of medical education that is most in danger of being replaced by courses on fine arts, behavioral psychology, business administration, medical billing, paper-pushing, box-checking, etc. But I digress.

Which is not to say that LLMs could not be a wonderful tool in the physician’s arsenal, a spellcheck for the mind. But you know what? Between UpToDate, PubMed, and plain old online search, doctors already have plenty of tools. What they don’t have is time to use them, overburdened as they are with administrative BS. And that is a problem where LLMs can and will do more harm than good.