Posts in: tech

The Wondermark Calendar was great while it lasted. The 2018 edition had some strange ideas about what constituted a workout in Serbia.

The Wondermark Calendar for January 2018. Note the text on the bottom right.

Photo of a printed calendar with 19th century-style illustrations. One depicts a man balancing on two asymmetric, misshapen wheels. Explanatory text says the machine was made in Serbia.


Things I got wrong: in-person versus online

We are three weeks into our clinical trials course for the UMBC graduate program. This remark will surprise absolutely no one, but: it was refreshing to see a classroom full of attentive, engaged students, interrupting, asking questions, having a dialogue, others jumping in, etc. Alas, I cannot take any credit for the interactions, as we have guest lecturers on for most weeks.

Regardless, the course has reminded me of how absolutely wrong I was back in December 2019, when during a post-conference The link is for the 2019 ASH Annual Meeting abstract book, ASH being the American Society of Hematology, and I am only now realizing that — in what could be described as our profession’s version of burning man, which is actually quite fitting for an organization called ASH — they tear down the conference website each year to make a new one. So, the 2019 version is no more, but here is one for the 65th annual meeting in December 2023, although if you go to the website after December 2023 I am sure it will point you to the 66th and beyond. I could have linked to the Internet Archive version of the website from 2019 instead — in fact, here you go — but why deprive myself of the opportunity for that burning man’s ash pun? dinner I stood on my soapbox and wondered in amazement why we were still wasting our time meeting in person once per year like barbarians, when we could in fact be having continuous virtual conversations on Zoom. Timely spread of scientific information and all that.

Well, I have apologized — in person! — to everyone who had to suffer through my Orlando diatribe, because the last 3 years have shown that while video conferencing may rightfully replace most business calls and other transactional meetings, it is absolutely abysmal for education for all parties involved. And it’s not just that any kind of non-verbal communication is lost, it is that even spoken language is stilted, muted, suppressed. With the slide-up-front, speaker-in-the-corner layout that is so common for lectures, you may as well be pre-recording it. It makes absolutely no difference.

And, attending a few online lectures every month myself, it is hard to decide what is worse: having it be completely online and trading off the quality of the lecture for more opportunity for interaction, or watching a live/hybrid lecture and being completely shut out from the discussion (because the in-person attendees take priority when it comes time to ask questions, and rightfully so).

Even at the most basic practical level: technology failure with an online lecture means no lecture; technology failure in-person means, at worst, using the whiteboard and interacting more, which actually makes me wish for more technology to fail. And if you absolutely need to have the slides to make your points, just print them out for your own reference, and share them beforehand for the students to view on their own screens, of which they will have many.

Craig Mod had similar thoughts this week about work in general:

After the last couple of weeks of in-person work, I have to say: Some things simply can’t be done as efficiently — or at all — unless done in person. The bandwidth of, and fidelity of, being in the same room — even, maybe especially, during breaks and downtime, but during work periods, too, of course — of being able to pass objects back and forth, to have zero latency in conversation between multiple people. To not fuss with connections or broken software (Zoom, a scourge of computing, abjectly terrible software (I prefer … Google Meet (!) by a mile)) or cameras that don’t allow for true eye contact — where everyone’s gaze is broken and off and distracted. To dispatch with all of those jittery half-measures of remote collaboration and swim in the warm waters of in-person mind-melding is a privilege for sure, and a gift.

Education is one of those types of work that cannot easily be replicated online, AR/VR being a big and obvious caveat here, though that will take a while. For evidence, look no further than the persistence of the physical university campus despite the plethora of free and paid online options. Were it not for the in-person factor, the higher education equivalent of The New York Times would have gobbled up the market by now. I’ll know the tide has turned once Harvard and Yale — which would be my first proxies for the NYT and WaPo of higher ed — start investing more in their online offerings to the detriment of the in-person experience.


Disruption as a concept is Lindy. Each particular manifestation of disruption, on the other hand…

From The Car and Carriage Caravan Museum at the Luray Caverns.

Photo of an 1898 Benz horseless carriage, as a museum exhibit.


Re-capitalizing the i̶Internet

Laments about the glory days of the internet are popping up in my field of view with increasing frequency. This is mostly a sign that people of a certain generation are reaching middle age, but since that generation is my own, I am in full agreement!

Just this weekend, Trishank was praising Geocities, and Rachel Kwon wanted to make the internet fun again. Not only that: she started a collection of like-minded articles which doubles as a most excellent blogroll.

On the margin of a book review I noted that, some time in 2016 — Year Zero of the New Era — most house manuals of style dropped the capital “I” from the internet, acknowledging something that people have been doing in their minds for at least a decade prior. The lower-case “i” internet has become a fish stew that we can’t unboil, but any quixotic attempt to fight the second law of thermodynamics in this regard has my full support.


Why AI can't replace health care workers just yet

To convince myself that I am not completely clueless in the ways of medicine, I occasionally turn to my few diagnostic successes. To be clear: this is cherry-picking, and I make no claim to being a master diagnostician. Yes, a bunch of my colleagues had missed the first patient’s friction rub that was so evident to me; but say “friction rub” to a third-year medical student and they will immediately know the differential diagnosis and the treatment. How many friction rubs have I missed actually hearing? Plenty, I am sure! Like the time a 20-something-year-old man languished in the hospital for days with severe but mysterious chest pain. Our first encounter was on a Saturday, when I saw him as the covering weekend resident; he was discharged Sunday, 24 hours after I started treatment for the acute pericarditis he so obviously had.

Once, during a mandatory ER rotation, I figured out that a patient who came in complaining of nausea and vomiting actually had an eye problem: bilateral acute angle closure glaucoma. I pestered the skeptical ophthalmology resident to come in on a Sunday afternoon, confirm the diagnosis, treat the glaucoma, likely save the patient’s vision, and get a case report for a conference out of it.

And I will never forget the case of the patient who was in the steaming hospital room shower whenever I saw him; he had come in for kidney failure from severe vomiting and insisted he never used drugs, illicit or otherwise. Still, it was obvious to anyone with a sense of smell that he had cannabinoid hyperemesis syndrome and would have to quit.

Superficial commonalities aside — all three were men with an acute health problem — what ties these cases together is that I had to use senses other than sight to figure them out: This being the 21st century, taste is no longer allowed, but I will leave it to your imagination how doctors of old could tell apart the “sweet” diabetes (mellitus) from the “flavorless” one (insipidus). hearing the friction rub, feeling the rock-hard eyeballs, smelling the pungent aroma of cannabis. And all three came to mind when I read a tweet (an X?) about ChatGPT’s great diagnostic acumen.

I can’t embed it — and wouldn’t even if I could — but the gist of Luca Dellanna’s extended post is that he:

  1. Had a “bump” on the inside of his eyelid that was misdiagnosed by three different doctors.
  2. Saw the fourth doctor, who made the correct diagnosis of conjunctival lymphoma.
  3. Got the same, correct diagnosis from ChatGPT on his/its first try.

A slam-dunk case for LLMs replacing doctors, right? Well, not quite: the words Luca used to describe the lesion, “a salmon-pink mass on the conjunctiva”, will give you the correct response even when using a plain old search engine. And he only got those words from the fourth doctor, who was able to convert what they saw into something they could search for, whether in their own mind palace or online.

Our mind’s ability to have seamless two-way interactions with the environment is taken for granted so much that it has become our water. This is the link to the complete audio and full text of David Foster Wallace’s commencement speech that became the “This is Water” essay, and if you haven’t read it yet, please do so now. But it is an incredibly high hurdle to jump over, and one that is in no danger of being cleared just yet. It is the biggest reason I am skeptical of any grand proclamations that “AI” will replace doctors, and why I question the critical reasoning skills and/or medical knowledge of the people who make them.

In fact, the last two years of American medical education could be seen as simply a way of honing this skill: to convert the physical exam findings into a recognizable pattern. A course in shark tooth-finding, if you will. This is, alarmingly, also the part of medical education that is most in danger of being replaced by courses on fine arts, behavioral psychology, business administration, medical billing, paper-pushing, box-checking, etc. But I digress.

Which is not to say that LLMs could not be a wonderful tool in the physician’s arsenal, a spellcheck for the mind. But you know what? Between UpToDate, PubMed, and just plain online search doctors already have plenty of tools. What they don’t have is time to use them, overburdened as they are with administrative BS. And that is a problem where LLMs can and will do more harm than good.


On the benefits of microblogging

The five or so regular readers of this blog may have noticed a pattern of promises made and not kept of things I will, may, or should discuss at some future, unspecified point. These were usually somewhere in the margin notes, but sometimes I would end with a cliffhanger. The topics included mental models, notable microblogs, and ABIM’s financial shenanigans; in my head, the list was significantly longer, and the items expanded into 1,000+ word posts that would be a slog to reference, a nightmare to edit, and which no one would ultimately read.

Up until last year, whatever I thought about those topics would stay in my head, waiting for the stars to align and for the Gods of chaos and time Also known as my children. to smile upon their humble servant. Which is a net good for the reading public — who needs to read the unbaked thoughts of an oncologist? — but as Cory Doctorow wrote, having one’s thoughts written down is good practice both for developing them and for future reference. I was, in a way, depriving my future self of the benefit of knowing how big of a fool my past self was.

But ever since learning of the micro.blog/MarsEdit combination This is Miraz Jordan’s brief YouTube introduction to the two; 15 minutes of time well spent if you have even a tiny bit of interest., I’ve maintained a daily log of thoughts, readings, viewings, and writings. The low friction of the tools begs for scattered non-sequiturs and word salads — think of an unkempt Obsidian database — but the semi-social veneer that micro.blog provides tempers my worst instincts and makes the posts better overall for everyone exposed, including my future self. Sure, those longer texts still don’t get written — although, just watch this one grow! — but for personal use the snippets are even more valuable (and easier to skim).

Not everyone should be a capital-B Blogger — or have a gated newsletter, for that matter — but many more people could benefit from a small-p personal blog of the commonplace type. The reason I bristle at overproduced “content” and at statements that anyone who writes must give it their all, striving to perfect everything above the 80%-done, good-enough-for-government-work standard that is close to my heart, is that they create the wrong impression of what blogging could/should/would be, had it not been for the Huffington Posts and the Gawkers of the peak-blog internet that equated blogging with monetization. It is also why I took an initial dislike to The Curator’s Code despite its obvious usefulness. Why should personal blogging be standardized? It’s Personal!

And if you want to find some of these to read, for instruction, inspiration, or just plain enjoyment? Outside of the great micro.blog blogs — check out the Discover page for a daily sampling — there is Dave Winer’s scripting.com, John Naughton’s Memex, Ian Betteridge’s Technovia, Reader John’s Tipsy Teetotaler, Jerry Coyne’s Why Evolution is True, and oh so many more.


Phrase of the day: digital dandruff. Thank you, Charlie Warzel.


I’ve recently had to do a simple but tedious literature search: finding the incidence and prevalence of a handful of rare diseases, using only peer-reviewed articles as references. The perfect job for AI, some would say. Well, yes for an AI, but apparently not for LLMs.

I am a proponent of using LLMs as tools of the spellcheck sort, but they are still not ready to act as serious research assistants. Bing presented me with gobbledygook that ended with a repeating string of “KJKJKJKJ” or some such — and no, my prompt was not for it to pretend a cat was walking over the keyboard — while Bard gave a beautifully formatted table full of inaccurate numbers and fake references. Correcting it would have taken more time than doing the research myself.

Which is to say: I anticipate a lot of angst when incoming students start passing off unadulterated LLM work as their own in anything that’s not a business writing course.


Kudos to @danielpunkass for making MarsEdit idiot-proof. I was writing a long-ish text this morning and while uploading the photo the app crashed, an unsaved post disappeared, and my heart sank. But on restart, everything was still there. Phew! ❤️


The roundaboutness of Apple

Jason Snell notes that the iMac’s strongest legacy was Apple itself:

The company was close to bankruptcy when Jobs returned, and the iMac gave the company a cash infusion that allowed it to complete work on Mac OS X, rebuild the rest of the Mac product line in the iMac’s image, open Apple Stores, make the iPod, and set the tone for the next twenty five years.

I’m currently reading The Dao of Capital, which is all about the Austrian school of economics and the roundaboutness of true entrepreneurs, and this made what Apple is doing even more salient. Can you name a more roundabout tech company than Apple? To be clear, I suspect little of this was premeditated in the long term — i.e. no, Jobs and Ive probably did not have a Vision Pro in mind as the ultimate goal when they thought of the iMac — but the ethos of seeing everything as a potential intermediary and not commoditizing it fully à la Samsung is very much the Apple way. Using the iMac as the intermediate step towards the iPod, which was itself an intermediate step towards the iPhone, which was supposedly to be an intermediate step towards the iPad but turned into something much greater, though it also did end up being an intermediate step towards Apple silicon, all the while peppering these intermediary products with technology — LiDAR, ultra-wide lenses, spatial audio — that would become the key building blocks of Vision Pro, which is itself an intermediary towards who knows what. Very Austrian.

Closer to home, I can think of a few biotech companies that may be doing something like this — maybe, if you squint — but none come close. The addiction to immediate profits that the distorted American health care market provides is much too great. (↬Daring Fireball)