The headline: ChatGPT appears to pass medical school exams, educators rethinking assessments.
The article:
- They were mock, abbreviated exams,
- administered incorrectly (there are no open-ended questions on the real USMLE),
- which it didn’t actually pass,
- and which were reported in a pre-print. That isn't a complete knock against the study per se, but even a glance at it shows questionable choices in both the scope (only 376 publicly available questions, versus more than 1,000 on the real exams) and the methods used to ensure those publicly available questions hadn't already ended up in ChatGPT's training data.
To be clear: this is me complaining about misleading headlines, not claiming that predictive AI won't at some point be able to ace the USMLE. That point just isn't now, for the reasons above. And let's not even get into whether a high USMLE score means anything other than that the person who achieved it is a good test-taker (it doesn't).