Microsoft claims their new medical tool is “four times more successful than human doctors at diagnosing complex ailments”. Unsurprisingly, what they meant by “diagnosing a disease” was the thinking-hard part, not the inputs part:

> To test its capabilities, “MAI-DxO” was fed 304 studies from the New England Journal of Medicine (NEJM) that describe how some of the most complicated cases were solved by doctors.
>
> This allowed researchers to test if the programme could figure out the correct diagnosis and relay its decision-making process, using a new technique called “chain of debate”, which makes AI reasoning models give a step-by-step account of how they solve problems.

If and when these algorithms are deployed, how likely is it that they will receive a query comparable to a New England Journal of Medicine case study? Most doctors don’t reach those levels of perception and synthesis, let alone the general public.
