- AI models identify rare diseases faster than many experienced doctors
- Systems achieve correct or close diagnoses in most difficult cases
- Models analyze symptoms and test data using structured reasoning processes
A new generation of artificial intelligence tools aims to outperform experienced doctors in diagnosing rare and complex medical conditions.
These reasoning models can process long chains of symptoms, test results, and clinical notes, and then propose or narrow down the correct diagnosis faster than many human specialists.
Some researchers argue that this represents a profound shift in technology that will reshape medicine, especially in cases where the correct diagnosis is not obvious even after thorough evaluation.
AI models tackle difficult diagnoses
“We are witnessing a really profound change in technology that will reshape medicine,” Harvard University’s Arjun Manrai said at a news conference.
Still, serious questions remain about whether these systems can handle the full weight of real-world clinical uncertainty.
In a major study, researchers tested a leading AI reasoning model on a combination of textbook-style cases and real patient data from a Boston emergency department.
The model analyzed step-by-step descriptions of symptoms, test orders and results, just as doctors do.
It generated longer lists of possible diagnoses than human doctors did and included the true diagnosis, or something very close to it, in about 80% of difficult cases.
For a transplant patient with subtle signs of a life-threatening infection, the model raised suspicions about a day before the clinical team.
Researchers say the technology is particularly powerful at spotting patterns across rare diseases that individual doctors seldom encounter in practice.
However, the studies relied on curated patient descriptions rather than the raw, chaotic environment of a real emergency room. The models respond only to the information they are given, without the overlapping priorities and incomplete data that characterize real clinics.
Why uncertainty is still a problem
Despite the power of these AI reasoning models, critics point out that clinical reasoning is more than just step-by-step logic in a clean text summary.
“When we say clinical reasoning, it doesn’t mean the same thing as model reasoning,” says Arya Rao of Harvard Medical School, who was not involved in the study.
“These models have been optimized to do this type of sequential thinking that we call reasoning, but it is not at all the same as what we teach medical students to reason about.”
Doctors often have to consider multiple uncertain possibilities at once and then update them as new data arrives.
AI models tend to cling to a single strong explanation and revise it abruptly, or not at all, when new facts emerge.
A team that tested 21 different AI systems found that even the best reasoning models had problems considering multiple uncertain diagnoses at the same time.
The team argued that large language models are not yet ready to make independent decisions in medical settings.
At best, they are useful for getting second opinions or discovering rare diseases that doctors might initially miss.
Experts emphasize that human doctors remain essential to interpret the context, talk to patients and weigh risks in real time.
The technology can help avoid misdiagnoses in some settings, but introduces new risks if used without careful supervision and appropriate safety measures.
Via Science News