Clinical Reasoning: The Complete Guide for Medical Students
Evidence-based guide · Educational use
It's your third week on the wards. A 58-year-old woman is sitting upright in bed, breathing fast, clutching the side rail. The nurse tells you she came in with chest pain and shortness of breath. Your attending is across the unit. You have a stethoscope, a blank assessment form, and no idea what to ask first.
You know the pathophysiology of pulmonary embolism. You can draw the clotting cascade from memory. But standing at this bedside, none of that knowledge assembles itself into a useful next step. The gap between what you know and what you can do with it — that gap is clinical reasoning.
Clinical reasoning is the thinking process doctors use to collect information, generate hypotheses, weigh evidence, and reach a diagnosis. It is not medical knowledge (though it depends on knowledge). It is not clinical skills (though it uses them). It is the cognitive engine that connects a patient's story to the right diagnosis and the right treatment plan. And it is the single skill that most determines whether a doctor is safe or dangerous.
The stakes are not abstract. A 2024 study in BMJ Quality & Safety estimated that roughly 795,000 Americans are permanently disabled or killed each year by diagnostic errors — and three-quarters of that harm clusters around just three disease categories: vascular events, infections, and cancers (Newman-Toker et al., 2024). An earlier analysis put the outpatient diagnostic error rate at approximately 5% of encounters, or about 12 million adults per year in the United States alone (Singh et al., 2014). The 2015 National Academies report Improving Diagnosis in Health Care concluded that most people will experience at least one diagnostic error in their lifetime, calling these errors the most common, most costly, and most dangerous category of medical mistake.
These numbers are not somebody else's problem. They are the direct consequence of how doctors think — and how they sometimes think badly. The good news: clinical reasoning is a skill, not a talent. It can be studied, practised, and improved. This guide explains how.
Your brain runs two operating systems
The most important concept in clinical reasoning — the one that explains both expert performance and expert failure — is dual-process theory. Psychologist Daniel Kahneman popularised the framework as System 1 and System 2. Emergency physician Pat Croskerry adapted it specifically for medical diagnosis in his 2009 paper in Academic Medicine, proposing what he called the Universal Model of Diagnostic Reasoning (Croskerry, 2009).
Here is the basic idea.
System 1 is fast, automatic, and effortless. When a dermatologist glances at a rash and says "shingles" within two seconds, that's System 1. It works by pattern recognition: the brain matches the current presentation against thousands of previously encountered cases, finds a match, and delivers an answer before the clinician is consciously aware of the reasoning process. System 1 is what makes experienced doctors look like they have a sixth sense. It handles high volumes of routine decisions without overwhelming working memory.
System 2 is slow, deliberate, and effortful. When a medical student works through a differential diagnosis for undifferentiated chest pain — listing possibilities, weighing each against the evidence, ordering tests to discriminate between them — that's System 2. It is serial rather than parallel, conscious rather than automatic, and it consumes significant cognitive resources.
Neither system is inherently better. They serve different functions. The danger comes from using the wrong one for the situation.
System 1 works well when the presentation is classic and the clinician has extensive experience with similar cases. A cardiologist recognising an ST-elevation MI on an ECG. A paediatrician hearing a barking cough and thinking croup. In these situations, pattern recognition is fast, accurate, and efficient.
System 1 fails when the presentation is atypical, when the clinician is fatigued, or when cognitive biases distort the pattern-matching. Croskerry described a telling example: a 60-year-old man presents to the emergency department with flank pain and haematuria. System 1 says renal colic. It is almost always right. But occasionally the actual diagnosis is a dissecting abdominal aortic aneurysm — and the heuristic fails catastrophically (Croskerry, 2009). Research has shown that when patients with acute coronary syndrome present without chest pain, the diagnostic error rate jumps tenfold (Brieger et al., 2004, cited in Croskerry, 2009).
System 2 is essential for unfamiliar presentations, complex cases, and high-stakes decisions. But it cannot be the default for every patient encounter — the cognitive load would be unsustainable. The goal, Croskerry argued, is a "toggle function" that allows clinicians to shift between systems: to recognise when System 1 has delivered an answer that deserves scrutiny, and to deliberately engage System 2 for verification (Croskerry, Singhal & Mamede, 2013).
What does this look like in practice? Imagine two clinicians seeing the same patient: a 70-year-old man with sudden-onset right-sided weakness and slurred speech.
The experienced stroke consultant walks in, registers the facial droop, arm drift, and dysarthria, and says "this is a left MCA stroke, we need CT and thrombolysis assessment now" within 30 seconds. That's System 1. The pattern is so familiar that recognition is nearly instantaneous.
The third-year medical student seeing the same patient thinks: "Right-sided weakness could be stroke, but also Todd's paralysis after a seizure, hypoglycaemia, a space-occupying lesion, or even a conversion disorder. The acute onset favours vascular. I need to check glucose, ask about seizure history, and determine the exact time of onset for thrombolysis eligibility." That's System 2. It takes longer, but it is thorough, and in this case it covers the same ground as the consultant's instant recognition, just more slowly.
Neither approach is wrong. The danger arises when the consultant's System 1 fires on a case that only looks like a stroke — say, the hypoglycaemic patient whose low blood sugar is mimicking a neurological emergency. If the consultant skips the glucose check because the pattern match felt so certain, the patient receives thrombolysis they do not need while the actual problem goes untreated.
This is why Croskerry emphasised what he called "cognitive forcing strategies" — deliberate mental habits that force a pause between System 1's output and the clinical decision (Croskerry, 2003). The simplest is a question: "What else could this be?" That question, asked consistently, is one of the most powerful tools in clinical medicine.
As a medical student, almost every clinical encounter forces you into System 2. You do not yet have enough stored patterns for System 1 to work reliably. This is not a weakness. It is the appropriate mode for your stage of training. Over time, as you accumulate clinical experience, many of those effortful System 2 processes will become automatic System 1 patterns — what researchers call illness scripts. The transition from effortful reasoning to fluid recognition is the cognitive journey from student to expert.
Five clinical reasoning examples, step by step
The best way to understand clinical reasoning is to watch it happen. Here are five cases that walk through the process from complaint to diagnosis.
Case 1: The breathless teacher
Complaint: A 32-year-old female primary school teacher presents with two weeks of progressive breathlessness and a dry cough. No fever.
Initial hypotheses: Asthma exacerbation, pneumonia, anxiety-related hyperventilation, anaemia, pulmonary embolism.
Targeted questions: No history of asthma. No recent infections. Started oral contraceptive pill three months ago. Left calf has been "a bit sore" after a long-haul flight two weeks ago. No haemoptysis.
Narrowing the differential: The combination of OCP use, recent long-haul travel, and calf pain dramatically raises the pre-test probability of venous thromboembolism. Pneumonia is less likely without fever or productive cough. Anaemia doesn't explain the acute timeline.
Key investigation: D-dimer is elevated. CT pulmonary angiogram shows bilateral pulmonary emboli.
Diagnosis: Pulmonary embolism.
The reasoning in action: The critical move was refusing to take "breathlessness plus cough" at face value and instead asking about risk factors for VTE. A clinician who stopped at "young, healthy, no wheeze" might have sent her home with an inhaler.
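What does "dramatically raises the pre-test probability" mean in numbers? The sketch below, in Python, applies the standard likelihood-ratio update (post-test odds = pre-test odds × likelihood ratio). The 30% pre-test figure and the D-dimer likelihood ratios are illustrative assumptions chosen for this example, not values taken from any validated decision rule.

```python
# Minimal sketch of a pre-test -> post-test probability update using
# likelihood ratios: post-test odds = pre-test odds * LR.
# All numbers below are illustrative assumptions, not validated values.

def to_odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

def to_prob(odds: float) -> float:
    """Convert odds back to a probability."""
    return odds / (1 + odds)

def post_test_probability(pre_test_p: float, likelihood_ratio: float) -> float:
    """Update a pre-test probability with a single test result."""
    return to_prob(to_odds(pre_test_p) * likelihood_ratio)

# Assumed pre-test probability of PE after the history
# (OCP use + long-haul flight + calf soreness): 30%.
pre_test = 0.30

# Assumed likelihood ratios for a D-dimer assay:
# positive result LR+ ~ 1.7, negative result LR- ~ 0.1.
lr_positive, lr_negative = 1.7, 0.1

print(f"Positive D-dimer: {post_test_probability(pre_test, lr_positive):.0%}")  # ~42%
print(f"Negative D-dimer: {post_test_probability(pre_test, lr_negative):.0%}")  # ~4%
```

The asymmetry is the lesson: under these assumptions a positive D-dimer nudges the probability from 30% to roughly 42%, while a negative result drops it to about 4%. The history that sets the pre-test probability does most of the diagnostic work; the test only moves the odds by its likelihood ratio.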
Case 2: The confused grandfather
Complaint: A 78-year-old retired engineer is brought in by his daughter. He's been increasingly confused over three days. He was "fine last week."
Initial hypotheses: Urinary tract infection (common in the elderly), stroke, medication side effect, delirium from any cause, dementia (but acute onset argues against).
Targeted questions: Medication review reveals his GP recently started him on oxybutynin for urinary frequency. No fever. No focal neurological signs. Daughter confirms he was cognitively intact five days ago.
Narrowing the differential: Acute onset over days, not months, points to delirium rather than dementia. Oxybutynin is an anticholinergic — a well-known cause of confusion in elderly patients. No signs of infection. No focal neurology to suggest stroke.
Key investigation: Urinalysis is clear. Bloods are unremarkable. The timeline matches the new medication.
Diagnosis: Anticholinergic delirium secondary to oxybutynin.
The reasoning in action: The drug history was the pivot. Without it, this patient might have been worked up for stroke, had an unnecessary CT head, or been labelled "query dementia" and discharged to a memory clinic.
Case 3: The marathon runner's knee
Complaint: A 27-year-old male presents with a hot, swollen, painful right knee. He ran a half-marathon two days ago.
Initial hypotheses: Mechanical injury (meniscal tear, ligament sprain), gout, septic arthritis, reactive arthritis.
Targeted questions: No history of trauma beyond the run. No previous joint problems. Temperature is 38.4°C. He recently had a course of antibiotics for a urethral discharge he "didn't think was important."
Narrowing the differential: A hot, swollen joint with fever is septic arthritis until proven otherwise — this is a clinical reasoning rule that overrides all other considerations. The recent urethral discharge raises the possibility of disseminated gonococcal infection. Simple post-exercise strain does not cause fever.
Key investigation: Joint aspirate: turbid fluid, elevated white cell count, Gram-negative diplococci on Gram stain.
Diagnosis: Gonococcal septic arthritis.
The reasoning in action: Semantic qualifiers matter here. This isn't just "knee pain" — it's acute monoarticular arthritis with fever. That reframing activates an entirely different differential from "sore knee after running."
Case 4: The tired accountant
Complaint: A 45-year-old female accountant presents with three months of fatigue, weight gain of 5 kg, and constipation.
Initial hypotheses: Hypothyroidism, depression, type 2 diabetes, colorectal pathology (given constipation), iron deficiency anaemia.
Targeted questions: No low mood or anhedonia. Periods have become heavier over the past six months. Skin is dry. She feels cold even in warm rooms. Hair has become brittle. No blood in stool. Family history: mother has "thyroid problems."
Narrowing the differential: The cluster of fatigue, weight gain, constipation, cold intolerance, dry skin, menorrhagia, and family history is almost a textbook illness script for hypothyroidism. Depression remains possible but the absence of core depressive symptoms (low mood, anhedonia) makes it less likely as the primary diagnosis.
Key investigation: TSH is markedly elevated. Free T4 is low. Anti-TPO antibodies are positive.
Diagnosis: Hashimoto's thyroiditis causing primary hypothyroidism.
The reasoning in action: Each individual symptom is nonspecific. Fatigue alone has dozens of causes. The power of clinical reasoning here lies in recognising the pattern — the constellation of findings that together point to a single unifying diagnosis.
Case 5: The teenager with headaches
Complaint: A 16-year-old female presents with four months of worsening headaches, worse in the morning and when coughing or straining.
Initial hypotheses: Tension headache, migraine, medication overuse headache, raised intracranial pressure (tumour, idiopathic intracranial hypertension).
Targeted questions: No family history of migraine. Headaches wake her from sleep. She has vomited on three occasions, always in the morning. Visual disturbance: "things go blurry sometimes." She has gained weight recently. BMI is 32.
Narrowing the differential: Morning headaches that wake from sleep, vomiting, and visual changes are red flags for raised intracranial pressure. The pattern does not fit tension headache (which worsens through the day) or typical migraine. In an overweight teenage girl, idiopathic intracranial hypertension is a strong possibility — but a space-occupying lesion must be excluded first.
Key investigation: Fundoscopy reveals bilateral papilloedema. MRI brain is normal (no mass lesion). Lumbar puncture shows elevated opening pressure.
Diagnosis: Idiopathic intracranial hypertension.
The reasoning in action: The red flags here — morning predominance, waking from sleep, provocation by Valsalva, visual symptoms — should trigger an automatic shift from "common headache" to "dangerous headache." Missing them means missing a condition that can cause permanent vision loss.
The four cognitive biases that derail diagnoses
Cognitive biases are systematic errors in thinking. They are not signs of incompetence. They affect every clinician, including experts, and research suggests they contribute to roughly 74% of diagnostic errors (Graber, Franklin & Gordon, 2005). Understanding them is the first step toward catching them.
Anchoring bias
What it is: Fixing on a salient piece of information early in the encounter and failing to adjust as new evidence arrives.
How it causes harm: A man presents repeatedly to his GP with chronic burning pain in his feet. Each visit, the diagnosis is documented as peripheral neuropathy. Over months, the pain worsens. Eventually he presents acutely with a cold, dusky, tender leg and non-palpable pulses. Imaging reveals complete occlusion of his superficial femoral artery. He requires an above-knee amputation. The label "neuropathy" was applied early and never re-examined, even as the clinical picture evolved (AHRQ PSNet case report).
A 2023 study in JAMA Internal Medicine found that when emergency department visit reasons mentioned "congestive heart failure," physicians were roughly one-third less likely to test for pulmonary embolism — even though PE rates were the same regardless of how the visit reason was framed (Ly, Shekelle & Song, 2023).
How to counter it: After forming your initial impression, ask: "What doesn't fit this diagnosis?" Force yourself to account for every abnormal finding, not just the ones that support your leading hypothesis.
Premature closure
What it is: Stopping the diagnostic process once a plausible explanation is found, before alternatives have been adequately considered. As the teaching phrase goes: when the diagnosis is made, the thinking stops.
How it causes harm: A 75-year-old man presents with rectal bleeding, weight loss, and a change in bowel habit. Examination reveals a small haemorrhoid. The clinician attributes all symptoms to the haemorrhoid and does not investigate further. The weight loss and change in bowel habit — red flags for colorectal malignancy — are absorbed into an insufficient diagnosis.
In Graber's landmark 2005 study of 100 diagnostic errors in internal medicine, premature closure was the single most common cognitive error. Ninety of those cases involved patient harm, including 33 deaths.
How to counter it: Before finalising any diagnosis, run through Murtagh's five questions (described below). In particular, ask: "What serious condition must I not miss?"
Availability bias
What it is: Overweighting diagnoses that come easily to mind — because you saw a similar case recently, read about it, or because it is dramatic and memorable.
How it causes harm: On a busy Friday night in the emergency department, a doctor has already treated six intoxicated patients. A 28-year-old man is brought in unresponsive. He smells of alcohol. The doctor assumes intoxication and places him in the observation bay. Eight hours later, a nurse notices a medical alert bracelet: insulin-dependent diabetes. His blood glucose is undetectable.
In a controlled experiment, Mamede and colleagues showed that when internal medicine residents had recently diagnosed a particular condition, they were significantly more likely to misdiagnose a subsequent case that looked similar but had a different underlying cause. Second-year residents were especially susceptible (Mamede et al., 2010). The crucial finding: structured reflection counteracted the bias.
How to counter it: When you notice yourself thinking "this is obviously X," pause and ask why. Is it obviously X because the evidence points there — or because X is fresh in your memory?
Confirmation bias
What it is: Seeking and interpreting evidence in ways that confirm your existing hypothesis while discounting or rationalising evidence that contradicts it.
How it causes harm: A 60-year-old man with a history of alcoholic pancreatitis presents with severe epigastric pain. The clinician's first thought is pancreatitis. But the patient insists he has been sober for two years, and his serum lipase and amylase are normal. Rather than revising the hypothesis, the clinician rationalises: the patient is probably lying about his drinking; the pancreas may be "burned out" and unable to produce enzymes; the lab result might be an error. The actual diagnosis — a penetrating gastric ulcer — is delayed because each piece of contradictory evidence was explained away rather than allowed to challenge the working diagnosis.
How to counter it: Actively seek disconfirming evidence. For each hypothesis, ask: "What finding, if present, would make me abandon this diagnosis?" Then look for that finding.
Clinical reasoning vs critical thinking: what's the difference?
These terms are often used interchangeably, but they are not the same thing. Critical thinking is a general cognitive skill — the ability to analyse arguments, identify assumptions, evaluate evidence, and draw logical conclusions. It applies to law, philosophy, engineering, and everyday decisions.
Clinical reasoning is critical thinking applied specifically to patient care. It adds layers that general critical thinking does not cover: pattern recognition from clinical experience, tolerance of uncertainty inherent in biological systems, time pressure, probabilistic thinking (pre-test and post-test probability), and the ethical weight of decisions that directly affect another person's body. A philosophy professor may be an excellent critical thinker but a poor clinical reasoner, because clinical reasoning requires domain-specific knowledge and experience that cannot be substituted by logical ability alone.
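One of those added layers, probabilistic thinking, has a compact formal core. In odds form, Bayes' theorem says a test result shifts the pre-test odds of a diagnosis by a factor equal to the test's likelihood ratio (this is the general rule the Case 1 sketch applies numerically):

$$
\text{post-test odds} = \text{pre-test odds} \times LR,
\qquad \text{odds} = \frac{p}{1-p}
$$

$$
LR_{+} = \frac{\text{sensitivity}}{1-\text{specificity}},
\qquad LR_{-} = \frac{1-\text{sensitivity}}{\text{specificity}}
$$

A test whose likelihood ratio is close to 1 barely changes the odds, however impressive it sounds; ratios above about 10 or below about 0.1 are what genuinely rule a diagnosis in or out.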
Norman's research established this distinction empirically. He demonstrated that clinical reasoning performance is highly case-specific — a physician who reasons expertly about cardiac cases may reason poorly about neurological ones. This means clinical reasoning is not a transferable "thinking skill" that can be taught in a single course and applied universally. It must be developed condition by condition, through repeated exposure to clinical problems across every specialty (Norman, 2005).
The practical implication for students: generic "critical thinking workshops" are not enough. You build clinical reasoning by working through clinical cases, not by studying logic.
Clinical reasoning frameworks compared
Different frameworks serve different purposes and suit different stages of training. No single framework is universally correct — experienced clinicians often combine several, sometimes within the same encounter.
Illness scripts were described by Schmidt, Norman, and Boshuizen in 1990. Each script has three components: enabling conditions (risk factors and demographics), the fault (underlying pathophysiology), and consequences (the signs, symptoms, and test results that follow). As you gain experience, your library of illness scripts grows, enabling faster and more accurate pattern recognition. The hypothyroidism case above is a good example — the cluster of findings maps directly to a stored illness script.
Semantic qualifiers, developed by Bordage, are the abstract descriptors that transform raw clinical data into reasoning-ready language. "Knee pain" becomes "acute monoarticular arthritis." "Headache" becomes "progressive morning headache with Valsalva provocation." Research showed that clinicians who used semantically richer case descriptions achieved significantly higher diagnostic accuracy (Bordage, 1994). Problem representation — distilling a case into a single summary sentence using semantic qualifiers — is one of the most teachable clinical reasoning skills.
The hypothetico-deductive method, formalised by Elstein, Shulman, and Sprafka in 1978, describes the process most students naturally use: acquire initial cues, generate hypotheses (typically three to five), gather further data to test them, and evaluate which hypothesis best fits the evidence. Elstein's research revealed that both experts and novices use this method — the difference lies in the quality of the hypotheses generated, not the strategy itself. A toy code sketch after the comparison table below makes this method and the illness-script idea concrete.
Murtagh's diagnostic strategy is a safety-net approach built around five questions. For any presenting complaint, ask: What is the probability diagnosis? What serious condition must I not miss? What conditions are often missed? Could this be a masquerade? (Murtagh listed seven common masqueraders: depression, diabetes, drugs, anaemia, thyroid disease, spinal dysfunction, and urinary tract infection.) Is the patient trying to tell me something? This framework is particularly valuable for primary care and for students who need a structured way to avoid premature closure.
SBAR (Situation, Background, Assessment, Recommendation) is primarily a communication tool, adapted for healthcare by Kaiser Permanente from US Navy submarine communication protocols. It supports clinical reasoning indirectly by requiring the clinician to formulate a specific assessment and recommendation before handing off a patient. It is less a diagnostic framework than a structured way to articulate your reasoning to someone else.
| Framework | Best suited for | Level |
|---|---|---|
| Illness scripts | Pattern recognition, building diagnostic expertise | All levels; develops with experience |
| Semantic qualifiers | Problem representation, case presentations | Students and trainees |
| Hypothetico-deductive method | Unfamiliar or complex cases | Novices; experts facing atypical presentations |
| Murtagh's five questions | Safety-net thinking, avoiding missed diagnoses | Primary care, all levels |
| SBAR | Clinical handoffs and communication | All levels |
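The illness-script and hypothetico-deductive frameworks can be made concrete in code. The sketch below is a toy illustration only: the two scripts are simplified from Case 4, and the scoring rule (one point per predicted finding that is present) is an invented stand-in for real pattern recognition, not clinical logic.

```python
# Toy sketch: illness scripts as stored structures, plus a naive
# hypothetico-deductive pass that ranks hypotheses against the findings.
# Scripts and scoring are deliberately simplified illustrations.

from dataclasses import dataclass

@dataclass
class IllnessScript:
    diagnosis: str
    enabling_conditions: set[str]  # risk factors and demographics
    fault: str                     # underlying pathophysiology
    consequences: set[str]         # expected signs, symptoms, results

SCRIPTS = [
    IllnessScript(
        diagnosis="Primary hypothyroidism",
        enabling_conditions={"female", "family history of thyroid disease"},
        fault="autoimmune destruction of thyroid tissue",
        consequences={"fatigue", "weight gain", "constipation",
                      "cold intolerance", "dry skin", "menorrhagia"},
    ),
    IllnessScript(
        diagnosis="Depression",
        enabling_conditions={"psychosocial stressors"},
        fault="mood disorder",
        consequences={"fatigue", "low mood", "anhedonia", "poor sleep"},
    ),
]

def score(script: IllnessScript, findings: set[str]) -> int:
    """Count how many observed findings the script predicts."""
    return len(script.consequences & findings)

# The problem representation from Case 4, as a set of semantic qualifiers.
findings = {"fatigue", "weight gain", "constipation",
            "cold intolerance", "dry skin", "menorrhagia"}

for s in sorted(SCRIPTS, key=lambda s: score(s, findings), reverse=True):
    print(f"{s.diagnosis}: {score(s, findings)} matching findings")
```

Real recognition also weighs what is absent (the missing anhedonia in Case 4 argued against depression), but the shape is the same: a library of stored scripts matched against a tightly summarised problem representation.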
How to practise clinical reasoning every day
Geoffrey Norman's research on expertise development established a finding that should both humble and motivate you: clinical reasoning is not a general transferable skill. It is content-specific and case-specific. Expertise lies not in possessing a superior reasoning strategy but in having a richer, better-organised store of clinical knowledge built from thousands of patient encounters (Norman, 2005). There are no shortcuts. But there are ways to accelerate the process.
Structured reflection after cases. This has the strongest evidence base of any training method. Mamede and colleagues demonstrated that students who practised structured reflection — systematically considering their initial impression, listing alternatives, identifying what fits and what doesn't — achieved diagnostic accuracy of 0.67 compared with 0.36 in a control group, with effects persisting one week later and transferring to novel diseases (Mamede et al., 2014). After every patient encounter, even a brief one, ask yourself: What was my initial hypothesis? What else could it have been? What findings supported my conclusion? What findings didn't fit?
Deliberate case practice. A 2022 NEJM editorial argued for deliberate practice at the virtual bedside to improve clinical reasoning, drawing on Ericsson's framework: repeated exposure to varied cases with immediate feedback (Dhaliwal & Detsky, 2022). Clinical case series — whether in textbooks, apps, or case conferences — build illness scripts when combined with active reasoning rather than passive reading. A 2025 systematic review of 50 studies found that 92% of clinical reasoning teaching interventions reported measurable improvement, with small-group and technology-enhanced approaches performing best (Williams et al., 2025).
Bedside teaching with think-aloud. When your attending examines a patient, ask them to verbalise their thought process. What cues triggered their hypotheses? When did they narrow the differential? Why did they order that specific test? Observing expert reasoning in real time is one of the most efficient ways to absorb clinical thinking patterns you cannot learn from books.
Diagnostic time-outs. Ely, Graber, and Croskerry proposed a simple checklist-based approach: before finalising any diagnosis, pause and ask three questions. Was I comprehensive? Did I consider the worst-case scenario? Do I need to make this diagnosis now, or can I watch and wait? (Ely, Graber & Croskerry, 2011). Building this into your daily routine — even mentally — counteracts premature closure and anchoring.
Case-based games and simulations. Virtual patient tools have shown positive effects on clinical reasoning in 58% of studies reviewed in a 2022 BMC Medical Education systematic review (Plackett et al., 2022). Apps like HeyDoctor, which present a new AI-simulated patient case every day, combine several evidence-backed elements: varied case exposure, free-text history-taking that mirrors real clinical encounters, hypothesis testing through investigation ordering, and immediate feedback on diagnostic accuracy. Daily case practice builds the storehouse of solved problems that Norman identified as the foundation of expertise.
Follow up on your patients. This might be the most powerful habit and the least practised. Clinicians rarely receive systematic feedback on diagnostic accuracy. One review noted that the average US residency graduate completes roughly 2,500 to 3,000 hours of clinical practice — far short of the 10,000 hours of deliberate practice associated with expert performance in other domains (Dhaliwal, 2011). The gap can be partially closed by making each encounter count more. Follow up on what happened after your shift. Check whether your working diagnosis was confirmed or revised. Record the outcome in a brief reflection journal — even a few lines per case. Each correction refines your illness scripts. Each confirmation strengthens them. Over months, your journal becomes a personal database of clinical reasoning lessons that no textbook can replicate.
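To make the journal habit concrete, here is a minimal sketch in Python using only the standard library. The file name and field names are invented for illustration, not a prescribed format; the point is pairing each working diagnosis with its eventual outcome, whatever format you choose.

```python
# Minimal sketch of a diagnostic follow-up journal, stored as JSON lines.
# File name and fields are illustrative, not a prescribed format.

import json
from datetime import date
from pathlib import Path

JOURNAL = Path("reasoning_journal.jsonl")  # hypothetical local file

def log_case(presentation: str, working_dx: str,
             alternatives: list[str], final_dx: str, lesson: str) -> None:
    """Append one reflection entry: hypothesis versus outcome, plus the lesson."""
    entry = {
        "date": date.today().isoformat(),
        "presentation": presentation,
        "working_diagnosis": working_dx,
        "alternatives_considered": alternatives,
        "final_diagnosis": final_dx,
        "lesson": lesson,
    }
    with JOURNAL.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Example entry, loosely based on Case 2 above.
log_case(
    presentation="78M, acute confusion over three days",
    working_dx="UTI-associated delirium",
    alternatives=["stroke", "medication side effect"],
    final_dx="anticholinergic delirium (oxybutynin)",
    lesson="Take the full drug history before ordering imaging.",
)
```

A few lines per case is enough; what compounds over months is the link between the diagnosis you committed to and the diagnosis that turned out to be true.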
Putting it all together
Clinical reasoning is not one skill. It is a collection of cognitive habits: generating hypotheses early, using semantic qualifiers to sharpen your thinking, recognising when System 1 has been triggered and whether it deserves trust, actively seeking disconfirming evidence, running safety-net checklists, and reflecting on every case afterward.
You will not master these habits in a lecture or by reading this guide once. They develop through repeated practice with real and simulated patients over months and years. Trowbridge noted that there are no quick fixes in guiding learners to diagnostic reasoning expertise — it is a long journey, and the goal is continuous improvement, not a finish line (Trowbridge, 2008). The 2015 National Academies report found that diagnosis should be understood as a process, not a moment — and improving that process requires investing in the cognitive skills of the people who do it.
The numbers bear repeating. Roughly 795,000 Americans experience serious harm from diagnostic error each year. Three-quarters of that harm involves diseases that clinicians encounter regularly: strokes, infections, cancers. These are not rare zebras. They are missed horses. And the primary reason they are missed is not lack of knowledge — it is flawed reasoning: anchoring on the wrong cue, closing the case too early, letting a recent case distort the differential, or rationalising contradictory evidence rather than following it.
Every one of those errors is a reasoning error. And every reasoning error represents a skill that can be trained.
Start small. After your next patient encounter, spend 60 seconds asking yourself three questions: What was my leading hypothesis? What else could it be? What would change my mind? If you do this consistently, you are practising clinical reasoning — and the evidence says it works.
References
- Bordage G. Elaborated knowledge: a key to successful diagnostic thinking. Academic Medicine. 1994;69(11):883–885.
- Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Academic Medicine. 2003;78(8):775–780.
- Croskerry P. A universal model of diagnostic reasoning. Academic Medicine. 2009;84(8):1022–1028.
- Croskerry P. From mindless to mindful practice — cognitive bias and clinical decision making. New England Journal of Medicine. 2013;368(26):2445–2448.
- Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Quality & Safety. 2013;22(Suppl 2):ii58–ii64.
- Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, MA: Harvard University Press; 1978.
- Ely JW, Graber ML, Croskerry P. Checklists to reduce diagnostic errors. Academic Medicine. 2011;86(3):307–313.
- Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Archives of Internal Medicine. 2005;165(13):1493–1499.
- Ly DP, Shekelle PG, Song Z. Evidence for anchoring bias during physician decision-making. JAMA Internal Medicine. 2023;183(8):818–823.
- Mamede S, van Gog T, van den Berge K, et al. Effect of availability bias and reflective reasoning on diagnostic accuracy among internal medicine residents. JAMA. 2010;304(11):1198–1203.
- Mamede S, van Gog T, Moura AS, et al. How can students' diagnostic competence benefit most from practice with clinical cases? The effects of structured reflection on future diagnosis of the same and novel diseases. Academic Medicine. 2014;89(1):121–127.
- National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. Washington, DC: National Academies Press; 2015.
- Newman-Toker DE, Nassery N, Schaffer AC, et al. Burden of serious harms from diagnostic error in the USA. BMJ Quality & Safety. 2024;33(2):109–120.
- Norman G. Research in clinical reasoning: past history and current trends. Medical Education. 2005;39(4):418–427.
- Schmidt HG, Norman GR, Boshuizen HPA. A cognitive perspective on medical expertise: theory and implications. Academic Medicine. 1990;65(10):611–621.
- Singh H, Meyer AND, Thomas EJ. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. BMJ Quality & Safety. 2014;23(9):727–731.
- Trowbridge RL. Twelve tips for teaching avoidance of diagnostic errors. Medical Teacher. 2008;30(5):496–500.