To see the future of AI in healthcare, look outside the system

I recently had a conversation with a colleague who has a company in the healthcare software space. It, of course, touched on AI. He expressed his view that AI won’t penetrate healthcare any time soon - and that it will be 20 years before real change arrives.
I of course don’t agree. But one of the most valuable things about speaking with people across many fields about their perception and awareness of AI is discovering how many interesting dynamics there are to integrate into the big picture of AI’s impact on society and individuals.
This colleague has spent years pushing against institutional and cultural barriers to technology adoption in the healthcare space. I’ve pursued technology introductions in other compliance-heavy sectors like banking, and it’s impossible to come through it without some scar tissue. Every institution exists to perpetuate itself, and regulatory institutions perpetuate themselves by installing more and more opposition to change. Trying to make change in that context can make one fatalistic about the idea that change could ever go faster. But in the context of AI, that fatalism is misplaced.
There’s a mistake here that transcends the discussion of AI and medicine: thinking that existing societal systems will remain the pivot around which AI must conform. What will actually happen is the reverse: AI is the grand pivot around which societal systems will rearrange themselves. (I like this framing so much I’m calling it the “Road to Artificia Doctrine” from now on.)
Outcomes among early adopters
For a small but growing subset of early adopters, AI has already delivered life-changing medical value - at low-to-zero end-user cost. There are scores of examples in which individuals with longstanding conditions left unresolved by standard medical channels have been successfully diagnosed in an AI assistant chat session, with the diagnosis later confirmed in a doctor's office.
There’s a selection effect at work: patients predisposed to be early adopters of a personal AI-assisted medical “consultation” tend to be those dissatisfied with their medical care. But the fact that these cases already exist is the signal to pay attention to.
Let’s look at some of the cases:

- Undiagnosed mitochondrial disease identified
- Identification of a false diagnosis of terminal blood cancer
- 3-year undiagnosed tethered cord syndrome in a child: https://radiologybusiness.com/topics/artificial-intelligence/after-seeing-17-different-doctors-boy-rare-condition-receives-diagnosis-chatgpt
- Undiagnosed potentially fatal drug complication
These cases can’t be dismissed - they are very consequential positive outcomes.
Interestingly, a clear measurement problem is emerging here: when recorded in existing EHRs, these cases look like outcomes of the current system, rather than successes outside the system correcting failures within it. They improve the baseline metrics of the pre-AI system even as they actually fill terrible gaps within it.
This mirrors the productivity paradox of the Internet era - the observation that despite widespread adoption of powerful digital technologies (e.g. internet, search engines, smartphones), measured productivity growth barely moved, failing to capture the effect of making knowledge work vastly more efficient.
The Big Adjustment - Zero marginal cost intelligence
People in every part of the knowledge economy have a hard adjustment to make: we’ve all had success in these fields based in large part on the strength of our intelligence. The arrival of zero-marginal-cost intelligence threatens more than just people’s jobs - it threatens their self-conception and their most valuable trait. This adjustment will impact every field a little differently, but here’s how I think it may impact doctors.
The first, most obvious area for change is medical education. So much of medical education is memorization of a large set of decision trees representing differential diagnoses and standards of care. AI already obliterates human performance on this type of task. It’s hard to make a case that curricula shouldn’t change now to assume greater reliance on LLMs as a method of searching that tree of possible diagnoses, and refocus study time on other areas.
Another impact is that human roles will ultimately be valued not on the basis of comparisons between humans - they will be valued based on those humans’ abilities relative to AI. This can lead to counterintuitive outcomes. For instance, there’s good reason to believe that human nurses may ultimately be more highly valued than human doctors - because AI will soon outcompete the mental work of doctors, but not the direct manual care of nurses. By this logic, human surgeons should continue to be highly valued for some time longer.
Disruption of the guilds, for the first time in history
Doctors and lawyers have maintained guild systems for hundreds of years. In disruption theory terms, doctors are the ultimate incumbents. Healthcare, looked at through this lens, can never reach a point of over-service or be "good enough". Until we banish death and decline, people will always be willing to pay for a better product. In a market like this, incumbents are supposed to be safe from disruption.
And yet the cases above, of people using AI assistants to generate or validate diagnoses, are a textbook example of disruption in action. It starts as “low-end” - worse in some ways, but better on a different axis of performance than the traditional providers. Over time, the disruptor’s capabilities expand, while the incumbent remains addicted to high-end offerings and is unable to compete as its offerings are eaten from below.
So what’s happening? There are two explanations.
First, even as our healthcare systems save lives, they impose a lot of costs on patients and society: the healthcare industry treats patients conspicuously unlike customers, and healthcare’s impact on national budgets makes any alternative delivery approach attractive.
Second, for all the power of disruptive innovations to threaten incumbents, they win primarily because incumbents can’t react, addicted as they are to the profits of high-end offerings. But AI has a stronger economic wedge than that. It promises to actually push out the Pareto frontier in healthcare - to soon be better and cheaper even at the high end.
How are AI assistants currently worse than doctors (today)?
AI assistants are worse than doctors in many ways. They require a bit of skill on the part of the patient to get the best results (although this could be said of patient interactions with doctors too). Clearly, AI assistants can only substitute for certain parts of the current healthcare system. They can’t complete the treatment process on their own: no referrals, and no prescriptions without a follow-up at the doctor’s office. They can’t be sued.
They can’t directly observe much about the patient. They can’t collect hands-on data - can’t palpate an abdomen, look in an ear, or smell ketoacidosis. They can hallucinate in their responses. They are a small disconnected island with respect to the healthcare system.
But how are AI assistants better?
They are vastly more accessible: lower latency, no gatekeepers to get past (receptionists), and no appointment necessary. No conforming to a schedule defined for the maximum benefit of the service provider. Use as much as you want, when you want. For some people - say, single working mothers - appointments are a barrier to consuming much healthcare at all.
They’re dramatically lower cost. Priced at effectively zero, they also generate savings by eliminating the opportunity cost of time spent visiting the doctor’s office. And they’re scalable in a way doctors simply aren’t. Depending on the severity of your symptoms or concerns, you can spend 30 seconds on a medical discussion, or 60 minutes. Come right back to the chat if you forgot to mention something or have another question. A GP’s office is consumable only in a fairly inflexible unit (the “visit”) of around 15 to 45 minutes.
They have better knowledge of rare conditions than all but a set of top-tier doctors that most people never interact with. And they can re-review your entire medical history on every “visit”.
For a patient today, clearly the best approach is to use both a human GP and an AI assistant for their healthcare.
Many would add “no certification” or “unregulated” to the worse list. Maybe, but it gets to go on the better list too. Like all disruptive innovations, this one could never have originated inside the system. Some regulation is necessary, but it’s never free of some very direct harms. We directly create healthcare service shortages, for instance, by tightly controlling the size of medical school graduating classes.1
There are plenty of valid critiques that can be made of current AI assistants in these contexts. Yet not every critique is valid.
Consider critiques that focus on the source of these reports (social media): these cases are not being recorded anywhere else, so this is just an argument to ignore what’s happening.
Anyone who tells you that people with these experiences shouldn't have consulted an AI assistant about their health issues is not thinking clearly or has an agenda that you should be wary of.
Try it yourself
Disclaimer: Don’t rely solely on interactions with your AI assistant to make any decision about your health. See your doctor. I am not a doctor. Your AI assistant is not a doctor.
Depending on the complexity of your issue, you can simply bang out a quick question, or upload a library of your test results (decide your own comfort with uploading personal data). But as a good starting point, here’s a prompt I’ve used to begin AI assistant conversations about health issues.
Prompt template:
A [male/female] patient age [N], [M lbs/kgs] presents with [onset/chronic] symptoms of [main symptom] over the last [time period]. Patient has a [sedentary/moderately active/active/athletic] lifestyle. Another office performed [...] tests, [“attached” or describe test results]. Patient also complains of [...]. Patient has a prior history of [chronic and acute conditions]. Patient takes the following medications: [names and dosages]. Patient uses the following recreational substances: [...].
Tasks:
1. List a rank-ordered differential diagnosis
2. For each item, cite supporting & contradicting findings in ≤ 2 lines.
3. Flag any red-alert features that warrant immediate ED referral.
4. Recommend the next best tests or referrals (evidence-based).
5. Ask clarifying questions if critical data are missing.
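
If you’d rather script this than paste it into a chat UI, here’s a minimal sketch using the OpenAI Python SDK. The model name and all patient details below are hypothetical placeholders I’ve chosen for illustration, not part of the template itself:

```python
# Minimal sketch: sending the prompt template above to a model via the
# OpenAI Python SDK. Assumes OPENAI_API_KEY is set in your environment.
# All patient details here are hypothetical placeholders - fill in your own.
from openai import OpenAI

client = OpenAI()

case = (
    "A female patient age 42, 150 lbs presents with chronic symptoms of "
    "migraine over the last 6 months. Patient has a moderately active "
    "lifestyle. Another office performed a CBC and metabolic panel; results "
    "were normal. Patient also complains of intermittent dizziness. Patient "
    "has a prior history of hypothyroidism. Patient takes the following "
    "medications: levothyroxine 50mcg daily. Patient uses the following "
    "recreational substances: none."
)

tasks = """Tasks:
1. List a rank-ordered differential diagnosis
2. For each item, cite supporting & contradicting findings in <= 2 lines.
3. Flag any red-alert features that warrant immediate ED referral.
4. Recommend the next best tests or referrals (evidence-based).
5. Ask clarifying questions if critical data are missing."""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder - use whichever model you prefer
    messages=[{"role": "user", "content": f"{case}\n\n{tasks}"}],
)
print(response.choices[0].message.content)
```

From there, you can continue the conversation by appending the model’s reply and your follow-up questions to the messages list - though for most people the chat UI is the simpler path.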
Follow up the AI assistant’s response with further questions, provide additional information, and investigate different aspects of the response. You’ll certainly learn a lot. Use this information in your doctor visit, either printed for their consumption or to inform your own line of questioning, after the doctor provides their own differential diagnosis.
Have feedback? Are there topics you’d like to see covered?
Reach out at: jeff @ roadtoartificia.com
1. Why Canada intentionally limits its supply of doctors, National Post, Feb 6 2023