AI vs Doctors: Who Will Patients Trust in the Future of Healthcare?
Competitive Introduction
2026 is about a month old, and we've already seen two big announcements in AI and healthcare.
First, OpenAI announced the release of ChatGPT Health, a dedicated experience built on its large language model (LLM) that combines the intelligence of ChatGPT with one's personal health information.
Four days later, Anthropic announced Claude for Healthcare, introducing a complementary set of tools and resources that enable healthcare providers, payers, and consumers to use Claude for medical purposes through HIPAA-ready products.
Now, we face the ultimate question of the autonomous age: Who will we trust with our lives—the physician or the machine?
Who Will We Trust?
We are currently living through a staggering "Trust Paradox."
According to a KFF Tracking Poll (Jan. 2025), 85% of adults trust their physician to make the right recommendation when it comes to their health.
Yet data published in Becker's shows that general trust in physicians and hospitals has plummeted from 71.5% in 2020 to just 40.1% today.
How can both be true? We trust the person, but we no longer trust the system. And this is where AI enters. With 230 million people already asking ChatGPT health questions every week, the public has already voted with their thumbs.
The 4T's of Healthcare: Why AI Has the Upper Hand
To understand why AI is winning, I revisited my own TRUST equation, better known as the 4T’s of Medicine. When we measure a human doctor against an LLM, the "math" of the patient experience starts to favor the machine.
- Transparency - Can you have an honest, unhurried conversation? In a clinic, you get 15 minutes. With Claude or ChatGPT, you have an infinite window to dig into the "why" behind a diagnosis.
- Transition - Can your provider map your journey from "sick today" back to "normal life"? AI doesn't just give a diagnosis; it builds a 24/7 roadmap based on your specific daily habits.
- Time - Access is the ultimate currency. While a patient might wait weeks for a specialist or hours for a callback, an AI "professional" is available 24/7. It is always on, accessible via voice, text, or image the moment an ailment arises.
- Truth - This is the silent "T." In my clinical experience, patients sometimes withhold the truth about their habits - whether it's drinking, smoking, or symptoms - due to fear of judgment. You cannot embarrass a machine.
In today's climate, physicians are pressured to focus on documentation, the length of the appointment, and the stated reason for the visit. When a patient has an additional question, they often find that the physician is short on time, that the question is delegated to a nurse or other clinician, or, in some instances, that the extra time comes with an additional charge.
Patients may not always share all of their important history with a physician due to embarrassment, fear of being judged, or a belief that it is not relevant to the current visit. On the other side of the coin, I have seen physicians who do not always believe the patient is being completely honest with them about their personal history.
Physicians have set hours for appointment scheduling; they take leave (however rarely), and they themselves get sick and have to take time off work. LLMs such as ChatGPT or Claude for Healthcare, by contrast, are always on. As long as patients have a smartphone (91% of Americans own one) and the app, they can access a conversation with an "artificially intelligent" professional at any time. 24/7, no matter the question, concern, idea, thought, or ailment, an answer is merely a voice, image, video, or text message away.
My Own Experience with LLMs
I use all of the major LLMs (ChatGPT, Gemini, Claude, DeepSeek), but for my personal health and deep thinking, I use Claude.
Early on, I treated LLMs like friends. ChatGPT was the "people pleaser" of the group, always saying yes even when it should challenge you. Gemini felt like a snippy teenager who would get a bit of an attitude with me during a long session. But Claude is different. Claude is a partner. It banters, it welcomes me, and, most importantly, it identifies my blind spots.
As I have continued experimenting, I use each of the models for different work, and I sometimes cross-pollinate the work to optimize the output. Claude has remained my favorite because, in a healthcare context, you don't need a "yes man"; you need a collaborator who pushes your limits and helps you see around corners.
Radical Honesty and the "N of 1"
The most profound shift isn't the technology; it's the data. My gut tells me that people will soon trust LLMs with the secrets they keep from their spouses and their doctors. Both OpenAI and Anthropic are introducing intelligent assistants that can be available to millions of people in real time, when and where they need them, and provide them with answers.
The more information we give these health apps - our sleep cycles, genetic markers, daily journals, and lab results - the smarter they become. We are moving away from "population health" (treating you like everyone else) and toward the N of 1 (treating you like only you).
Ten years ago, I stood on stage and told 5,000 radiologists that they had a decade to step out of the dark and into the light and "own their words." I told them they needed to become the guides of the patient journey before the window closed.
That decade is over. On January 7th and 11th, 2026, when OpenAI and Anthropic announced their healthcare-specific LLMs...the N of 1 arrived.
The N of 1 is no longer a theoretical concept; it is a subscription service. And the question is no longer whether AI is "ready" for healthcare. It is this: Who will you trust more with your life, a physician who has 15 minutes for you, or an intelligence that has been with you every second of the day? Or a combination of both?
The choice is yours, at the N of 1.