AI is a Tool—Not a Shield: Can I Get Sued for Using It?

Artificial Intelligence (AI) is everywhere. It’s being used to transcribe visits, summarize research, draft letters, and more. Some say it’s the future of medicine. Others say it’s the Wild West. 

A question I’ve been asked more than once:
“If I use AI in my practice, can I get sued?” 

Short answer: Yes.

But let’s unpack that. 

First: You Can Be Sued for Anything 

This is America. People file lawsuits for all kinds of reasons—some legitimate, some not. Lawsuits are as American as apple pie. Whether you’re using AI, a staff member, or a yellow legal pad, if a patient believes they’ve been harmed, your name could end up on a complaint. 

So no, AI doesn’t make you immune to lawsuits. In fact, it may introduce new risks if you don’t treat it carefully. 

The Real Risk Is Over-Reliance 

Let’s say you use an AI scribe. You assume it’s accurate, so you don’t double-check the note. Later, a critical clinical detail is missing or mis-transcribed. The patient suffers a poor outcome. Now you’re facing a claim with an incomplete or inaccurate medical record.

Or imagine you ask an AI tool to summarize a new study or write a letter to a patient. The content sounds smart but turns out to be outdated or flat-out wrong. (And it may also sound impersonal, rude, or arrogant.) You didn’t catch the error. The patient relied on it—and got hurt. Guess who’s liable? You. 

AI Is a Tool. Not a Shield. 

Using AI won’t protect you from legal responsibility. In fact, you are still fully accountable for any output that influences patient care. If a mistake is made because you trusted the software, the plaintiff’s attorney won’t be suing the software. They’ll be suing you. 

In legal terms, AI is not a licensed provider. It’s not a person. It’s a tool. It doesn’t carry malpractice insurance. You do.

Think of AI Like Any Other Vendor 

You already trust third-party systems in your practice—EHRs, billing platforms, lab vendors. They help, but they also fail. The same is true with AI. 

So how do you reduce risk? 

  • Always review AI-generated content (or any content, for that matter). Don’t copy/paste blindly. (I still remember a case where a surgeon performed an urgent cholecystectomy and the template auto-populated the family history as “negative for liver disease.” Apparently, the patient’s brother had received a liver transplant for alpha-1 antitrypsin deficiency. This same patient, whose elevated liver function tests had been presumed due to gallstone disease, also had alpha-1 antitrypsin deficiency. It ran in the family. The result was a delayed diagnosis. It’s not clear how the delay affected treatment; perhaps he could have been placed on the liver transplant list sooner. Still, a lawsuit was filed.)
  • Understand the limitations of the tools you use. What data were they trained on? How often are they updated? 
  • Incorporate AI thoughtfully into your workflow, not as a replacement for human judgment. 
  • Document carefully. If AI touches your documentation, you need to be extra vigilant about accuracy. Not everything AI spits out is accurate. I’ve seen a number of letters to physicians threatening litigation, with AI-generated legal backgrounds referencing case law. Or fictional case law. The cited cases either did not exist, or the legal conclusion they were meant to buttress was the opposite of what the court actually ruled. Tread cautiously.

A Legal Gray Area—For Now 

One reason this topic is so murky is that AI regulation is lagging behind adoption. There are very few laws governing its use in healthcare right now. That may change. But in the meantime, if something goes wrong, expect the blame to fall on the licensed professional who used the tool, not the developer who built it. Can you sue the AI vendor? Hard to say. You’d have to review the terms of use in the license. Those terms may simply say the tool is yours to use as you see fit. Caveat emptor.

The Bottom Line 

AI is not magic. It’s not medicine. And it’s not malpractice insurance. 

It can be helpful. It can be efficient. But it can also be wrong—confidently and convincingly wrong. And when it is, you’re still the one on the hook. 

What do you think? 

1 thought on “AI is a Tool—Not a Shield: Can I Get Sued for Using It?”

  1. James Maisel, MD: The AMA, the CDC, and the NHS all recommend that physicians review all AI output. While generative AI may be better than scribes, it may contain hallucinations, omissions, and errors. ZyDoc offers to include medical language specialists in the loop to review the AI output and correct it BEFORE physician review, sign-off, and EHR insertion. Physicians do not want to make corrections, nor should they have to. Their opportunity costs are $10-$30/minute, so it is cost-effective. Other AI companies put all the burden on the physicians. AI companies should be up front about the error rates and the effort required to fix mistakes. One malpractice case yearly is too many and too expensive. Errors reflect poorly on the physician when patients and referring physicians receive documents.


Jeffrey Segal, MD, JD
Chief Executive Officer & Founder

Jeffrey Segal, MD, JD is a board-certified neurosurgeon and lawyer. In the process of conceiving, funding, developing, and growing Medical Justice, Dr. Segal has established himself as one of the country's leading authorities on medical malpractice issues, counterclaims, and internet-based assaults on reputation.
