Is Facebook Engaged in the Practice of Medicine?


What is considered the practice of medicine?

In the seminal 1964 Supreme Court case addressing the threshold definition of obscenity, Jacobellis v. Ohio, Justice Potter Stewart opined:

“I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [“hard-core pornography”], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.”

What constitutes the practice of medicine often distills to “I know it when I see it.”

The states regulate the practice of medicine through their enabling statutes, the Medical Practice Acts. The consensus is that a person practices medicine when he or she tries to diagnose or cure an illness or injury, prescribes drugs, or claims to be a doctor.

People practice medicine. What about a device that tries to diagnose or cure an illness or injury? Generally, that is addressed by the FDA. The FDA says it does not regulate the practice of medicine.

But it will properly control what drugs are available to physicians. The Court in U.S. v. Rutherford held that the distribution of Laetrile was prohibited for any use until it was approved by the FDA. “Although the Court did not explicitly address the practice of medicine exception, its holding necessarily places limitations on a physician’s unfettered practice of medicine by interpreting the Act as intending a pre-market approval process applicable to all drugs, whether sold over-the-counter or by prescription.” By federal statute, Congress has required the FDA Commissioner to assure that all ‘new drugs’ are safe and effective for use “under the conditions prescribed, recommended, or suggested” in the labeling. For the FDA to allow a new drug into interstate commerce, there must be “substantial evidence” upon which experts in the field could conclude that the drug is safe and effective.

Which brings me to an interesting article by Mason Marks, MD, JD, who asks whether Facebook has overstepped its bounds and engaged in the practice of medicine by using artificial intelligence to flag posts for suicide risk. Or, more broadly, should such a practice be regulated?

Let’s start with the obvious. If suicide risk can be reliably identified and suicides prevented (by Facebook or anyone else), that would be a good thing. Such a system would not even need to be perfect.

In 2017, Facebook announced its system had triggered 100 wellness checks in one month. A wellness check generally means the police will visit the house to make sure all is OK. If a suicide is being attempted, they will intervene. In several high-profile cases, police arrived in time to prevent a suicide. In other cases, they arrived too late to change the outcome.
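To make the mechanics concrete, here is a deliberately oversimplified and purely hypothetical sketch, in Python, of what a post-scoring and escalation pipeline could look like in principle. Facebook's actual algorithms are trade secrets; none of the phrases, weights, thresholds, or names below reflects its real system.

```python
# Purely illustrative, hypothetical sketch of a text-based risk-flagging pipeline.
# Facebook's actual model is a trade secret; nothing here reflects its real system.

from dataclasses import dataclass

# Hand-picked phrases and weights for illustration only; a production system
# would use machine-learned features, not a keyword list.
RISK_PHRASES = {
    "can't go on": 0.4,
    "goodbye everyone": 0.5,
    "end it all": 0.6,
}

ESCALATION_THRESHOLD = 0.8  # assumed cutoff for routing a post to human review


@dataclass
class Prediction:
    score: float      # crude risk score between 0 and 1
    escalate: bool    # whether the post would be routed for review


def score_post(text: str) -> Prediction:
    """Assign a toy risk score to a post and decide whether to escalate it."""
    lowered = text.lower()
    score = sum(weight for phrase, weight in RISK_PHRASES.items() if phrase in lowered)
    score = min(score, 1.0)
    return Prediction(score=score, escalate=score >= ESCALATION_THRESHOLD)


if __name__ == "__main__":
    example = "Goodbye everyone. I just want to end it all."
    print(score_post(example))  # Prediction(score=1.0, escalate=True)
```

Even this toy version shows where the policy questions arise: who chooses the features, who sets the threshold, and what happens downstream once a post is escalated.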


Dr. Marks notes that Facebook treats its artificial intelligence algorithms as trade secrets. Furthermore, Facebook does not follow up to determine whether its interventions were warranted or succeeded. It passes the baton. Dr. Marks notes that we do not even know if Facebook’s interventions are safe and effective. Do the benefits of Facebook’s system outweigh any harm it causes?

At first, I was prepared to dismiss those arguments. Who would be against suicide prevention?

After reading his arguments, I reconsidered.

Unlike medical suicide prediction research, which undergoes ethics review by institutional review boards and is published in academic journals, the methods and outcomes of social suicide prediction remain confidential. We don’t know whether it is safe or effective.

If a medical professional engages in suicide prevention, he is bound by privacy considerations and HIPAA. Not so with Facebook. (One exception is noted in the footnote.[1]) Facebook can share its predictions with third parties without consumer knowledge or consent. Who might be interested in such information? Insurance companies, credit card companies, and so on: any entity trying to determine whether a consumer will cost it money or fail to pay.

Couldn’t Congress just prevent companies from selling this information? Not so fast.

Advertisers and data brokers may argue that the collection and sale of suicide predictions constitutes protected commercial speech under the First Amendment, and they might be right. In Sorrell v. IMS Health, the US Supreme Court struck down a Vermont law restricting the sale of pharmacy records containing doctors’ prescribing habits. The Court reasoned that the law infringed the First Amendment rights of data brokers and drug makers because it prohibited them from purchasing the data while allowing it to be shared for other uses. This opinion may threaten any future state laws limiting the sale of suicide predictions. Such laws must be drafted with this case in mind, allowing the sharing of suicide predictions only for a narrow range of purposes such as research, or prohibiting it completely.

Dr. Marks also argues that such interventions pose risks to consumer safety and autonomy. How often do wellness checks lead to involuntary hospitalizations that create anxiety, depression, and anger, and later increase the risk of suicide? This concern is most compelling when the wellness check that led to the involuntary hospitalization was inappropriate in the first place.

Well, what to do?

Dr. Marks states that suicide prediction algorithms should be regulated as software-based medical devices. The FDA already has a program in place to address such algorithms when the software is intended to diagnose, monitor, or alleviate a disease or injury.

Or

[a]lternatively, we might require that social suicide predictions be made under the guidance of licensed healthcare providers. For now, humans remain in the loop at Facebook …, yet that may change. Facebook has over two billion users, and as it continuously monitors user-generated content for a growing list of threats, the temptation to automate suicide prediction will grow.

Even if human moderators remain in the system, AI-generated predictions may nudge them toward contacting police even when they have reservations about doing so. … [S]ocial suicide prediction algorithms are proprietary black boxes, and the logic behind their decisions is off-limits to people relying on their scores and those affected by them.

Many patients who attempt suicide fully intend to succeed. But many have regrets, either at the time of the attempt or later, with the benefit of having been saved and engaging in therapy. Saving a life is a worthy goal. Still, Dr. Marks urges caution.

Tech companies may like to “move fast and break things,” but suicide prediction is an area that should be pursued methodically and with great caution. Lives, liberty, and equality are on the line.

What do you think?


[1] The California Consumer Privacy Act of 2018 (CCPA) provides some safeguards, allowing consumers to request the categories of personal information collected and to ask that personal information be deleted. The CCPA includes inferred health data within its definition of personal information, which likely includes suicide predictions. While these safeguards will increase the transparency of social suicide prediction, the CCPA has significant gaps. For instance, it does not apply to non-profit organizations such as Crisis Text Line. Furthermore, the tech industry is lobbying to weaken the CCPA and to implement softer federal laws to preempt it.


ABOUT THE AUTHOR

Jeffrey Segal, MD, JD

Dr. Jeffrey Segal, Chief Executive Officer and Founder of Medical Justice, is a board-certified neurosurgeon. In the process of conceiving, funding, developing, and growing Medical Justice, Dr. Segal has established himself as one of the country’s leading authorities on medical malpractice issues, counterclaims, and internet-based assaults on reputation.

Dr. Segal holds an M.D. from Baylor College of Medicine, where he also completed a neurosurgical residency. Dr. Segal served as a Spinal Surgery Fellow at The University of South Florida Medical School. He is a member of Phi Beta Kappa as well as the AOA Medical Honor Society. Dr. Segal received his B.A. from the University of Texas and graduated with a J.D. from Concord Law School with highest honors.

If you have a medico-legal question, write to Medical Justice at infonews@medicaljustice.com.

4 thoughts on “Is Facebook Engaged in the Practice of Medicine?”

  1. IMHO, connecting social media to police dispatchers is a bad precedent to set. Consider the possibilities if the scope of this practice is expanded. With a for-profit prison system well ensconced in this country, it is all too easy to find oneself in deep weeds over nothing but an allegation. Allowing computers to trigger police interventions? Really?

  2. Where to begin? Does Facebook practice medicine? Does it give medical advice, see patients directly, prescribe medications? If not, then the answer is no.
    Does it refer people for treatment? Perhaps, but is that any different from what a friend or family member might do?
    Does Facebook sell your information? Absolutely. That is why people should be vigilant about what they post on social media. So Facebook sells your information to a company on the basis of suicide risk. But how accurate are those algorithms, and on what are they really based?
    Did the person not give up their rights once they signed off on the Terms and Conditions on Facebook? Of course they did. That is how Facebook provides them that “Wonderful” user platform for free.
    With all of the adverse publicity about Facebook and all of its pitfalls, it is only a matter of time before someone comes up with a better social media platform. Does anyone remember MySpace? Does anyone think that Facebook has such a wonderful, friendly user interface that someone could not come along and design a better one? All of social media and technology is ripe for disruption when someone else comes along and invents a better version.
    Any intervention in this realm with the hope of making Facebook suicide prevention more responsible is likely to carry additional censorship risks with other untoward and unintended consequences.
    The user chooses to share information with Facebook; Facebook chooses what to do with the information in exchange for providing its platform for free. Many people cry out for help before a suicide attempt in a variety of ways, and people don’t hear the call for help. One suicide prevented is worth the cost.

  3. Facebook generally doesn’t care about its subscribers. If its algorithm for alerting police to a possibly suicidal subscriber includes 1) having no “Friends” and 2) someone screaming in the wilderness at the top of his lungs, it may be reasonable. How many terrorist attacks were thwarted by FB? AntiSocialMedia platforms are just that.

  4. Here’s a question: is Facebook’s AI conscious? If it is, can you “own” it? It seems that goes to the question of whether Facebook is practicing medicine, assuming that assessing suicide risk and intervening is the sole purview of the practice of medicine.

    But it isn’t. When an EMT or police officer intervenes (or at least tries to), they’re not practicing medicine. They’re intervening in a crisis as Good Samaritans, although paid ones.

    Whether an AI should do an intervention is the real question, not whether that intervention is within the sole realm of medical practice. And I don’t have a simple answer for that one.

