Is Facebook Engaged in the Practice of Medicine?

What is considered the practice of medicine?

In the seminal 1964 Supreme Court case addressing the threshold definition of obscenity, Justice Potter Stewart opined:

“I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [“hard-core pornography”], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.”

What constitutes the practice of medicine often distills to “I know it when I see it.”

The states regulate the practice of medicine through their enabling Medical Practice Acts. The consensus is that a person practices medicine when he or she tries to diagnose or cure an illness or injury, prescribes drugs, or claims to be a doctor.

People practice medicine. But what about a device that tries to diagnose or cure an illness or injury? Generally, that is addressed by the FDA. The FDA says it does not regulate the practice of medicine.

But it will properly control what drugs are available to physicians. The Court in U.S. v. Rutherford held that the distribution of Laetrile was prohibited for any use until it was approved by the FDA. “Although the Court did not explicitly address the practice of medicine exception, its holding necessarily places limitations on a physician’s unfettered practice of medicine by interpreting the Act as intending a pre-market approval process applicable to all drugs, whether sold over-the-counter or by prescription.” By federal statute, Congress has required the FDA Commissioner to assure that all ‘new drugs’ are safe and effective for the use “under the conditions prescribed, recommended, or suggested” in the labeling. For the FDA to allow a new drug into interstate commerce, there must be “substantial evidence” upon which experts in the field could conclude that the drug is safe and effective.

Which brings me to an interesting article by Mason Marks, MD, JD, who asks whether Facebook has overstepped its bounds by engaging in the practice of medicine when it uses artificial intelligence to flag posts for suicide risk. Or, more broadly, should such a practice be regulated?

Let’s start with the obvious. If suicide can be reliably identified and prevented (by Facebook or anyone else), that would be a good thing. Such a system would not even need to be perfect.

In 2017, Facebook announced its system had triggered 100 wellness checks in one month. A wellness check generally means the police will visit the house to make sure all is OK. If a suicide is being attempted, they will intervene. In several high-profile cases, police arrived in time to prevent a suicide. In other cases, the police arrived too late to change the outcome.


Dr. Marks notes that Facebook treats its artificial intelligence algorithms as trade secrets. Furthermore, Facebook does not follow up to determine whether its intervention was warranted or succeeded. It passes the baton. He also notes that we do not even know if Facebook’s interventions are safe and effective. Do the benefits of Facebook’s system outweigh the harms it may cause?

At first, I was prepared to dismiss those arguments. Who would be against suicide prevention?

After reading his arguments, I reconsidered.

Unlike medical suicide prediction research, which undergoes ethics review by institutional review boards and is published in academic journals, the methods and outcomes of social suicide prediction remain confidential. We don’t know whether it is safe or effective.

If a medical professional engages in suicide prevention, he is bound by privacy considerations and HIPAA. Not so with Facebook. (One exception is noted in the footnote.[1]) Facebook can share its predictions with third parties without consumer knowledge or consent. Who might be interested in such information? Insurance companies, credit card companies, and so on: any entity trying to determine whether a consumer will cost it money or leave it unpaid.

Couldn’t Congress just prevent companies from selling this information? Not so fast.

Advertisers and data brokers may argue that the collection and sale of suicide predictions constitutes protected commercial speech under the First Amendment, and they might be right. In Sorrell v. IMS Health, the US Supreme Court struck down a Vermont law restricting the sale of pharmacy records containing doctors’ prescribing habits. The Court reasoned that the law infringed the First Amendment rights of data brokers and drug makers because it prohibited them from purchasing the data while allowing it to be shared for other uses. This opinion may threaten any future state laws limiting the sale of suicide predictions. Such laws must be drafted with this case in mind, allowing sharing of suicide predictions only for a narrow range of purposes such as research (or prohibiting it completely).

Dr. Marks also argues that such interventions pose risks to consumer safety and autonomy. How often are wellness checks associated with involuntary hospitalizations that create anxiety, depression, and anger, only to later increase the risk of suicide? This argument would be valid if the initial wellness check that led to the involuntary hospitalization was inappropriate.

Well, what to do?

Dr. Marks states that suicide prediction algorithms should be regulated as software-based medical devices. The FDA already has a program in place to address software intended to diagnose, monitor, or alleviate a disease or injury.

Or

[a]lternatively, we might require that social suicide predictions be made under the guidance of licensed healthcare providers. For now, humans remain in the loop at Facebook …, yet that may change. Facebook has over two billion users, and as it continuously monitors user-generated content for a growing list of threats, the temptation to automate suicide prediction will grow.

Even if human moderators remain in the system, AI-generated predictions may nudge them toward contacting police even when they have reservations about doing so. … [S]ocial suicide prediction algorithms are proprietary black boxes, and the logic behind their decisions is off-limits to people relying on their scores and those affected by them.

Many patients who attempt suicide fully intend to succeed. But many do so with regrets, either at the time of the attempt or later, with the benefit of having been saved and engaging in therapy. Saving a life is a worthy goal. But Dr. Marks urges caution.

Tech companies may like to “move fast and break things,” but suicide prediction is an area that should be pursued methodically and with great caution. Lives, liberty, and equality are on the line.

What do you think?


[1] The California Consumer Privacy Act of 2018 (CCPA) provides some safeguards, allowing consumers to request the categories of personal information collected and to ask that personal information be deleted. The CCPA includes inferred health data within its definition of personal information, which likely covers suicide predictions. While these safeguards will increase the transparency of social suicide prediction, the CCPA has significant gaps. For instance, it does not apply to non-profit organizations such as Crisis Text Line. Furthermore, the tech industry is lobbying to weaken the CCPA and to implement softer federal laws to preempt it.


ABOUT THE AUTHOR

Jeffrey Segal, MD, JD

Dr. Jeffrey Segal, Chief Executive Officer and Founder of Medical Justice, is a board-certified neurosurgeon. In the process of conceiving, funding, developing, and growing Medical Justice, Dr. Segal has established himself as one of the country’s leading authorities on medical malpractice issues, counterclaims, and internet-based assaults on reputation.

Dr. Segal holds an M.D. from Baylor College of Medicine, where he also completed a neurosurgical residency. Dr. Segal served as a Spinal Surgery Fellow at The University of South Florida Medical School. He is a member of Phi Beta Kappa as well as the AOA Medical Honor Society. Dr. Segal received his B.A. from the University of Texas and graduated with a J.D. from Concord Law School with highest honors.

If you have a medico-legal question, write to Medical Justice at infonews@medicaljustice.com.