WC.com

Tuesday, July 7, 2020

Urging Behavior - Liability?

Back in February 2020 (seems like years ago), NBC News reported on a vaccination story. It centered on "Facebook groups that routinely traffic in anti-vaccination propaganda." NBC refers to these as "health misinformation groups," and claims Facebook "hosts a vast network" of them. The story focused upon a "4-year-old Colorado boy who died" from influenza. The child's mother said her physician prescribed a common antiviral drug, but that she declined to fill the prescription.

The story notes that "concerns about side effects are common" with this medication, and that those concerns are shared "even outside anti-vaccination echo chambers." There is ridicule and condemnation in the story directed at the advice this child's mother received in various online conversations, some of which have since been deleted. NBC suggests that Facebook should do more to address "vaccine misinformation."

Around the same time, Michelle Carter was back in the news. Remember Is it Manslaughter? Does it Matter if it is Not? (April 2015). Ms. Carter was about 16 years old when she engaged in messaging with her young boyfriend. He was contemplating suicide, and eventually carried it out. In 2017, in what was labeled a "texting-suicide case," she was convicted of involuntary manslaughter for her role in encouraging him toward his own demise. Then, in late January 2020, she was released from jail "after serving 11 months."

In Ms. Carter's conviction is a precedent for criminal liability for one's words. With her words, she "encouraged" her eighteen-year-old boyfriend Conrad, according to CNN, in text messages sent in July 2014. Those texts were all published verbatim in another CNN story. The messages are a mixture. In some, Ms. Carter urges help: "But the mental hospital would help you" and "Please don't" (harm yourself). Others are less helpful: "You can't keep pushing it off" and "You're gonna have to prove me wrong because I just don't think you really want this."

There was significant media attention when Ms. Carter was convicted. There was angst and upset over the messages and the outcome. A young life extinguished; another severely damaged by a manslaughter conviction and prison sentence. 

When I consider the two stories, each involving a sad death, I wonder whether the time will come when those who post medical advice on social media might be held to account for the outcomes of their encouragement. Why is a young woman who urged suicide imprisoned, while contributors to a Facebook forum are not? In each case, a young person died. In each case, encouragement was rendered. But, the outcomes are notably different.

Perhaps a multitude of message exchanges are underway at this very moment in which advice is rendered or behavior encouraged (or discouraged). There is a directness to such messaging, a privacy of conversation. But, is there any distinction if the messaging is broader, and more public, in a forum like Facebook?

And, is it appropriate that the platform (Facebook) has no liability for the information posted by the "anti-vaxxers" it hosts? That broad protection dates back to 1996 and the enactment of a federal statute "meant to protect young internet companies from liability," according to the New York Times. That law has been in the news recently. Business Insider reported that President Trump supports the effort to "weaken protections for internet companies" in this regard. Facebook currently faces a campaign by advertisers over its policies, or lack of policies, on content editing.

That statute, Section 230 of the Communications Decency Act, protects platforms from liability for content produced by others (their users). The Times notes that this protection extends to sites that host "hate speech, anti-Semitic content, and racist tropes." It is based upon the premise that the platform is merely a conduit and cannot effectively police all of the submitted content. Therefore, this law "permits internet companies to moderate their sites without being on the hook legally for everything they host."

In Hassell v. Bird, 420 P.3d 776 (Cal. 2018), the California Supreme Court was asked to force a platform, Yelp, to remove "several postings deemed to have defamed a . . . lawyer." The Court concluded that the platform had no duty to remove the posts. As the LA Times described, the Court noted that these platforms receive many requests to remove information because people find various speech disagreeable, unliked, "threatening, obscene, fraudulent or in the present case, defamatory."

The Court concluded that decisions as to what is or is not appropriate online should be left to the platform. It concluded that the platforms must have the right "to make their own judgments about the material they host without interference from the courts." It is likely that many will agree with that conclusion in favor of the platform and its discretion. The real question, though, has never been about whether one has the right to express views, but whether one is responsible for those views and the outcomes that may come from them.

The recent argument for diminishing (or eliminating) the protection seems focused on the perception that platforms are already moderating, editing, and policing. Because they do so, the perception goes, they are capable of doing so; the concern is that they seem to act based upon ideological positions or beliefs, and without consistency. The argument seems to be that platforms have proven themselves capable and willing to police both users and content. Therefore, when they choose not to police either, perhaps the results of that ambivalence should be to their detriment.

As Rush once intoned in Freewill, "if you choose not to decide, you still have made a choice." In the end, should there be liability for people whose words lead to untoward outcomes, like the anti-vaxxers reported by NBC News? Should they be treated similarly to the teenager who served nearly a year in prison for texting encouragement to a despondent and vulnerable young boyfriend, or less harshly? And, should the platforms that give such speech wide and broad dissemination be held responsible for their individual decisions on who, what, and when to edit or label with warnings?

The situation is certainly evolving. And, there is plenty of room in all of this for discussion. It is interesting to watch the conversation and to understand the various perspectives. I have sent a great many tweets during this pandemic urging people to get out and exercise. Should I be held liable if someone gets hurt doing so, and should Twitter share the blame?