I have been writing about artificial intelligence (AI) for several years. There is an amazing evolution happening around us in terms of what machines can do. Remember Chatbot Wins 160,000 Legal Cases (June 2016), or Artificial Intelligence in Our World (January 2017)? More recently, there was Artificial Intelligence Surveillance (August 2020). The fact is that robots are becoming a real part of our world. Artificial intelligence is part of that evolution, and part of a bigger shift in the way that computers operate and assist us.
In those efforts, perhaps there is benefit in a working definition. Britannica defines artificial intelligence broadly:
"the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings."
A friend in the IT business periodically jokes that "artificial intelligence" may also be associated with tasks usually performed by humans. That is a backhanded insult that the vast majority never catch. They seem to assume it is an extension of the definition, and miss that the punch line is essentially that humans are not necessarily "intelligent beings." And, perhaps in missing the joke, we demonstrate his point?
AI has been in the news in recent months because it is increasingly involved in efforts to protect us from online content that is deemed hurtful, hateful, or worse. There is a great deal on the Internet that Al Gore never dreamed of when he invented it (sorry Al, everyone knows you didn't invent it, and perhaps you never thought you did, but it is a funny punchline). For the sake of argument, things can be funny without being true. I know, I read it on the Internet. But, I digress.
Artificial intelligence programs are purportedly our friends. According to AIMagazine, social media platforms use "AI to fight against hate speeches and cyberbullying. It uses Deep Text to identify these messages and posts and remove them from the platform."
The AI "learns" as it goes, just the way we did as children. There is subtlety in context perhaps. That challenged us in our youth (for some, the challenge remains). Context and word use challenges AI, which makes assumptions and interprets relationships in an effort to define and categorize. A key target is hate speech. But, there is much anger and meanness on social media that is not blatantly hateful, but nonetheless hurtful. I have seen many a poor wildebeest calmly meandering the virtual Serengeti of social media only to be beset by pouncers and predators - many attacking out of pure, unadulterated ignorance or stupidity. It's enough to make one quit a platform and make a real friend somewhere instead (well almost).
Defining hate may be every bit as difficult as defining pornography, which has been a trope since Mr. Justice Potter Stewart infamously failed in 1964 with "I know it when I see it," Jacobellis v. Ohio, 378 U.S. 184 (1964). His dereliction there is the textbook failure of an appellate court. Courts should bring predictability and definition. A court that cannot define, and must persistently "see" each example to decide, is not an appellate court, but a failure. That is harsh, and I get that. But, it is nonetheless true. And, that is part of the challenge with hate: not only must it be defined, but the bots that patrol the expanse of social media must be able to know it when they see it.
AI must decide. It has to have parameters, definitions, and structure. It has to know what it is looking for, and could easily be programmed to look for words, or even characters. But, context matters. Just because someone says "I hate Brussels sprouts" does not mean that "hate" is truly involved. And, just because one avoids the word "hate" does not mean loathing is not involved (or at least implicated). One can easily be hateful without mentioning particular characters, words, or phrases. Some might see my derision of Mr. Justice Stewart as criticism or even abhorrence. But, alas, I hold no animosity for the man; I simply do not value his conclusion, and I find fault with it. But hate? No.
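To make that concrete, consider a minimal sketch in Python (emphatically not any platform's actual moderation code; the word list and sample posts are hypothetical). Matching characters is trivial; judging context is the hard part:

```python
# Toy illustration of why naive keyword matching fails as moderation:
# it flags benign posts and waves veiled ones through.

FLAGGED_WORDS = {"hate"}  # hypothetical, drastically simplified blocklist

def naive_filter(post: str) -> bool:
    """Flag a post if it contains any blocklisted word."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not words.isdisjoint(FLAGGED_WORDS)

posts = [
    "I hate Brussels sprouts!",           # flagged, but harmless
    "People like you don't belong here.", # not flagged, but arguably hateful
]

for post in posts:
    print(naive_filter(post), "-", post)
```

The filter dutifully flags the Brussels sprouts and passes the insult, which is precisely the problem.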
The size or volume of the challenge faced by AI is vexing in itself. Twitter has "a quarter of a billion users," and it is but one of many platforms. According to MIT Technology Review, it and other social media "has become a kind of aggregator of information." Those Twitter users are generating thousands, sometimes tens of thousands, of tweets per second. Facebook, Instagram, and more are receiving and publishing content from users similarly. And, either someone has to review it all before it is public ("when I see it"), or much will become public that is untrue, inappropriate, hateful, or worse. In steps AI as savior and solution. Or, as scapegoat?
There are a variety of challenges in our languages. There are words with various meanings, there is context, and there is slang. These kids today. I remember when "good" was good, before "bad" became better. That link is to Michael Jackson singing "Bad." I remember when he was good, singing "Bad," well before he was bad. Context can be critical.
I recently ran into a fellow "older" citizen who had utterly avoided a fast food phenomenon because of a social media post that suggested it was "the sh&%." You see, when we older folks were kids, that word (a short reference to feces) was not a description of something desirable, but of something to be avoided. The Urban Dictionary says that "sh&%" remains bad, and that the modifier "the" makes all the difference between good ("the sh&%") and bad (just "sh&%"). Context is critical. Beyond that, some of us are persistently confused by the latest slang. We cannot keep up. But somehow AI is supposed to?
And, to make it worse, our use of language is constantly changing. What is the latest slang? That is up to the young and the "hip" (an articulating joint, a part of a roof, or "very fashionable," you decide). Once those young folks know that the rest of us are on to them, long before Funk and Wagnalls provides a definition for us geriatrics, the youth moves on with some new vernacular. They literally intend for us old folks not to understand what they are saying, or seemingly so. But somehow AI is supposed to?
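Here again, a minimal sketch may help (a hypothetical lexicon and scores, not anyone's production system). A fixed sentiment vocabulary mis-scores slang whose meaning has flipped, and assigns no meaning at all to slang it has never seen:

```python
# Toy sentiment scorer with a static, dated lexicon. Real systems are far
# more sophisticated, but they share the problem: language moves first.

LEXICON = {"good": 1, "bad": -1, "hate": -1}  # hypothetical, frozen in time

def score(post: str) -> int:
    """Sum lexicon scores for known words; unknown slang scores zero."""
    return sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in post.split())

print(score("That concert was bad!"))     # -1, though the speaker meant praise
print(score("This burger is the sh&%."))  # 0, the slang is not in the lexicon
```

By the time the lexicon learns the new meaning, the vernacular has moved on.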
With all of our challenges with language and context, thank goodness for AI. Oh, I forgot to mention that the real purpose of developing AI has nothing to do with protecting us from speech that offends, misleads, or disturbs us. AI decides what we see on the Internet and social media. AIMagazine notes:
"AI enables social media marketers to get closer to their audience and understand their preferences. This helps them target their ads in a better way as well as create content in a better way."
AI is tracking us, watching us, and plotting against us. Those Snickers ads in my browsing experience prove it.
There is danger in "groupthink," as I discussed in Consensus in the Absence of Proof (January 2021) and Hippocrates, Harm, and Racism (May 2022). To provide the content that it thinks we want, AI examines us, monitors us, and feeds us the pablum it believes we individually desire (or need; remember Jagger, "you can't always get what you want," Rolling Stones 1969, the "B-side" of Honky Tonk Women; if you remember "B-sides," you are my generation or worse). AI not only covets groupthink, it drives groupthink by exposing you primarily to what it thinks you want. AI intentionally makes social media and the Internet perform as an echo chamber. And, that likely drives (mis)perceptions and beliefs in untoward directions.
But, as useful as our overlords believe AI is in pushing us content and advertising, it is as fallible as the humans that wrote it. Much of what you can or cannot get away with posting on the Internet may come down to these AI programs and their shortcomings, decisions, and persistence.
In November 2022, the British Broadcasting Corporation (BBC) published Astronomer in Twitter limbo over 'intimate' meteor. The story recounts how a professional astronomer posted a picture of a shooting star (which is quite beautiful). The AI at Twitter interpreted it as "intimate content" that was "shared without the consent of the participant." That, in itself, sounds a bit salacious.
She was blocked from making further posts and told to "delete the tweet." Only upon satisfying the demands of the AI ("delete") would she regain the ability to make further posts. The "12-hour ban" that was imposed then dragged on for three months. She engaged in, and was frustrated by, an "online appeals process," and steadfastly refused to take the easy way out (delete the video of the meteor). She said that to do so, "she would have had to agree that she had broken the rules." She complained that in this process she could never find anyone at the social media company to speak with about the issue. She was stuck in the realm of sending messages (a bane of modern interaction).
Now, there is right and there is right. We call such engagements "a Pyrrhic victory," which Webster's defines as "a victory that comes at a great cost, perhaps making the ordeal to win not worth it." In the practice of law, we see a great deal of principle, until the bill comes due. Some clients are adamant about fighting every little thing until they see the cost, and then they become less Pyrrhic and hopefully more rational. But, I digress again (this blog is about the law after all).
After the astronomer was fortunate enough to catch the attention of an international news organization (the BBC), it is not surprising that someone at Twitter fixed the issue. She was reinstated without deleting her meteor video, and, as they say, life goes on. But the story cites other examples of such errors, including people electing the easy out of "delete and admit." It is simply easier to go along with AI in order to get along. And, there is broad frustration with AI and its limits and challenges.
The astronomer's perceptions in another regard were also troubling. Despite the fact that we all have many potential outlets for involvement, this astronomer felt "a bit cut off from the astronomy world" during the ban. That is likely generational. In the old days, we used to visit people (this involved travel and interaction), we used an ancient device called a "telephone" to converse with people, often over great distance. In a pinch, we would write words on some flattened pulp and actually pay someone an astronomical sum of $.10 (yes, I am that old) to deliver that "paper" "letter" to someone across the country. If being away from social media affects you profoundly, you should get outside more (just saying, but "stay off of my lawn").
The import of AI worries me. The artificial is scary because we (lay people) don't understand it. The intelligence is scary because it may be either way too intelligent (think Sheldon Cooper, The Big Bang Theory, Warner Brothers 2007-2019) or way too dumb (a meteor is "intimate"). Despite its strengths, and probable flaws, it is here and steering us with its analysis of what we like or do not (or perhaps should). It is not bringing us content that is different and challenging, or making us think. It is shielding us from differences and distinctions, and denying us the chance to consider other perspectives. I, for one, would love more meteor images in my feed, and could not care less how the meteor may feel about it.
As the world continues to evolve, as we deal with the challenges of defining what is right, wrong, or even offensive (perhaps it is in the eye of the beholder), AI will drive our content and thus our perceptions. It will drive, for better or worse, our world. Do you trust it? Do you "hate" Brussels sprouts?