Thursday, January 3, 2019

Hallucinating Technology

The age of machines is upon us. This has been discussed before in Ross, AI and the New Paradigm Coming (March 2016), Artificial Intelligence in Our World (January 2017), Chatbot Wins 160,000 Legal Cases (June 2016), and The Coming Automation (November 2017). The changes that are coming our way are systemic and fundamental. The very fabric of our existence is going to change dramatically in decades to come, through the arrival and evolution of technological dominance. The power of robots and computers already amazes us, and there is significant development still to come. 

Futurism notes that "Artificial intelligence is rapidly developing and is already starting to change the world, at a pace that is worrying to some experts." Furthermore, Elon Musk, Stephen Hawking, and others now "often lament the dangers of unfettered development of AI systems." AI is being studied with regard to efficacy and application, but more recently also with regard to its impact. A recent Fortune survey found that a significant number of "tech experts" currently "are concerned that artificial intelligence will leave humanity worse off in 2030 than they are now."

The changes are coming. Label them "progress" or "threat," and they will come nonetheless. The Hollywood depictions are numerous. I, Robot, released in 2004, portends a future where all of our needs are met by robots and AI. A veritable army of robots, governed by intractable rules, assists mankind in virtually every regard. But one, Sonny, has evolved. In a voice-over, the viewer is warned "There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols." The humanization is suggested further in an exchange between Sonny and the detective investigating Sonny's creator; in it, Sonny notes, "I have even had dreams." Well, we all have, but we are not in the strictest sense robots (yet).

In the 1989 classic Ghostbusters II, the New York Mayor is contemplating how best to deal with the apparent proliferation of earthbound spirits and extraordinary circumstances. The Mayor is consulting staff regarding the state of affairs, and they are discouraging his inclination to call the Ghostbusters. The Mayor concludes:
"I spent an hour in my room last night talking to Fiorella LaGuardia, and he's been dead for forty years. Now where are the the Ghost Busters?" 
The audience is left to wonder whether the conversation actually occurred through paranormal forces or whether the Mayor was mistaken and perhaps hallucinating. In the years since, I have heard similar lines about "talking to Fiorello LaGuardia" a few times in the context of hallucination ("Yeah, and I suppose you discussed it with Fiorello LaGuardia?"). It has become a pop-culture reference familiar to many.

These seemingly random thoughts coalesced recently as I contemplated an article from the British Broadcasting Corporation (BBC), titled The Weird Events that Make Machines Hallucinate. Yes, you read that right. At the outset, the potential of a machine hallucinating may be no more disturbing than the New York Mayor doing so. But the lengthy BBC article is worthy of consideration.

The BBC claims that: 
"Over the past few years, there have been mounting examples of machines that can be made to see or hear things that aren’t there."
There is an effect produced by "noise," in which automated devices' "recognition systems" become disoriented, and "these machines can be made to hallucinate." As machines continue to evolve and gain control of our lives, their "mental" health is a viable topic of discussion.

The examples regarding "visual recognition systems" are notable. In one example, placing stickers on a stop sign rendered it unrecognizable to a computer. In another, "an image of a cat" was "tampered with." This left it appearing "normal to our eyes," but it was "misinterpreted as guacamole by so-called neural networks – the machine-learning algorithms that are driving much of modern AI technology."
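The "tampering" the BBC describes is what researchers call an adversarial perturbation: tiny pixel changes computed specifically to fool the classifier. The article does not describe any particular method, but as a purely illustrative sketch, here is roughly how one common approach, the fast gradient sign method, works, assuming a pretrained PyTorch image classifier. The model choice, the epsilon value, and the example usage are all hypothetical.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # An off-the-shelf pretrained classifier (illustrative choice).
    model = models.resnet18(pretrained=True)
    model.eval()

    def make_adversarial(image, true_label, epsilon=0.01):
        """Nudge every pixel slightly in the direction that increases the
        classifier's loss. The result still looks normal to a person, but
        the network may now misread it (a cat labeled as something else)."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Each pixel shifts by +/- epsilon according to the sign of its gradient.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()

    # Hypothetical usage: cat_tensor is a 1x3x224x224 image scaled to [0, 1],
    # and 281 is the ImageNet class index for "tabby cat."
    # adversarial = make_adversarial(cat_tensor, torch.tensor([281]))

The unsettling part is how small epsilon can be: the change is often invisible to a person, yet decisive for the network.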

These programs, so thoroughly fooled, are currently at the root of photo-recognition software, but they also have applications in things like driverless, autonomous vehicles. While a computer misidentifying your feline as an appetizer is perhaps harmless and even humorous, a system failing to recognize a stop sign is more disturbing.


Visual recognition systems have also been fooled by slight changes in "the texture and coloring of" things. A baseball "was misclassified as an espresso" and "a 3D-printed turtle was mistaken for a rifle," among "some 200 other examples" that were similar. Particularly if RoboCop becomes a reality, I would like that "rifle" error fixed!

As technology evolves, the BBC posits that these "deep learning neural networks" will become increasingly integrated into machine performance. These networks will tell machines how to function, and it suggests that having them "tricked into misreading ‘Stop’ signs" has serious implications. The news has already reported the first fatality tied to misplaced reliance on self-driving, a Florida collision involving a Tesla. Wired discussed whether the car (software and sensors) or driver error was to blame. If such a situation led to your injury, would you blame the car, the driver, or both?

Though there may be an inclination to see these examples as neither imperative nor personal today, the implications affect more than just visual recognition. In fact, a Google official says "On every domain I've seen . . . neural networks can be attacked to mis-classify inputs." That is, intentional interference is possible, beyond the examples of simple mistakes.

In some respects, the fallibility of neural networks may be comforting. Recently, workers' compensation observer Bob Wilson noted technology challenges in The LoweBot Robot. He concludes that machines are not perfect, but they are getting better. There is humor in their failings, which reassure us of our (current) supremacy. And the BBC's exposition on potential flaws may similarly bolster our esteem.

Why do these machines make these errors? The BBC notes that our understanding of human "neural recognition" is not as deep as we might hope. And the models that are programmed for machine recognition of objects are based upon that incomplete understanding of how the human brain functions. We humans build experiential databases in our brains. We see enough cats over time to differentiate them from dogs. Similarly, computers "process numerous examples," and when they can "achieve a good performance ‘on average,’" there is a tendency to accept that level of performance. But "good" leaves room for errors. Stated otherwise, "average" may leave lots of room for improvement.
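A back-of-the-envelope illustration may help show why "good on average" is not the same as good enough. The numbers below are entirely hypothetical (the BBC article offers no such figures), but the arithmetic makes the point:

    # Hypothetical numbers: a fleet of vehicles, each seeing many signs per day,
    # with a recognition system that is right 99% of the time "on average."
    accuracy = 0.99                     # assumed recognition accuracy
    signs_per_vehicle_per_day = 500     # assumed sign sightings per vehicle
    fleet_size = 10_000                 # assumed number of vehicles

    daily_decisions = signs_per_vehicle_per_day * fleet_size
    expected_misreads = daily_decisions * (1 - accuracy)
    print(f"Decisions per day: {daily_decisions:,}")               # 5,000,000
    print(f"Expected misreads per day: {expected_misreads:,.0f}")  # 50,000

Fifty thousand misread signs a day, under those assumptions, is a lot of room for improvement.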

I am reminded in that context of a variety of Internet "clickbait" involving people mistakenly rescuing animals. The underlying theme is a human interpreting some animal as a stray domestic pet, ending with the realization that the rescued "dog" is a raccoon, or a wolf/coyote, etc. Whether those situations are frequent, or just frequent clickbait, there is some support for the idea that our human "neural recognition" is not infallible. The challenge may emanate from the depth of our personal experiential database. Or, it may emanate from the unique qualities of that particular thing we are presently observing. If it is missing something our brain uses as an identifier, then we may draw erroneous conclusions.

Another expert commenting in the BBC story suggests that our human recognition is not based entirely upon "recognizing visual features such as edges or objects," which is essentially the process being used by computers. In addition, human "brains also encode the relationships between those features." Thus, our neural process is about data points and the interrelationships between those data points. We perceive patterns and context, and that enhances our ability to differentiate and interpret. The neural networks in AI have not necessarily reached that level of sophistication (yet).

There is ongoing research. A deeper understanding of how our human brains perceive things may be critical to the further development of AI. But there may also be relevance in the pathways through which our brains process information. One scientist is applying analysis to the order in which our brains process inputs, and that is showing promise. This sequencing or direction of thought, occurring in our personal neural recognition, may be contributing to our current superiority over technology. But as we better understand those points, the technology will also improve, with each successive iteration coming closer to our capabilities, until it exceeds them.

And all of this is inextricably tied to the ethics of decisions. In the world of autonomous vehicles and machine learning, there are difficult decisions beyond what the machine sees. A recent New York Times article, Tech's Ethical Dark Side, considers some. When an autonomous car can correctly identify humans, dogs, and stop signs without fail, how will it make decisions in a conflict? When presented with a choice of hitting a dog or a person, what should the car "choose"? (Here, maybe the narrator chimes in and asks "What would you do if it happened to you," like in The Cat in the Hat?)

For some, the decision between humans and animals might be seen as a simple choice. Perhaps a harder decision would be between a toddler and an octogenarian (Fiorello LaGuardia)? But for now, the hardest part will be whether that dog or toddler or Fiorello is even real, or just in the machine's imagination (or "dreams"). And as those machines hallucinate, how much can we trust them?

We may have multiple challenges in our future regarding these technologies. However, The Verge reported last month that an autonomous vehicle recently drove itself coast-to-coast, purportedly without human intervention. Some would perhaps conclude that "the future is now."