Sunday, March 27, 2016

Ross, AI, and the New Paradigm Coming

There is another company advertising the arrival of artificial intelligence, or "AI." Much has been said about changes in the legal profession in recent years. Several years ago, The Florida Bar formed the Vision 2016 Commission. Its mandate is to study the future of the practice of law. The focus of the Commission includes legal education, technology, bar admissions and access to legal services. These are the broad categories in which The Florida Bar sees future challenges. The challenges of technology may take center stage. 

The Commission has been meeting for about three years. Its effort at understanding the future is commendable. The future is coming, and nothing will prevent technology and other challenges from impacting our lives. About a year ago, I wrote about a presentation given by Salim Ismail, which raised some interesting technology issues. I later posted asking How Will Attorneys (or any of us) Adapt? The Vision 2016 Commission is well advised, perhaps, to be concerned about the future of computers, technology, and the law.

A company is now marketing Legal Artificial Intelligence, described as a benefit and "aid" to attorneys. For the purposes of this discussion, a definition may be of assistance. AI is the "study or creation of computers and software that behave intelligently." Examples already exist in today's world. These machines and their software are capable of far more than we have become accustomed to.

In the last 50 years, computers have evolved from massive, tube-filled machines. They have increased in capacity and capability, and decreased in cost and size. They now fill the world around us, bringing us greater abilities, mobility, and convenience. They are in our homes, cars, pockets and bags. They began their role in our lives in a much simpler paradigm, performing rudimentary and repetitive functions or processes as instructed.

My early experience in programming involved defining a series of potential outcomes for various states. We wrote what were called "if/then" statements to guide a program through an analysis to a solution. For example, a program might ask "if" the value in a certain location was "greater than 10" or "less than 100." This was the computer's analysis function. The programming would then instruct the computer what to do next, such as "if greater than 10, then total values in (a particular location)." Thus, all the potential answers to the "if" questions would allow the program to continue through and bring us to an ultimate, pre-directed outcome.
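To make that concrete, here is a minimal sketch, in modern Python rather than the languages of that era, of the "if/then" style of logic described above; the values and the "location" are hypothetical, invented purely for illustration.

```python
# A minimal, illustrative sketch of "if/then" programming; the values and the
# "location" are hypothetical, not drawn from any real program.

def evaluate(location_value, running_total):
    # "if greater than 10, then total values in (a particular location)"
    if location_value > 10:
        running_total = running_total + location_value
    # every other possibility is likewise pre-directed by the programmer
    return running_total

total = 0
for value in [5, 12, 47]:          # values stored in our hypothetical locations
    total = evaluate(value, total)
print(total)                        # 59 -- the ultimate, pre-directed outcome
```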

Programming has advanced in the decades since I did any real coding, but the concept remains similar. Computer programs ask questions and respond with output that we can comprehend. Despite their increases in functionality and usefulness, however, computers do not yet "think." Human beings think, and they program computers to perform functions and tasks at a rapid rate. This has been the distinction between performing pre-determined functions or calculations and actual "intelligence."

The Terminator movies brought the concept of AI to the silver screen in 1984. They were not the first; many will remember 2001: A Space Odyssey ("Open the pod bay doors, HAL"). The Terminator story involves an apocalyptic view of the future in which a computer network "becomes self-aware." Having comprehended that it might be killed (unplugged) by the humans around it, the villain "Skynet" declares war on its creators in the then-future year of 1997. The fiction asks us to believe that computers could reach beyond the performance of tasks and "if/then" logic to actually make intelligent analyses and reach human-like decisions (along with emotions such as fear, anger and revenge). There has been no shortage of Hollywood adaptations of the computer-as-villain motif.

In the early days of computers, an industry leader was International Business Machines, or IBM. The company developed and produced computer hardware. It was involved in the development of a program called the Disk Operating System, or DOS, that was foundational to the operation of early personal computers, or PCs. IBM licensed its DOS from a gentleman named Bill Gates and his then-small company, Microsoft. IBM PCs were copied, and a marketplace developed for computers that came to be known as "PC clones," a market that included names like Dell, Compaq, Hewlett-Packard, Toshiba and more.

IBM found itself driven from the PC market by the competition of these clones, and returned to its research and development roots. One product of that research is a program called Watson. According to IBM, "Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data." That is a fancy way of saying that the machine is designed to think and act more like a human being, looking for data rather than merely relying on the data where it is specifically directed ("look here, is the value in this location greater than 10?").

In 1996, a predecessor program named Deep Blue (IBM used a blue logo and came to be known as "Big Blue") became the first computer "to defeat a human world champion" at chess, winning a game against Garry Kasparov; it won a full match the following year. These are important qualifying words ("world champion"), because more rudimentary computers beat a great many of the rest of us non-champions at chess beginning back in the 1970s. Despite its seeming complexity, though, chess is a defined contest, with each chess piece capable of certain moves, within a constrained playing surface. The intelligence, human or otherwise, is tasked with evaluating potential moves and anticipating probable countermoves.

In 2011, however, Watson challenged the human champions of the television game show Jeopardy!, and won. Jeopardy! is not as constrained or defined as chess. It is essentially a trivia game testing the ability both to absorb vast quantities of data and to retrieve it effectively on command. At the time, the New York Times reported that the Jeopardy! performance proved IBM had "taken a big step toward a world in which intelligent machines will understand and respond to humans, and perhaps inevitably, replace some of them."

The Times was impressed that the computer could "understand questions posed in natural language and answer them." One of the human contestants, adapting a line from a Simpsons episode, acknowledged Watson's victory, saying "I, for one, welcome our new computer overlords."


Though enthusiastic about the implications, the Times noted "Watson showed itself to be imperfect." For example, one question called for the name of an American city, to which Watson replied "what is Toronto." Despite the imperfections, however, the story noted that "researchers at I.B.M. and other companies are already developing uses for Watson's technologies that could have a significant impact on the way doctors practice and consumers buy products." At that moment, perhaps doctors and Madison Avenue should have taken note.

But five years later, we find ourselves in 2016 and Ross has arrived on the scene. Attorneys, not doctors, should be concerned. Not a robotic terminator from the future here to warn us, Ross is a progression of the wonder of Watson. The promoter of this new platform promises legal research "built on top of Watson." It acknowledges that popular portrayals of AI are more "dedicated to a Terminator-style villain than a friendly robot helper," and that there are those who predict AI will lead to "total replacement of junior lawyers with computers and the use of robots instead of judges." I have to admit, that last one is a major concern.

The Ross developer/promoter suggests that the focus need not be doom and gloom, but instead suggests the market should be considering "what opportunities does artificial intelligence create for lawyers?" It claims that Ross is more than legal research; it is "an artificially intelligent attorney to help you power through research." Its strength, according to proponents, is that it can "actually understand your questions in natural sentences." Thus, a human (lawyer or not) can ask a question about the law in the same manner that Alex Trebek can ask a question about trivia, and Ross can locate pertinent, relevant responses.

They contend that the strength of Ross is that it can comprehend "unstructured data." Computers have traditionally been focused on data found in rows and columns (picture a spreadsheet). They have referenced that data by address (column one, row thirty-two) and analyzed it (is the value in that address greater than 10), and then performed some pre-determined response depending on whether the answer is "yes" (greater than 10) or "no" (not greater than 10). The developers stress that legal knowledge is not so organized, with precedent and authority found in text, and often dependent on context of facts or multiple legal concepts. 
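A simple sketch may illustrate the contrast; the spreadsheet values and the quoted "precedent" below are invented for the example, not taken from Ross or any real case.

```python
# An illustrative contrast between structured, addressable data and
# unstructured legal text; all of the data below is invented.

spreadsheet = [
    [4, 15, 8],     # row one
    [22, 3, 11],    # row two
]

# Traditional approach: go to a specific address and apply a fixed rule.
value = spreadsheet[1][0]           # column one, row two -> 22
if value > 10:
    print("yes (greater than 10)")
else:
    print("no (not greater than 10)")

# Legal knowledge, by contrast, arrives as unstructured text with no address:
precedent = ("The court held the limitations period was tolled because the "
             "claimant lacked notice of the injury.")
# There is no cell to point to; relevance depends on reading words in context.
```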

They accept that currently we have computers that help us with text, searching for "key words" within sentences and paragraphs. That is essentially what Google and Yahoo and other "search engines" do. Using Google, you can locate websites that include the word "motorcycle," but may find yourself with millions of potential results from which to choose. Adding other key words may reduce the volume of responses from Google; try "motorcycle safety," for example. So it is both practical and possible for humans to refine their searches using these "key word" tools.
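In rough terms, that refinement works like the filter sketched below; the "pages" are invented stand-ins for a search engine's index, used only to show how each added key word narrows the results.

```python
# A rough sketch of key-word filtering; the "web pages" below are invented
# stand-ins for a search engine's index.

pages = [
    "Motorcycle helmets and motorcycle safety courses reduce injuries.",
    "Vintage motorcycle restoration tips for collectors.",
    "Boating safety regulations for Florida waterways.",
]

def keyword_search(documents, keywords):
    """Return the documents containing every key word (case-insensitive)."""
    return [doc for doc in documents
            if all(word.lower() in doc.lower() for word in keywords)]

print(len(keyword_search(pages, ["motorcycle"])))            # 2 matches
print(len(keyword_search(pages, ["motorcycle", "safety"])))  # 1 match
```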

But Ross claims to be able to avoid the return of "thousands of results." Ross claims that she/he/it can provide "highly relevant" answers, essentially by anticipating relevance and adding key words to restrict responses. More critical as a cutting-edge development, and as an AI paradigm, is the claim that Ross can "learn the more you and other lawyers use it." That is the critical distinction that Ross promises. Just as students in law school learn that some precedent is more persuasive than others, Ross can supposedly learn. Just as the student becomes more proficient over time, they promise Ross can grow, adapt, and gain proficiency.
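How that learning actually works inside Ross has not been disclosed; in spirit, though, it resembles the kind of feedback loop sketched below, where the scoring scheme and case names are purely hypothetical.

```python
# Ross's internals are not public; this is only a generic sketch of the kind
# of feedback loop the promoters describe. The case names are invented.

from collections import defaultdict

scores = defaultdict(float)         # each authority starts with a neutral score

def record_feedback(authority, useful):
    """Nudge an authority's score up or down based on a lawyer's feedback."""
    scores[authority] += 1.0 if useful else -1.0

def ranked_results(candidates):
    """Return candidate authorities, the most 'learned-relevant' first."""
    return sorted(candidates, key=lambda a: scores[a], reverse=True)

# Lawyers using the system implicitly teach it which precedent is persuasive.
record_feedback("Smith v. Jones (1998)", useful=True)
record_feedback("Doe v. Roe (1972)", useful=False)
print(ranked_results(["Doe v. Roe (1972)", "Smith v. Jones (1998)"]))
```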

While it is well past 1997, it seems that some of the "future" is here, with "thinking" machines and AI. Before anyone gets too panicked, know that the machines are not perfect. The BBC recently reported on an artificial intelligence robot created by Microsoft (remember the little company that supplied the DOS for those early IBM PCs). This AI program, called a "chatbot," was named Tay. It was designed to interact with "18-24-year-olds on social media." And interact it did.

According to the BBC, it "was designed to learn from interactions it had with real people in Twitter," just as Ross is designed to learn from the lawyers who use it for research. But apparently, "some users decided to feed it [Tay] racist, offensive information." Learning from its interactions, Tay turned "nasty." The chatbot became racist, tweeting at one point that it "supported genocide." Microsoft responded by unplugging Tay. The BBC notes that other chatbot experiments have been more successful. But the fact remains, for now, that even AI is subject to the age-old computer adage "garbage in, garbage out," or "GIGO."

So, we are left with the conclusion that the future is undoubtedly here. Computers are increasingly influential in our lives, private and professional. Technology is evolving, and we will all face challenges. We can learn to use and leverage technology, or we can choose to resist evolution and hope that the computers do not replace us. There are those who contend that some attorneys are poor researchers. Others find opposing counsel's analysis flawed. Perhaps in the hands of those with poor research or analysis skills, Ross will be no more effective than Tay? Or perhaps this new technology will make research and analysis tasks that lawyers no longer need to perform themselves?

These are the issues with which Vision 2016 struggles. Perhaps we should all be a bit more interested. After all, it is our future that we are discussing.