Published: May 3, 2019

What impact will the technology that enables self-driving cars, robots, and drones have on the legal profession?

From films to news headlines, artificial intelligence, or AI, is often portrayed as a threat to many modern professions. It's only logical, then, for lawyers to wonder: Should they be worried or enthusiastic? Will AI take over the legal profession as we know it—or will it bring more access to legal services and enable improved lawyering?

Associate Professor Harry Surden, a distinguished scholar in the areas of AI and law, regulation of autonomous vehicles, and legal automation, suggests most legal careers will remain safe. Rather than replacing lawyers, he says, AI can actually enhance legal work by streamlining mechanical tasks, thus providing attorneys with more time to spend on abstract reasoning and problem-solving.

From software engineer to law professor

Surden’s background is somewhat unconventional for a law professor. As an undergraduate student, he simultaneously pursued courses in computer science and political science, wondering to what extent computer science might apply to law and policy. After working as a professional software engineer for several years, he decided to explore this cross-disciplinary approach further in law school. He earned his JD from Stanford before clerking for a federal judge in San Francisco. From there, he returned to his alma mater as a researcher, where he further pursued this idea of computer science applied to law—sometimes called "legal informatics." In 2006, he helped co-found the Stanford Center for Legal Informatics (CodeX) and served as its first research fellow. In that role, Surden helped develop a proof-of-concept research project that allowed architects to automatically determine when their electronic building designs were in compliance with local building code laws.

“Professor Surden has always dedicated his research in AI and law to questions of immediate relevance to the field,” said Roland Vogl, a professor of law at Stanford and executive director of CodeX. “He has an incredible ability to explore the topics of his research thoroughly, while still presenting very complex issues in a way that makes them accessible to lawyers and computer scientists alike.”

Surden joined the faculty of Colorado Law in 2008, where his scholarship has included such articles as "Machine Learning and Law" (Washington Law Review), "Technological Opacity, Predictability, and Self-Driving Cars" (Cardozo Law Review), and "Computable Contracts" (UC Davis Law Review). He teaches technology- and law-related courses such as Patent Law and Computers and the Law.

Over the years, Surden’s academic research interest began to crystallize around a particular aspect of computer science and law: artificial intelligence. He was drawn to AI in the early 2000s as he observed AI techniques moving out of university laboratories and becoming widely integrated throughout society. At that time, AI was comparatively understudied as a topic within law. While researchers today are more attuned to AI, Surden remains part of a relatively small group of law professors who are not only studying the impact of AI on law and policy, but who are also building software applications that use AI on legal topics. Surden’s research has focused on applying artificial intelligence techniques to various problems in patent and contract law, and, in 2018, he was awarded the University of Colorado’s Provost Award for his research on legal informatics.

"Professor Surden's work on autonomous vehicles is important for both consumer protection and the policy of technological design," said Colorado Law Associate Professor Margot Kaminski, whose own research on the law of information governance, privacy, and freedom of expression overlaps with Surden’s as it relates to autonomous systems such as AI, robots, and drones. The pair organized a May conference in partnership with the law school’s Silicon Flatirons Center for Law, Technology, and Entrepreneurship titled “Explainable Artificial Intelligence: Can We Hold Machines Accountable?” (more information available at siliconflatirons.org).

"[Surden’s work] shifts the conversation away from the over-discussed 'trolley problem' (that is, the question of how to decide who gets hit when a car is offered a choice between two people) to the more pressing question of how to design an entire environment of interaction between autonomous cars and human drivers," Kaminski said. "That's where the harder, more practical questions lie. By doing interdisciplinary research—he's one of few law professors to collaborate with a roboticist—Professor Surden is a trailblazer in this field."

The AI of today

Science fiction and the media often depict AI as intelligent computers capable of discussing deep, abstract, and insightful ideas with humans, or of acting at a level that meets or surpasses general human intelligence. That is not the AI we use or have today, nor is there evidence that we are near such "strong" AI, Surden says.

Rather, AI today is best understood as using computers to solve problems and make automated decisions that, when done by humans, are usually thought to require intelligence, Surden says. However, he notes that these automated decisions are typically based not on artificial human-level intelligence, but on algorithms detecting patterns in large amounts of data, and using statistics to make educated approximations—known as machine learning.

Machine learning, the dominant approach to AI today, is often able to produce useful, accurate outcomes in certain domains, such as language translation. But because these techniques rely on detecting complex patterns in data, Surden describes them as “producing intelligent results without intelligence.”

For example, when a machine learning-based computer system produces a translation, it usually does so using statistical associations. However, such a pattern-based machine learning approach—while often producing decent translations—does not actually involve the computer “understanding” what it is translating or what the words mean in the same way a human translator might.
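To make that idea concrete, here is a deliberately tiny, invented sketch (not drawn from any real translation system or from Surden's work) of how a pattern-based translator might pick words: for each source word, it simply outputs the target-language word most frequently paired with it in a parallel corpus. The alignment counts below are made up for illustration.

```python
# Toy "translation by statistical association": for each word, output the
# target word it was most often aligned with in a (hypothetical) corpus.
# The system never knows what any word means.
from collections import Counter

# Invented alignment counts: source word -> counts of paired target words.
alignment_counts = {
    "the": Counter({"le": 90, "la": 85, "les": 40}),
    "cat": Counter({"chat": 120, "chatte": 10}),
    "sleeps": Counter({"dort": 95}),
}

def translate(sentence):
    """Pick the statistically most common pairing for each word."""
    words = sentence.lower().split()
    return " ".join(alignment_counts[w].most_common(1)[0][0] for w in words)

print(translate("The cat sleeps"))  # -> "le chat dort"
```

Real machine translation is vastly more sophisticated, but the core point survives even in this caricature: the output comes from frequency counts, not comprehension.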

Despite these limitations, machine learning has been instrumental in producing many recent breakthrough technologies. For example, as Surden explains in "Technological Opacity, Predictability, and Self-Driving Cars," algorithms in autonomous vehicles learn to drive themselves by detecting patterns of braking, steering, and acceleration based on data from human drivers. Other popular machine learning applications include an email spam filter that uses algorithms to detect common words or phrases used in spam to filter out emails that may clog inboxes; credit card fraud detection; and automated cancer tumor diagnosis.
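The spam-filter example above can be sketched in a few lines. This is a simplified illustration with invented word weights; production filters learn their weights from millions of labeled messages and use far richer features.

```python
# Minimal keyword-weight spam filter: sum the weights of known "spammy"
# words and flag the message if the total crosses a threshold.
# Weights and threshold here are invented for illustration.
SPAM_WEIGHTS = {"winner": 2.0, "free": 1.5, "prize": 2.0, "urgent": 1.0}
THRESHOLD = 2.5

def is_spam(message):
    """Return True if the message's spam score meets the threshold."""
    score = sum(SPAM_WEIGHTS.get(w, 0.0) for w in message.lower().split())
    return score >= THRESHOLD

print(is_spam("You are a winner claim your free prize"))  # True
print(is_spam("Meeting moved to 3pm"))                    # False
```

The same filter-by-pattern logic underlies the e-discovery example discussed below: score documents for likely relevance, then set aside those that fall below a cutoff.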

Surden presents at BYU Law School about how the values and biases in artificial intelligence impact the justice system. (Matt Imbler/BYU Law)

AI and the law

How does machine learning apply to the field of law? In his widely cited article "Machine Learning and Law," Surden notes that only a limited number of legal tasks stand to benefit from current machine learning approaches. Core legal work still requires a great deal of problem-solving and abstract reasoning that pattern recognition cannot replicate. However, a fair number of relatively mechanical tasks within law can benefit from AI, such as e-discovery document review, litigation predictive analysis, and legal research.

E-discovery document review is an example of machine learning starting to make inroads into legal tasks that have traditionally been performed by lawyers. Like email spam filters, AI can detect patterns in documents that can then be used to sort through the millions of e-discovery documents and filter out pages that are likely irrelevant to the case. This in turn leaves far fewer potentially relevant documents for attorneys to analyze.

Additionally, AI can be used for predictive analysis in litigation. Surden explains that while attorneys in the past might have told clients that they had an 80 percent chance of early settlement based on experience and intuition, AI can provide substantive support. By drawing on data from similar cases, claims, or fact patterns, AI can predict potential outcomes or even show trends over time. However, one drawback noted by Surden is the difficulty of predicting outcomes for unique cases with distinct fact patterns.
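A stripped-down sketch of this idea, with entirely invented case records (real litigation-analytics tools use much larger datasets and statistical models rather than a simple lookup), also shows the limitation Surden flags: when no comparable past cases exist, the data has nothing to say.

```python
# Toy outcome prediction: estimate a new case's early-settlement odds
# from the outcomes of past cases with similar features.
past_cases = [
    {"claim": "contract", "amount": "low",  "settled": True},
    {"claim": "contract", "amount": "low",  "settled": True},
    {"claim": "contract", "amount": "low",  "settled": False},
    {"claim": "contract", "amount": "high", "settled": False},
    {"claim": "tort",     "amount": "low",  "settled": True},
]

def settlement_odds(claim, amount):
    """Share of similar past cases that settled early, or None if
    there are no comparable cases (a unique fact pattern)."""
    similar = [c for c in past_cases
               if c["claim"] == claim and c["amount"] == amount]
    if not similar:
        return None
    return sum(c["settled"] for c in similar) / len(similar)

print(settlement_odds("contract", "low"))  # 2 of 3 similar cases settled
print(settlement_odds("patent", "high"))   # no similar cases -> None
```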

Finally, other more controversial uses of AI in the law exist, such as the use of AI in criminal sentencing or in providing statistics on the probability of reoffending. The patterns in past data on criminal sentencing may contain biases that a machine cannot detect, and reliance on AI would preserve such biases into the future. Thus, while AI may not be suited to all legal tasks, certain assignments may be done more effectively and efficiently by using AI.

There are many other examples of ways in which AI can be used in law. Surden’s research has focused on so-called "computable contracts": legal contracts in which the content and the meaning of the contractual obligations are represented in a way that can be understood and automatically applied by computers. Surden has convened a working group at Stanford that is focused on moving this process out of the university laboratory and into the world. His other research has focused on ways in which machine learning can lower barriers to access to legal services for low-income communities.

How the legal profession can use machine learning

E-discovery document review: AI can improve organization and reduce the amount of discovery clutter by sorting through millions of e-discovery documents and filtering out pages that are irrelevant to a case.

Litigation predictive analysis: By leveraging data from past client scenarios and other relevant public and private data, AI can predict future likely outcomes on particular legal issues that could complement legal counseling.

Legal research: AI can improve organization by grouping documents together based on nonobvious shared qualities, thereby simplifying the research process and saving attorneys time.

The AI of tomorrow—and beyond

The use of AI in mechanical tasks will likely continue to expand, and Surden suggests that law students position themselves in an area of law that requires abstract reasoning rather than repetitive tasks that will soon become obsolete. However, there are limits to the use of AI in law. For example, AI still requires patterns and rules and is ineffective for unique fact patterns and distinct cases. AI still cannot complete the abstract reasoning that attorneys carry out, and it is unlikely such complex functions will be automated anytime soon. Finally, Surden notes that while speculation on futuristic cognitive AI is tempting, it is better to understand the existing technology and plan accordingly.

Thus, while AI is likely to replace some legal tasks that today involve mechanical repetition or underlying patterns, lawyers do a variety of things, such as advising clients, problem-solving, formulating persuasive arguments, and interpersonal work, that are unlikely to be automated away soon. However, Surden cautions that we shouldn’t focus only on the job-reducing aspects of new technology.

Historically, while new technologies have often reduced certain jobs, they have also created entirely new classes of jobs that were difficult to anticipate. For example, the rise of computing technology eliminated many jobs involving humans who computed mathematical problems for a living, but that same technology gave rise to entirely new classes of jobs, such as data analysts and software engineers, that didn’t exist and that were hard to predict at the time. Surden says there is likely to be a similar path in law.

"Although AI’s entry into law is likely to eliminate or reduce some existing legal tasks, it is also likely to create entirely new categories of legal jobs in the future—perhaps legal data analyst or machine learning legal specialist—that are today hard to imagine," Surden says.

"Like all technological revolutions, the future of law influenced by AI will not necessarily be good or bad overall for the profession. The only thing that we can count on is that it will be different."


This story originally appeared in the spring 2019 issue of Amicus.