
How to Embrace AI

Harry Surden

CU law professor Harry Surden worked as a software engineer for five years before deciding to fuse his interests in tech and law. He attended Stanford Law School, where he helped create the Stanford Center for Legal Informatics (CodeX), a groundbreaking interdisciplinary research center with which he remains involved today. He joined CU Boulder in 2008, and his research focuses on the intersection of artificial intelligence (AI) and law.

What spurred your interest in technology and law? 

As an undergraduate, I wondered about the interplay between society, computer science and law. Later, working as a software engineer, I kept interacting with the legal sector and noticing ways in which aspects of law were becoming standardized — and to some extent automated.

What brought you to CU Boulder? 

Colorado Law has a leading technology policy center, the Silicon Flatirons Center, led by well-known academics doing groundbreaking research — at the time Phil Weiser and Paul Ohm — and I was attracted to the idea of working with them. I was also very interested in moving to Boulder, which I had heard a great deal about. It turned out to be an absolutely terrific place to live.

What are your thoughts on the impact of large language models (LLMs)? 

LLMs are absolutely revolutionary. I have studied artificial intelligence for about 20 years. For 17 of those years, I was somewhat disappointed. AI of the era prior to 2022 was good in very specific, narrow circumstances, but it was far from the AI systems most people conceived of when they imagined machines that could think and reason. With the advent of ChatGPT and other LLMs since late 2022, we are much closer to that vision of AI.

Today’s AI systems can understand and process ordinary language. Now, these AI systems are still simulation machines, and are not ‘thinking’ or conscious. They reproduce variants of complicated patterns that they have previously seen in billions of pages of written text and video. Nonetheless, they are extremely useful systems able to engage in fairly complex and advanced problem solving and analysis.

How often are you using ChatGPT?

I use ChatGPT (as well as Claude and Gemini) nearly every day. Part of the reason is my academic research, which benchmarks how well AI models can reason about and analyze legal scenarios.

At this point, most frontier AI models can engage in reasonably accurate legal reasoning for basic legal scenarios. But there are a couple of caveats. First, they make mistakes. They sometimes ‘hallucinate,’ inventing case names or misdescribing the holdings of real cases, so professionals have to be careful about relying on these systems completely. Second, such AI systems are still not great at complex and nuanced legal scenarios that rely on the intuition, tacit knowledge and experience of attorneys.

How can LLMs be used to benefit the general public?

These models are not perfect, so we have to learn their strengths and account for their weaknesses. I think of answers from ChatGPT as a kind of background information that is likely reliable 90% of the time, but I still want to double-check. I often cross-check answers across two or three different models to triangulate on common knowledge.


How do you keep up with the AI industry?

I follow the academic research uploaded to arXiv (arxiv.org), the preprint archive hosted by Cornell University. I also read researchers' posts on social media and watch recordings of academic lectures and conferences on YouTube. Primarily, though, I continually use the AI systems themselves and test their strengths and weaknesses over time.

What should people know about the ways AI can intersect with the law? 

AI can be very useful for access to justice. In the U.S., people involved in civil (non-criminal) cases, such as family law, landlord-tenant law, wage disputes and immigration, have no right to counsel. An estimated 80% of Americans with a civil legal matter cannot afford or otherwise access an attorney. AI may be able to help bridge that gap and give people a better option for legal advice and information.

How should law schools prepare students for the ethical dilemmas that AI may present in practice? 

Law schools should be cautiously studying and, to some extent, embracing AI. I try to inculcate in my students principles of good AI use in learning.

In education, there are two ways AI can be used: to substitute for learning, or to complement and enhance it. Students who use AI should always reflect on what the AI is doing for them, and they should avoid uses where the AI does the work for them. However, AI can be a terrific learning enhancement. Imagine you read a legal case for class and are left with some core confusion and unanswered questions. Here, you could use AI to connect the dots and enhance your comprehension.

Do you believe current regulations are adequate to manage the risks posed by AI systems in society?

I think we should be cautious about regulating AI too early. AI has both benefits and risks, and we should avoid disproportionate responses to predicted problems that have not yet arisen and may never arise. In my opinion, the best approach is continual information gathering and monitoring.



Photos by Glenn Asakawa