AI and the Law: A Lawyer Explains the Risks for Companies
The breakneck advance of AI is leading to a lot of confusion in the business world. But confusion isn’t necessarily a bad thing. If it’s met with genuine curiosity and a desire to understand, it can transform into wisdom.
Chris Hart, a partner at Boston-based law firm Foley Hoag, understands this well. As co-chair of the firm’s privacy and data security group, Hart advises business clients on regulatory compliance, helping them identify risks and adopt policies to avoid legal issues.
Recently, Hart spoke alongside two members of the Institute for Experiential AI (Matthew Sample, an AI ethicist, and Cansu Canca, director of Responsible AI Practice) at an event hosted by the Mass Technology Leadership Council. The event, which attracted a diverse group of attendees, was designed to help businesses understand how to optimize their operations with AI.
Legal Jeopardy
A common misconception is that AI is unregulated. While there are few AI-specific laws, plenty of existing laws apply to AI technologies. Part of Hart’s job is to advise clients on risks under those laws that they may not be aware of. The first step, he explains, is to draw a distinction: Are we talking about AI systems a company is developing itself or third-party systems it is using?
“One of the things that’s become clear with large language models that are being used now pretty ubiquitously for work purposes is that whatever their utility might be, there could be privacy concerns around their inputs,” Hart says. “You want to be really careful about not putting in confidential information for law firms, not putting in privileged information, not putting in sensitive information that could then be used as training data and inadvertently disclosed.”
There are also intellectual property concerns, especially around generative AI, which have led to a rash of copyright lawsuits against AI companies. Most prominently, The New York Times filed suit against OpenAI; Getty Images sued Stability AI, the maker of Stable Diffusion; and a group of authors including John Grisham, Jodi Picoult, and George R.R. Martin sued OpenAI for “systematic theft on a mass scale.”
How these lawsuits will hold up remains to be seen, but the toll on the companies involved can hardly be overstated, and the lesson is clear: Companies using AI—especially those developing new tools—need to tread carefully.
“To what extent is that tool going to make adverse decisions for organizations?” Hart asks. “Is there bias involved or could there be? How do you protect against that in the engineering phase? How do you audit the entire process once you put it together to make sure that you can loop back and correct problems?”
New Perspectives
These are difficult questions whose answers depend on specific use cases. They also speak to the importance of weaving a Responsible AI (RAI) framework through each stage of development and deployment. Increasingly, success in AI is defined by the degree to which companies commit to a multidisciplinary approach.
That’s why the Institute for Experiential AI boasts among its ranks engineers as well as philosophers, lawyers, economists, and more. It’s also why both the Institute and Foley Hoag are members of the Mass Technology Leadership Council (MTLC), a technology association that convenes leaders with “diverse perspectives” to solve pressing legal and economic challenges.
“You need to have people who understand the technology,” Hart says. “You have to have the engineers involved, but you also have to have legal involved. You have to have people who are looking at it from a number of different perspectives, willing to think critically about what the technology is designed to do and whether it can create either known or unintended adverse outcomes.”
Patience Is a Virtue
Amid all the AI hype, it’s easy to forget the importance of patience. Things are moving quickly, so companies understandably fear that if they don’t “move fast and break things,” they’ll lose their competitive edge. Hart advocates for a more prudent approach.
“Some companies have been forced to come to market earlier than they might have intended because ChatGPT blew everything up,” Hart explains. “Organizations should carefully consider how mature their AI vendors are, especially since they need to understand what’s happening with their data.”
On the one hand, experts are saying AI holds revolutionary promise. From generative AI to medical diagnostics, the breadth of its potential does not easily boil down to a single pitch. On the other hand, such raw predictive power warrants not only patience but perspective. Few companies are equipped to navigate this new landscape on their own.
To learn how the Institute for Experiential AI—with its roster of AI engineers, academics, and practitioners—can help your business navigate these difficult waters, click here. And make sure to sign up for our newsletter, In the AI Loop, to stay in touch.