Legal ethics and AI: The responsibilities of in-house lawyers, compliance officers

Tariq Akbar

Artificial intelligence (AI) is a growing presence in today's corporate legal and compliance departments, and for good reason. With ever-increasing pressure on GCs and compliance officers to demonstrate value, any tool that increases efficiency and lowers costs without sacrificing quality and accuracy will be of great interest. But with the growth of AI comes an obligation to ensure it is used ethically, which creates unique responsibilities for in-house counsel and compliance officers.

At the core of the problem is that AI tools are not ready-made to solve an organization's problems from day one. Legal and compliance teams must train AI programs to analyze information and make decisions, just as they would train any employee, and their own biases and blind spots may creep in during that process. Given how opaque an AI's algorithms can be, the system may then go on to make decisions influenced by those biases and blind spots without any indication that this is happening. That could lead to a whole host of ethical issues when the AI is put to work.

To address this, in-house legal and compliance teams must institute effective processes and systems to ensure their AI tools operate ethically and fairly. Fortunately, in-house attorneys and compliance officers can be proactive in how they train their AI to make fair and balanced decisions. This article outlines some steps to take. 

1. Develop an ethical framework that guides everything a company does

Before any AI training takes place, a company must establish an ethical framework to guide its work. What that framework looks like depends on a variety of factors, including the company's industry, the kind of data it collects and handles, who has access to that data and how that information is used. Legal departments should work with IT and key managers across departments to take a thorough inventory of the data they handle and process and to confirm they are following best practices.

Ultimately, chief compliance officers should spearhead AI ethical compliance efforts and establish a task force to develop and implement specific guidelines for handling and protecting sensitive data. This team can also establish basic principles of ethical decision making for any activity an AI will carry out, such as contracting, procurement and sales. The effort should span the entire company to ensure input from all stakeholders.

2. Create a risk framework 

Just as important as developing the company's ethical framework is establishing the organization's risk framework. After all, not all risks are equal. A robust risk framework will allow in-house counsel and other stakeholders to evaluate each of the organization’s risks and assess which ones would lead to the most liability. GCs will need to ensure that the company's risk framework answers three questions:

  • What are the risks related to the company's industry? In other words, what strategies and avoidance tactics do peer organizations employ when making similar decisions?
  • What are the company's legal risks? In other words, how do the company's decisions and industry patterns line up with its legal, compliance and regulatory obligations?
  • What are the company's geopolitical risks? In other words, how are the company's decisions accounting for local concerns in each jurisdiction in which it operates?

GCs will need to work hand-in-hand with their chief compliance officers and other regulatory-facing decision-makers when forming these policies. These internal partnerships will help the company's AI make vital decisions that consider the organization's priorities and risk profile.

3. Identify data and systems at risk for bias and ethical failures 

Once a company has established its frameworks, it must take the necessary steps to train its AI and help it get up to speed. Out of the box, an AI program is a digital tabula rasa. As it grows and evolves, the tool needs to learn critical lessons, adapt and work off a code of standards to operate independently and make informed decisions based on scenarios gleaned from the machine learning process.

Therefore, companies should consider training their AI systems to ignore the facts and factors that may have fostered biased results in the past. Financial services companies, for example, can test "exception rules" to ensure their AI programs downplay demographic and geolocational information when making creditworthiness decisions. They can also train AI programs to reward certain traits in credit applications, such as good fiscal habits, awarding higher credit scores to applicants who may belong to unfairly treated demographic and socioeconomic groups.

Coding in these conditionalities and contexts can help the company's AI program become more resistant to the systemic biases that may have previously guided human-driven decision making.
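To make the idea concrete, here is a minimal sketch in Python of how feature exclusion and an "exception rule" might look for a tabular credit-scoring model. It uses pandas and scikit-learn; all column names, data values and the 0.05 score adjustment are hypothetical illustrations, not a recommended policy.

```python
# Hypothetical sketch: train a credit model only on non-sensitive features,
# then apply an illustrative "exception rule" that rewards good fiscal habits.
# Column names, data and the adjustment size are assumptions for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant data.
applicants = pd.DataFrame({
    "income":               [42_000, 88_000, 55_000, 30_000, 61_000, 27_000],
    "debt_ratio":           [0.42,   0.18,   0.25,   0.55,   0.30,   0.61],
    "on_time_payment_rate": [0.98,   0.91,   0.99,   0.87,   0.97,   0.80],
    "zip_code":             ["48201", "75001", "48226", "10451", "60601", "48204"],
    "age":                  [29, 51, 37, 44, 33, 58],
    "approved":             [1, 1, 1, 0, 1, 0],
})

SENSITIVE = ["zip_code", "age"]   # demographic / geolocational fields to exclude
TARGET = "approved"

X = applicants.drop(columns=SENSITIVE + [TARGET])
y = applicants[TARGET]

# Train on non-sensitive features only.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Illustrative post-model "exception rule": reward a strong repayment history.
scores = model.predict_proba(X)[:, 1]
boosted = (scores + 0.05 * (X["on_time_payment_rate"].to_numpy() > 0.95)).clip(max=1.0)

print(pd.DataFrame({"base_score": scores.round(3), "adjusted_score": boosted.round(3)}))
```

In practice, the size and form of any such adjustment would come out of the company's ethical and risk frameworks and would itself need legal and regulatory review.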

4. Develop a change management system 

It is also essential for GCs and chief compliance officers to develop a change management system that all stakeholders, including those training the AI, can follow. Ideally, the organization's collective human intelligence should take on the responsibility of assessing how the company can fairly and equitably modify its AI programs to remove biases and ensure fairness in algorithmic decision making. This is where the human element of AI training comes into play.

A single subject matter expert or a panel of specialists should spearhead the change management process and assess best practices for implementing changes to the AI. To do this effectively, key stakeholders training the AI must be well-versed in the organization's ethical frameworks and be receptive to regular, reliable feedback from other divisions regarding the quality of the AI's decisions. GCs and other in-house counsel must therefore work with stakeholders across departments to combat prejudices and collect diverse ideas before tackling the same issues in the AI itself.

5. Monitor the platform's progress and pivot where necessary

Ensuring an AI system adheres to the organization's ethical standards is a long-term journey. Organizations must review countless data points, algorithmic shifts and outputs to confirm that their AI programs are making decisions consistent with the organization's ethical frameworks. 

In-house counsel and compliance personnel can work with IT to monitor the AI program's performance by implementing closed-loop systems. To do so effectively, the team must identify, log and analyze the decisions the AI makes over a set period, usually a couple of months to a year. A significant component of this strategy is conducting manual reviews in a phased manner. For example, teams can deploy their AI on manageable data sets and review its outputs to assess whether it is making critical decisions within their companies' ethics frameworks. Teams can then gradually increase the scope of the AI's work while decreasing the percentage of outputs they review manually. Once teams have ironed out the kinks in their AI algorithms, they can likely limit manual review to 5-10% of the AI's decisions as part of their ongoing monitoring initiatives.
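As an illustration, here is a minimal sketch in Python of a phased review loop along these lines. The class name, thresholds and sampling rates are hypothetical assumptions, not a prescribed implementation: every decision is logged, a random sample is routed to manual review, and the review rate tapers toward a 5% floor only while reviewers and the AI remain in close agreement.

```python
# Hypothetical sketch of a closed-loop, phased manual-review process.
import random
from dataclasses import dataclass, field

@dataclass
class ReviewLoop:
    review_rate: float = 1.0              # start by reviewing every decision
    floor: float = 0.05                   # long-term floor (about 5%)
    step: float = 0.15                    # adjustment applied after each period
    log: list = field(default_factory=list)

    def sample_for_review(self, decisions):
        """Log each AI decision and return the subset picked for manual review."""
        self.log.extend(decisions)
        return [d for d in decisions if random.random() < self.review_rate]

    def close_period(self, disagreements: int, reviewed: int):
        """At the end of a review period, tighten or relax the sampling rate."""
        disagreement_rate = disagreements / max(reviewed, 1)
        if disagreement_rate > 0.02:       # reviewers often overrule the AI
            self.review_rate = min(1.0, self.review_rate + self.step)
        else:                              # outputs look consistent with policy
            self.review_rate = max(self.floor, self.review_rate - self.step)

# Example period: 1,000 decisions, 5 reviewer overrides among the sampled set.
loop = ReviewLoop()
decisions = [{"id": i, "outcome": "approve"} for i in range(1000)]
reviewed = loop.sample_for_review(decisions)
loop.close_period(disagreements=5, reviewed=len(reviewed))
print(f"Next period review rate: {loop.review_rate:.0%}")
```

The key design choice is that the review rate can ratchet back up whenever reviewer disagreement rises, so the loop never fully hands control to the algorithm.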

At the same time, teams will need to consider how they approach the human intelligence powering their artificial intelligence. Companies must ensure their AI teams contain diverse socioeconomic and job-specific viewpoints. Legal and compliance departments will need to adjust their AI programs to reflect the needs and interests of their compliance personnel, IT heads and business division managers. By involving these stakeholders, a company can deploy AI decision making algorithms that reflect the diversity of its workforce's viewpoints and sidestep any biases and misconceptions that specific divisions or stakeholders may promote.

In the end, ethics in AI is as much a systemic problem as it is a systems-based one. Therefore, in-house counsel and other company personnel must ensure AI algorithms function in the most ethical way possible. Establishing risk and ethical frameworks before training an AI will go a long way toward ensuring the technology functions properly.

Tariq Akbar is the CEO of LegalEase Solutions LLC, an alternative legal services provider with offices in Michigan and India. He is also the co-founder of Nora.Legal, a marketplace that gives small firms and solo practitioners greater access to helpful legal transformation tools that can improve their workflows. Over the past two decades, Mr. Akbar has developed solutions that help lawyers leverage outsourcing and technology to improve organizational performance. He also actively invests in emerging technology and SaaS companies as a venture partner with Dallas Venture Capital.
 
