Four out of 10 executives are concerned about the
legal and regulatory risks of artificial intelligence, according to a recent
Deloitte survey.
Lawyers and their clients are becoming increasingly
aware of the benefits of artificial intelligence, but the risks of the
burgeoning technology have left some clients wary of implementing AI.
In fact, four out of 10 executives had a “high
degree of concern about the legal and regulatory risks associated with AI
systems,” according to Deloitte’s recently released survey “State of AI in the Enterprise.”
Artificial intelligence regulations and their attendant risks cut across many
practice areas. Lawyers suggested that clients should be fully aware of the
data used by their AI and keep an eye on any results it provides.
Companies may be wary of implementing AI if the
program’s results have broad applicability, are difficult to reverse
or aren’t predictable. Consider, for example, the difficult
position a financial institution may face if it uses AI when issuing loans: in
the event the software makes a discriminatory or incorrect
decision, detecting, correcting or stopping the result may be difficult.
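As a rough illustration of what “detecting” such an outcome might involve, the sketch below compares approval rates across applicant groups in a hypothetical decision log and computes a disparate-impact ratio. The data, group labels and 0.8 cutoff (the common “four-fifths” rule of thumb) are assumptions for the example, not a compliance standard for any particular jurisdiction.

```python
# Illustrative sketch only: compare loan-approval rates across groups in a
# hypothetical decision log and flag a possible disparate impact.
from collections import defaultdict

# Hypothetical decision records: (applicant_group, approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    if approved:
        approvals[group] += 1

rates = {g: approvals[g] / totals[g] for g in totals}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    # The 0.8 cutoff follows the "four-fifths" rule of thumb; an assumption
    # here, not a legal standard.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.2f}, ratio vs. highest {ratio:.2f} [{flag}]")
```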
If the results of an AI
program’s algorithm cause a “detrimental outcome” for customers that a business
may not be aware of, U.K. regulators won’t allow an enterprise to use
“Oh well, I didn’t know the computer would do that” as a defense.
The slow adoption of AI may stem from the lack of
regulation regarding AI, uncertainty about how an AI implementation could be
challenged and a reluctance to change the status quo.
When using AI, it’s important to
know the data it integrates and the science behind it. Important data should be
cross-validated to determine which data should be used, and users should
constantly retest the algorithm.
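A minimal sketch of what that cross-validation and retesting might look like in practice appears below; it assumes a scikit-learn-style workflow, and the dataset, model choice and retest step are placeholders rather than anything prescribed by the lawyers quoted here.

```python
# Minimal sketch, assuming a scikit-learn workflow: cross-validate a model
# before relying on it, then re-score it on fresh, held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset
X_train, X_fresh, y_train, y_fresh = train_test_split(
    X, y, test_size=0.2, random_state=0)    # "fresh" split stands in for new production data

model = LogisticRegression(max_iter=5000)

# Cross-validation: estimate how the model generalizes before deployment.
scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Retesting: refit and check performance on data the model has never seen;
# in production this would be repeated on a schedule as new data arrives.
model.fit(X_train, y_train)
print(f"accuracy on fresh data: {model.score(X_fresh, y_fresh):.3f}")
```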
Some organizations are embracing AI only tepidly because of
the amount of data needed to train artificial intelligence and machine
learning models.
Regulations
There isn’t a single law regulating artificial
intelligence, lawyers said, and AI touches a myriad of legal issues.
However, a few attorneys cited provisions in the European Union’s General Data
Protection Regulation as targeting AI.
The GDPR’s AI provisions are geared toward ensuring that programs are
not able “to run out of control and make substantial effects without
human intervention and monitoring.”
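To make that concrete, the fragment below sketches one way a system might route automated decisions through human review rather than letting them take effect unchecked. The confidence threshold, amount cutoff and review queue are purely illustrative assumptions, not anything the GDPR itself specifies.

```python
# Illustrative sketch: gate automated decisions behind human review so the
# system cannot act on significant cases without human intervention.
# The threshold and the test for "significant" are assumptions for the example.
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    approve: bool
    confidence: float   # model's confidence in its own decision
    amount: float       # size of the loan, used as a crude impact proxy

def route(decision: Decision, review_queue: list) -> str:
    significant = decision.amount >= 10_000 or decision.confidence < 0.9
    if significant:
        # Hold the decision for a human reviewer instead of acting on it.
        review_queue.append(decision)
        return "pending human review"
    return "auto-approved" if decision.approve else "auto-declined"

queue: list = []
print(route(Decision("A-1", True, 0.97, 2_500), queue))    # small, confident -> automatic
print(route(Decision("A-2", False, 0.70, 50_000), queue))  # large, uncertain -> human review
print(f"{len(queue)} decision(s) awaiting review")
```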
Lawyers also suggested taking a global perspective when assessing which
jurisdictions an AI program is subject to, because artificial intelligence is
not confined to one jurisdiction.
Clients seek advice on how to develop their
product and on product counseling to minimize their legal liability.
Clients also tend to ask for advice regarding the legislative outlook for AI
and how to manage their risk.