
Some nerdy legalese and some philosophy regarding AI

June 18, 2019

Artificial intelligence, or “AI”, is probably one of the biggest buzzwords going around these days, and everyone seems to want in on the fun. The European Commission will invest €1.5 billion by 2020 (70% more than in 2014-2017) in AI research and innovation, and global spending on AI research is estimated to reach somewhere around $35-40 billion in 2019, depending on your source.

The incentives for spending all these resources on AI to stay ahead in the technological race are many: more efficient work, smarter and more cost-effective operations, the ability to understand more complex correlations and facts, and so on. I’ve also heard people argue from a more societal point of view: without AI, we in the Western world will lose our “competitive edge” and see our GDP and living standards drop. From a more day-to-day perspective, you might also have felt the regulatory push to improve your risk modelling and to get better at detecting money laundering or fraud, tasks that can now be tackled with different variations of AI.

AI is not, however, an easy issue to tackle, and it raises a lot of questions and concerns. Examples of biased algorithms and models are many, and the people and organizations using AI often struggle to understand, for example, how the AI came to a certain conclusion or made a certain decision. Why did that person turn up in our fraud detection system? Why didn’t I get a loan? Furthermore, when the AI improves itself or starts to write code on its own, the challenge of understanding the underlying logic becomes even greater.

AI is also what lawyers have nightmares about

Well, maybe not all lawyers have nightmares (I, for one, think these questions are kind of fun!), but the legal concerns related to AI can keep your friendly next-door lawyer pondering for quite some time. The liability issues to consider are many. For example: who is responsible when something goes wrong? The development company, the company selling AI products or solutions, or the company using the AI? Data protection issues (what data is the AI using, and can the individual say no?) and discrimination issues (what if the AI is biased?) also need to be carefully thought through. AI is an area where our beloved printed laws and statutes somewhat fail us, and the old image of technology dragging the law behind it comes to mind.

Some nerdy legalese and some ethics

So, where does that leave those of us who want, or even need, to use AI in our businesses? We start to look for standards, and in the absence of standards we turn to ethics. The European Commission’s High-Level Expert Group on AI recently finalized its “Ethics Guidelines for Trustworthy AI” (found here), where “trustworthy AI” means AI with three components: 1) the AI is lawful, 2) the AI is ethical, and 3) it is robust, from both a technical and a societal perspective. For the first component (AI should be lawful), the guidelines point to our fundamental human rights, highlighting respect for human dignity, freedom of the individual, respect for democracy, justice and the rule of law, equality, non-discrimination and solidarity, and citizens’ rights as guidance, but they do not explain it further or help the sleepless lawyer with the liability issues. In terms of privacy and data protection, the guidelines discuss transparency and privacy as key principles for achieving trustworthy AI, but offer no solutions.

Some philosophy

All right, let’s say we try to develop, implement, or use “trustworthy” AI in our businesses. How do we achieve that? What is “trustworthy AI” really, the guidelines aside? I believe we can answer that question by answering another: how do we become trustworthy organizations? Perhaps that is not a question for your lawyer, or maybe it is? When it comes to data protection and being trustworthy, the GDPR sets out core principles such as fairness, transparency, and accountability in relation to the processing of personal data, and that could perhaps be a good place to start. Let’s develop, implement, or use AI only when we can be sure that the algorithms are fair and give everyone an equal chance, and only AI whose logic we can understand and explain to any individual affected by it. Let’s also make sure we take responsibility for our use of AI: document and review our risk assessments, and put procedures in place to “catch” any situation where the AI could harm an individual. AI will very likely soon be an everyday part of our lives, and given its potential impact, I really do hope it will be trustworthy.

Written by Sofia Arveteg
