
Explainable AI: why we need to fathom self-learning algorithms

30 Dec 2019
What's the reason behind an algorithm's optimization? And why is it important to know?

Artificial Intelligence is booming, and rightly so. The connections AI makes and the solutions it finds tend to be incomprehensible to the human brain. The upsides are numerous and huge. But at the same time, people are gripped by a collective fear. To what extent can we trust self-learning algorithms? And will they ultimately turn against us? Although the 'why' of AI is incredibly difficult to understand, the 'what' should be within our reach: we urgently need to understand what an algorithm is optimizing for, and whether all relevant aspects have been taken into account. Briefly put, the increasing popularity of AI calls for something that has been coined 'explainable AI.'


What is explainable AI?

Explainable AI (xAI) studies methods and techniques in Artificial Intelligence that allow human experts to understand how a system arrives at its outcomes. Ironically, this is exactly what makes xAI complex – after all, AI's core strength is finding patterns and relationships that the human brain cannot grasp. If it were easy for us to understand, we wouldn't need the algorithm.

So, is it a catch-22? Not by definition. But explainable AI does face the challenge of balancing the effectiveness of AI solutions, trust in those solutions, and accountability.


A closer look at AI algorithms

Let's start by saying this: AI algorithms are not as bad as you might think. Sometimes they simply weren't given the right goal. Coupled with an unconstrained approach, that can lead to the wrong optima and a lot of unintended harm. For example, an AI algorithm optimizing school and bus schedules in Boston had the objective of minimizing costs. It wasn't set up to take working parents' schedules into account, nor did it seem to consider educational goals. Was that the program's fault, though? Not at all. It did its job, didn't it? This is why, in any given company, xAI needs to start by mapping out what an algorithm is optimizing for.
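To make the point concrete, here is a deliberately simplified, hypothetical sketch (not the actual Boston system): a brute-force scheduler that picks school start times purely to minimize bus cost, and the same scheduler with one extra constraint. The slot names, the costs, and the optimize helper are all invented for illustration.

```python
from itertools import product

# Hypothetical data: candidate start times and the (made-up) number of
# buses each slot would require per school.
SLOTS = ["07:00", "08:00", "09:00"]
BUS_COST = {"07:00": 2, "08:00": 3, "09:00": 4}

def total_cost(schedule):
    """Cost = total number of buses needed across all schools."""
    return sum(BUS_COST[slot] for slot in schedule)

def optimize(n_schools, constraint=None):
    """Brute-force search over all slot assignments for n_schools."""
    best = None
    for schedule in product(SLOTS, repeat=n_schools):
        if constraint and not all(constraint(slot) for slot in schedule):
            continue  # skip schedules that violate the constraint
        if best is None or total_cost(schedule) < total_cost(best):
            best = schedule
    return best

# Pure cost minimization: every school starts at 07:00. Cheap, but working
# parents never appear in the objective.
print(optimize(3))                                           # ('07:00', '07:00', '07:00')

# Same objective, plus one explicit constraint: no start before 08:00.
print(optimize(3, constraint=lambda slot: slot >= "08:00"))  # ('08:00', '08:00', '08:00')
```

The point of the sketch is simply that the objective and its constraints are design choices, and explainable AI starts with being able to state them.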

But what if the algorithm comes up with a solution that goes beyond its purpose? It happened with Deep Patient, a deep-learning platform at Mount Sinai Hospital in New York, which used the health records of 700,000 individuals to predict diseases and psychiatric disorders and to flag patients at risk. The question, however, is whether medical professionals can use Deep Patient for such predictions if they don't fully grasp how it works.


The right to know

No matter your stance, you're increasingly required to explain your algorithm's decisions and solutions. The legal pressure started as early as 2014, when the Houston Federation of Teachers filed a lawsuit against the Houston School District, which used an algorithm that ranked teachers to decide who would be fired and what bonuses 'good' teachers should get. The algorithm was developed by a private company, which refused to share the underlying ranking system. The teachers won the case because the unexplainable software violated their 14th Amendment right to due process, and the use of the algorithm was discontinued.


Regulations are (and should be) the future

Maybe you're wondering why we need AI-related regulations at all. Do people in leadership positions really need to understand the details in order to trust the outcomes? Well, that's not the point. 'Accountability' is what matters here. We need to empower people: a thorough understanding is their gateway to challenging AI-based decisions.

In the coming years, regulations regarding AI will become more sophisticated. The European Union has already established the General Data Protection Regulation (GDPR), which requires companies to explain decisions made by their algorithms.

Hence, future AI algorithms should reflect organizations’ priorities. Companies and governments alike will be judged on their algorithms’ decisions and design principles, so they should be prepared to justify the purpose of their algorithms' optimization. As the Houston case demonstrates, simply stating that no one truly understands the way algorithms work will no longer suffice.

