
XAI: Understanding how machines make decisions

XAI: Explainable artificial intelligence.

As artificial intelligence (AI) moves into our economy and our lives, we need to understand how computers make decisions. A team of Luxembourg researchers is working on making AI understandable, a field known as explainable artificial intelligence (XAI).

Researchers at the University of Luxembourg's Interdisciplinary Centre for Security, Reliability and Trust (SnT) are developing AI algorithms based on XAI principles. The objective? To make sure that we are aware of, and in full control of, the decisions made by AI.

Explainable artificial intelligence gains momentum

Machine learning is becoming increasingly important. While such algorithms can achieve very high accuracy, their decision-making process often remains opaque. With the acceleration of AI regulation and growing user awareness, explainability has emerged as a priority.

From a regulatory standpoint, the General Data Protection Regulation (GDPR) includes a right to explanation, whereby individuals can request an explanation of an algorithmic decision made on the basis of their data.

Additionally, users who do not understand how the algorithms work may distrust them and be reluctant to adopt complex machine learning techniques. Explainability can also come at the expense of accuracy: when faced with this trade-off, industry players may, for regulatory reasons, have to use less accurate models in production systems.

Finally, without explanations, business experts devise their own justifications for a model's behaviour. This is likely to lead to a plurality of explanations, as different experts arrive at contradictory ideas.

Explainability aims to counter the opacity of a model and its output while maintaining performance. It gives machine learning models the ability to explain, or to present, their behaviour in terms understandable to humans.
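To make this concrete, here is a minimal sketch of one common model-agnostic XAI technique, permutation importance, using scikit-learn. The synthetic dataset and the loan-style feature names are purely illustrative assumptions, not details of the SnT project: a black-box classifier makes the predictions, and the explanation step reports which inputs those decisions actually rely on.

# A minimal, illustrative sketch of post-hoc explainability (assumption:
# synthetic data with hypothetical loan-style feature names; the actual
# models and features used by SnT and its bank partner are not public).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "debt_ratio", "age", "credit_history", "employment_years"]
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but its internal reasoning is opaque to users.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Model-agnostic explanation: shuffle one feature at a time and measure how
# much held-out accuracy drops; a large drop means decisions rely on it.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1])
for name, importance in ranked:
    print(f"{name:>20}: importance {importance:.3f}")

This is only one family of techniques: local explanation methods such as LIME or SHAP instead approximate the model's behaviour around a single prediction, which is closer to the "explain this individual decision" requirement that the GDPR raises.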

A human should understand how the software came to a decision

Integrating explainability into production is a crucial yet complex task that the TruX research group at SnT is exploring. The research team is developing AI algorithms that follow XAI principles, with a particular focus on banking processes.

“State-of-the-art frameworks have now outmatched human accuracy in complex pattern recognition tasks. Many accurate machine learning models are black boxes, as their reasoning is not interpretable by users. This trade-off between accuracy and explainability is a great challenge when critical operations must be based on justifiable reasoning.”


Prof. Jacques Klein, Chief Scientist (Associate Prof.)
Head of the TruX Research Group
SnT, University of Luxembourg

Professor Klein’s research team is collaborating with a private bank to build explainability into financial machine learning algorithms. The overall objective is for a human to be able to understand how the software reached a decision.

Read the open-access research paper “Challenges Towards Production-Ready Explainable Machine Learning”

More about the SnT research group TruX
