New report on Explainability in Artificial Intelligence

Explainable Artificial Intelligence allows human users to understand why an algorithm has produced a particular result.

31 JANUARY 2023

The Artificial Intelligence team at the TIC Salut Social Foundation has published its Report on the Explainability of Artificial Intelligence in Health, within the framework of the Catalan Government’s Health/AI Programme. The document describes the benefits of using explainability tools in Artificial Intelligence. It sets out the main techniques used to explain algorithms based on digital medical imaging, tabular data and natural language processing, with the aim of supporting people involved in the development of Artificial Intelligence algorithms in the field of health.

Explainable Artificial Intelligence allows human users to understand why an algorithm has produced a particular result. Susanna Aussó, the report’s main author and head of the Artificial Intelligence Area at the TIC Salut Social Foundation, explains that “It is essential for health professionals to understand the mechanisms by which the Artificial Intelligence tool has arrived at a prediction. This knowledge is essential to build users’ trust, as it gives them the tools to verify whether the answer was based on robust clinical criteria. Explainability comes in various formats, and it is necessary to reach agreement with the experts on the most appropriate format in each case. They are normally very visual formats that may be combined depending on the needs.”

What can Explainable Artificial Intelligence contribute?


The use of Artificial Intelligence in the field of health is constantly growing, due to the availability of electronic health records and the vast range of related data, as well as the great potential this technology has to improve people’s health and well-being.

Some health centres use Artificial Intelligence mainly to support the diagnosis, prognosis and treatment of certain diseases. In fact, the Health AI Observatory has already detected nearly 100 Artificial Intelligence algorithms that are either in development or being used in a controlled manner.

This technology is used as a decision-support tool: healthcare staff retain the final say and make the final decision. However, it is important that this decision is taken with the knowledge provided by explainability tools. Without them, Artificial Intelligence models are a kind of “black box” that prevents us from understanding what is happening. This is precisely the problem that Explainable Artificial Intelligence seeks to solve.

To explain a machine learning model in human terms, Explainable Artificial Intelligence must address aspects related to the model’s correctness, robustness, bias, improvement, transferability and human understanding. This makes it possible to build professionals’ trust, as they will be able to understand the model’s limitations and difficulties and relate them to simpler, more familiar concepts; to involve stakeholders in building an intuitive, understandable model; and to produce better models by eliminating errors and identifying unfair scenarios caused by possible biases.

Taxonomy of Explainable Artificial Intelligence


Given the lack of consensus on how to classify Explainable Artificial Intelligence techniques, the report describes different taxonomies: intrinsic and post hoc explainability; global and local explainability; transparent and opaque models; and model-agnostic and model-dependent techniques.
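
As an illustration of this taxonomy (not an example taken from the report), the sketch below contrasts a transparent model with intrinsic, global explainability against an opaque model explained post hoc with a model-agnostic technique, using scikit-learn on synthetic data:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic tabular data, standing in for real clinical variables
    X, y = make_classification(n_samples=300, n_features=5, random_state=0)

    # Transparent model, intrinsic and global explainability:
    # the coefficients themselves show how each variable shifts the prediction.
    linear = LogisticRegression(max_iter=1000).fit(X, y)
    print("Coefficients:", linear.coef_)

    # Opaque model explained post hoc with a model-agnostic, global technique:
    # permutation importance only needs predictions, not the model's internals.
    opaque = GradientBoostingClassifier(random_state=0).fit(X, y)
    result = permutation_importance(opaque, X, y, n_repeats=10, random_state=0)
    print("Permutation importances:", result.importances_mean)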

Explainability of algorithms based on digital medical imaging, tabular data and natural language processing


In three specific chapters, the report covers the different explanation methods according to the source of the data. First, it sets out methods for explaining algorithms based on digital medical imaging, such as X-rays and magnetic resonance imaging. The main methods are CAM (class activation mapping), Grad-CAM (gradient-weighted class activation mapping), LRP (layer-wise relevance propagation), LIME (local interpretable model-agnostic explanations), and SHAP (Shapley additive explanations).
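
To give a flavour of how these imaging methods work in practice, here is a minimal Grad-CAM sketch in PyTorch; it is an illustrative example rather than code from the report, and the ResNet-18 backbone and random input tensor are stand-ins for a real medical-imaging model and a preprocessed scan:

    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(weights=None).eval()   # stand-in for a trained imaging model
    target_layer = model.layer4             # last convolutional block

    activations, gradients = {}, {}
    target_layer.register_forward_hook(
        lambda m, i, o: activations.update(value=o.detach()))
    target_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.update(value=go[0].detach()))

    x = torch.randn(1, 3, 224, 224)         # stand-in for a preprocessed image
    scores = model(x)
    scores[0, scores.argmax()].backward()   # gradient of the predicted class

    # Grad-CAM: weight each feature map by its average gradient, then sum and rectify
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heat map in [0, 1]

The resulting heat map can be overlaid on the original image to show which regions drove the prediction.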

Second, it describes the explainability of algorithms based on tabular data, i.e. variables from sources ranging from laboratory results, through omics data and vital signs, to hospital management data, among others. In this case, the techniques presented are PDP (partial dependence plot), ICE (individual conditional expectation), C-ICE (centred ICE), counterfactual explanations, LIME (local interpretable model-agnostic explanations), anchors, and SHAP (Shapley additive explanations).
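
As a hedged illustration of the tabular techniques (again, not code from the report), the following scikit-learn sketch computes PDP and ICE curves for a random forest trained on synthetic data, and adds SHAP values via the shap library; the feature indices and sample sizes are arbitrary:

    import matplotlib.pyplot as plt
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import PartialDependenceDisplay

    # Synthetic stand-in for tabular clinical data
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # kind="both" overlays the individual ICE curves and their average (the PDP)
    PartialDependenceDisplay.from_estimator(model, X, features=[0, 1], kind="both")
    plt.show()

    # SHAP values: per-sample, per-feature contributions to the prediction
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)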

Finally, the document addresses the explainability of algorithms based on natural language processing. This makes it possible, for example, to extract structured information from a free-text report containing diagnostic, treatment or monitoring data. The techniques specified for this type of explainability are SHAP (Shapley additive explanations), GbSA (gradient-based sensitivity analysis), LRP (layer-wise relevance propagation), and LIME (local interpretable model-agnostic explanations).
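
By way of illustration (not taken from the report), the sketch below applies LIME to a toy text classifier built with scikit-learn; the miniature training set and class names are made up purely for the example:

    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy corpus standing in for free-text clinical notes (entirely fictional)
    texts = ["severe chest pain reported", "patient stable, no complaints",
             "acute shortness of breath", "routine follow-up, feeling well"]
    labels = [1, 0, 1, 0]  # 1 = urgent, 0 = non-urgent (hypothetical classes)

    pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
    pipeline.fit(texts, labels)

    # LIME perturbs the text and fits a local surrogate model around the prediction
    explainer = LimeTextExplainer(class_names=["non-urgent", "urgent"])
    explanation = explainer.explain_instance("patient reports chest pain",
                                             pipeline.predict_proba, num_features=4)
    print(explanation.as_list())  # words and their local weights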