Explainable artificial intelligence
Explainable artificial intelligence (XAI) usually refers to narrow artificial intelligence models built with methods that enable and enhance human understanding of how the models reached their outputs in each case. Many older AI models, e.g. decision trees, were inherently understandable in terms of how they produced specific results, as they could be distilled into clear rules. Unfortunately, many state-of-the-art models with far better predictive performance, e.g. deep neural nets, are problematic in terms of explainability 4. Ideally, human users should understand not only the results generated by artificial intelligence algorithms but also the salient features of how they were derived. The development of explainable artificial intelligence seeks to introduce more transparency and enable verification of an algorithm's conclusions, making them clearly justifiable or contestable for a user. Explainable AI is contrasted with 'black-box' approaches, although it should be noted that in reality some decisions of some clinicians and radiologists also appear to originate in a 'black-box', i.e. the notoriously fickle and opaque human brain.
XAI is a rapidly developing area of research that seeks to address such issues by introducing design elements and other techniques that enable human experts to verify AI decision making.
Examples of explainable artificial intelligence techniques:
saliency 'heat' maps (such as those in some anomaly detection programs)
attention matrices that visualize what the network is 'looking' at
weight matrices that identify which features carry the largest impact on the algorithm's output
algorithms, e.g. Shapley additive explanations (SHAP), local interpretable model-agnostic explanations (LIME), etc.
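The core idea behind LIME-style explanations can be illustrated with a short numpy-only sketch: perturb the instance of interest, query the black-box model on the perturbed samples, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature attributions. The `black_box` function, the noise scale and the kernel width below are all hypothetical choices for illustration, not part of any particular library's API.

```python
import numpy as np

# Hypothetical black-box model we want to explain locally:
# nonlinear in feature 0, mildly quadratic in feature 1.
def black_box(X):
    return np.tanh(2.0 * X[:, 0]) + 0.1 * X[:, 1] ** 2

rng = np.random.default_rng(0)
x0 = np.array([0.5, 1.0])  # the single instance to explain

# 1. Perturb the instance with small Gaussian noise and query the model.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)

# 2. Weight each perturbed sample by its proximity to x0 (RBF kernel).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)

# 3. Fit a weighted linear surrogate by least squares:
#    scale rows of the design matrix and targets by sqrt(weight).
A = np.column_stack([np.ones(len(Z)), Z])  # intercept + 2 features
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

# beta[1] and beta[2] are the local attributions for the two features.
print("feature 0:", beta[1], "feature 1:", beta[2])
```

Near x0 = (0.5, 1.0) the model is much more sensitive to feature 0 than to feature 1, and the surrogate's coefficients reflect that ordering, which is exactly the kind of per-case explanation these algorithms aim to provide.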
There are potential problems with employing explainable AI. In some cases these techniques may increase the computational 'weight' of algorithms (implying more servers and cost, or slower operation), or may even increase the risk of deception about the model, as explanations can be partial or misleading: certain kinds of heat maps, for example, can indicate where in an image is important, but not what features of that area made it important.
Potential benefits of employing explainable artificial intelligence:
easier justification of computational decision making
increased transparency and accountability
reducing various biases
- 1. Bach S, Binder A, Montavon G, Klauschen F, Müller KR, Samek W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. (2015) PLoS ONE. 10 (7): e0130140. doi:10.1371/journal.pone.0130140 - Pubmed
- 2. Baehrens D, Schroeter T, Harmeling S, Kawanabe M, Hansen K, Müller KR. How to Explain Individual Classification Decisions. (2010) Journal of Machine Learning Research. 11: 1803.
- 3. Rasmussen PM, Madsen KH, Lund TE, Hansen LK. Visualization of Nonlinear Kernel Models in Neuroimaging by Sensitivity Maps. (2011) NeuroImage. 55 (3): 1120-1131. doi:10.1016/j.neuroimage.2010.12.035
- 4. Bologna G, Hayashi Y. Characterization of Symbolic Rules Embedded in Deep DIMLP Networks: A Challenge to Transparency of Deep Learning. (2017) Journal of Artificial Intelligence and Soft Computing Research. 7: 265. doi:10.1515/jaiscr-2017-0019