Explainable artificial intelligence

Last revised by Andrew Murphy on 15 Jan 2024

Explainable artificial intelligence usually refers to narrow artificial intelligence models built with methods that enable and enhance human understanding of how the model reached its output in each case. Many older AI models, e.g. decision trees, were inherently understandable in terms of how they produced specific results, as they could be distilled into clear rules. Unfortunately, many state-of-the-art models that substantially outperform older models in prediction, e.g. deep neural networks, are poorly explainable. Ideally, human users should understand not only the results generated by artificial intelligence algorithms but also the salient features of how they were derived. Explainable artificial intelligence seeks to introduce transparency and enable verification of an algorithm's conclusions, making them clearly justifiable or contestable for a user. Explainable AI is contrasted with 'black-box' approaches, although it should be noted that in reality some decisions of some clinicians and radiologists also appear to originate in a 'black-box', i.e. the notoriously fickle and opaque human brain.

Explainable artificial intelligence is a rapidly developing area of research that seeks to address these issues by introducing design elements and other techniques that enable human experts to verify AI decision making.

Examples of explainable artificial intelligence techniques:

  • saliency 'heat' maps (such as those used in some anomaly detection programs)

  • attention matrices that visualise what the network is 'looking' at

  • weight matrices that identify which features have the largest impact on the algorithm's output

  • algorithms, e.g. Shapley additive explanations (SHAP), local interpretable model-agnostic explanations (LIME), etc.
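
The idea behind SHAP can be illustrated with a small sketch: Shapley values attribute a model's output to each input feature by averaging that feature's marginal contribution over all coalitions of the other features. The brute-force loop below is exact but exponential in the number of features; real libraries such as SHAP use approximations. The toy linear 'model' is purely illustrative, not any particular clinical algorithm.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    model    -- callable taking a list of feature values
    x        -- the instance to explain
    baseline -- reference values used for 'absent' features
    """
    n = len(x)
    features = range(n)

    def value(subset):
        # Features in the coalition keep their real values; others
        # are replaced by the baseline ('switched off').
        z = [x[i] if i in subset else baseline[i] for i in features]
        return model(z)

    phi = [0.0] * n
    for i in features:
        others = [j for j in features if j != i]
        for r in range(n):
            for s in combinations(others, r):
                # Standard Shapley weight for a coalition of size |s|
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[i] += w * (value(set(s) | {i}) - value(set(s)))
    return phi

# Toy 'model': a hand-written linear score over three features.
model = lambda z: 2 * z[0] + 3 * z[1] + 0 * z[2]
phi = shapley_values(model, x=[1, 1, 1], baseline=[0, 0, 0])
# For a linear model the attributions recover the coefficients: [2.0, 3.0, 0.0]
```

A useful sanity check is that the attributions sum to the difference between the model's output on the instance and on the baseline, which is exactly what makes the explanation 'additive'.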

There are potential problems with employing explainable AI. In some cases these techniques may increase the computational 'weight' of an algorithm (implying more servers and cost, or slower operation) or even increase the risk of being deceived about the model, as explanations may be partial or misleading; for example, certain kinds of heat maps can indicate where in an image is important, but not what features of that area made it important.
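
The heat-map limitation above can be made concrete with occlusion sensitivity, one simple way such maps are produced: each region of the image is masked in turn and the drop in the model's score is recorded. The toy 4x4 'image' and hand-written scoring function below are illustrative assumptions, not a real classifier; note the resulting map says only that the centre pixels matter, not why.

```python
def occlusion_saliency(score, image, fill=0):
    """Occlusion sensitivity map: re-score the image with each pixel
    masked out; the score drop is that pixel's saliency."""
    base = score(image)
    heat = [[0.0] * len(row) for row in image]
    for r, row in enumerate(image):
        for c, old in enumerate(row):
            image[r][c] = fill               # occlude one pixel
            heat[r][c] = base - score(image) # how much the score falls
            image[r][c] = old                # restore the pixel
    return heat

# Toy 'classifier': scores an image by the sum of its centre 2x2 patch.
score = lambda img: sum(img[r][c] for r in (1, 2) for c in (1, 2))
img = [[1] * 4 for _ in range(4)]
heat = occlusion_saliency(score, img)
# Centre pixels get saliency 1; everything else gets 0 -- the map shows
# *where* the score comes from but nothing about *what* feature it is.
```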

Potential benefits of employing explainable artificial intelligence:

  • human readability

  • easier justification of computational decision making

  • increased transparency and accountability

  • reducing various biases

  • avoiding discrimination
