Explaining Predictive Artificial Intelligence Models for ECG using Shallow and Generative Models

Published Date

2020-05

Type

Thesis or Dissertation

Abstract

Opening the lid on the “black box” of artificial intelligence (AI) models, including deep neural networks, is important for the adoption of this technology in clinical medicine. Given the high stakes, the potential for novel or unexpected recommendations, the risk of implicit bias, and the potential legal liability, clinicians may hesitate to act on medical diagnoses or therapies suggested by neural networks without a general understanding of the specific features or characteristics these models process to derive their recommendations. Furthermore, the ability to explain predictive AI models may also make it easier to improve their performance and to identify appropriate use cases for their adoption. Deep learning methods, and convolutional neural networks in particular, have achieved state-of-the-art performance in numerous fields and reached human-like accuracy in image detection and classification. In some areas, deep learning models have surpassed human expert capabilities, for example, by detecting asymptomatic left ventricular dysfunction from the ECG, by detecting age, sex, and cardiovascular risk from fundus photography, and by beating the world champion in Go. Convolutional neural networks use convolutional operations together with non-linear transformations to create feature maps based on the specific outcome the network is trained to optimize. While the training of a model as a whole is considered supervised, since network weights are optimized with respect to human-defined labels, the extraction of features from a signal is unsupervised, and the features used by a network and their meaning remain unknown (hence the term “black box”).
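The convolution-plus-nonlinearity step that produces a feature map can be sketched in a few lines of NumPy. This is an illustrative toy, not the dissertation's model: the "ECG-like" signal is synthetic and the kernel is random rather than learned.

```python
import numpy as np

# Synthetic 1-D "ECG-like" signal: a slow oscillation with periodic
# spikes standing in for QRS complexes (purely illustrative).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t)
signal[::50] += 2.0

# In a trained CNN this kernel would be learned from labeled data;
# here it is random, just to show the mechanics.
kernel = rng.normal(size=9)

# Convolution followed by a non-linear transformation (ReLU here)
# yields one channel of a feature map.
feature_map = np.maximum(np.convolve(signal, kernel, mode="valid"), 0.0)

print(feature_map.shape)  # one feature map over the signal length
```

A real network stacks many such kernel/nonlinearity pairs and learns the kernels by backpropagation; what each resulting feature map encodes is exactly the open question the abstract describes.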
In traditional computer vision and signal processing, features are engineered from human knowledge and observation and hard-coded as a separate step prior to input into a classification model. These human-selected features are meaningful, and in the case of the electrocardiogram (ECG) they are based on known biological mechanisms. In our work we sought to identify the meaning in convolutional neural network feature maps trained on the ECG signal and to compare network features to the understandable, human-selected features. Using our proposed methods, which are generalizable, we developed tools to explain AI models. To test, validate, and demonstrate the use of these tools, we employ a previously developed AI model that can detect a patient's age and sex from the surface ECG. For any domain with meaningful features, we show that the neural network selects features similar to those selected by a human expert, and that neural network “black box” features are in fact a linear combination of human-identifiable features. As the network features were created without any human knowledge, this raises the possibility that artificial intelligence models develop a “sense” of the signals they process in a manner similar to a human expert. Thus, artificial intelligence may be truly intelligent, and this work may open the door to creating explainability in artificial intelligence models.
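The claim that "black box" features are a linear combination of human-identifiable features can be checked with an ordinary least-squares probe: regress each network feature on the human feature matrix and inspect the variance explained. The sketch below uses fabricated data (the mixing matrix `W` and noise level are assumptions) solely to show the shape of such a test, not the dissertation's actual results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: rows are ECG recordings, columns are features.
n = 200
human = rng.normal(size=(n, 6))     # e.g., intervals, amplitudes, axis
W = rng.normal(size=(6, 4))         # unknown mixing (assumed for the demo)
network = human @ W + 0.01 * rng.normal(size=(n, 4))  # "black box" features

# Fit each network feature as a linear combination of human features.
coef, *_ = np.linalg.lstsq(human, network, rcond=None)
pred = human @ coef

# R^2 per network feature: values near 1 support the
# linear-combination hypothesis for that feature.
ss_res = ((network - pred) ** 2).sum(axis=0)
ss_tot = ((network - network.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1 - ss_res / ss_tot
print(np.round(r2, 3))
```

On real data, features with low R² under such a probe would be the interesting ones: network-discovered characteristics with no simple expression in the human feature set.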

Description

University of Minnesota Ph.D. dissertation. 2020. Major: Biomedical Informatics and Computational Biology. Advisors: Gilad Lerman, Paul Friedman. 1 computer file (PDF); 105 pages.

Suggested citation

Attia, Zachi Itzahk. (2020). Explaining Predictive Artificial Intelligence Models for ECG using Shallow and Generative Models. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/241290.
