Title: Explaining Predictive Artificial Intelligence Models for ECG using Shallow and Generative Models
Author: Attia, Zachi Itzahk
Type: Thesis or Dissertation
Date accessioned: 2022-08-29
Date available: 2022-08-29
Date issued: 2020-05
URI: https://hdl.handle.net/11299/241290
Description: University of Minnesota Ph.D. dissertation. 2020. Major: Biomedical Informatics and Computational Biology. Advisors: Gilad Lerman, Paul Friedman. 1 computer file (PDF); 105 pages.
Language: en
Keywords: AI; Deep learning; ECG

Abstract: Opening the lid on the "black box" of artificial intelligence (AI) models, including deep neural networks, is important for the adoption of this technology in clinical medicine. Given the high stakes, the potential for novel or unexpected recommendations, the risk of implicit bias, and the potential legal liability, clinicians may be hesitant to act on medical diagnoses or therapies suggested by neural networks without a general understanding of the specific features or characteristics the networks process to derive their recommendations. Furthermore, the ability to explain predictive AI models may also make it easier to improve their performance and to identify appropriate use cases for their adoption. Deep learning methods, and convolutional neural networks in particular, have achieved state-of-the-art performance in numerous fields and reached human-like accuracy in image detection and classification. In some areas, deep learning models have surpassed human expert capabilities, for example by detecting asymptomatic left ventricular dysfunction from the ECG, by estimating age, sex, and cardiovascular risk from fundus photography, and by beating the world champion at Go. Convolutional neural networks use convolutional operations together with non-linear transformations to create feature maps based on the specific outcome the network was trained to optimize. While the training of a model as a whole is considered supervised, since network weights are optimized with respect to human-defined labels, the extraction of features from the signal is unsupervised, and the features used by a network and their meaning remain unknown (hence the term "black box"). In traditional computer vision and signal processing, features are engineered based on human knowledge and observation and hard-coded as a separate step prior to input into a classification model. These human-selected features are meaningful, and in the case of the electrocardiogram (ECG) they are based on known biological mechanisms. In our work we sought to identify the meaning of the feature maps of convolutional neural networks trained on the ECG signal and to compare network features to the understandable, human-selected features. Using our proposed methods, which are generalizable, we developed tools to explain AI models. To test, validate, and demonstrate the use of these tools, we employ a previously developed AI model that can detect a patient's age and sex from the surface ECG. For any domain with meaningful features, we show that the neural network selects features similar to those selected by a human expert, and that the neural network's "black box" features are in fact a linear combination of human-identifiable features. As the network features were created without any human knowledge, this raises the possibility that artificial intelligence models develop a "sense" of the signal they process in a manner similar to that of a human expert. Thus, artificial intelligence may be truly intelligent, and this work may open the door to creating explainability in artificial intelligence models.
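
Illustrative sketch: to make the "linear combination of human-identifiable features" claim concrete, one way such a check could be set up is to regress late-layer network activations onto human-measured ECG features and examine the held-out R^2. Everything below is a hedged illustration under stated assumptions: the synthetic data, variable names, and the use of NumPy/scikit-learn are my own stand-ins, not the dissertation's actual code, model, or data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins (assumptions, not real data):
#   human_features   - e.g., heart rate, QRS duration, QT interval per ECG
#   network_features - activations from a late CNN layer for the same ECGs
n_ecgs, n_human, n_network = 2000, 8, 64
human_features = rng.normal(size=(n_ecgs, n_human))

# Construct network features as a noisy linear mixture of the human ones,
# mimicking the hypothesis under test.
mixing = rng.normal(size=(n_human, n_network))
network_features = human_features @ mixing + 0.1 * rng.normal(size=(n_ecgs, n_network))

X_train, X_test, Y_train, Y_test = train_test_split(
    human_features, network_features, test_size=0.25, random_state=0
)

# Fit a single multi-output linear map from human features to network features.
linear_map = LinearRegression().fit(X_train, Y_train)

# Held-out R^2 near 1 would indicate each network feature is well explained
# as a linear combination of human-identifiable ECG features.
r2 = linear_map.score(X_test, Y_test)
print(f"Held-out R^2 of linear reconstruction: {r2:.3f}")

In a real analysis, human_features would come from standard ECG measurements and network_features from activations of a trained model such as the age/sex ECG network described above; the linear-map choice here simply mirrors the abstract's claim rather than prescribing the dissertation's actual procedure.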