Browsing by Subject "AI"
Now showing 1 - 4 of 4
Item: The Challenges of Detecting Eurasian Watermilfoil with a Pseudo Labeling Semi-Supervised Convolutional Neural Network (2022-05-02). Pargman, Connor.
Eurasian Watermilfoil is an invasive aquatic plant found in many bodies of water in Minnesota, where it tends to outgrow and kill many native plants. The current removal method is to kill it with a herbicide, but this has drawbacks: the herbicide can harm native plants, it contaminates the water, and it is not sprayed accurately. A proposed solution is to use autonomous underwater vehicles equipped with a deep learning model that can detect Eurasian Watermilfoil and map it for accurate spraying. However, we found that this approach did not work as hoped: whether trained as a pseudo labeling semi-supervised or a fully supervised convolutional neural network, the model could not detect the plant because too few images were available. On a diver dataset, however, pseudo labeling proved more accurate and efficient than the supervised version.

Item: Explaining Predictive Artificial Intelligence Models for ECG using Shallow and Generative Models (2020-05). Attia, Zachi Itzahk.
Opening the lid on the "black box" of artificial intelligence (AI) models, including deep neural networks, is important for the adoption of this technology in clinical medicine. Given the high stakes, the potential for novel or unexpected recommendations, the risk of implicit bias, and the potential legal liability, clinicians may be hesitant to act on medical diagnoses or therapies suggested by neural networks without a general understanding of the specific features or characteristics the networks process to derive their recommendations. Furthermore, the ability to explain predictive AI models may also enhance our ability to improve their performance and to predict appropriate use cases for their adoption.
Deep learning methods, and convolutional neural networks in particular, have achieved state-of-the-art performance in numerous fields and reached human-like accuracy in image detection and classification. In some areas, deep learning models have surpassed human expert capabilities, for example by detecting asymptomatic left ventricular dysfunction from the ECG, by detecting age, sex, and cardiovascular risk from fundus photography, and by beating the world champion in Go. Convolutional neural networks use convolutional operations together with non-linear transformations to create feature maps based on the specific outcome the network is trained to optimize. While the training of a model as a whole is considered supervised, since network weights are optimized with respect to human-defined labels, the extraction of features from a signal is unsupervised, and the features used by a network and their meaning remain unknown (hence the term "black box"). In traditional computer vision and signal processing, features are engineered based on human knowledge and observations, then hard-coded as a separate step prior to input into a classification model. These human-engineered features are meaningful, and in the case of the electrocardiogram (ECG) they are based on known biological mechanisms. In our work we sought to identify the meaning in convolutional neural network feature maps trained on the ECG signal and to compare network features to the understandable, human-selected features. Using our proposed methods, which are generalizable, we developed tools to explain AI models. To test, validate, and demonstrate the use of these tools, we employed a previously developed AI model that can detect a patient's age and sex from the surface electrocardiogram (ECG).
For any domain with meaningful features, we show that the neural network selects features similar to those selected by a human expert, and that the network's "black box" features are in fact a linear combination of human-identifiable features. Since the network features were created without any human knowledge, this raises the possibility that artificial intelligence models develop a "sense" of the signals they process in a manner similar to a human expert. Thus, artificial intelligence may be truly intelligent, and this work may open the door to creating explainability in artificial intelligence models.

Item: Humanizing Digital Experiences: Three Essays on the Design of Digital Entities (2021-05). Schanke, Scott.
Marketing and branding efforts have shifted from broadcast media, such as magazines or television, toward bi-directional, conversational media. Firm representatives are increasingly digital, and thus dynamic, autonomous, and personalizable. Rooted in this shift in marketing practice, this dissertation seeks to identify and quantify effective approaches to the design and implementation of the entities that represent firms and brands in customer interactions, e.g., AI-enabled conversational agents and digital brand personalities. The thesis consists of three essays on the media in which digital entities exist: social media pages, messaging applications, and voice-based applications. In the first essay, I evaluate how politeness (Brown & Levinson 1987), a theory used to describe human request behavior, can be adapted to social media posts to garner off-platform sales conversions. This is important because it shows that the language used in social media posts is not uniformly perceived and can be tailored to customers depending on their relationship with the focal firm. The second essay moves from posts on social media to messaging platforms.
More specifically, in the context of customer service, I evaluate how the humanness of a conversational agent (i.e., the number of social cues present) influences customer service conversion outcomes and customer price sensitivity. Our findings suggest that making an agent more humanlike can increase the conversion rate; however, customers also become more price sensitive in this particular "ultimatum game"-like scenario. This shows that efforts to humanize conversational agents must be carefully thought through and implemented to best support the context. In the final chapter, I explore the interaction between two key design choices for voice-based AI agents: i) disclosure of an agent's autonomous nature, and ii) aesthetic personalization (implemented via voice cloning). Using a behavioral economics game, we evaluate these features' impact on trust. Overall, we find that people prefer a cloned version of an AI voice over both a default male voice and a no-message control. Disclosure on its own does not significantly affect trust; however, when examining the interaction of message medium and agent disclosure, we find that dynamic voice cloning combined with disclosure achieves the highest user trust levels.

Item: Probabilistic Knowledge-guided Machine Learning in Engineering and Geoscience Systems (2024-06). Sharma, Somya.
Machine learning (ML) models have achieved significant success in commercial applications and have driven advances in scientific discovery across many disciplines. ML modeling has been essential in tackling complex scientific problems, often enhancing our understanding of previously poorly understood processes. These models have been developed to improve computational efficiency in scenarios where traditional process-based or mechanistic models provided only simplified approximations of physical processes.
Despite their success, even state-of-the-art ML models can produce physically inconsistent predictions and have limited generalization capabilities. Additionally, the black-box nature of ML models means that researchers and stakeholders often lack insight into their reliability. This thesis proposes the development of novel Probabilistic Knowledge-Guided Machine Learning (P-KGML) models to address these concerns. P-KGML models integrate domain knowledge and probabilistic reasoning to improve the explainability, generalization, and physical consistency of ML outputs. These models are particularly valuable in engineering and geoscience systems, where understanding uncertainty and ensuring adherence to physical laws are crucial.
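A common pattern behind the knowledge-guided idea in the last abstract is to augment the ordinary data-misfit loss with a penalty on predictions that violate a known physical relation. The sketch below is illustrative only, not code from the thesis: the loss form, the density-vs-depth constraint, and every name in it (`kgml_loss`, `monotonicity_residual`) are assumptions chosen for clarity.

```python
# Minimal knowledge-guided loss sketch: data misfit plus a penalty
# on violations of a known physical relation (names illustrative).

def monotonicity_residual(y_pred):
    # Physical prior: density should not decrease with depth, so any
    # drop between consecutive depth levels counts as a violation.
    return [max(0.0, a - b) for a, b in zip(y_pred, y_pred[1:])]

def kgml_loss(y_pred, y_obs, physics_residual, lam=1.0):
    # Standard mean-squared data misfit ...
    data_term = sum((p - o) ** 2 for p, o in zip(y_pred, y_obs)) / len(y_obs)
    # ... plus a mean-squared penalty on physics violations.
    r = physics_residual(y_pred)
    physics_term = sum(v ** 2 for v in r) / len(r) if r else 0.0
    return data_term + lam * physics_term

# Hypothetical density observations (kg/m^3, shallow to deep).
y_obs = [1000.0, 1000.5, 1001.2, 1001.9]
physically_consistent = [1000.1, 1000.4, 1001.0, 1002.0]
physically_violating = [1000.1, 1001.5, 1000.2, 1002.0]

loss_ok = kgml_loss(physically_consistent, y_obs, monotonicity_residual)
loss_bad = kgml_loss(physically_violating, y_obs, monotonicity_residual)
```

The domain knowledge enters only through the residual function, so the same loss wrapper can carry any constraint expressible as a residual; the probabilistic side of P-KGML (uncertainty over predictions) would sit on top of such a consistency term.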