Browsing by Subject "Robot"
Now showing 1 - 4 of 4
Item: The Influence of Acute Stress on the Perception of Robot Emotional Body Language: Implications for Robot Design in Healthcare and Other High-Risk Domains (2017-07)
Thimmesch-Gill, Zane

In coming years, emotionally expressive social robots will permeate many facets of our lives. Yet, although researchers have explored robot design parameters that may facilitate human-robot interaction, remarkably little attention has been paid to the human perceptual and other psychological factors that may impact people's ability to engage with robots. In high-risk settings such as healthcare, where the use of robots is expected to increase markedly, it is paramount to understand the influence of a patient's stress level, temperament, and attitudes towards robots, as negative interactions could harm a patient's experience and hinder recovery. Using a novel between-subject paradigm, we investigated how the experimental induction of acute physiological and cognitive stress, versus low stress, influences perception of normed robot emotional body language as conveyed by a physically present versus a virtual-reality-generated robot. Following high or low stress induction, participants were asked to rate the valence (negative/unhappy to positive/happy) and level of arousal (calm/relaxed to animated/excited) conveyed by poses in five emotional categories: negative valence-high arousal, negative valence-low arousal, neutral, positive valence-low arousal, and positive valence-high arousal. Poses from the categories were randomly intermixed, and each pose was presented two or three times. Ratings were then correlated with temperament (as assessed by the Adult Temperament Questionnaire), attitudes towards and experience with robots (a new questionnaire that included measures from the Godspeed Scales and Negative Attitudes about Robots Survey), and chronic stress. The acute stress induction especially influenced the evaluation of high arousal poses, both negative and positive, with both valence and arousal rated lower under high than low stress. Repeated presentation impacted perception of low arousal (negative and positive) and neutral poses, with increases in perceived valence and arousal for later presentations. There were also effects of robot type specifically for positively-valenced emotions, such that these poses were rated as more positive for the physically present than the virtually instantiated robot. Temperament was found to relate to perception of emotional robot body language: trait positive affect was associated with higher valence ratings for positive and neutral poses, and trait negative affect was correlated with higher arousal ratings for negative valence-low arousal poses. Subcategories within the robot attitudes questionnaire were correlated with ratings of emotional robot poses and with temperament. To our knowledge, this dissertation is the first exploration of the effects of acute and chronic stress on human perception of robot emotional body language, with implications for robot design, both physical and virtual. Given the largely parallel findings observed for poses presented by the physically present versus virtually instantiated robot, it is proposed that virtual reality may provide a viable "sandbox" tool for experimenting more efficiently and thoroughly with possible robot designs and variants in their emotional expressiveness. Broader psychological, physiological, and other factors that designers should consider as they create robots for high-risk applications are also discussed.
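The correlational analyses described in the entry above relate pose ratings to trait scores. As a minimal sketch of such an analysis (the participant data and variable names below are invented for illustration, not taken from the dissertation), a Pearson correlation can be computed with SciPy:

```python
# Hypothetical illustration of correlating pose ratings with temperament
# scores, as in the analyses described above. All data values are invented.
from scipy.stats import pearsonr

# One entry per participant: mean valence rating for positive-valence poses
# and an assumed ATQ trait positive affect score.
valence_ratings = [6.2, 7.1, 5.8, 6.9, 7.4, 5.5, 6.7, 7.0]
trait_positive_affect = [4.1, 5.2, 3.9, 4.8, 5.5, 3.6, 4.5, 5.0]

r, p = pearsonr(valence_ratings, trait_positive_affect)
# A positive r here would mirror the reported link between trait positive
# affect and higher valence ratings.
print(f"r = {r:.3f}, p = {p:.3f}")
```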
Item: Linear quadratic regulator control of an under actuated five-degree-of-freedom planar biped walking robot (2010-07)
Leines, Matthew Thomas

Modern robotic systems are typically fully actuated, with full information about themselves and their immediate surroundings. Faced with a failure or damage, such systems cannot function properly and can quickly damage themselves further. A Linear Quadratic Regulator (LQR) control system is proposed to allow an underactuated (damaged) robotic system, a five-degree-of-freedom planar biped walking robot, to continue to follow a human-like walking gait in a series of MATLAB and Simulink simulations. The proposed LQR controller keeps joint position errors below 4 degrees for the fully actuated system, performing the entire gait within the given step time and length. With a separate LQR controller, the underactuated system can match the performance of the fully actuated system. Time-varying control and Markovian jump methods can combine both controllers into a dynamically adaptive whole, capable of full to partial gait during both locked-joint and free-joint failures, with brakes applied as needed.

Item: Sharing the Load - Offloading Processing and Improving Emotion Classification for the SoftBank Robot Pepper (2021-04)
Savela, Shawn

Pepper is a humanoid robot created by SoftBank Robotics, designed and built for robot-human interaction. An application interface allows development of custom interactive programs, and a number of built-in applications can be extended and used when creating other custom programs for the robot. Among the pre-installed applications are ones that classify a person's emotion and mood using several data points, including facial characteristics and vocal pitch and tone. Due to the COVID-19 pandemic, many people have been wearing face masks in both public and private areas, and detecting emotions based on facial recognition and voice tone analysis may not be as accurate when a person is wearing a mask. An alternative method for classifying emotion is to analyze the actual words a person speaks; however, this feature is not currently available on Pepper. In this study we describe a software solution that allows Pepper to perform sentiment classification of spoken words using a neural network. We describe the testing procedure in which Pepper interviewed participants, and we compare the F1 scores of the classification methods against each other. Pepper was successfully programmed to use a neural network for emotion classification. A total of 32 participants were interviewed, with the NLP spoken-word analysis achieving an averaged F1 score of 0.2860, compared to built-in software average F1 scores of 0.2362 for the mood application, 0.1986 for the vocal tone and pitch application, and 0.0811 for the facial characteristics application.
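The averaged F1 scores in the Savela entry above compare four classification methods on the same interviews. As a minimal sketch of how one such score can be computed (the emotion labels and predictions below are invented for illustration, not the study's data), scikit-learn's macro-averaged F1 works as follows:

```python
# Hypothetical scoring of one emotion classifier with a macro-averaged F1,
# the kind of metric used to compare Pepper's classification methods above.
from sklearn.metrics import f1_score

true_emotions = ["happy", "sad", "angry", "happy", "neutral", "sad"]
predicted     = ["happy", "neutral", "angry", "sad", "neutral", "sad"]

# Macro averaging computes F1 per emotion class, then averages the classes
# equally, so rare emotions count as much as common ones.
score = f1_score(true_emotions, predicted, average="macro")
print(f"macro F1 = {score:.4f}")
```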
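For the Leines entry above: an LQR controller minimizes a quadratic cost over state error and control effort, yielding a state-feedback law u = -Kx. The sketch below shows the standard gain computation for a generic linear system; the A, B, Q, and R matrices are toy placeholders (a double integrator), not the thesis's five-degree-of-freedom biped dynamics:

```python
# Minimal LQR gain computation for a generic linear system x' = Ax + Bu.
# A, B, Q, R are illustrative placeholders, not the biped model from the thesis.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # double-integrator toy dynamics
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])     # penalize position error more than velocity
R = np.array([[0.1]])        # control-effort penalty

P = solve_continuous_are(A, B, Q, R)  # solve the algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P        # optimal gain: u = -K x
print("K =", K)
```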
Item: Using LED Gaze Cues to Enhance Underwater Human-Robot Interaction (2022-05)
Prabhu, Aditya; Fulton, Michael; Sattar, Junaed, Ph.D.

In the underwater domain, conventional methods of communication between divers and Autonomous Underwater Vehicles (AUVs) are heavily impeded: radio signal attenuation, water turbidity (cloudiness), and low light levels make it difficult for a diver and an AUV to relay information to each other. Current solutions such as underwater tablets, slates, and tags are not intuitive and introduce additional logistical challenges and points of failure. Intuitive human-robot interaction (HRI) is imperative to ensuring seamless collaboration between AUVs and divers. Eye gaze is a natural form of relaying information between humans and an underutilized channel of communication in AUVs, while lights help overcome the darkness, turbidity, and signal attenuation that often impair diver-robot collaboration. This research implements eye gazes on LoCO, a low-cost AUV, using RGB LED rings in order to pursue intuitive forms of HRI underwater while overcoming common barriers to communication. To test the intuitiveness of the design, 9 participants with no prior knowledge of LoCO or HRI were tasked with recalling the meanings of each of 16 gaze indicators during pool trials, having been exposed to the indicators 3 to 4 days earlier. Compared to the baseline text-display communication, which had a recall of 100%, recall for most eye gaze animations was exceptionally high, with an 80% accuracy score for 11 of the 16 indicators. These results suggest that certain eye indicators convey information more intuitively than others, and that additional training could make gaze indicators a viable method of communication between humans and robots.
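Recall in the study above is simply the fraction of presentations of each indicator whose meaning participants identified correctly. A per-indicator tally might look like the following sketch (the indicator names and responses are invented for illustration, not the study's data):

```python
# Hypothetical per-indicator recall tally for a gaze-cue study like the one
# above. Trial data are invented; indicator names are placeholders.
from collections import defaultdict

# (indicator shown, participant recalled its meaning correctly?)
trials = [
    ("look_left", True), ("look_left", True), ("look_left", False),
    ("blink_alert", True), ("blink_alert", True), ("blink_alert", True),
]

shown = defaultdict(int)
correct = defaultdict(int)
for indicator, was_correct in trials:
    shown[indicator] += 1
    correct[indicator] += was_correct  # bool counts as 0 or 1

for indicator in shown:
    print(f"{indicator}: recall = {correct[indicator] / shown[indicator]:.0%}")
```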