Cheng, Claire Xuan
2015-02-19
2014-10
https://hdl.handle.net/11299/169973
University of Minnesota M.S. thesis. October 2014. Major: Scientific Computation. Advisors: Caroline Hayes, Renata Wentzcovitch, Massoud Amin. 1 computer file (PDF); xi, 129 pages.

The goal of this project is to improve understanding of the communication channels that help distant collaborators perform more effectively when working together in a virtual environment. The motivation is to help software developers decide which features should be included in virtual collaboration tools. This work focuses on communication through voice, gestures conveyed via natural hand images, shared maps, markings on maps, and combinations of the above. The task domain studied comprises joint, map-based planning tasks, ranging from trip planning to traffic disaster management, such as a truck rollover on a highway. Embedded natural gestures are made with the hands or body and derive their meaning from context, such as a person pointing to a location on a map; in this work, we refer to them simply as natural gestures. Surrogate gestures are electronic proxies for natural gestures and include pointing with a cursor or drawing circles, arrows, and other marks on the map. Both natural and surrogate gestures are central concerns of this work. Currently, remote collaborations between traffic experts at different agencies (for example, the state and the city) are usually carried out by telephone. Over the past twenty-five years, new tools have been developed that allow collaborators to work in a shared virtual work-space in which they can not only see shared images and mark shared drawings, but also see the hands of their distant partners as they move over the work surface. However, few researchers have evaluated the effectiveness of such tools.
The primary questions explored in this work are whether embedded natural gestures or surrogate gestures provide significant advantages over voice-only communication in virtual collaborations on map-based tasks. The answers to these questions could help software developers decide which features to include in virtual collaboration tools. To answer these questions, we recruited twenty-eight students, both undergraduate and graduate, to participate in an experiment. The participants worked in pairs to solve five map-based planning tasks using five versions of map-based work-spaces. These five versions were created by combining interface features that supported different types of communication: voice, a shared virtual-map interface, a shared marking interface (to support surrogate gestures), and a hand-image interface (to support natural gestures). We set up five combinations of interfaces, as follows:

Face-to-Face: Collaborators sitting side-by-side share a virtual-map work-space on which they can both make marks.
Voice-only: Distant collaborators can manipulate and mark separate virtual-map work-spaces, but cannot share work-spaces, and can only communicate vocally.
Mark-Voice: Distant collaborators have a shared virtual-map work-space on which they can make and share marks, and can also communicate vocally.
Gesture-Voice: Distant collaborators have a shared virtual-map work-space in which they can see videos of each other's hands and arms projected on the map, and can also communicate vocally.
Mark-Gesture-Voice: Distant collaborators have a shared virtual-map work-space on which they can see each other's marks and gestures, and can communicate vocally.

The pairing of interface conditions and task scenarios was systematically varied so that the same interface condition and task scenario were not always paired together. In addition, the presentation order was systematically varied.
After each condition, we asked each participant six questions about their workload from the NASA Task Load Index and seven questions about their collaborative experience. From a performance perspective, we found that both conditions involving embedded natural gestures (Gesture-Voice and Mark-Gesture-Voice) significantly 1) reduced task completion time, 2) decreased mental demand, and 3) helped participants feel more connected to their teammates; additionally, in the Gesture-Voice condition, participants experienced significantly less frustration and collaborated significantly more seamlessly than in the Voice-only condition. From a preference perspective, Mark-Gesture-Voice was, among all the remote conditions, 1) the easiest to use, 2) the most fun, 3) the most frequently chosen as a professional collaboration tool, 4) the one that made users feel most connected to their partners, and 5) the favorite; however, participants still liked the Face-to-Face condition better than any remote condition and found it the easiest to use of all the conditions. We conclude that the hand images are the element primarily responsible for the performance improvement in remote collaboration, but that users enjoy having the marking feature regardless of whether it helps them significantly. Based on these findings, we recommend that developers of virtual-collaboration tools include hand images to improve performance, and also consider including a shared-marking function to increase user satisfaction.

Keywords: Hand images; User experience; Virtual spatial collaboration; Scientific computation
Title: Hand images in virtual spatial collaboration for map-based planning activities
Type: Thesis or Dissertation