Browsing by Subject "Visualization"
Now showing 1 - 13 of 13
Item: Computational analysis and visualization of the evolution of influenza virus (2014-08). Lam, Ham Ching.

Influenza viruses can infect a large variety of birds and mammals, including humans, pigs, domestic poultry, marine mammals, cats, dogs, horses, and wild carnivores (Webster 2002). Surveillance for influenza viruses circulating in humans has gradually increased and expanded to many areas around the world. These surveillance programs have produced a large amount of influenza genomic data, which facilitates the study of the virus with computational methods that are efficient and cost saving. The main focus of this dissertation research is the development of visualization methods to understand the evolution of influenza viruses circulating in humans and other mammals. The methods developed have been applied to different human influenza A subtypes, swine influenza viruses, and avian influenza viruses. They are based on unsupervised dimensionality reduction techniques that can be applied to each individual genome segment or to the complete genome sequence of the virus. These methods are a departure from the traditional phylogenetic tree construction paradigm because a very large number of high-dimensional input sequences can be processed and the results viewed directly in a two- or three-dimensional Euclidean space. We reproduced the evolutionary trajectory of the seasonal human influenza A/H3N2 virus since its introduction to humans in 1968 in a 2D PCA space. The observed pathway led us to hypothesize that vaccination serves as a primary evolutionary pressure on this virus, and we provided visual, simulation, and statistical results to support this hypothesis. The North American swine influenza H3N2 viruses were also studied using the developed visualization methods. The diversity of this virus has been changing since the 2009 H1N1 pandemic outbreak; five main clusters were observed in the visualization results, and mutations at two positively selected sites on the HA gene were identified as a potential driver of cluster segregation after the pandemic. A visualization method was also developed to visually detect reassortant influenza viruses. A reassortant influenza virus is difficult to detect because it consists of genome segments of different parental origins: when two different influenza strains coinfect a single cell, the exchange of genome segments between the two strains can produce progeny carrying segments from different parents within a single genome. To detect such progeny, a PCA-projection-based visualization method was developed that examines the full genome sequences of reference and test strains simultaneously and reveals any reassorted segments within a full genome. Besides the visualization methods, we also developed a compact Markov chain model to estimate the probability of finding viruses with high genetic similarity after a very large time gap. This is a two-component model combining a Markov chain with a Poisson model: the Markov chain describes the evolution of the virus in terms of Hamming distance, and a computed mutation rate is the input to the Poisson model; together, they simulate the evolution of the influenza virus under a neutral evolution process.
The computational results from this model led us to conclude that the existence of reservoirs preserving viruses for decades cannot be completely ruled out. In short, our primary goal has been to develop visualization-based approaches to understand the evolution of influenza viruses from different hosts. The results so far suggest that visualization paves the way to deeper understanding of, and insight into, the evolution of the virus as we utilize its rapidly growing genomic data.
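A rough sketch of the dimensionality reduction step described in the item above: sequences are turned into fixed-length numeric vectors and projected into a 2D PCA space. The k-mer count encoding used here is an assumption for illustration; the dissertation's exact feature construction may differ.

```python
# Minimal sketch: project influenza-like sequences into a 2D PCA space.
# The k-mer count encoding is an illustrative assumption; the dissertation's
# actual feature construction may differ.
from itertools import product

import numpy as np
from sklearn.decomposition import PCA

def kmer_counts(seq, k=3, alphabet="ACGT"):
    """Encode a nucleotide sequence as a fixed-length k-mer count vector."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        if seq[i:i + k] in index:
            vec[index[seq[i:i + k]]] += 1
    return vec

# Toy sequences standing in for genome segments sampled in different seasons.
sequences = ["ACGTACGTGGCAAT", "ACGTACGTGGTAAT", "ACGAACGTGGTAAT", "TCGAACGTGGTACT"]
X = np.vstack([kmer_counts(s) for s in sequences])

# Project onto the first two principal components; plotting these coordinates
# season by season traces the kind of evolutionary trajectory discussed above.
coords = PCA(n_components=2).fit_transform(X)
print(coords)
```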
Item: Designing effective motion visualizations: elucidating scientific information by bringing aesthetics and design to bear on science (2014-11). Schroeder, David.

The visual system is the highest-bandwidth pathway into the human brain, and visualization takes advantage of this pathway to allow users to understand the datasets they are interested in. Recent scientific advances have led to the collection of larger and more complicated datasets, creating new challenges in effectively visualizing these data. The focus of this dissertation is on addressing these challenges and enabling the next generation of visualization systems through two complementary research thrusts: "Advanced Visualization Practice" and "Visualization Design Tools." In the Advanced Visualization Practice thrust, we extend the process of interactive visualization to work effectively with complicated multivariate motion datasets. We present brushing and filtering operations that allow users to perform complicated filtering in a linked-window visualization while maintaining context in complementary views, including two-dimensional plots, three-dimensional plots, and recorded video. We also present the concept of "trends," or patterns of motion that behave similarly over a period of time, and introduce visualization elements that allow users to examine, interact with, and navigate these trends. These contributions help to implement Shneiderman's information-seeking mantra (overview first, zoom and filter, then details on demand) in the context of collections of motion datasets. During our work in Advanced Visualization Practice, we realized that there was a lack of tools enabling visualization developers to rapidly and controllably create and evaluate these visualizations. We address this deficiency in the Visualization Design Tools thrust by introducing visualization creation interfaces in which users draw directly on top of data to effect their desired changes to the current visualization. In one application of this idea, we present a sketch-based streamline visualization creation interface that allows users to create accurate streamline visualizations by simply drawing the lines they want to appear; an underlying algorithm constrains the input to be accurate while still matching the user's intent. In a second application, we present a Photoshop-style interface that enables users to create complicated multivariate visualizations without needing to program: a colormap painting and dabbing algorithm lets users build complicated colormaps by drawing colors on top of an existing colormap, and an algorithm determines the desired locality of the user's input and updates the colormap accordingly. These interfaces show the potential for future interfaces in this direction to expand the visualization design process to include users who are currently excluded, such as domain scientists and artists. Through these two complementary thrusts, we help to solve problems preventing newer datasets from being fully exploited: our contributions in Advanced Visualization Practice address problems impeding the visualization of motion datasets, and our contributions in Visualization Design Tools provide a blueprint for visualization interfaces that enable all users, not just programmers, to contribute directly to the visualization design and creation process. Together, these set the stage for future visualization interfaces to better solve our biggest visualization challenges.
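The colormap painting and dabbing idea in the item above can be illustrated with a small sketch: a drawn color is blended into an existing colormap around a chosen data value, with a locality radius controlling how far the edit spreads. The Gaussian falloff and all parameter values are assumptions for illustration, not the dissertation's actual algorithm.

```python
# Minimal sketch of "painting" a color into an existing colormap.
# Assumptions: the colormap is a table of RGB rows over [0, 1], and the user's
# dab is blended in with a Gaussian falloff; the real algorithm may differ.
import numpy as np

def paint_colormap(cmap, center, radius, color, strength=1.0):
    """Blend `color` into `cmap` around data value `center` (both in [0, 1])."""
    positions = np.linspace(0.0, 1.0, len(cmap))
    # Gaussian weight: 1 at the dab center, falling off with distance.
    weight = strength * np.exp(-0.5 * ((positions - center) / radius) ** 2)
    return (1.0 - weight[:, None]) * cmap + weight[:, None] * np.asarray(color)

# Start from a simple gray ramp and dab red near data value 0.7.
gray = np.tile(np.linspace(0.0, 1.0, 256)[:, None], (1, 3))
painted = paint_colormap(gray, center=0.7, radius=0.05, color=[1.0, 0.0, 0.0])
print(painted[int(0.7 * 255)])  # the row near the dab center is now reddish
```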
Item: Enabling Neighborhood Health Research and Protecting Patient Privacy (2021-08). Krzyzanowski, Brittany.

Maps and spatial analysis offer a more comprehensive understanding of complex neighborhood health relationships, and yet there is a remarkable lack of maps within the literature on neighborhood health. A review of the literature confirmed that only a small proportion of articles on neighborhood health (28%) contained maps. Despite this, our subsequent survey showed that the majority (63%) of investigators created maps, worked with maps, or used mapping software to explore their data at some point during their study. Neighborhood health investigators are not neglecting to explore the spatial nature of their data; rather, they are simply not publishing the maps they are using. One of the major barriers identified by our survey was privacy regulation, such as HIPAA law, which stood as a direct barrier for 14% of survey respondents who created maps but did not share them. Many researchers find the core elements of the HIPAA privacy provision specific to map data ambiguous or difficult to understand, which is reflected in disagreement and uncertainty in research and policy circles on how to enact this provision. This dissertation provides a thorough examination of the safe harbor provision and elucidates the ambiguity within the law to encourage safe and effective sharing of mapped patient data. Moreover, many scholars and policy makers have challenged this rule, saying that it is possible to share finer-grained mapped health data without jeopardizing patient privacy. One promising strategy is regionalization, or zone design, which offers a way to build finer-grained geographical units in ways that integrate the HIPAA safe harbor requirements. This dissertation explores two existing regionalization methods (Max-P Regions and REDCAP) and introduces two novel variants of these approaches (MSOM and RSOM), which we compare and contrast in terms of fitness for the analysis and display of protected health information. Each regionalization procedure has its own strengths and weaknesses, but REDCAP provides the best overall performance. In general, all of the regionalization procedures produced contiguous regions that yield a better-resolution map than the current standard for sharing patient data and help investigators work within the bounds of privacy provisions to share maps and spatial data.
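As a toy illustration of the regionalization idea in the item above, the sketch below greedily merges adjacent geographic units until every region meets a minimum population threshold, in the spirit of building finer-grained units that still satisfy a safe-harbor-style population floor. The threshold value, the adjacency structure, and the greedy merge rule are all illustrative assumptions; Max-P Regions, REDCAP, MSOM, and RSOM use far more sophisticated optimization.

```python
# Toy regionalization sketch: merge adjacent units until each region's total
# population reaches a minimum threshold. Illustrative only; the actual
# Max-P / REDCAP / MSOM / RSOM procedures are far more sophisticated.

# Hypothetical units: population counts and an adjacency list.
population = {"A": 5000, "B": 12000, "C": 3000, "D": 25000, "E": 8000}
adjacency = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "E"},
             "D": {"B"}, "E": {"C"}}
MIN_POP = 20000  # illustrative safe-harbor-style population floor

# Start with each unit as its own region, then repeatedly merge the smallest
# under-threshold region into its smallest adjacent neighbor.
regions = {name: {name} for name in population}

def region_pop(members):
    return sum(population[m] for m in members)

def neighboring_regions(members):
    outside = set().union(*(adjacency[m] for m in members)) - members
    return {r for r, mem in regions.items() if mem & outside}

while True:
    small = [r for r, mem in regions.items() if region_pop(mem) < MIN_POP]
    if not small:
        break
    r = min(small, key=lambda name: region_pop(regions[name]))
    nbrs = neighboring_regions(regions[r])
    if not nbrs:
        break  # isolated under-threshold region; give up in this toy version
    target = min(nbrs, key=lambda name: region_pop(regions[name]))
    regions[target] |= regions.pop(r)

print({r: (sorted(mem), region_pop(mem)) for r, mem in regions.items()})
```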
Item: How to Plot and Animate Data on Maps Using the R Programming Language (2021). Lilja, David J.

Plotting data on a map can be a powerful technique for visualizing geographical information. Animating that data -- that is, making it move -- can further enhance the understanding of the underlying data. This tutorial will teach you how to plot data on simple maps using ggplot2 (https://ggplot2.tidyverse.org/) and animate it using gganimate (https://gganimate.com/articles/gganimate.html). You will also learn how to use dplyr (https://dplyr.tidyverse.org/) to partition data into subsets and compute summary statistics of these subsets to be plotted onto a map.
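The tutorial above works in R with ggplot2, gganimate, and dplyr. For readers working in Python, a roughly equivalent plot, partition-and-summarize, and animate workflow can be sketched with pandas and matplotlib; the dataset, column names, and output file below are invented for illustration.

```python
# Rough Python analog of the plot / partition-and-summarize / animate workflow
# described above (the tutorial itself uses ggplot2, gganimate, and dplyr in R).
# Data and column names are made up for illustration.
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Toy point data: locations with a value observed in several years.
data = pd.DataFrame({
    "lon":   [-93.27, -93.09, -92.10, -94.88] * 2,
    "lat":   [44.98, 44.95, 46.79, 47.47] * 2,
    "year":  [2019] * 4 + [2020] * 4,
    "value": [10, 14, 7, 5, 12, 15, 9, 6],
})

# dplyr-style partition and summarize: mean value per location per year.
summary = data.groupby(["year", "lon", "lat"], as_index=False)["value"].mean()

fig, ax = plt.subplots()

def draw(year):
    # A basemap layer (e.g., state boundaries via geopandas) could be drawn
    # first; it is omitted here to keep the sketch dependency-light.
    ax.clear()
    frame = summary[summary["year"] == year]
    ax.scatter(frame["lon"], frame["lat"], s=frame["value"] * 20)
    ax.set_title(f"Year {year}")
    ax.set_xlabel("Longitude")
    ax.set_ylabel("Latitude")

# Animate one frame per year, analogous to gganimate's transitions over time.
anim = FuncAnimation(fig, draw, frames=sorted(summary["year"].unique()))
anim.save("points_by_year.gif", writer="pillow")
```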
Item: I-94 Connected Vehicles Testbed Operations and Maintenance (Center for Transportation Studies, University of Minnesota, 2019-06). Duhn, Melissa; Parikh, Gordon; Hourdos, John.

In March 2017, the Connected Vehicle Testbed along I-94 went live. The original project was sponsored by the Roadway Safety Institute and built on the Minnesota Traffic Observatory's (MTO) existing field lab, also utilizing certain Minnesota Department of Transportation (MnDOT) infrastructure. The testbed originally consisted of seven stations, rooftop and roadside, capable of transmitting radar and video data collected from the roadway back to a database at the MTO for analysis, emulating what a future connected vehicle (CV) roadway will look like. This project funded maintenance and upgrades to the system, as well as the relocation of some stations due to construction on I-94. In addition, better visualization tools for reading the database were developed. The CV testbed is state-of-the-art, fully functional, and uniquely situated to attract freeway safety-oriented vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) safety application development, implementation, and evaluation projects going forward.

Item: An interactive design framework based on data-intensive simulations: implementation and application to device-tissue interaction design problems (2015-02). Lin, Chi-Lun.

This dissertation investigates a new medical device design approach based on extensive simulations. A simulation-based design framework is developed to create a design workflow that integrates engineering software tools with an interactive user interface, called Design by Dragging (DBD) (Coffey et al., 2013), to generate a large-scale design space and enable creative design exploration. Several design problems illustrating this design workflow are investigated via the forward and inverse design manipulation strategies provided by DBD. A device-tissue interaction problem that is part of a vacuum-assisted breast biopsy (VAB) cutting process is particularly highlighted. A tissue-cutting model is created for this problem to simulate the device-tissue contact, excessive tissue deformation, and progressive tissue damage during the cutting process. This model is then applied to the design framework to generate extensive simulations that sample a large design space for interactive design exploration. This example represents an important milestone toward simulation-based engineering for medical device prototyping. The simulation-based design framework integrates a computer-aided design (CAD) software tool, a finite element analysis (FEA) software tool (SolidWorks and Abaqus are selected in this dissertation), and a high-performance computing (HPC) cluster into a semi-automatic design workflow via customized communication interfaces. The design framework automates the process from generating and simulating design configurations to outputting the simulation results. The HPC cluster enables multiple simulation job executions and parallel computation to reduce the computation cost. The design framework is first tested using a simple bending needle example, which generates 460 simulations to sample a design space in DBD, and the functionality of the creative inverse and forward design manipulation strategies is demonstrated. A tissue-cutting model of a VAB device is then developed as an advanced benchmark example for the design framework. The model simulates breast lesion tissue being positioned in a needle cannula chamber and being cut by a hollow cutting tube with simultaneous rotation and translation; different cutting conditions, including cutting speeds and tissue properties, are investigated. This VAB device design problem is then applied to the design framework. Critical design variables and performance attributes across three main components of the VAB device (the needle system, motor system, and device handpiece) are identified, and 900 design configurations are generated and simulated to sparsely populate a large design space of 10^6 possible solutions. The design space is explored via the creative design manipulation strategies, and several use cases are established. The bending needle example demonstrates the first success of the proposed simulation-based design framework: the 460 simulations are completed with minimal manual intervention, and the functionality of DBD is demonstrated. The inverse and forward design strategies allow interacting with the design space by dragging on a radar chart widget or directly on the visualization of the simulation; through these interactions the user is guided to the desired solutions. The VAB tissue-cutting example provides a realistic medical device application of the design framework. The 900 simulations are completed in parallel on the HPC cluster so that the computation time is significantly reduced, and the simulation output data is converted to a high-efficiency data format, NetCDF, so that the post-computation for sampling this large design space is made possible. Several use cases are demonstrated. By interacting with the radar chart widget, the user gradually gains understanding and new insights about the effects of design variable modifications. Next, the direct manipulation strategies via visualization of the simulations are used to solve three issues: a 'dry tap', moving a leading edge of the tissue sample, and narrowing a stress concentration area. These use cases successfully demonstrate the capability and usability of the design framework. There are two major contributions of this dissertation. The first is the investigation of a new design approach that enables creative design exploration based on extensive simulation data; this success moves a step toward simulation-based medical device engineering with big data. The second is the FEA simulation model for the VAB tissue-cutting process. This model utilizes realistic breast tissue properties to predict cutting forces during the VAB sampling process, which has not been found in the literature, and the studies conducted using this model extend the current understanding of the VAB tissue-cutting process under different cutting conditions. All of these achievements illustrate the potential for a future medical device virtual prototyping environment.
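A tiny sketch of the inverse design idea behind the framework in the item above: given a table of precomputed simulation results, dragging a performance attribute toward a target can be approximated by finding the stored design configuration whose attributes are closest to the requested values. The attribute names, the toy numbers, and the plain nearest-neighbor lookup are illustrative assumptions, not Design by Dragging's actual machinery.

```python
# Sketch of inverse design over a precomputed design space: find the stored
# configuration whose simulated performance is closest to a requested target.
# Attribute names, values, and the nearest-neighbor lookup are illustrative.
import numpy as np

# Hypothetical precomputed results: rows = design configurations,
# columns = performance attributes (e.g., peak cutting force, sample mass).
attributes = np.array([
    [1.2, 0.80],
    [0.9, 0.65],
    [1.5, 0.90],
    [0.7, 0.40],
])
design_vars = np.array([  # corresponding design variables (e.g., speed, gap)
    [30.0, 1.0],
    [45.0, 1.5],
    [30.0, 2.0],
    [60.0, 1.5],
])

def inverse_lookup(target, attrs, designs):
    """Return the design whose normalized attributes best match `target`."""
    scale = attrs.max(axis=0) - attrs.min(axis=0)
    dist = np.linalg.norm((attrs - target) / scale, axis=1)
    best = int(np.argmin(dist))
    return best, designs[best]

# "Drag" the performance toward lower cutting force with a decent sample mass.
idx, design = inverse_lookup(np.array([0.8, 0.6]), attributes, design_vars)
print(f"closest configuration: #{idx}, design variables {design}")
```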
Item: Palpable Visualizations: Techniques for Creatively Designing Discernible and Accessible Visualizations Grounded in the Physical World (2020-06). Johnson, Seth.

This dissertation investigates techniques to leverage creative processes like sketching, sculpting, and design iteration to improve the discernibility and accessibility of immersive volumetric data visualizations. Discernible visualizations support a viewer's ability to make sense of complexities such as multi-dimensional climate or engineering simulation data. Accessible data visualization both supports the contribution of previously under-utilized design expertise (i.e., artist-accessible visualization design) and subsequently provides access for a broad audience to engage with data through an emphasis on human connection and support for a wide range of displays. Such visualizations aim to provide a palpable, data-driven experience for scientists, artists, and the public. Three early works are presented as a rationale for investigating Palpable Visualizations. Bento Box, an immersive visualization system for comparing multiple time-varying volumetric simulation ensemble instances, demonstrates the current state of the art in scientific data visualization. Weather Report, an interactive site-specific artwork visualizing six decades of weather data, takes an in-depth look at what can be accomplished when designing data-driven experiences in close collaboration with professional designers. And Lift-Off, a VR-based modeling program designed for artists, shows how creative sketching in both the physical and virtual worlds can result in a more accessible environment for both scientific and design-oriented tasks. Based on observations from these three prior works, we present Artifact-Based Rendering (ABR), a framework of algorithms and processes that makes it possible to produce real, data-driven 3D scientific visualizations with a visual language derived entirely from colors, lines, textures, and forms created using traditional physical media or found in nature. ABR addresses three current needs: (i) designing better visualizations by making it accessible for non-programmers to rapidly design and critique many alternative data-to-visual mappings; (ii) expanding the visual vocabulary used in scientific visualizations to enable discernment of increasingly complex multivariate data; and (iii) bringing a more engaging, natural, and human-relatable handcrafted aesthetic to data visualization to make the resulting data-driven images more accessible and discernible to the viewer. Finally, we support the accessibility of visualizations through a data streaming and remote rendering pipeline, culminating in demonstrations bridging live supercomputer simulation data with untethered, affordable head-mounted AR/VR displays.

Item: Rendering of Teeth and Dental Restorations using Robust Statistical Estimation Techniques (2016-02). Jung, Jin Woo.

Robust image estimation and progressive rendering techniques are introduced, and these novel methods are used to simulate the appearance of teeth and dental restorations. The realistic visualization of these translucent objects is essential for computer-aided processes in the field of dentistry, because a successful dental treatment depends on the recovery not only of the tooth's function but also of its appearance. However, due to the heterogeneity of the tooth structure and the coupled subsurface scattering that it causes, simulating the translucency of these objects presents a difficult computational challenge. A Monte Carlo ray tracing system is employed to model the complex interactions of light within the material and to develop the robust image estimation and progressive rendering techniques. Because low-probability samples are infrequently encountered in an image, these samples can become noise under standard Monte Carlo estimation. Robust image estimation techniques are suggested as a way to suppress these low-probability samples, and it is demonstrated that for a given sample size robust estimation techniques can produce less noisy renderings. In other words, the sample size necessary to satisfy a given user requirement will decrease, and an improvement in rendering speed can be obtained. The robust estimation techniques are discussed in both pixel and image space, and their statistical analysis is provided. This analysis determines the inclusion rate for sample probabilities and is thus able to specify the sample probability thresholds necessary to discard or include samples. The statistical analysis also makes it possible to determine performance boundaries in terms of the number of disclosed low-probability samples in an image; as a result, a sample size for a given user requirement can be identified. A progressive approach for rendering translucent objects based on volume photon mapping is also presented. Because conventional volume photon mapping requires long preprocessing to build up a complete volume photon map, it is not able to support progressive rendering. Even worse, due to the limited memory space in a given computer system, the rendering results suffer from a potentially incorrect volume photon map. Progressive volume photon mapping uses a subset of volume photons for rendering, so it provides a high frame rate for preview rendering. In addition, by recycling the volume photons used for previous image estimation, progressive volume photon mapping does not suffer from memory restrictions. It is therefore able to use a virtually unlimited number of volume photons, which makes exact rendering plausible. Although these methods were developed to realistically visualize teeth and dental restorations, they are effective in any rendering situation that suffers from noise, restricted computational performance, and limited memory space; as a consequence, these procedures are expected to be useful for many other types of realistic image synthesis, including motion picture special effects and video games. The statistical interpretation developed for robust estimation is based on the pixel radiance sample probability. This allows the image synthesis sampling problem to be studied in a manner similar to how it would be treated in other established fields of science and engineering: in terms of the statistical properties of the signal to be sampled. This approach can provide the groundwork for further stochastic analyses in the context of computer graphics rendering.
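The effect of suppressing rare, high-energy ("firefly") samples described in the item above can be sketched with a simple per-pixel experiment: compare a plain sample mean with a winsorized mean that clamps the largest samples. The 99th-percentile clamp here is an arbitrary illustrative choice; the dissertation instead derives inclusion thresholds from a statistical analysis of sample probabilities.

```python
# Per-pixel sketch: a plain Monte Carlo mean vs. a winsorized mean that
# suppresses rare, very bright ("low-probability") samples. The percentile
# clamp is an arbitrary illustrative threshold, not the dissertation's rule.
import numpy as np

rng = np.random.default_rng(0)

# Simulated radiance samples for one pixel: mostly moderate values plus a few
# rare high-energy outliers, mimicking firefly noise.
samples = rng.exponential(scale=1.0, size=1000)
fireflies = rng.choice(len(samples), size=5, replace=False)
samples[fireflies] += 200.0

plain_mean = samples.mean()

# Winsorized estimate: clamp samples above the 99th percentile before
# averaging. This trades a small bias for a large reduction in noise.
cap = np.percentile(samples, 99.0)
robust_mean = np.minimum(samples, cap).mean()

print(f"plain mean:  {plain_mean:.3f}")
print(f"robust mean: {robust_mean:.3f}")
```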
Item: Scalable natural user interfaces for data-intensive exploratory visualization: designing in the context of big data (2014-07). Coffey, Dane M.

This dissertation investigates new exploratory visualization tools in data-intensive domains that are built upon natural user interface technology, including multi-touch surfaces and virtual reality. Scalable interactions are developed and evaluated in several software systems targeted toward supporting design workflows that make use of big data. A new immersive and highly interactive multi-touch workbench is presented, along with a theoretical framework and an evaluation of how visualizations may be developed on it. Building upon this foundation, two different exploratory visualization software systems are presented that address distinct challenges faced by designers working in data-intensive domains. The first, Slice World-In-Miniature (WIM), is designed to overcome the difficulty associated with exploring large-volume data, where the complexity of the data often leaves designers disoriented. Using Overview+Detail techniques to provide context, the designer navigates inside complex volumes using multi-touch gestures. The Slice WIM system is applied to a number of medical device design applications and evaluated by domain experts in this field. The second system, Design by Dragging, addresses the information overload associated with comparing and navigating many sets of interrelated simulations. Design by Dragging gives the designer the power to explore high-dimensional simulation design spaces using natural direct manipulation interactions. This system is applied to several problems in medical device design and in visual effects simulation, and a domain expert evaluation is presented. The big data paradigm is integrally tied to the future of computing, and the major contribution of this dissertation is its investigation into the effectiveness of natural user interfaces as a means of working in this paradigm. Although natural user interfaces have become ubiquitous in our daily lives, they are typically used only for simple interactions. This dissertation demonstrates that these technologies can also be effective in aiding design work in the context of big data, a result that could shape the future of computing and change the way designers work with computers.

Item: Seeing and Working with 3D Data in Virtual Reality (2021-03). Kim, Kyungyoon.

With recent advances in 3D data capturing technology and simulation, such as motion capture and laser scanning, a large portion of the data analysis process in many disciplines, including medicine, mechanical engineering, and aerospace dynamics, involves examining and working with complex 3D data. Moving on from the traditional 2D keyboard-mouse environment, visualization researchers are seeing great benefits in utilizing VR technologies to visualize 3D data. However, designing VR environments that help users exploit these high-quality data is challenging for multiple reasons. First, seeing (i.e., visualizing) 3D data requires careful consideration depending on the task and the type or complexity of the data. Second, although 3D user interfaces theoretically provide great advantages for manipulating 3D data more directly and intuitively, working with 3D data lacks precision compared to keyboard-mouse interfaces, for example because of hand jitter and the lack of haptic feedback to support the movements. Motivated by these challenges, this dissertation advances computational tools for seeing, for working, and for combining advanced seeing and working techniques together to enable interactive analyses of complex data. Advances to the science of data visualization address the hard problem of comparing multiple 3D and 4D datasets (i.e., comparative visualization) and include new theory (a taxonomy of fundamental approaches and a survey of related work) as well as a real-world application to analyzing multiple phases of ancient architecture in VR. Advances to the science of 3D user interfaces address the long-standing problem of precise and accurate 3D object manipulation in virtual environments, including an application to improving 2D-3D shape matching, as needed in clinical medical imaging. The dissertation culminates by demonstrating that 3D data analyses that require advanced seeing and working simultaneously are, indeed, more challenging than when these tasks are performed in isolation, and that by combining and evaluating advanced visualizations and advanced interaction techniques together, we can help scientists to see and work with 3D data in virtual reality more effectively, differently, and with new perspectives.

Item: Transit Service Frequency App: A Global Transit Innovations (GTI) Data System (Center for Transportation Studies, University of Minnesota, 2018-11). Fan, Yingling; Wiringa, Peter; Guthrie, Andrew; Ru, Jingyu; He, Tian; Kne, Len; Crabtree, Shannon.

The Transit Service Frequency App hosts stop- and alignment-level service frequency data from 559 transit providers around the globe who have published route and schedule data in the General Transit Feed Specification (GTFS) format through the TransitFeeds website, a global GTFS clearinghouse. Stop- and alignment-level service frequency is defined as the total number of transit routes and transit trips passing through a specific alignment segment or a specific stop location. Alignments are generalized, and nearby stops are aggregated. The app makes the data easily accessible through visualization and download tools. It allows the user to identify stop and alignment frequency at thousands of locations around the globe, as well as export data for cross analysis using GIS technology.
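The stop-level frequency idea in the item above can be sketched directly from GTFS files: count the trips (and distinct routes) serving each stop by joining stop_times.txt with trips.txt. The file paths are placeholders, and calendar/day-of-week filtering, stop aggregation, and alignment handling are omitted.

```python
# Sketch: compute stop-level service frequency from a GTFS feed by counting
# trips and distinct routes serving each stop. Paths are placeholders;
# calendar filtering and alignment/stop aggregation are omitted.
import pandas as pd

stop_times = pd.read_csv("gtfs/stop_times.txt", usecols=["trip_id", "stop_id"])
trips = pd.read_csv("gtfs/trips.txt", usecols=["trip_id", "route_id"])

# Attach each stop event to its route, then count per stop.
events = stop_times.merge(trips, on="trip_id")
frequency = (events.groupby("stop_id")
                   .agg(n_trips=("trip_id", "nunique"),
                        n_routes=("route_id", "nunique"))
                   .reset_index()
                   .sort_values("n_trips", ascending=False))

print(frequency.head())
```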
Item: Visualizing patterns in U.S. Urban population trends (2009-01). Schroeder, Jonathan Paul.

With the completion of the U.S. National Historical Geographic Information System (NHGIS), it is now feasible to assemble a large dataset of historical census tract population statistics and boundary data in order to investigate patterns in long-term urban population trends. The present study makes use of this new resource to achieve a broad but concise overview of population trend patterns throughout major U.S. urban areas since 1950. This work thereby makes both methodological and substantive contributions to multiple fields of research, with much of the work dedicated to the development and assessment of new techniques to address two key methodological challenges. The first challenge is to construct a time series of census tract data, which requires linking data through time even where tract boundaries have changed. I present a few relatively simple areal interpolation techniques that can be used to address this problem. Two case studies indicate that a novel technique, cascading density weighting, should be effective both in the present setting and potentially elsewhere. The second methodological challenge is to identify an effective visualization strategy for investigating patterns in long-term trends. I present here a new conceptual framework that identifies a group of mapping techniques -- trend summary maps -- that should be most useful for visualizing patterns in trends. I provide an overview and assessment of several types of trend summary mapping techniques, and I introduce a novel technique, bicomponent trend mapping, which combines principal component analysis with bivariate choropleth mapping. This technique has several useful advantages not only for visualizing urban population trends but potentially in many other settings of spatio-temporal data visualization as well. Applying the new techniques to historical census tract data enables the central substantive contribution of this research: an overview of population trend variations throughout major U.S. urban cores. This overview supports the standard narrative of recent urban population dynamics -- growth on the outskirts, decline in the cores, and some regrowth in centers -- but it also reveals many regionally and locally unique patterns, indicating both divergence among cities and increasing heterogeneity within them.
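The bicomponent trend mapping idea in the item above can be sketched numerically: treat each tract's population time series as a feature vector, extract the first two principal components, and use the two component scores as the two axes of a bivariate choropleth. The toy data and the omission of the actual choropleth rendering are simplifications for illustration.

```python
# Numerical sketch of bicomponent trend mapping: summarize each tract's
# population time series with its first two principal component scores,
# which would then drive a bivariate choropleth. Toy data; no map rendering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = tracts, columns = census years (e.g., populations 1950-2000).
populations = np.array([
    [12000, 11000, 9500, 8000, 7000, 6500],     # core tract in decline
    [3000,  6000,  9000, 12000, 15000, 18000],  # growing outskirts tract
    [8000,  8200,  8100, 8300,  8200,  8400],   # stable tract
    [10000, 9000,  8500, 8700,  9500,  10500],  # decline, then regrowth
])

# Standardize each year, then keep two components summarizing the trends.
standardized = StandardScaler().fit_transform(populations)
scores = PCA(n_components=2).fit_transform(standardized)

# Each tract now has two scores; binning each score into classes (e.g., low /
# medium / high) would give the two axes of a bivariate choropleth map.
for tract, (pc1, pc2) in enumerate(scores):
    print(f"tract {tract}: PC1 = {pc1:+.2f}, PC2 = {pc2:+.2f}")
```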
Item: Visualizing Transportation Happiness in the Minneapolis-St. Paul Region (Center for Transportation Studies, University of Minnesota, 2020-03). Fan, Yingling; Ormsby, Travis; Wiringa, Peter; Liao, Chen-Fu; Wolfson, Julian.

This report describes the data and methods used to generate the interactive Minneapolis-St. Paul Transportation Happiness Map at https://maps.umn.edu/transportation-happiness. The map illustrates spatiotemporal differences in travelers' happiness ratings on the streets and roads in the Minneapolis-St. Paul metropolitan region. Map users can interactively explore street and road segments that are associated with positive and/or negative emotional experiences based upon their travel modes and travel time periods of interest. For policy makers who are interested in improving people's transportation happiness, the map provides important insights into road and street segments that are in need of closer investigation for future improvements.
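A minimal sketch of the kind of aggregation behind such a map: average happiness ratings per road segment, travel mode, and time period, which a web map could then symbolize. The column names and values below are invented for illustration and do not reflect the report's actual data pipeline.

```python
# Sketch of the aggregation behind a transportation-happiness map: mean rating
# per road segment, travel mode, and time period. Columns and values are
# invented for illustration only.
import pandas as pd

ratings = pd.DataFrame({
    "segment_id": ["S1", "S1", "S2", "S2", "S1", "S2"],
    "mode":       ["bike", "bike", "car", "car", "car", "bike"],
    "period":     ["AM", "AM", "PM", "PM", "AM", "PM"],
    "happiness":  [4.0, 3.5, 2.0, 2.5, 3.0, 4.5],
})

segment_summary = (ratings.groupby(["segment_id", "mode", "period"],
                                   as_index=False)
                          .agg(mean_happiness=("happiness", "mean"),
                               n_ratings=("happiness", "count")))
print(segment_summary)
```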