Browsing by Subject "path planning"
Now showing 1 - 2 of 2
Item: Data-Driven Analysis and Insight of Human Motion (2020-01)
Sohre, Nicholas

Motion is a central element of the human experience. Artificial Intelligence (AI) and robotics technologies continue to transform society, but work is needed to enable solutions that engage with our motion-driven reality. Critical to understanding human motion is the ability to model and accurately simulate virtual humans. To that end, my thesis provides data-driven analysis and insight for human motion. I identify two key aspects of realistic human motion simulations: being natural in appearance while covering the rich variety of motions exhibited by humans. I describe how motion data can be leveraged both to simulate realistic motion and to validate simulation realism, through a combination of data-driven analysis and user studies. Computational methods for human motion are largely studied in the context of computer graphics and virtual character animation. Drawing from and expanding on work in this field, my work applies data-driven methods for simulating humans in several settings: facial motion, local crowd simulation, and global navigation. The methods and analysis in this dissertation contribute to the fields of AI, robotics, and computer graphics in supporting my thesis that data-driven methods can be used to create and validate realistic simulations of human motion.

In the first part of my thesis, I study the simulation of realistic human smiles by conducting a large user study connecting observer reactions to computer-animated faces. The result is a rich dataset of value to interdisciplinary research beyond this thesis. I use the data to train a generative model with a new machine learning heuristic (PVL) that I develop, which tunes the trade-offs in creating a variety of happy smiles. I validate the realism of the PVL results with a follow-up user study.
The second part of my thesis studies the simulation of realistic human navigation. I perform a data-driven evaluation of the impact of collision avoidance on user experiences in virtual reality (VR), validating its importance for enabling the feeling of presence. I leverage motion data of shoppers to drive new insights into human navigation decisions, discovering an entropy law governing item retrieval patterns. Finally, I present a deep-learning technique (SPNets), trained on optimal paths, for simulating realistic human navigation behaviors in indoor settings. The resulting agents exhibit several human-like behaviors, such as intelligent backtracking, narrowing down goal locations, and environment familiarity. I validate the realism of SPNet simulations using paths from a user study on the same navigation tasks.

Item: Learning of Unknown Environments in Goal-Directed Guidance and Navigation Tasks: Autonomous Systems and Humans (2017-12)
Verma, Abhishek

Guidance and navigation in unknown environments require learning the task environment simultaneously with path planning. Autonomous guidance in unknown environments requires real-time integration of environment sensing, mapping, planning, trajectory generation, and tracking. For brute-force optimal control, the spatial environment must be mapped accurately. Real-world environments are in general cluttered, complex, unknown, and uncertain. An accurate model of such environments requires storing an enormous amount of information, which must then be processed in the optimal control formulation; this is computationally expensive and inefficient for the online operation of autonomous guidance systems. In contrast, humans and animals are generally able to navigate efficiently in unknown, complex, and cluttered environments. Like autonomous guidance systems, humans and animals do not have unlimited information-processing and sensing capacities, due to their biological and physical constraints.
Therefore, it is relevant to understand the cognitive mechanisms that help humans learn and navigate efficiently in unknown environments. Such understanding can help in designing computationally efficient planning algorithms, as well as in improving human-machine interfaces, in particular those between operators and autonomous agents. This dissertation is organized into three parts: 1) computational investigation of environment learning in guidance and navigation (chapters 3 and 4), 2) investigation of human environment learning in guidance tasks (chapters 5 and 6), and 3) an autonomous guidance framework based on a graph representation of the environment using subgoals that are invariants in agent-environment interactions (chapter 7). In the first part, the dissertation presents a computational framework for learning autonomous guidance behavior in unknown or partially known environments. The learning framework uses a receding-horizon trajectory optimization associated with a spatial value function (SVF). The SVF describes optimal (e.g., minimum-time) guidance behavior, represented as the cost and velocity at any point in geographical space for reaching a specified goal state. For guidance in unknown environments, a local SVF based on the current vehicle state is updated online using environment data from onboard exteroceptive sensors. The proposed learning framework has the advantage that it learns information directly relevant to the optimal guidance and control behavior, enabling optimal trajectory planning in unknown or partially known environments. The learning framework is evaluated by measuring performance over successive runs in a 3-D indoor flight simulation. The test vehicle in the simulations is a Blade-Cx2 coaxial miniature helicopter. The environment is a priori unknown to the learning system. The dissertation investigates changes in performance, dynamic behavior, the SVF, and control behavior in the body frame as a result of learning over successive runs.
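The receding-horizon loop described in this abstract (sense the environment, update a local value function, replan from the current state) can be sketched in simplified form. The sketch below is my own illustration, not the dissertation's implementation: it uses a 2-D grid with a minimum-step SVF computed by breadth-first search, optimistically treats unsensed cells as free, and takes one greedy steepest-descent step per cycle; all function names are hypothetical.

```python
from collections import deque

def compute_svf(known_obstacles, goal, size):
    """Minimum-step spatial value function to the goal over all cells
    not known to be blocked (unsensed cells are optimistically free)."""
    svf = {goal: 0}
    frontier = deque([goal])
    while frontier:
        x, y = cell = frontier.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in svf and nxt not in known_obstacles):
                svf[nxt] = svf[cell] + 1
                frontier.append(nxt)
    return svf

def sense(pos, true_obstacles, radius=2):
    """Reveal the true obstacles inside a square sensor footprint."""
    return {o for o in true_obstacles
            if abs(o[0] - pos[0]) <= radius and abs(o[1] - pos[1]) <= radius}

def navigate(start, goal, true_obstacles, size=10, max_steps=200):
    """Receding-horizon loop: sense, rebuild the local SVF from what is
    known so far, then take a greedy steepest-descent step."""
    pos, known, path = start, set(), [start]
    for _ in range(max_steps):
        if pos == goal:
            break
        known |= sense(pos, true_obstacles)
        svf = compute_svf(known, goal, size)
        x, y = pos
        reachable = [n for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                     if n in svf]
        if not reachable:  # goal cut off by known obstacles
            break
        pos = min(reachable, key=svf.get)
        path.append(pos)
    return path
```

Each cycle recomputes the value function only from obstacles sensed so far, mirroring how the local SVF is updated online from exteroceptive sensor data; a real guidance system would replace the BFS step with trajectory optimization over the vehicle dynamics.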
In the second part, the dissertation focuses on modeling and evaluating how a human operator learns an unknown task environment in goal-directed navigation tasks. Previous studies have shown that human pilots organize their guidance and perceptual behavior using interaction patterns (IPs), i.e., invariants in their sensory-motor processes during interactions with the task space. However, those studies were performed in known environments. In this dissertation, the concept of IPs is used to build a modeling and analysis framework for investigating human environment learning and decision-making in navigation of unknown environments. This approach emphasizes the agent dynamics (e.g., a vehicle controlled by a human operator), which is not typical in simultaneous navigation and environment learning studies. The framework is applied to analyze human data from simulated first-person guidance experiments in an obstacle field. Subjects were asked to perform multiple trials and find minimum-time routes between prespecified start and goal locations without a priori knowledge of the environment. They used a joystick to control flight behavior and navigate the environment. In the third part, the subgoal graph framework used to model and evaluate humans is extended to an autonomous guidance algorithm for navigation in unknown environments. The autonomous guidance framework based on the subgoal graph is an improvement over the SVF-based guidance and learning framework presented in the first part: the latter uses a grid representation of the environment, which is computationally costly in comparison to the graph-based guidance model.
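The computational advantage of a subgoal graph over a grid representation can be illustrated with a small sketch (my own, not the dissertation's algorithm): instead of storing a value at every grid cell, the environment is reduced to a handful of subgoal nodes, and planning becomes a shortest-path search over that graph. The subgoal names and coordinates below are hypothetical stand-ins for invariants such as doorways or obstacle gaps.

```python
import heapq
from math import hypot

def dijkstra(graph, start, goal):
    """Shortest path through a weighted subgoal graph.
    graph: dict mapping node -> list of (neighbor, edge_cost) pairs."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph[node]:
            if nbr not in visited:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical subgoals (e.g., doorways) with 2-D positions.
points = {"start": (0, 0), "door_a": (4, 1), "door_b": (4, 6), "goal": (9, 5)}

def edge(a, b):
    """Euclidean edge cost between two subgoals."""
    return hypot(points[a][0] - points[b][0], points[a][1] - points[b][1])

graph = {
    "start": [("door_a", edge("start", "door_a")), ("door_b", edge("start", "door_b"))],
    "door_a": [("door_b", edge("door_a", "door_b")), ("goal", edge("door_a", "goal"))],
    "door_b": [("goal", edge("door_b", "goal"))],
    "goal": [],
}

cost, route = dijkstra(graph, "start", "goal")
```

Because the search visits only the few subgoal nodes rather than every cell of a dense grid, the planning cost scales with the number of subgoals, which is the efficiency gain the abstract attributes to the graph-based model over the SVF grid.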