Browsing by Subject "Grasping"
Now showing 1 - 3 of 3
Item Characteristic information required for human motor control: Computational aspects and neural mechanisms (2010-08) Christopoulos, Vassilios N.

Motor behavior involves creating and executing appropriate action plans based on goals and relevant information. This information characterizes the state of the environment, the task, and the state of the actions performed. The perceptual system gathers this information from different sources: touch, vision, audition, smell, and taste. Despite the richness of the environment and the sophistication of our sensory system, it is not possible to extract a complete and accurate representation of the states required for motor behavior because of noise and ambiguity. Consequently, people effectively have "limited information" and therefore may not be certain about the outcomes of specific actions. For motor behavior to be robust to uncertainty, the brain needs to represent both the relevant states and their uncertainties, and it needs to build compensation for uncertainty into its motor strategy. Generating motor behavior requires the brain to convert goals and information into action sequences, and the flexibility of human motor behavior suggests that the brain implements a complex control model. The primary goal of this work is to improve the characterization of this control model by studying motor compensation for uncertainty and by determining the neural mechanisms underlying information processing and the control model. Part of this thesis focuses on human compensation strategies in natural tasks such as grasping. We experimentally tested the hypothesis that people compensate for object position uncertainty by adopting strategies that minimize its impact on grasp success. As hypothesized, we found that people compensate for object position uncertainty by approaching the object along the direction of maximal position uncertainty. Additionally, we modeled the grasping task within the optimal control framework and found that human strategies share many characteristics with optimal strategies for grasping objects under position uncertainty. We are also interested in understanding how the brain encodes and processes information relevant to movements. To accomplish this, we studied the spatial and temporal interactions of cortical regions underlying continuous and sequential movements using magnetoencephalography (MEG). In particular, we used data from a previous study in which subjects continuously copied a pentagon shape for 45 s using an XY joystick. Using Box-Jenkins time series analysis techniques, we found that neural interactions and the variability of movement direction are integrated in a feedforward-feedback scheme. MEG sensors related to the feedforward scheme were distributed around the left motor cortex and the cerebellum, whereas sensors related to the feedback scheme were concentrated around the parietal and temporal cortices.
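The central experimental finding above, that the hand approaches the object along the direction of maximal position uncertainty, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example, not code from the thesis: given a covariance matrix describing the uncertainty in the object's estimated position, it returns an approach direction along the principal eigenvector of that covariance.

```python
import numpy as np

def approach_direction(position_cov):
    """Return a unit vector along the direction of maximal position uncertainty.

    position_cov: 2x2 (or 3x3) covariance matrix of the object's estimated position.
    Illustrative sketch only; not the strategy model used in the thesis.
    """
    # Eigen-decomposition of the symmetric covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(position_cov)
    # eigh returns eigenvalues in ascending order; the last column is the
    # eigenvector of the largest eigenvalue, i.e., the most uncertain axis.
    principal_axis = eigvecs[:, -1]
    return principal_axis / np.linalg.norm(principal_axis)

# Example: position uncertainty is largest roughly along the x-axis.
cov = np.array([[4.0, 0.5],
                [0.5, 1.0]])
print(approach_direction(cov))  # ~[0.99, 0.16]: approach along the long axis
```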
Item Efficient Robotic Manipulation with Scene Knowledge (2023-05) Lou, Xibai

In recent years, robots have transformed manufacturing, logistics, and transportation. However, extending this success to unstructured real-world environments (e.g., domestic kitchens, warehouses, grocery stores) remains difficult due to three key challenges: (1) the assumption of structured environments (such as neatly arranged bottles in a factory); (2) hand-engineered solutions that are difficult to generalize to novel scenarios; and (3) the limited flexibility of action primitives, which prevents the robot from reaching target objects. In this thesis, we address these challenges by learning scene knowledge that improves the efficiency of robotic manipulation systems. Grasping is a fundamental manipulation skill that is constrained by the scene arrangement (i.e., the locations of the robot, the objects, and the environmental structures). Understanding scene knowledge, such as the robot's reachability to objects, is crucial for improving the robot's capability. We developed a reachability-aware grasp pose generator that predicts feasible six-degree-of-freedom (6-DoF) grasp poses (i.e., approaching from an arbitrary direction with an arbitrary wrist orientation). We then extended the approach to target-driven grasping in constrained environments and added collision awareness to our scene knowledge. When objects are densely cluttered, we improved the robot's efficiency by employing graph neural networks (GNNs) to exploit the underlying relationships in the scene. To accomplish complex manipulation tasks in constrained environments, such as rearranging adversarial objects, we hierarchically integrated a heterogeneous graph neural network (HetGNN)-based coordinator with 3D CNN-based actors. The system reasons about the relational knowledge between scene components and coordinates multiple robotic skills (e.g., grasping, pushing) to minimize the planning cost. As we anticipate an increase in the number of domestic robots, the robotics community needs a framework that not only commands the robot accurately but also reasons about the unstructured scene to improve the robot's efficiency. This thesis contributes to this goal by equipping robotic manipulation with learned scene knowledge. We present 6-DoF robotic systems that can grasp novel objects in dense clutter with reachability awareness, retrieve target objects within arbitrary structures, and rearrange multiple objects into goal configurations in constrained environments.

Item Generalized Environment-Enabled Object Grasping using a Fixture-Aware Double Deep Q-Network (2022-06) Sasagawa, Eddie

This thesis expands on the problem of grasping an object that a single parallel gripper can only grasp by harnessing a fixture (e.g., a wall or a heavy object). Preceding work that tackles this problem is limited in that the employed networks implicitly learn specific targets and fixtures to leverage. However, the notion of a usable fixture can vary across environments, at times without any outwardly noticeable differences. In this work, we propose a method that relaxes this limitation and further handles environments where the fixture location is unknown. The problem is formulated as visual affordance learning in a partially observable setting. We present a self-supervised reinforcement learning algorithm, the Fixture-Aware Double Deep Q-Network (FA-DDQN), that processes the scene observation to 1) identify the target object based on a reference image, 2) distinguish possible fixtures based on interaction with the environment, and finally 3) fuse this information into a visual affordance map that guides the robot to successful Slide-to-Wall grasps. We demonstrate our proposed solution in simulation and in real-robot experiments, showing that in addition to achieving higher success rates than the baselines, it also generalizes zero-shot to novel scenes with unseen object configurations.
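For readers unfamiliar with the double deep Q-network component underlying FA-DDQN, the sketch below shows the standard double-DQN bootstrap target in PyTorch. It illustrates the generic technique only; the network architecture, affordance-map outputs, and self-supervised reward used in the thesis are not reproduced here, and the names (q_online, q_target) are placeholders.

```python
import torch

def double_dqn_targets(q_online, q_target, next_states, rewards, dones, gamma=0.99):
    """Standard double-DQN bootstrap targets (generic sketch, not FA-DDQN itself).

    The online network selects the next action; the target network evaluates it.
    This decoupling reduces the overestimation bias of vanilla DQN.
    """
    with torch.no_grad():
        # Action selection with the online network.
        next_actions = q_online(next_states).argmax(dim=1, keepdim=True)
        # Action evaluation with the (slowly updated) target network.
        next_q = q_target(next_states).gather(1, next_actions).squeeze(1)
        # Bootstrapped target; terminal transitions contribute only the reward.
        return rewards + gamma * (1.0 - dones) * next_q
```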