One of the primary reasons robotics and autonomy problems are hard is that the world is incompletely observed. For example, there is no oracle telling a factory robot the exact location of a transmission relative to the engine block to which it must be mounted. This essential information must be estimated using perception techniques that may fail. In these contexts, it is insufficient to identify only the most likely state of the world; the system must be aware of what it knows and what it does not. Furthermore, when the state of the world is uncertain, the system must be capable of acting to gain information. We have developed a new approach to planning under uncertainty that performs well in continuous state, action, and observation spaces and over long time horizons. Although we sacrifice optimality, we still provide technical guarantees on convergence and performance.
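One way to see why planning in belief space can become tractable is that, if the planner assumes it will always receive the maximum likelihood observation (the assumption named in the RSS 2010 citation below), the belief dynamics become deterministic and can be rolled out like ordinary state dynamics. The following is a minimal sketch of that idea for a scalar linear-Gaussian toy system; the dynamics, noise constants, and control sequence are illustrative assumptions, not the system described here.

```python
import numpy as np

# Scalar linear-Gaussian toy problem: state x, control u, noisy sensing.
A, B, C = 1.0, 1.0, 1.0     # dynamics x' = A x + B u, observation z = C x
Q, R = 0.01, 0.25           # process and measurement noise variances

def belief_step(mu, sigma, u):
    """Propagate a Gaussian belief (mu, sigma) one step, assuming the
    maximum likelihood observation z = C * mu_pred is received.
    Under that assumption the innovation is zero, so the mean follows
    the deterministic dynamics while the covariance contracts as usual."""
    mu_pred = A * mu + B * u
    sigma_pred = A * sigma * A + Q
    K = sigma_pred * C / (C * sigma_pred * C + R)   # Kalman gain
    mu_new = mu_pred + K * 0.0                      # ML observation: zero innovation
    sigma_new = (1.0 - K * C) * sigma_pred
    return mu_new, sigma_new

# Because the belief dynamics are now deterministic, a planner can score a
# candidate control sequence by rolling the belief forward and inspecting
# the resulting mean and covariance directly.
mu, sigma = 0.0, 1.0
for u in [0.5, 0.5, 0.0]:
    mu, sigma = belief_step(mu, sigma, u)
print(mu, sigma)
```

In the full approach, replanning handles the fact that the real observation will generally differ from the maximum likelihood one.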
Our approach has important implications for robot grasping, manipulation, and assembly. A central challenge in robot manipulation is robustness: we need methods that can guarantee that a large fraction of manipulation attempts will succeed. Our work provides an avenue toward achieving this. In our approach, the robot "knows what it knows" and is capable of taking information-gathering actions as necessary. As a result, we can guarantee a specified minimum likelihood of success with respect to the modeled, but unknown, variables. For example, a robot tasked with grasping an object can continue taking information-gathering actions until it is sufficiently confident of success.
The animation at right illustrates our approach. There are two boxes at the top and a robot end-effector moving at the bottom. Mounted on the end-effector is a laser scanner that produces the moving red scan dots in the image. The objective is to simultaneously localize and grasp the box on the right. The system has a model of what the laser range finder will see in different box configurations. It also has a model of how the boxes will move (based on an assumed center of friction) when pushed. Initially, the system expects that the boxes will be separated by a large gap and that it will be able to see them well. However, when the system looks, it finds that the boxes are too close together. Therefore, it pushes the left box out of the way and, as a result, subsequently localizes the right box sufficiently well. Notice that in this example there are no predefined "pushing" action primitives; the system is actively reasoning about every aspect of the information gathering.
Platt, R., Tedrake, R., Kaelbling, L., Lozano-Perez, T., Belief space planning assuming maximum likelihood observations, Proceedings of Robotics: Science and Systems 2010 (RSS), Zaragoza, Spain, June 27, 2010.
In the last decade, Bayesian inference has been successfully applied to simultaneous localization and mapping (SLAM) problems in mobile robotics. However, these approaches are infrequently applied to manipulation problems. At NASA, we explored state estimation in the context of manipulating soft or flexible materials. This turns out to be an extremely important problem: some of the most ergonomically challenging tasks that automotive factory workers perform involve mounting flexible materials such as cables, bags, or covers, work that exposes them to repetitive-motion injury.
At NASA, working jointly with General Motors, my collaborators and I have developed strategies for using Bayesian filtering to localize buttons or grommets in flexible materials using touch sensors. The sequence of images below illustrates an application of the technique to a grommet insertion task. The key to this work was modeling how the material feels based on training data, rather than attempting to analytically model the unpredictable mechanics of interacting with flexible materials. To our knowledge, this is the first application of Bayesian filtering to the problem of interpreting subtle tactile information. The research has resulted in an application that enables Robonaut 2 to autonomously locate a snap or grommet embedded in fabric and mate it with a fastener. It is among many capabilities that may be demonstrated aboard the International Space Station when Robonaut 2 travels there in December 2010.
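To illustrate the kind of filtering involved, here is a minimal discrete Bayes filter that localizes a feature along one dimension from tactile readings. The `learned_likelihood` function is a hypothetical stand-in: in the actual work the measurement model is learned from tactile training data, whereas here it is a synthetic Gaussian bump around an assumed grommet location.

```python
import numpy as np

# Hypothetical 1-D sketch: localize a grommet along a seam by touch.
positions = np.linspace(0.0, 1.0, 101)              # candidate locations (m)
belief = np.ones_like(positions) / len(positions)   # uniform prior

def learned_likelihood(touch_reading, pos):
    """Stand-in for a data-driven model of how the fabric 'feels' at pos.
    A real version would be a density estimated from labeled tactile
    training samples; here the material 'feels' stiffer near 0.6."""
    expected = np.exp(-((pos - 0.6) ** 2) / 0.005)
    return np.exp(-((touch_reading - expected) ** 2) / 0.02)

def bayes_update(belief, touch_reading):
    """Standard Bayes rule: multiply prior by likelihood, renormalize."""
    belief = belief * learned_likelihood(touch_reading, positions)
    return belief / belief.sum()

# Each probe sharpens the belief about where the grommet is.
for reading in [0.95, 1.0, 0.9]:        # simulated stiff-feeling readings
    belief = bayes_update(belief, reading)
print(positions[np.argmax(belief)])     # peaks near the assumed grommet at 0.6
```

The important point, as in the work described above, is that the likelihood function comes from data rather than from an analytical contact model.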
Platt, R., Permenter, F., Pfeiffer, J., Using touch to localize flexible materials during manipulation, IEEE Transactions on Robotics, Special issue on a robotic sense of touch. (Conditionally accepted).
As part of my graduate work at UMass Amherst, I explored a special-purpose approach to managing the partially observable nature of robot grasping problems. Rather than confronting the grasp problem directly, we reduced grasping to a fully observable projection of the original problem that is solvable using standard control methods. In the reduced problem, controller state is always measurable using fingertip force sensors. We demonstrated that solutions to the reduced problem are also solutions to the original problem and that the resulting grasp controller is guaranteed to eventually reach these solutions. The approach was validated on Dexter, a robot at UMass, and found to be very effective in practice. Essentially, the robot "feels" its way into a grasp configuration.
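As a toy illustration of the idea, consider two fingertips on a circular object, where each contact's surface normal is directly measurable, as it would be with fingertip force sensors. The sketch below greedily descends a grasp error that vanishes when the two normals oppose each other; the circular geometry, the error function, and the descent scheme are simplified stand-ins for exposition, not the controller from the paper.

```python
import numpy as np

# Two fingertips on a unit circle; each contact is parameterized by an
# angle, and the controller observes only the local surface normals.

def surface_normal(angle):
    """Outward surface normal at a contact on the circle: the kind of
    quantity a fingertip force sensor can measure directly."""
    return np.array([np.cos(angle), np.sin(angle)])

def grasp_error(a1, a2):
    """Residual of the opposition condition n1 + n2 = 0, which holds
    for an antipodal (squeezing) grasp on the circle."""
    return np.linalg.norm(surface_normal(a1) + surface_normal(a2))

# Gradient-free greedy descent: nudge each contact along the surface
# whenever the nudge reduces the measured grasp error.
a1, a2 = 0.3, 2.0          # initial (poor) contact placement, radians
step = 0.05
for _ in range(200):
    for da1, da2 in [(step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)]:
        if grasp_error(a1 + da1, a2 + da2) < grasp_error(a1, a2):
            a1, a2 = a1 + da1, a2 + da2
print(grasp_error(a1, a2))   # small: normals nearly oppose
```

The point of the sketch is that the error being descended is computable from contact measurements alone, so the controller never needs the full (unobserved) object pose.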
Platt, R., Fagg, A. H., Grupen, R., Null Space Grasp Control: Theory and Experiments, IEEE Transactions on Robotics, Vol. 26, No. 2, April 2010.