Northeast Robotics Colloquium, Second Edition
October 6, 2013
Northwest Building
Harvard University
52 Oxford St
Cambridge, MA 02138


Invited Speakers

Prof. John Leonard

Title: A Long-term View of SLAM


This talk will provide a long-term view of the Simultaneous Localization and Mapping (SLAM) problem in robotics. The first part of the talk will review the history of SLAM research and define some of its major challenges, including choosing a map representation, developing algorithms for efficient state estimation, and solving data association and loop closure. Next, we will give a snapshot of current state-of-the-art research in SLAM based on joint work between MIT and the National University of Ireland, Maynooth. We will describe a new technique for visual SLAM that uses a reduced pose graph representation to achieve temporally scalable performance. Unlike previous visual SLAM approaches that maintain static keyframes, our approach uses new measurements to continually improve the map, yet remains efficient by not adding redundant frames and by avoiding marginalization to reduce the graph. We demonstrate long-term mapping of a large multi-floor building, the MIT Stata Center, using approximately nine hours of data collected over the course of six months.
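The reduced pose graph idea can be illustrated with a minimal sketch (not the MIT/NUIM implementation; the class and method names here are hypothetical): when a new frame matches an already-mapped place, only a constraint between existing nodes is added, so the graph grows with the size of the environment rather than with the duration of operation.

```python
# Hypothetical sketch of a reduced pose graph. Frames that revisit a
# known place contribute constraints between existing nodes instead of
# spawning new nodes, keeping the graph temporally scalable.

class ReducedPoseGraph:
    def __init__(self):
        self.nodes = []   # pose node ids
        self.edges = []   # (from_id, to_id, measurement) constraints

    def add_frame(self, measurement, matched_node=None, last_node=None):
        """Register a new frame.

        If the frame matches an existing node (a revisit / loop closure),
        add only a constraint between existing nodes; otherwise create a
        new node connected to its predecessor by odometry.
        """
        if matched_node is not None and last_node is not None:
            # Redundant frame: reuse the existing node, add a constraint.
            self.edges.append((last_node, matched_node, measurement))
            return matched_node
        # Novel place: extend the graph with a new node.
        node_id = len(self.nodes)
        self.nodes.append(node_id)
        if node_id > 0:
            self.edges.append((node_id - 1, node_id, measurement))
        return node_id
```

In this sketch, continued operation in an already-mapped area adds measurements (which continually refine the pose estimates) without adding nodes, which is the property that distinguishes the reduced pose graph from approaches that either keep every frame or discard revisit information.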

A major recent innovation in SLAM is the development of real-time dense mapping systems using RGB-D cameras. We will describe Kintinuous, a new SLAM system capable of producing high-quality, globally consistent surface reconstructions over hundreds of meters in real time with only a cheap commodity RGB-D sensor. By using a fused volumetric surface reconstruction we achieve a much higher-quality map than would be obtained from raw RGB-D point clouds. The approach is based on three key innovations in volumetric fusion-based SLAM: (1) a GPU-based 3D cyclical buffer trick that extends dense volumetric fusion of depth maps to an unbounded spatial region; (2) the combination of dense geometric and photometric camera pose constraints; and (3) the efficient application of loop closure constraints via an “as-rigid-as-possible” space deformation. Experimental results will be presented for a wide variety of data sets to demonstrate the system's performance in trajectory estimation, map quality, and computational performance.
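The cyclical-buffer trick in innovation (1) can be sketched in one dimension (a simplified, hypothetical illustration, not the GPU implementation): a fixed-size voxel volume is addressed modulo its length, so as the sensor moves forward, voxels that fall behind are extracted into the persistent map and their slots are reused for newly observed space, yielding an unbounded map from a bounded volume.

```python
# Hypothetical 1-D sketch of a cyclical voxel buffer: a fixed-size
# volume indexed modulo its length follows the sensor; shifted-out
# voxels are extracted (e.g. as mesh) and their slots recycled.

class CyclicalVolume:
    def __init__(self, size):
        self.size = size
        self.voxels = [None] * size   # fused values (stand-in for TSDF)
        self.origin = 0               # world index of the volume's start
        self.extracted = []           # (world_index, value) shifted out

    def fuse(self, world_index, value):
        """Fuse a measurement at an absolute world position."""
        if not (self.origin <= world_index < self.origin + self.size):
            raise ValueError("position outside the current volume")
        self.voxels[world_index % self.size] = value  # modular addressing

    def shift(self, n):
        """Advance the volume by n voxels, recycling the slots left behind."""
        for i in range(self.origin, self.origin + n):
            slot = i % self.size
            if self.voxels[slot] is not None:
                self.extracted.append((i, self.voxels[slot]))
            self.voxels[slot] = None  # slot now represents new space ahead
        self.origin += n
```

The key point is that no voxel data is ever copied when the volume moves; only the origin changes, and the same memory is reinterpreted as covering the region ahead of the sensor.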

We will conclude the talk with a discussion of current and future research topics, including object-based and semantic mapping, lifelong learning, and advanced physical interaction with the world.

Joint work with Hordur Johannsson, Tom Whelan, Michael Kaess, Maurice Fallon, John McDonald, David Rosen, Mark VanMiddlesworth, Ross Finman and Paul Huang.

For more information, see video 1 and video 2.


John J. Leonard is Professor of Mechanical and Ocean Engineering in the MIT Department of Mechanical Engineering. He is also a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research addresses the problems of navigation, mapping, and persistent autonomy for autonomous mobile robots. He holds the degrees of B.S. in Electrical Engineering and Science from the University of Pennsylvania (1987) and D.Phil. in Engineering Science from the University of Oxford (1994). He is the recipient of a Thouron Award (1987), an NSF CAREER Award (1998), a Science Foundation Ireland E.T.S. Walton Visitor Award (2004), the Best Paper Award at ACM SenSys in 2004 (shared with D. Moore, D. Rus, and S. Teller), the Best Student Paper Award at IEEE ICRA 2005 (with R. Eustice and H. Singh), and the King-Sun Fu Memorial Best Transactions on Robotics Paper Award in 2006 (shared with R. Eustice and H. Singh).
