Julian Ryde

Research

Research Overview

My research focus is Mobile Autonomous System Perception. This has been principally driven by the challenges mobile robots encounter when attempting to move through the real world.

An example problem is how to localise a mobile robot in the world accurately enough for it to interact with objects and navigate to requested locations. This is especially difficult indoors, without access to GPS signals. One prevalent solution is to build a map and then localise within that map, a process dubbed SLAM (Simultaneous Localisation and Mapping) in the robotics literature.
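The "build a map, then localise within it" idea can be illustrated with a minimal sketch. This is not any particular SLAM system, just a toy occupancy map stored as a set of grid cells; the names (Map, localise) and the brute-force pose scoring are illustrative assumptions.

```python
import math

class Map:
    """Toy occupancy map: occupied space as a set of integer grid cells."""

    def __init__(self, resolution=0.1):
        self.resolution = resolution
        self.cells = set()

    def _cell(self, wx, wy):
        return (round(wx / self.resolution), round(wy / self.resolution))

    def add_scan(self, pose, points):
        """Insert range points (given in the robot frame) at a known pose."""
        x, y, theta = pose
        c, s = math.cos(theta), math.sin(theta)
        for px, py in points:
            self.cells.add(self._cell(x + c * px - s * py, y + s * px + c * py))

    def score(self, pose, points):
        """Count scan points that land on occupied cells for a candidate pose."""
        x, y, theta = pose
        c, s = math.cos(theta), math.sin(theta)
        return sum(
            self._cell(x + c * px - s * py, y + s * px + c * py) in self.cells
            for px, py in points
        )

def localise(m, points, candidate_poses):
    """Localisation step: pick the pose whose scan best matches the map."""
    return max(candidate_poses, key=lambda p: m.score(p, points))
```

Real systems replace the brute-force candidate search with scan matching or probabilistic filtering, and build the map and pose estimates jointly rather than in two separate passes.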

Perception is regarded as the first level of processing of sensor data: the conversion of data into information. There are elements of feedback, where knowledge influences sensing, but it does not involve higher-level cognitive functions.

The sensors I have worked with to supply data to mobile platforms and algorithms include:

  • 2D laser range scanners
  • Custom 3D laser range scanners
  • Hyperspectral sensors (GeoSensing)
  • Conventional cameras
  • RGB-D sensors

Cooperative 3D mapping for multiple mobile robots was first achieved during my PhD in 2005, made possible by inexpensive augmentation of 2D laser scanners. This setup was then used to investigate cooperative mapping and mutual localisation.

Internal coherent world representations

  • Multi-resolution occupied voxel lists
  • Non-cubic lattices
  • Edge voxel mapping
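The first of these representations can be sketched simply: occupied space is stored as sets of integer voxel coordinates, maintained in parallel at several resolutions so that queries can trade precision for speed. The class and parameter names below are illustrative assumptions, not taken from any published implementation.

```python
import math

class MultiResVoxelList:
    """Sketch of a multi-resolution occupied voxel list: one set of
    occupied integer voxel coordinates per resolution level."""

    def __init__(self, resolutions=(0.4, 0.1)):
        self.resolutions = resolutions
        self.levels = {r: set() for r in resolutions}

    @staticmethod
    def _voxel(point, r):
        # Quantise a 3D point to the integer voxel containing it.
        x, y, z = point
        return (math.floor(x / r), math.floor(y / r), math.floor(z / r))

    def insert(self, point):
        """Mark the voxel containing this point as occupied at every level."""
        for r in self.resolutions:
            self.levels[r].add(self._voxel(point, r))

    def occupied(self, point, resolution):
        """Check occupancy at a chosen resolution (coarse levels are cheap
        but conservative; fine levels are precise)."""
        return self._voxel(point, resolution) in self.levels[resolution]
```

Storing occupancy as sparse sets of voxel coordinates, rather than a dense 3D array, keeps memory proportional to the occupied surface rather than the mapped volume.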

Reducing the cost of range sensing

During my PhD I also experimented with Robcamscan.

Current Research

Conventional computer vision approaches involve establishing correspondences in high-texture regions.

  • Low texture operation
  • Mutual Localisation with cameras
  • Occlusion Edge determination
    • 3D voxel data
    • Video
    • Stereo images

Looking to the future

I, along with many others, am excited by the potential impact of inexpensive RGB-D sensors on robotics.