Master's Degree Academic

PROJECT DETAILS

The above photo shows me next to Lincoln's robotic platform, 'Linda', on which my MSc by Research project was based. She was originally one of six identical robots at institutes across Europe participating in the STRANDS project, though the family has since grown to eight. At its core, Linda relies on well-established, robust methods of spatial navigation, predominantly AMCL (Adaptive Monte Carlo Localisation). The yellow laser at the base of the robot measures distances to obstacles across a horizontal plane; tied in with odometry information from the wheels, this is sufficient to approximate the robot's location and movement in a stable, rigid environment. However, in a human-populated environment such as that at L-CAS (the Lincoln Centre for Autonomous Systems), occlusion and environmental change, on top of sensor noise, caused regular deviations from the robot's ground-truth (real) location.
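
For the curious, AMCL is at heart a particle filter: it keeps a cloud of weighted pose hypotheses, drifts them with odometry, reweights them against the laser scan, and resamples. Below is a minimal sketch of that loop, assuming a stubbed map ray-cast; Linda of course runs an off-the-shelf AMCL implementation (the ROS amcl node), not hand-rolled code like this.

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// One localisation hypothesis: a pose on the map plus an importance weight.
struct Particle {
    double x, y, theta, weight;
};

std::mt19937 rng{42};

// Motion update: propagate every particle by the odometry reading,
// perturbed by Gaussian noise to model wheel slip and encoder error.
void motionUpdate(std::vector<Particle>& ps, double d_trans, double d_rot) {
    std::normal_distribution<double> trans_noise(0.0, 0.02);  // metres
    std::normal_distribution<double> rot_noise(0.0, 0.01);    // radians
    for (auto& p : ps) {
        p.theta += d_rot + rot_noise(rng);
        double d = d_trans + trans_noise(rng);
        p.x += d * std::cos(p.theta);
        p.y += d * std::sin(p.theta);
    }
}

// expectedRange would ray-cast from the particle's pose into the
// occupancy-grid map; it is stubbed here to keep the sketch self-contained.
double expectedRange(const Particle&) { return 1.0; }

// Measurement update: reweight particles by how well an observed laser
// range matches the range the map predicts from each particle's pose.
// (A production filter would guard against the total underflowing to zero.)
void measurementUpdate(std::vector<Particle>& ps, double observed_range) {
    const double sigma = 0.1;  // laser noise model (metres)
    double total = 0.0;
    for (auto& p : ps) {
        double err = observed_range - expectedRange(p);
        p.weight *= std::exp(-(err * err) / (2 * sigma * sigma));
        total += p.weight;
    }
    for (auto& p : ps) p.weight /= total;  // normalise
}

// Low-variance (systematic) resampling: concentrate particles in
// high-weight regions while keeping the population size fixed.
void resample(std::vector<Particle>& ps) {
    std::vector<Particle> next;
    next.reserve(ps.size());
    std::uniform_real_distribution<double> u(0.0, 1.0 / ps.size());
    double r = u(rng), c = ps[0].weight;
    std::size_t i = 0;
    for (std::size_t m = 0; m < ps.size(); ++m) {
        double target = r + m / double(ps.size());
        while (target > c) c += ps[++i].weight;
        next.push_back(ps[i]);
        next.back().weight = 1.0 / ps.size();
    }
    ps = std::move(next);
}

int main() {
    // In a real recovery the particles would be scattered across the map;
    // one stacked initial guess keeps the example short.
    std::vector<Particle> ps(500, {0.0, 0.0, 0.0, 1.0 / 500});
    motionUpdate(ps, 0.1, 0.05);  // robot drove 10 cm and turned ~3 degrees
    measurementUpdate(ps, 1.0);   // one (stubbed) laser reading
    resample(ps);
}
```

Resampling is what lets the filter absorb moderate noise; a sustained mismatch between laser and map, as in the occlusion-heavy office described above, can still starve the filter of good hypotheses, which is exactly the failure mode the project targets.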


My project looked at utilising the other sensors aboard the robot to help recover it. The Asus Kinect mounted atop a PTU (pan-tilt unit) was the prime candidate for this, as it could perceive 360 degrees around the robot in colour as well as depth. Unfortunately, the depth data proved fruitless. The initial aim was to use state-of-the-art 3D feature descriptors known as NARF to reduce a cumbersome incoming pointcloud to only its interesting features. In simulation this was very effective; on the real system, however, the introduced noise resulted in a very low match accuracy, so I chose to abandon the approach (the code can be seen below in i.). Assuming it was the structure of the features, and not the depth itself, letting the approach down, I modified it into the code seen in ii., which extracted conventional image features, mapped them onto the robot's pointcloud, and transformed them around the robot; the goal being that the robot could recover itself from a 3D map of 2D features. The Kinect, however, is designed to perceive a human at medium distance: given the open-plan, glass-fronted office environment this experiment took place in, the RGB features were detected, yet their distances could not be resolved by the sensor.


i. Using NARF 3D Features on a Kinect Pointcloud (Incomplete C++)
ii. 3D Point Clouds from 2D Features (C++)
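
For context on i., this is roughly how NARF keypoints are extracted with PCL; the filename, angular resolution, and support size below are illustrative placeholders rather than values from the project code.

```cpp
#include <pcl/common/angles.h>
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/io/pcd_io.h>
#include <pcl/keypoints/narf_keypoint.h>
#include <pcl/point_types.h>
#include <pcl/range_image/range_image.h>

int main ()
{
  // Load a captured Kinect cloud (filename is illustrative).
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
  if (pcl::io::loadPCDFile ("scan.pcd", *cloud) < 0)
    return 1;

  // NARF operates on range images rather than raw clouds, so the cloud
  // is first projected into a spherical range image around the sensor.
  pcl::RangeImage range_image;
  range_image.createFromPointCloud (*cloud,
                                    pcl::deg2rad (0.5f),           // angular resolution
                                    pcl::deg2rad (360.0f),         // horizontal extent
                                    pcl::deg2rad (180.0f),         // vertical extent
                                    Eigen::Affine3f::Identity (),  // sensor pose
                                    pcl::RangeImage::CAMERA_FRAME,
                                    0.0f, 0.0f, 1);

  // Keypoints are placed on stable surface patches near object borders.
  pcl::RangeImageBorderExtractor border_extractor;
  pcl::NarfKeypoint detector (&border_extractor);
  detector.setRangeImage (&range_image);
  detector.getParameters ().support_size = 0.2f;  // patch size in metres

  pcl::PointCloud<int> keypoint_indices;
  detector.compute (keypoint_indices);
  // keypoint_indices now indexes the "interesting" points of the range
  // image: a far smaller set to describe and match than the full cloud.
  return 0;
}
```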
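
And the core idea behind ii., sketched under the assumption of an organised (pixel-registered) Kinect cloud; featuresTo3D is a hypothetical helper, and ORB stands in here for whichever conventional image features the project used.

```cpp
#include <cmath>
#include <vector>
#include <opencv2/features2d.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Given an RGB image and the organised point cloud registered to it (as a
// Kinect-style sensor provides, one 3D point per pixel), look up the 3D
// point behind each detected 2D keypoint.
pcl::PointCloud<pcl::PointXYZ>
featuresTo3D (const cv::Mat& rgb,
              const pcl::PointCloud<pcl::PointXYZ>& cloud)  // same size as rgb
{
    std::vector<cv::KeyPoint> keypoints;
    cv::Ptr<cv::ORB> orb = cv::ORB::create (500);
    orb->detect (rgb, keypoints);

    pcl::PointCloud<pcl::PointXYZ> feature_points;
    for (const auto& kp : keypoints)
    {
        int u = static_cast<int> (kp.pt.x);
        int v = static_cast<int> (kp.pt.y);
        const pcl::PointXYZ& p = cloud.at (u, v);  // (column, row) lookup

        // Depth is often missing on glass, dark, or distant surfaces --
        // exactly the failure mode described above -- so skip NaN points.
        if (std::isfinite (p.z))
            feature_points.push_back (p);
    }
    return feature_points;
}
```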


This in itself would have been sufficient to write an MSc thesis; however, not wanting to report on how not to recover a robot, I began project number three, which worked perfectly. The attempts described above were given a single page, acting as a 'hat tip' to the months of research they took.


An Active Vision Approach To Reinitialising Robot Localisation Within Topological Space Using Local Panoramic Features And Epipolar Geometry

View Thesis (24 MB)

Abstract:  

“This thesis demonstrates an active vision recovery system using image features and epipolar geometry, as an attempt to relocalise a robotic platform after it becomes lost within a known topological map. Like the majority of research and commercial robots currently in use within indoor and outdoor environments, the robotic platform utilised provides data from a laser scanner and on-board odometry, through which Adaptive Monte Carlo Localisation (AMCL) is employed to navigate a metric map. Given that no sensor provides perfect data, probabilistic approaches are always subject to noise that can result in wrong assumptions about the current location of the robot. Adding supplementary data from different kinds of sensors, albeit subject to their own noise, can improve the accuracy of the overall system by compensating for each sensor's shortcomings. This approach takes advantage of an on-board pan-tilt unit and a vision sensor to complement the laser and odometry data currently used, by providing a recovery localisation system in the case of an AMCL failure.

The topological map used regarded distinctive regions of a human-populated environment as waypoints, or nodes, at each of which the robot rotated a camera and collected a panorama of SURF features. Datasets were collected at three distinct times of day to handle changing lighting conditions. These sets were used to locate the robot and predict its angular deviation, showing impressive robustness. Epipolar geometry is then employed to resolve the robot's location between nodes, using a triangulation approximation approach between several waypoints, again with reliable results.”
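
Below is a minimal sketch of the epipolar-geometry step the abstract describes, using OpenCV's SURF and essential-matrix routines; relativePose is a hypothetical helper and all parameter values are illustrative, not taken from the thesis.

```cpp
#include <vector>
#include <opencv2/calib3d.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>

// Estimate the rotation R and unit-scale translation t between the lost
// robot's current view and a stored waypoint panorama image. K is the
// camera intrinsic matrix; both images are assumed already loaded.
void relativePose (const cv::Mat& current, const cv::Mat& stored,
                   const cv::Mat& K, cv::Mat& R, cv::Mat& t)
{
    // SURF features, as in the thesis (requires the opencv_contrib modules).
    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create (400);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat d1, d2;
    surf->detectAndCompute (current, cv::noArray (), kp1, d1);
    surf->detectAndCompute (stored,  cv::noArray (), kp2, d2);

    // Match descriptors and collect the corresponding image points.
    cv::BFMatcher matcher (cv::NORM_L2, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match (d1, d2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : matches)
    {
        pts1.push_back (kp1[m.queryIdx].pt);
        pts2.push_back (kp2[m.trainIdx].pt);
    }

    // Fit the essential matrix with RANSAC to reject bad matches, then
    // decompose it into R and t.
    cv::Mat mask;
    cv::Mat E = cv::findEssentialMat (pts1, pts2, K, cv::RANSAC, 0.999, 1.0, mask);
    cv::recoverPose (E, pts1, pts2, K, R, t, mask);
}
```

Since the translation recovered from an essential matrix has no absolute scale, a single waypoint only constrains the robot's bearing; hence the triangulation between several waypoints mentioned in the abstract to fix a metric position.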