In this thesis, we explore hierarchical map representations that improve autonomous vision-based navigation. Challenged with the task of navigating in an unknown environment, an autonomous agent must perceive the environment around it while making progress toward a goal. While incrementally constructing a map of the world from visual sensor measurements is a popular choice, we observe that the choice of map representation has significant consequences for navigation performance. To improve the efficiency and robustness of visual navigation on a computationally limited robotic platform, we introduce three key ideas that apply varying levels of abstraction to the map representation and sensor measurements.
First, we propose applying multiple levels of abstraction to the map representation to improve the computational efficiency of on-board pose estimation on a low-cost micro air vehicle (MAV). Second, we show that multiple levels of abstraction can also be applied to the sensor measurements, creating multiple lower-dimensional pseudo-measurements that mitigate the viewpoint dependency of ellipsoid-based object-level simultaneous localization and mapping (SLAM). Finally, we show that adaptively changing the level of abstraction in the map representation and sensor measurements online, based on the quality of available measurements, improves the accuracy of the constructed map and yields more robust and efficient autonomous vision-based navigation.
Thesis Supervisor: Prof. Nicholas Roy
To attend this defense, please contact the doctoral candidate for details at kyelok at mit dot edu.