Learning and Planning with Hierarchical Stochastic Models for Robot Navigation

ICML Workshop on Machine Learning of Spatial Knowledge, July 2, 2000, Stanford University
Georgios Theocharous, Khashayar Rohanimanesh, Sridhar Mahadevan
Abstract

We propose and investigate a method for hierarchical learning and planning in partially observable environments using the framework of Hierarchical Hidden Markov Models (HHMMs).

Our main goal is to use hierarchical modeling as a basis for exploring more efficient learning and planning algorithms. As a case study we focus on the indoor robot navigation problem and show how this framework can be used to learn a hierarchy of maps of the environment at different levels of spatial abstraction.
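To make the hierarchical-map idea concrete, the following is a minimal sketch of a two-level HHMM for a corridor environment: abstract states stand for corridors or junctions, and each owns a sub-HMM of production states that emit sensor observations. The class names (SubHMM, HHMM) and the per-state exit-probability parameterization are our assumptions for illustration, not the paper's implementation.

import numpy as np

# Illustrative two-level HHMM for a corridor environment (assumed structure,
# not the paper's code). Abstract states model corridors/junctions; each owns
# a sub-HMM of "production" states that emit sensor readings.

class SubHMM:
    """Sub-HMM over production states inside one abstract state."""
    def __init__(self, trans, emit, init, exit_prob):
        self.trans = np.asarray(trans)          # (n, n) production-state transitions
        self.emit = np.asarray(emit)            # (n, m) P(observation | state)
        self.init = np.asarray(init)            # (n,)  distribution on entry
        self.exit_prob = np.asarray(exit_prob)  # (n,)  P(return control to parent)

class HHMM:
    """Top level: transitions between abstract states (e.g. corridors)."""
    def __init__(self, abstract_trans, abstract_init, children):
        self.abstract_trans = np.asarray(abstract_trans)  # (k, k)
        self.abstract_init = np.asarray(abstract_init)    # (k,)
        self.children = children                          # k SubHMM instances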

We train different families of HHMMs for a real corridor environment and compare them with the standard HMM representation of the same environment. We find significant benefits to using HHMMs in terms of the fit of the model to the training data, localization of the robot, and the ability to infer the structure of the environment. We also introduce the idea of model reuse, which can be used to combine already learned models into a larger model.
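One standard way to score a trained HHMM on the same data as a flat HMM, and to compose already-learned sub-models, is to flatten the hierarchy into an equivalent flat HMM and run the usual scaled forward algorithm. The sketch below continues the classes above; flatten and forward_loglik are hypothetical helper names, and the flattening follows the exit-probability convention assumed there rather than any construction stated in the paper.

import numpy as np

def flatten(hhmm):
    """Expand the hierarchy into an equivalent flat HMM (T, E, pi).

    A production state either moves inside its sub-HMM (prob 1 - exit) or
    exits, lets the parent choose the next abstract state, and re-enters
    via that child's initial distribution.
    """
    sizes = [c.trans.shape[0] for c in hhmm.children]
    offsets = np.concatenate(([0], np.cumsum(sizes)))
    n, m = offsets[-1], hhmm.children[0].emit.shape[1]
    T, E, pi = np.zeros((n, n)), np.zeros((n, m)), np.zeros(n)
    for a, ca in enumerate(hhmm.children):
        sa = slice(offsets[a], offsets[a + 1])
        E[sa] = ca.emit
        pi[sa] = hhmm.abstract_init[a] * ca.init
        T[sa, sa] = (1.0 - ca.exit_prob)[:, None] * ca.trans
        for b, cb in enumerate(hhmm.children):
            sb = slice(offsets[b], offsets[b + 1])
            T[sa, sb] += np.outer(ca.exit_prob * hhmm.abstract_trans[a, b], cb.init)
    return T, E, pi

def forward_loglik(T, E, pi, obs):
    """Scaled forward algorithm: log P(obs | model)."""
    alpha = pi * E[:, obs[0]]
    s = alpha.sum()
    ll, alpha = np.log(s), alpha / s
    for o in obs[1:]:
        alpha = (alpha @ T) * E[:, o]
        s = alpha.sum()
        ll, alpha = ll + np.log(s), alpha / s
    return ll

# Model reuse in miniature: two separately built sub-models are combined
# under a new abstract level, then scored on an observation sequence.
corridor = SubHMM(trans=[[0.7, 0.3], [0.3, 0.7]],
                  emit=[[0.9, 0.1], [0.2, 0.8]],
                  init=[1.0, 0.0], exit_prob=[0.0, 0.2])
junction = SubHMM(trans=[[1.0]], emit=[[0.5, 0.5]],
                  init=[1.0], exit_prob=[0.5])
building = HHMM(abstract_trans=[[0.0, 1.0], [1.0, 0.0]],
                abstract_init=[1.0, 0.0],
                children=[corridor, junction])
T, E, pi = flatten(building)
print(forward_loglik(T, E, pi, obs=[0, 0, 1, 1, 0]))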
