Publications

We strongly believe in open source and giving to our community. We work directly with researchers in academia and seek out new perspectives with our intern and fellowship programs. We generalize our solutions and release them to the world as open source projects. We host discussions and publish our results.


The Sixteenth Annual Conference on Neural Information Processing Systems (NIPS02), Vancouver, Canada, December 2002

Learning to Take Concurrent Actions

Khashayar Rohanimanesh, Sridhar Mahadevan

We investigate a general semi-Markov Decision Process (SMDP) framework for modeling concurrent decision making, where agents learn optimal plans over concurrent temporally extended actions.

Proceedings of the Sixth International Conference on Computers and Information Technology. Dec. 2002

Incremental Singular Value Decomposition Algorithms for Highly Scalable Recommender Systems

Badrul Sarwar, George Karypis, Joseph Konstan, John Riedl

No information

International Conference on Machine Learning (ICML), July 2002

Anytime interval-valued outputs for kernel machines: fast support vector machine classification via distance geometry

Dennis DeCoste

Classifying M query examples using a support vector machine containing L support vectors traditionally requires exactly M * L kernel computations.

We introduce a computational geometry method for which classification cost becomes roughly proportional to each query's difficulty (e.g. distance from the discriminant hyperplane). It produces exactly the same classifications, while typically requiring vastly fewer kernel computations.
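
To make the cost structure concrete, here is a minimal Python sketch, not the paper's distance-geometry method, of a standard RBF-kernel SVM decision function together with an illustrative early-exit variant that stops once the sign of the partial sum can no longer change. All names (rbf_kernel, svm_decision, svm_decision_early_exit) are hypothetical.

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian RBF kernel; its value always lies in (0, 1]."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def svm_decision(query, support_vectors, dual_coefs, bias, gamma=0.5):
    """Standard SVM decision function: one kernel evaluation per support
    vector, so classifying M queries costs exactly M * L kernel calls."""
    score = bias + sum(a * rbf_kernel(query, sv, gamma)
                       for sv, a in zip(support_vectors, dual_coefs))
    return np.sign(score)

def svm_decision_early_exit(query, support_vectors, dual_coefs, bias, gamma=0.5):
    """Illustrative anytime variant: each remaining term is bounded by
    |alpha_i| because the RBF kernel never exceeds 1, so we can stop as
    soon as the sign of the partial sum can no longer flip. Queries far
    from the hyperplane terminate after only a few kernel evaluations."""
    remaining = float(np.sum(np.abs(dual_coefs)))
    score = bias
    for sv, a in zip(support_vectors, dual_coefs):
        score += a * rbf_kernel(query, sv, gamma)
        remaining -= abs(a)
        if abs(score) > remaining:  # sign is now certain
            break
    return np.sign(score)
```

The early-exit variant only illustrates why easy queries far from the discriminant can be classified with far fewer kernel evaluations; the paper's geometric bounds are a different (and stronger) mechanism.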

Machine Learning Journal (MLJ), Volume 46(1-3), 2002

Training invariant support vector machines

Dennis DeCoste, B. Schoelkopf

Practical experience has shown that in order to obtain the best possible performance, prior knowledge about invariances of the classification problem at hand ought to be incorporated into the training procedure.

We describe and review all known methods for doing so in support vector machines, provide experimental results, and discuss their respective merits.

One of the significant new results reported in this work is our recent achievement of the lowest reported test error on the well-known MNIST digit recognition benchmark task, with SVM training times that are also significantly faster than previous SVM methods.
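
One family of techniques reviewed in this line of work is the virtual support vector idea: augment the support vectors with slightly transformed copies, for example one-pixel translations of MNIST digits, and retrain. The sketch below only shows how such virtual examples might be generated for 2-D grayscale images; the function name and padding choice are assumptions, not the authors' implementation.

```python
import numpy as np

def translated_copies(image, shifts=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """Generate 'virtual' training examples by shifting a 2-D digit image
    by one pixel in each direction, padding with zeros (background).
    Adding such copies of the support vectors and retraining is one
    standard way to encode translation invariance in an SVM."""
    h, w = image.shape
    copies = []
    for dy, dx in shifts:
        shifted = np.zeros_like(image)
        shifted[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
            image[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
        copies.append(shifted)
    return copies
```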

Science, Volume 293, pp. 2051-2055, September 14, 2001

Machine learning for science: state of the art and future prospects

E. Mjolsness, Dennis DeCoste

Recent advances in machine learning methods, along with successful applications across a wide variety of fields such as planetary science and bioinformatics, promise powerful new tools for practicing scientists.

This viewpoint highlights some useful characteristics of modern machine learning methods and their relevance to scientific applications. We conclude with some speculations on near-term progress and promising directions.

ACM WWW10 Conference, May 2001

Item-based Collaborative Filtering Recommendation Algorithms

Badrul Sarwar, George Karypis, Joseph Konstan, John Riedl

No information

The Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI01), August 3-5, 2001, University of Washington, Seattle, Washington, USA

Decision-Theoretic Planning with Concurrent Temporally Extended Actions

Khashayar Rohanimanesh, Sridhar Mahadevan

We investigate a model for planning under uncertainty with temporally extended actions, where multiple actions can be taken concurrently at each decision epoch. Our model is based on the options framework, and combines it with factored state space models, where the set of options can be partitioned into classes that affect disjoint state variables.

We show that the set of decision epochs for concurrent options defines a semi-Markov decision process, if the underlying temporally extended actions being parallelized are restricted to Markov options.

This property allows us to use SMDP algorithms for computing the value function over concurrent options. The concurrent options model allows overlapping execution of options in order to achieve higher performance or to perform a complex task.

We describe a simple experiment using a navigation task which illustrates how concurrent options result in a faster plan than when only one option is taken at a time.
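
For reference, the standard SMDP Q-learning backup that such SMDP algorithms build on treats the whole execution of a (multi-)option as a single transition and discounts by its duration: Q(s, o) <- Q(s, o) + alpha * [r + gamma^k * max_o' Q(s', o') - Q(s, o)], where r is the return accumulated over the k steps the option ran. Below is a minimal Python sketch under that reading; the names and interfaces are hypothetical, and this is not the paper's exact algorithm.

```python
from collections import defaultdict

def smdp_q_update(Q, state, option, next_state, reward, duration,
                  options, alpha=0.1, gamma=0.95):
    """One SMDP Q-learning backup.  `reward` is the (within-option
    discounted) return accumulated over the `duration` primitive steps
    that the option, or set of concurrent options, executed; discounting
    by gamma ** duration is what distinguishes the SMDP update from the
    one-step MDP update."""
    best_next = max(Q[(next_state, o)] for o in options)
    target = reward + (gamma ** duration) * best_next
    Q[(state, option)] += alpha * (target - Q[(state, option)])

# Usage sketch: Q = defaultdict(float); after an option terminates in
# next_state having run for `duration` steps, call smdp_q_update(...).
```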

IEEE Conference on Robotics and Automation (ICRA01), 2001, Seoul, South Korea

Learning Hierarchical Partially Observable Markov Decision Processes for Robot Navigation

Georgios Theocharous, Khashayar Rohanimanesh, Sridhar Mahadevan

We propose and investigate a general framework for hierarchical modeling of partially observable environments, such as office buildings, using Hierarchical Hidden Markov Models (HHMMs). Our main goal is to explore hierarchical modeling as a basis for designing more efficient methods for model construction and usage.

As a case study we focus on indoor robot navigation and show how this framework can be used to learn a hierarchy of models of the environment at different levels of spatial abstraction. We introduce the idea of model reuse, which can be used to combine already learned models into a larger model.

We describe an extension of the HHMM model to include actions, which we call hierarchical POMDPs, and describe a modified hierarchical Baum-Welch algorithm to learn these models. We train different families of hierarchical models for a simulated and a real-world corridor environment and compare them with the standard "flat" representation of the same environment.

We show that the hierarchical POMDP approach, combined with model reuse, allows learning hierarchical models that fit the data better and train faster than flat models.
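
As background for the (hierarchical) Baum-Welch procedure mentioned above, the sketch below shows only the flat forward pass for an action-conditioned HMM, the quantity that Baum-Welch re-estimates parameters against. The function name and parameterization are assumptions for illustration, not the authors' code.

```python
import numpy as np

def forward_log_likelihood(pi, T, O, actions, observations):
    """Forward pass for a flat, action-conditioned HMM: pi[s] is the
    initial state distribution, T[a] the transition matrix under action a,
    and O[s, z] the probability of observing z in state s.  Baum-Welch
    (flat or hierarchical) re-estimates pi, T and O from forward/backward
    quantities; only the forward step is shown, and for long sequences the
    alpha vector should be rescaled at each step to avoid underflow."""
    alpha = pi * O[:, observations[0]]
    for a, z in zip(actions, observations[1:]):
        alpha = (alpha @ T[a]) * O[:, z]
    return np.log(alpha.sum())
```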

Internet Commerce and Software Agents: Cases, Technologies and Opportunities. Rahman, S.M. and Bignall, R., eds. Idea Group, 2001

Distributed Recommender Systems: New Opportunities for Internet Commerce

Badrul Sarwar, Joseph Konstan, John Riedl

No information

ICML Workshop on Machine Learning of Spatial Knowledge, July 2, 2000, Stanford University

Learning and Planning with Hierarchical Stochastic Models for Robot Navigation

Georgios Theocharous, Khashayar Rohanimanesh, Sridhar Mahadevan

We propose and investigate a method for hierarchical learning and planning in partially observable environments using the framework of Hierarchical Hidden Markov Models (HHMMs).

Our main goal is to use hierarchical modeling as a basis for exploring more efficient learning and planning algorithms. As a case study we focus on the indoor robot navigation problem and show how this framework can be used to learn a hierarchy of maps of the environment at different levels of spatial abstraction.

We train different families of HHMMs for a real corridor environment and compare them with the standard HMM representation of the same environment. We find significant benefits to using HHMMs in terms of the fit of the model to the training data, localization of the robot, and the ability to infer the structure of the environment. We also introduce the idea of model reuse that can be used to combine already learned models into a larger model.
