

Technical report, Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory, 2007

Towards feature selection in actor-critic algorithms

Khashayar Rohanimanesh, Russ Tedrake, Nick Roy

Choosing features for the critic in actor-critic algorithms with function approximation is known to be a challenge. Too few critic features can lead to degeneracy of the actor gradient, and too many features may lead to slower convergence of the learner.

In this paper, we show that a well-studied class of actor policies satisfies the known requirements for convergence when the actor features are selected carefully. We demonstrate that two representations popular in value-function methods, barycentric interpolators and graph Laplacian proto-value functions, can be used to represent the actor so as to satisfy these conditions.

A consequence of this work is a generalization of the proto-value function methods to the continuous action actor-critic domain. Finally, we analyze the performance of this approach using a simulation of a torque-limited inverted pendulum.

SIAM Data Mining Conference, April, 2006

Automated Knowledge Discovery from Simulators

M. Burl, Dennis DeCoste, B. Enke, D. Mazzoni, W. Merline, L. Scharenbroich

Large-scale numerical simulators are in widespread use by scientists and engineers across a range of government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise.

However, the cost of simulation studies can be quite high, both in terms of the time and computational resources required to conduct the trials and the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.

In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, where the aim is to identify regions in the input/parameter/model space that lead to a particular output behavior.

Journal of Machine Learning Research (JMLR): Special Topic on Machine Learning and Optimization. 7(Jul):1493–1515, 2006

Building Support Vector Machines with Reduced Classifier Complexity

Sathiya Keerthi, Olivier Chapelle, Dennis DeCoste

Support vector machines (SVMs), though accurate, are not preferred in applications requiring high classification speed because the number of support vectors tends to be large. To overcome this problem we devise a primal method with the following properties:

(1) it decouples the idea of basis functions from the concept of support vectors;
(2) it greedily finds a set of kernel basis functions of a specified maximum size (dmax) to approximate the SVM primal cost function well;
(3) it is efficient and roughly scales as O(n·dmax²), where n is the number of training examples; and,
(4) the number of basis functions it requires to achieve an accuracy close to the SVM accuracy is usually far less than the number of SVM support vectors.
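The greedy idea behind properties (1)-(4) can be illustrated with a small sketch: kernel basis points are added one at a time, coefficients are refit, and the candidate that most reduces an L2-loss SVM primal objective is kept. This is an illustrative simplification, not the paper's algorithm; the RBF kernel, the `lam` and `gamma` values, the least-squares refit, and the exhaustive candidate scan are all assumptions made for brevity (the paper's method is far more efficient).

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    # Gaussian kernel between the rows of X and the rows of Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def primal_cost(K_sel, beta, y, lam):
    # L2-regularized squared-hinge loss (an SVM primal with L2 loss)
    margins = 1.0 - y * (K_sel @ beta)
    return lam * beta @ beta + (np.maximum(margins, 0.0) ** 2).sum()

def greedy_basis_svm(X, y, d_max, lam=1e-2, gamma=1.0):
    """Greedily pick up to d_max kernel basis points (illustrative only)."""
    n = X.shape[0]
    selected = []
    for _ in range(d_max):
        best = None
        for j in range(n):
            if j in selected:
                continue
            idx = selected + [j]
            K = rbf(X, X[idx])
            # refit coefficients by regularized least squares as a cheap proxy
            beta = np.linalg.solve(K.T @ K + lam * np.eye(len(idx)), K.T @ y)
            c = primal_cost(K, beta, y, lam)
            if best is None or c < best[0]:
                best = (c, j)
        selected.append(best[1])
    idx = np.array(selected)
    K = rbf(X, X[idx])
    beta = np.linalg.solve(K.T @ K + lam * np.eye(len(idx)), K.T @ y)
    return idx, beta
```

The key point is that the basis points are chosen for how well they approximate the primal objective, not because they happen to be support vectors.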

KDD 2006

Naive filterbots for robust cold-start recommendations

Seung-Taek Park, David Pennock, Omid Madani, Nathan Good, Dennis DeCoste

The goal of a recommender system is to suggest items of interest to a user based on the historical behavior of a community of users. Given detailed enough history, item-based collaborative filtering (CF) often performs as well as or better than almost any other recommendation method. However, in cold-start situations (where a user, an item, or the entire system is new), simple non-personalized recommendations often fare better. We improve the scalability and performance of a previous approach to handling cold-start situations that uses filterbots: surrogate users that rate items based only on user or item attributes.

We show that introducing a very small number of simple filterbots helps make CF algorithms more robust. In particular, adding just seven global filterbots improves both user-based and item-based CF in cold-start user, cold-start item, and cold-start system settings.

Performance is better when data is scarce, performance is no worse when data is plentiful, and algorithm efficiency is negligibly affected. We systematically compare a non-personalized baseline, user-based CF, item-based CF, and our bot-augmented user- and item-based CF algorithms using three data sets (Yahoo! Movies, MovieLens, and Each-Movie) with the normalized MAE metric in three types of cold-start situations.

The advantage of our “naïve filterbot” approach is most pronounced for the Yahoo! data, the sparsest of the three data sets.
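Mechanically, a filterbot is just an extra pseudo-user row appended to the rating matrix, with ratings computed from item attributes; standard CF then treats it as an ordinary, fully dense neighbor that always overlaps with cold-start users and items. The sketch below illustrates that idea only; the feature encoding and the two example rules are hypothetical, not the seven bots from the paper.

```python
import numpy as np

def add_filterbots(R, item_features, bot_rules):
    """Append one pseudo-user row per bot rule to rating matrix R.

    R: (n_users, n_items) array, np.nan marks unrated items.
    item_features: (n_items, n_feats) content attributes (e.g. genre flags).
    bot_rules: functions mapping an item's feature vector to a rating.
    """
    bot_rows = np.array([[rule(f) for f in item_features]
                         for rule in bot_rules])
    # bots rate every item, so similarity to them is always computable
    return np.vstack([R, bot_rows])

# Example: two hypothetical global bots on a tiny 2-user, 3-item matrix.
R = np.array([[5.0, np.nan, np.nan],
              [np.nan, 3.0, np.nan]])
feats = np.array([[1, 0], [0, 1], [1, 1]], dtype=float)  # e.g. genre flags
bots = [lambda f: 1 + 4 * f[0],   # "likes genre 0" bot
        lambda f: 3.0]            # global-average-style bot
R_aug = add_filterbots(R, feats, bots)
```

Any user-based or item-based CF algorithm can then be run on `R_aug` unchanged, which is why the approach adds negligible algorithmic overhead.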


Ensembles of Nearest Neighbor Forecasts

Dragomir Yankov, Dennis DeCoste, Eamonn Keogh

Nearest neighbor forecasting models are attractive with their simplicity and the ability to predict complex nonlinear behavior. They rely on the assumption that observations similar to the target one are also likely to have similar outcomes. A common practice in nearest neighbor model selection is to compute the globally optimal number of neighbors on a validation set, which is later applied for all incoming queries.

For certain queries, however, this number may be suboptimal, producing forecasts that deviate substantially from the true realization. To address this problem we propose an alternative approach: training ensembles of nearest neighbor predictors that determine the best number of neighbors for individual queries.

We demonstrate that the forecasts of the ensembles improve significantly on the globally optimal single predictors.
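The contrast between a single globally tuned k and a per-query ensemble can be sketched as follows. This is a minimal illustration, not the paper's method: simple averaging over a fixed grid of neighborhood sizes stands in for the trained ensemble, and the window length and `ks` grid are arbitrary assumptions.

```python
import numpy as np

def knn_forecast(train_X, train_y, query, k):
    # forecast = mean outcome of the k training windows closest to the query
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return train_y[nearest].mean()

def ensemble_forecast(train_X, train_y, query, ks=(1, 3, 5, 9)):
    # instead of committing to one globally tuned k, combine forecasts
    # made with several neighborhood sizes
    return np.mean([knn_forecast(train_X, train_y, query, k) for k in ks])
```

Here `train_X` holds sliding windows of the series and `train_y` the value that followed each window, so the forecaster predicts the next observation after a query window.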

International Conference on Machine Learning (ICML), June 2006

Collaborative Prediction Using Ensembles of Maximum Margin Matrix Factorizations

Dennis DeCoste

Fast gradient-based methods for Maximum Margin Matrix Factorization (MMMF) were recently shown to have great promise (Rennie & Srebro, 2005), including significantly outperforming the previous state-of-the-art methods on some standard collaborative prediction benchmarks (including MovieLens).

In this paper, we investigate ways to further improve the performance of MMMF, by casting it within an ensemble approach. We explore and evaluate a variety of alternative ways to define such ensembles.

We show that our resulting ensembles can perform significantly better than a single MMMF model along multiple evaluation metrics. In fact, we find that ensembles of partially trained MMMF models can sometimes give better predictions in total training time comparable to that of a single MMMF model.
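The ensemble idea can be sketched with a plain squared-loss matrix factorization as a stand-in (MMMF itself uses a hinge-type loss with trace-norm regularization, which is not reproduced here); in this toy version, each ensemble member differs only in its random initialization, and the learning rate, rank, and iteration count are arbitrary assumptions.

```python
import numpy as np

def factorize(R, mask, rank=2, lr=0.05, iters=1500, seed=0):
    # plain squared-loss matrix factorization by gradient descent;
    # a simplified stand-in for one MMMF ensemble member
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(iters):
        E = mask * (U @ V.T - R)          # residual on observed entries only
        U, V = U - lr * (E @ V), V - lr * (E.T @ U)
    return U @ V.T

def ensemble_predict(R, mask, n_models=3, **kw):
    # average the predictions of several factorizations trained
    # from different random seeds
    preds = [factorize(R, mask, seed=s, **kw) for s in range(n_models)]
    return np.mean(preds, axis=0)
```

Averaging over differently initialized (or partially trained) members is one of the simpler ensemble definitions the abstract alludes to; the paper explores several others.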

Encyclopedia of Information Science and Technology (II): 907-911. Idea Group. 2005

Distributed Recommender Systems for Internet Commerce

Badrul Sarwar, Joseph Konstan, John Riedl


The Twenty-Second International Conference on Machine Learning (ICML05), Bonn, Germany, 7-11 August, 2005

Coarticulation: An Approach for Generating Concurrent Plans in Markov Decision Processes

Khashayar Rohanimanesh, Sridhar Mahadevan

We study an approach for performing concurrent activities in Markov decision processes (MDPs) based on the coarticulation framework. We assume that the agent has multiple degrees of freedom (DOF) in the action space which enables it to perform activities simultaneously.

We demonstrate that one natural way to generate concurrency in the system is to coarticulate among the set of learned activities available to the agent. In general, due to the multiple DOF in the system, there often exists a redundant set of admissible sub-optimal policies associated with each learned activity.

Such flexibility enables the agent to concurrently commit to several subgoals according to their priority levels, given a new task defined in terms of a set of prioritized subgoals. We present efficient approximate algorithms for computing such policies and for generating concurrent plans. We also evaluate our approach in a simulated domain.

Journal of Machine Learning Research (JMLR), Volume 6, March 2005

A finite Newton method for fast solution of large scale linear SVMs

Sathiya Keerthi, Dennis DeCoste

This paper develops a fast method for solving linear SVMs with L2 loss function that is suited for large scale data mining tasks such as text classification. This is done by modifying the finite Newton method of Mangasarian in several ways.

Experiments indicate that the method is much faster than decomposition methods such as SVMlight, SMO and BSVM (e.g., 4- to 100-fold), especially when the number of examples is large. The paper also suggests ways of extending the method to other loss functions, such as the modified Huber loss function and the L1 loss function, and to solving ordinal regression.
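The core of such a method is a Newton iteration on the L2-loss primal, where only the examples currently violating the margin contribute to the gradient and Hessian. The sketch below shows that iteration under simplifying assumptions: the paper's finite Newton method also uses an exact line search and careful linear algebra that are omitted here, and the `lam`, `C`, and stopping-rule values are illustrative.

```python
import numpy as np

def l2svm_newton(X, y, C=1.0, lam=1.0, iters=50):
    """Minimize 0.5*lam*||w||^2 + C * sum_i max(0, 1 - y_i <w, x_i>)^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = 1.0 - y * (X @ w)
        active = margins > 0              # only margin violators contribute
        Xa, ya, ma = X[active], y[active], margins[active]
        grad = lam * w - 2.0 * C * Xa.T @ (ya * ma)
        H = lam * np.eye(d) + 2.0 * C * Xa.T @ Xa
        step = np.linalg.solve(H, grad)
        if np.linalg.norm(step) < 1e-10:  # active set has stabilized
            break
        # full Newton step; the actual method adds a line search
        # to guarantee convergence
        w = w - step
    return w
```

Because the L2 loss is piecewise quadratic, the objective is exactly quadratic once the active set stops changing, which is why a Newton-type method can terminate in a finite number of steps.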

Conference Record of the Thirty-Ninth Asilomar Conference on Signals, Systems and Computers. pp. 545-549. 2005

Data-Pattern Discovery Methods for Detection in Nongaussian High-dimensional Data Sets

Cécile Levasseur, Ken Kreutz-Delgado, Uwe Mayer, Gregory Gancarz

Many important expert system applications depend on the ability to accurately detect or predict the occurrence of key events given a data set of observations. We concentrate on multidimensional data that are highly nongaussian (continuous and/or discrete), noisy and nonlinearly related.

We investigate the feasibility of data-pattern discovery and event detection by applying generalized principal component analysis (GPCA) techniques for pattern extraction based on an exponential family probability distribution assumption.

We develop theoretical extensions of the GPCA model by exploiting results from the theory of generalized linear models and nonparametric mixture density estimation.