Support vector machines (and other kernel machines) offer robust modern machine learning methods for nonlinear classification. However, relative to alternatives such as linear methods, decision trees, and neural networks, they can be orders of magnitude slower at query time.
In contrast to existing methods for speeding up query time, such as reduced-set compression (e.g., Burges, 1996) and anytime bounding (e.g., DeCoste, 2002), we propose a new and efficient approach based on treating the kernel machine classifier as a special form of k nearest-neighbor.
Our approach improves upon traditional k-NN by determining at query time a good k for each query, based on pre-query analysis guided by the original robust kernel machine. We demonstrate effectiveness on high-dimensional benchmark MNIST data, observing a greater than 100-fold reduction in the number of SVs required per query (amortized over all 45 pairwise MNIST digit classifiers), with no extra test errors (in fact, it happens to make 4 fewer).
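The core idea above can be sketched as follows. A minimal illustration, assuming an RBF kernel and hypothetical helper names (`knn_svm_predict`, `svs`, `alphas` are not from the paper): the full SVM decision value f(x) = Σ_i α_i y_i K(sv_i, x) + b is approximated by truncating the sum to only the k support vectors nearest to the query, which is what lets a kernel machine be evaluated like a k-NN classifier.

```python
import numpy as np

def rbf_kernel(x, vs, gamma=0.5):
    # RBF kernel between one query x and a set of vectors vs
    return np.exp(-gamma * np.sum((vs - x) ** 2, axis=1))

def knn_svm_predict(x, svs, alphas, b, k, gamma=0.5):
    """Approximate SVM decision value using only the k support
    vectors nearest to query x (a sketch, not the paper's exact
    per-query k-selection procedure)."""
    dists = np.sum((svs - x) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of k nearest SVs
    kvals = rbf_kernel(x, svs[nearest], gamma)
    # alphas holds alpha_i * y_i folded into one signed weight per SV
    return float(alphas[nearest] @ kvals + b)

# Toy problem: two support vectors per class
svs = np.array([[0.0, 0.0], [0.1, 0.1], [3.0, 3.0], [3.1, 2.9]])
alphas = np.array([1.0, 1.0, -1.0, -1.0])
query = np.array([0.2, 0.0])

full = knn_svm_predict(query, svs, alphas, b=0.0, k=4)   # all SVs
approx = knn_svm_predict(query, svs, alphas, b=0.0, k=2) # 2 nearest SVs
```

In this toy case the two nearest SVs already carry nearly all of the decision value, since distant SVs contribute kernel terms close to zero; the paper's contribution is choosing k per query so that the truncated sum provably agrees with the full classifier.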