Alpha seeding for support vector machines

International Conference on Knowledge Discovery and Data Mining (KDD-2000), August 2000
Dennis DeCoste, Kiri Wagstaff
Abstract

A key practical obstacle in applying support vector machines to many large-scale data mining tasks is that SVMs generally scale quadratically (or worse) in the number of training examples or support vectors.

This complexity is further compounded when a specific SVM training is but one of many, such as in leave-one-out cross-validation (LOOCV) for selecting optimal SVM kernel parameters, or in wrapper-based feature selection. In this paper we explore new techniques for reducing the amortized cost of each such SVM training by seeding successive SVM trainings with the results of previous, similar trainings.
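For intuition, here is a minimal, self-contained sketch of the seeding idea, not the authors' solver: a simplified no-bias SVM dual (so the equality constraint is dropped) solved by projected gradient ascent, where each leave-one-out training is initialized from the full-data alphas with the left-out coordinate simply removed. The solver, learning rate, and seeding rule are all illustrative assumptions.

```python
import numpy as np

def solve_dual(Q, C, alpha0, lr=0.01, tol=1e-6, max_iter=10000):
    """Projected gradient ascent on the (no-bias) SVM dual:
    maximize sum(alpha) - 0.5 * alpha^T Q alpha  s.t.  0 <= alpha <= C.
    Returns (alpha, iterations) so warm and cold starts can be compared."""
    alpha = alpha0.copy()
    for it in range(max_iter):
        grad = 1.0 - Q @ alpha                    # gradient of the dual objective
        new = np.clip(alpha + lr * grad, 0.0, C)  # project onto the box [0, C]^n
        if np.max(np.abs(new - alpha)) < tol:
            return new, it
        alpha = new
    return alpha, max_iter

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Q = (y[:, None] * y[None, :]) * np.exp(-0.5 * d2)  # y_i y_j K(x_i, x_j), RBF kernel
C = 1.0

# Train once on all data; its alphas seed every leave-one-out training.
alpha_full, _ = solve_dual(Q, C, np.zeros(len(y)))

cold = warm = 0
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    Qi = Q[np.ix_(keep, keep)]
    _, it_cold = solve_dual(Qi, C, np.zeros(keep.sum()))  # cold start from zero
    _, it_warm = solve_dual(Qi, C, alpha_full[keep])      # alpha-seeded start
    cold += it_cold
    warm += it_warm

print(f"total solver iterations -- cold start: {cold}, alpha-seeded: {warm}")
```

On this toy problem the seeded runs typically converge in far fewer total iterations than the cold starts, which is the amortized saving the abstract describes.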

Another publication from the same category: Machine Learning and Data Science

IEEE Computing Conference 2018, London, UK

Regularization of the Kernel Matrix via Covariance Matrix Shrinkage Estimation

The kernel trick, which formulates computations as inner products in a feature space, enables powerful extensions of many well-known algorithms. The kernel matrix consists of inner products in the feature space, while the sample covariance matrix of the data is built from outer products of the same feature vectors; their spectral properties are therefore tightly connected. This allows us to examine the kernel matrix through the sample covariance matrix in the feature space, and vice versa. The use of kernels often involves a large number of features compared to the number of observations. In this scenario, the sample covariance matrix is neither well-conditioned nor necessarily invertible, mandating a solution to the problem of estimating high-dimensional covariance matrices under small-sample-size conditions. We tackle this problem through the use of a shrinkage estimator that offers a compromise between the sample covariance matrix and a well-conditioned matrix (also known as the "target"), with the aim of minimizing the mean-squared error (MSE). We propose a distribution-free kernel matrix regularization approach that is tuned directly from the kernel matrix, avoiding the need to address the feature space explicitly. Numerical simulations demonstrate that the proposed regularization is effective in classification tasks.
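As a rough illustration of the shrinkage idea, the sketch below shrinks an ill-conditioned kernel matrix toward a scaled-identity target and reports the resulting condition numbers. This is not the paper's estimator: its distribution-free rule for tuning the shrinkage intensity is not reproduced, and the target scale mu = trace(K)/n and the fixed intensities below are assumptions.

```python
import numpy as np

def shrink_kernel(K, rho):
    """Convex combination of K and a scaled-identity target:
    K_reg = (1 - rho) * K + rho * mu * I, with mu = trace(K)/n chosen
    so the target sits on the same scale as K."""
    n = K.shape[0]
    mu = np.trace(K) / n
    return (1.0 - rho) * K + rho * mu * np.eye(n)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 20.0)  # wide RBF kernel: rows nearly collinear, K near-singular

for rho in (0.0, 0.05, 0.5):  # rho = 0 recovers the raw kernel matrix
    cond = np.linalg.cond(shrink_kernel(K, rho))
    print(f"rho = {rho:<4}  condition number = {cond:.3e}")
```

Even a small intensity bounds the smallest eigenvalue away from zero (every eigenvalue of K_reg is at least rho * mu), which is what restores conditioning.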
