Tomer Lancewicki
Applied Research Scientist
Biography

Dr. Lancewicki’s work focuses on machine learning technologies, including dimensionality reduction methods. His research in this area is published in leading international journals and conferences; for example, his published research as part of the Israel Metro450 Consortium was honored with a 2013 Best Research Award by the Office of the Chief Scientist, Ministry of Economy, Israel. He has conducted peer review for the International Conference on Electronics, Communications and Networks (CECNet) and the International Conference on Fuzzy Systems and Data Mining (FSDM). He also serves as a reviewer for Mathematical Reviews of the American Mathematical Society and as a member of the Technical Program Committee for the 8th International Conference on Electronics, Communications and Networks.
 
In addition to his presentations to eBay’s internal community, Dr. Lancewicki has been invited to present his work at the 22nd International Symposium on Mathematical Theory of Networks and Systems in Minneapolis, MN, the Conference on Reinforcement Learning and Decision Making in Edmonton, Canada, and the Society for Industrial and Applied Mathematics, Southeastern Atlantic Section (SIAM-SEAS), and to serve as an invited lecturer at the University of Tennessee.

Publications
IEEE ICMLA 2017, Cancun, Mexico

Sequential Inverse Approximation of a Regularized Sample Covariance Matrix

One of the goals in scaling sequential machine learning methods is dealing with high-dimensional data spaces. A key related challenge is that many methods depend heavily on obtaining the inverse covariance matrix of the data. It is well known that covariance matrix estimation is problematic when the number of observations is small relative to the number of variables. A common way to tackle this problem is through the use of a shrinkage estimator that offers a compromise between the sample covariance matrix and a well-conditioned matrix, with the aim of minimizing the mean-squared error. We derive sequential update rules to approximate the inverse shrinkage estimator of the covariance matrix. The approach paves the way for improved large-scale machine learning methods that involve sequential updates.
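The computational trick behind such sequential rules is the Sherman–Morrison identity, which turns each rank-one update of a regularized covariance estimate into an O(p²) update of its inverse rather than a fresh O(p³) inversion. Below is a minimal Python sketch of that idea, assuming zero-mean observations and a simple ridge-type shrinkage toward the identity; it is not the estimator or update rule derived in the paper, and the function name and sanity check are illustrative only.

```python
import numpy as np

def sequential_inverse_ridge_covariance(X, delta=1.0):
    """Sequentially maintain the inverse of the regularized scatter matrix
    A_n = delta*I + sum_{i<=n} x_i x_i^T via the Sherman-Morrison identity.

    Since A_n / n = S_n + (delta/n)*I shrinks the (zero-mean) sample
    covariance S_n toward the identity, n * A_n^{-1} approximates the
    inverse of that regularized estimate without inverting any matrix
    after initialization.
    """
    n, p = X.shape
    A_inv = np.eye(p) / delta            # inverse of the initial delta*I
    for x in X:                          # one rank-one update per observation
        Ax = A_inv @ x
        # Sherman-Morrison:
        # (A + x x^T)^{-1} = A^{-1} - (A^{-1}x)(A^{-1}x)^T / (1 + x^T A^{-1} x)
        A_inv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
    return n * A_inv                     # approximates (S_n + (delta/n)*I)^{-1}

# Hypothetical sanity check against a direct inverse.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
n, p = X.shape
approx = sequential_inverse_ridge_covariance(X, delta=1.0)
direct = np.linalg.inv(X.T @ X / n + (1.0 / n) * np.eye(p))
print(np.allclose(approx, direct))       # True up to numerical error
```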

IEEE Computing Conference 2018, London, UK

Regularization of the Kernel Matrix via Covariance Matrix Shrinkage Estimation

The kernel trick, formulated as an inner product in a feature space, facilitates powerful extensions to many well-known algorithms. While the kernel matrix involves inner products in the feature space, the sample covariance matrix of the data requires outer products; since the Gram and scatter matrices share their nonzero eigenvalues, the spectral properties of the two are tightly connected. This allows us to examine the kernel matrix through the sample covariance matrix in the feature space, and vice versa. The use of kernels often involves a large number of features compared to the number of observations. In this scenario, the sample covariance matrix is neither well-conditioned nor necessarily invertible, mandating a solution to the problem of estimating high-dimensional covariance matrices under small sample size conditions. We tackle this problem through the use of a shrinkage estimator that offers a compromise between the sample covariance matrix and a well-conditioned matrix (also known as the "target"), with the aim of minimizing the mean-squared error (MSE). We propose a distribution-free kernel matrix regularization approach that is tuned directly from the kernel matrix, avoiding the need to address the feature space explicitly. Numerical simulations demonstrate that the proposed regularization is effective in classification tasks.
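To make the kernel-space analogue concrete, the sketch below shrinks a Gram matrix toward a scaled-identity target, K_reg = (1 - alpha)*K + alpha*(tr(K)/n)*I, mirroring covariance shrinkage toward the identity. The shrinkage intensity alpha is left as a user-set parameter here, whereas the paper tunes it directly from the kernel matrix; the rbf_kernel helper and all names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))  # clamp tiny negative distances

def shrink_kernel_matrix(K, alpha=0.1):
    """Shrink a kernel (Gram) matrix toward a scaled-identity target:
    K_reg = (1 - alpha)*K + alpha*(tr(K)/n)*I.
    Note: alpha is user-set in this sketch; the paper tunes the intensity
    directly from the kernel matrix in a distribution-free manner.
    """
    n = K.shape[0]
    target = (np.trace(K) / n) * np.eye(n)
    return (1.0 - alpha) * K + alpha * target

# Illustrative usage: few observations, many features, as in the kernel setting.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 500))
K = rbf_kernel(X, gamma=1.0 / X.shape[1])
K_reg = shrink_kernel_matrix(K, alpha=0.2)
print("cond(K)     =", np.linalg.cond(K))      # ill-conditioned Gram matrix
print("cond(K_reg) =", np.linalg.cond(K_reg))  # better conditioned after shrinkage
```

Because the regularization acts on K itself, the feature space never needs to be formed explicitly, which is the point of the distribution-free approach described above.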

Patents