Computer Interaction Analysis: Toward an Empirical Approach to Understanding User Practice and Eye Gaze in GUI-Based Interaction

Computer Supported Cooperative Work, Volume 20, pp. 497-528, 2011
Robert Moore, Elizabeth Churchill
Abstract

Today's personal computers enable complex forms of user interaction. Unlike older mainframe computers that required batch processing, personal computers give users real-time control on a one-to-one basis.

Such user interaction involves mixed initiative, logic, language and pointing gestures, features reminiscent of interaction with another human. Yet there are also major differences between computer interaction and human interaction, such as computers' inability to stray from scripts or to adapt to the idiosyncrasies of particular recipients or situations.

Given these similarities and differences, can we study computer interaction using methods similar to those for studying human interaction? If so, are the findings from the analysis of human interaction also useful in understanding computer interaction?

In this paper, we explore these questions and outline a novel methodological approach for examining human-computer interaction, which we call "computer interaction analysis." We build on earlier approaches to analyzing human interaction with computers and adapt them to the latest technologies for computer screen capture and eye tracking.

In doing so, we propose a new transcription notation scheme designed to represent the interweaving streams of input actions, display events and eye movements. Finally, we demonstrate the approach with concrete examples involving the phenomena of placeholding, repair and referential practices.
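
The paper's notation scheme itself is not reproduced here, but the following minimal Python sketch illustrates the general idea of interleaving separately recorded streams (input actions, display events, eye-gaze fixations) onto one timeline for analysis. The Event record, the merge_streams helper and the toy timestamps are assumptions made for illustration only and are not the authors' transcription system.

from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    timestamp: float   # seconds from start of the recorded session (hypothetical values)
    stream: str        # "input", "display" or "gaze"
    description: str   # e.g. "click 'Search' button", "results rendered", "fixation on menu"

def merge_streams(*streams: List[Event]) -> List[Event]:
    """Interleave separately captured streams into a single timeline,
    ordered by timestamp, so co-occurring actions can be read together."""
    merged = [event for stream in streams for event in stream]
    return sorted(merged, key=lambda e: e.timestamp)

# Toy example: a gaze fixation, a mouse click, the resulting display change,
# and a follow-up fixation, printed in temporal order.
input_events = [Event(12.40, "input", "click 'Search' button")]
display_events = [Event(12.62, "display", "results list rendered")]
gaze_events = [Event(12.15, "gaze", "fixation on 'Search' button"),
               Event(12.80, "gaze", "fixation on first result")]

for e in merge_streams(input_events, display_events, gaze_events):
    print(f"{e.timestamp:6.2f}s  [{e.stream:7s}]  {e.description}")

Such a merged timeline is only a stand-in for the transcript-style notation the paper develops; it simply makes concrete what "interweaving streams" means in practice.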

