Compiling diagnostic rules and redesign plans from a structure/behavior device model

Chapter in Knowledge-based Aided Design, Academic Press, 1992
Richard Keller, Catherine Baudin, Yumi Iwasaki, P. Nayak, Kazuo Tanaka

The current generation of expert systems is fueled by special-purpose, task-specific associational rules developed with the aid of domain experts. In many cases, the expert has distilled or compiled these so-called 'shallow' rules from 'deeper' models of the application domain in order to optimize task performance.

With the traditional knowledge engineering approach, only the shallow, special-purpose rules are elicited from the expert - not the underlying domain models upon which they are based. This results in two significant problems.

First, expert systems cannot share knowledge bases because they contain only special-purpose rules and lack the underlying general domain knowledge that applies across tasks. Second, because the underlying models are missing, shallow rules are unsupported and brittle.

This chapter describes a proposed second-generation expert system architecture that addresses these problems by linking special-purpose rules to underlying domain models using a process called rule compilation. Rule compilation starts with a detailed domain model and gradually incorporates various simplifying assumptions and approximations into the model, thereby producing a series of successively less general - but more task-efficient - models of the domain.

The end product of the rule compilation process is an associational rule model specialized for the task at hand. The process of rule compilation is illustrated with two simple implemented examples.

In the first, a structure/behavior model of a simple engineered device is compiled into a set of plans for redesign. In the second, the same underlying device model is compiled into a set of fault localization rules for troubleshooting.
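The compilation idea behind both examples can be sketched in miniature. The following toy is not the chapter's actual implementation; the device, component names, and rule format are all illustrative assumptions. It shows the core move: starting from a structural model (which component feeds which), precompute flat symptom-to-candidate-fault rules so that at diagnosis time no model tracing is needed.

```python
# Toy structure/behavior model: components wired output-to-input.
# "Compilation" here means precomputing, for each observable point, the set
# of upstream components that could account for a fault observed there --
# collapsing model-based tracing into shallow associational rules.

# Hypothetical signal path (names are illustrative, not from the chapter).
structure = {
    "amplifier": ["sensor"],   # the amplifier's input comes from the sensor
    "filter": ["amplifier"],
    "display": ["filter"],
}

def upstream(component, structure):
    """All components whose behavior can influence `component`."""
    seen = set()
    stack = [component]
    while stack:
        c = stack.pop()
        for dep in structure.get(c, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

def compile_fault_rules(structure):
    # One shallow rule per observable point: symptom -> candidate faults.
    return {c: sorted(upstream(c, structure) | {c}) for c in structure}

rules = compile_fault_rules(structure)
print(rules["display"])   # ['amplifier', 'display', 'filter', 'sensor']
```

The compiled table is faster but less general than the model it came from: if the device structure changes, the rules must be recompiled, which mirrors the brittleness argument above.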


IEEE Computing Conference 2018, London, UK

Regularization of the Kernel Matrix via Covariance Matrix Shrinkage Estimation

The kernel trick concept, formulated as an inner product in a feature space, facilitates powerful extensions to many well-known algorithms. The kernel matrix involves inner products in the feature space, while the sample covariance matrix of the data is built from outer products of the same feature vectors; as a result, their spectral properties are tightly connected. This allows us to examine the kernel matrix through the sample covariance matrix in the feature space and vice versa. The use of kernels often involves a large number of features compared to the number of observations. In this scenario, the sample covariance matrix is neither well-conditioned nor necessarily invertible, mandating a solution to the problem of estimating high-dimensional covariance matrices under small sample size conditions. We tackle this problem through the use of a shrinkage estimator that offers a compromise between the sample covariance matrix and a well-conditioned matrix (also known as the "target") with the aim of minimizing the mean-squared error (MSE). We propose a distribution-free kernel matrix regularization approach that is tuned directly from the kernel matrix, avoiding the need to address the feature space explicitly. Numerical simulations demonstrate that the proposed regularization is effective in classification tasks.
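A minimal sketch of the two ideas in this abstract, under simplifying assumptions: linear shrinkage toward a scaled-identity target (a Ledoit-Wolf-style form, with the shrinkage intensity fixed by hand rather than tuned as the paper proposes), and the spectral link between a linear-kernel Gram matrix and the (uncentered, for simplicity) sample covariance. Function names and the choice of RBF kernel are illustrative, not from the paper.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Gram matrix of the RBF kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def shrink_kernel(K, alpha):
    # Linear shrinkage toward a scaled-identity target:
    #   K_reg = (1 - alpha) * K + alpha * mu * I,  mu = trace(K) / n.
    # This pulls every eigenvalue toward their mean, so the condition
    # number strictly improves whenever K is not already proportional to I.
    n = K.shape[0]
    mu = np.trace(K) / n
    return (1.0 - alpha) * K + alpha * mu * np.eye(n)

rng = np.random.default_rng(0)
n, p = 20, 500
X = rng.standard_normal((n, p))      # n samples, p features: p >> n

K = rbf_kernel(X, gamma=1.0 / p)
K_reg = shrink_kernel(K, alpha=0.3)
print(np.linalg.cond(K_reg) < np.linalg.cond(K))   # True: better conditioned

# Spectral link from the abstract, shown for a linear kernel: the nonzero
# eigenvalues of K_lin = X X^T equal n times those of the (uncentered)
# sample covariance S = X^T X / n.
K_lin = X @ X.T
S = X.T @ X / n
ev_K = np.sort(np.linalg.eigvalsh(K_lin))[::-1]       # n eigenvalues
ev_S = np.sort(np.linalg.eigvalsh(S))[::-1][:n]       # top n of p eigenvalues
print(np.allclose(ev_K, n * ev_S))                    # True
```

Working on the n x n kernel matrix rather than the p x p covariance is the practical point: for p >> n the regularization is applied in the small dimension, and the feature space never has to be represented explicitly.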