Most Relevant Explanation: Properties, Algorithms, and Evaluations

Changhe Yuan and Xiaolu Liu and Tsai-Ching Lu and Heejin Lim

Abstract

Most Relevant Explanation (MRE) is a method for finding multivariate explanations for given evidence in Bayesian networks. This paper studies the theoretical properties of MRE and develops an algorithm for finding multiple top MRE solutions. Our study shows that MRE relies on an implicit soft relevance measure to automatically identify the most relevant target variables and prune less relevant variables from an explanation. The soft measure also enables MRE to capture the intuitive explaining-away phenomenon encoded in Bayesian networks. We further show that the solution space of MRE has a special lattice structure which yields interesting dominance relations among the solutions. Based on these dominance relations, we develop a K-MRE algorithm for generating a set of top solutions that are more representative than the top-scoring solutions alone. Our empirical results show that MRE methods are promising approaches for explanation in Bayesian networks.
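To make the idea concrete, the sketch below scores partial instantiations of target variables by a generalized-Bayes-factor-style relevance measure, GBF(x; e) = P(e | x) / P(e | not-x), and returns the top-k candidates by brute-force enumeration. This is only a rough illustration of the relevance ranking, not the paper's K-MRE algorithm; the toy joint distribution and the variable names A, B, E are invented for the example.

```python
from itertools import combinations, product

# Toy joint distribution over three binary variables (A, B, E).
# Invented for illustration; E plays the role of the evidence variable.
joint = {
    (0, 0, 0): 0.30, (0, 0, 1): 0.05,
    (0, 1, 0): 0.10, (0, 1, 1): 0.10,
    (1, 0, 0): 0.10, (1, 0, 1): 0.15,
    (1, 1, 0): 0.05, (1, 1, 1): 0.15,
}
VARS = ("A", "B", "E")

def prob(assignment):
    """Marginal probability of a partial assignment {var: value}."""
    return sum(p for world, p in joint.items()
               if all(world[VARS.index(v)] == val
                      for v, val in assignment.items()))

def gbf(x, e):
    """Generalized Bayes factor GBF(x; e) = P(e | x) / P(e | not-x)."""
    p_x_and_e = prob({**x, **e})
    p_x, p_e = prob(x), prob(e)
    p_e_given_x = p_x_and_e / p_x
    p_e_given_notx = (p_e - p_x_and_e) / (1.0 - p_x)
    return p_e_given_x / p_e_given_notx

def k_mre(targets, e, k=2):
    """Enumerate all partial instantiations of the targets, rank by GBF."""
    candidates = []
    for r in range(1, len(targets) + 1):
        for subset in combinations(targets, r):
            for values in product((0, 1), repeat=r):
                x = dict(zip(subset, values))
                candidates.append((gbf(x, e), x))
    candidates.sort(key=lambda t: t[0], reverse=True)
    return candidates[:k]

print(k_mre(["A", "B"], {"E": 1}, k=2))
```

In this toy distribution, the singleton explanation {A: 1} outranks the fuller instantiation {A: 1, B: 1}, illustrating how a soft relevance measure can prune a less relevant variable from an explanation.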


BibTeX

@INPROCEEDINGS{Yuan09most,
author = {Changhe Yuan and Xiaolu Liu and Tsai-Ching Lu and Heejin Lim},
title = {Most Relevant Explanation: Properties, Algorithms, and Evaluations},
booktitle = {Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI-09)},
address = {Montreal, Canada},
pages = {631--638},
year = {2009}
}