Most Relevant Explanation in Bayesian Networks

Changhe Yuan, Heejin Lim, Tsai-Ching Lu


A major inference task in Bayesian networks is to explain why some variables are observed in their particular states using a set of target variables. Existing methods for solving this problem often generate explanations that are either too simple (underspecified) or too complex (overspecified). In this paper, we introduce a method called Most Relevant Explanation (MRE), which finds a partial instantiation of the target variables that maximizes the generalized Bayes factor (GBF) as the best explanation for the given evidence. Our study shows that GBF has several theoretical properties that enable MRE to automatically identify the most relevant target variables when forming its explanation. In particular, MRE relies on the conditional Bayes factor (CBF), defined as the GBF of a new explanation conditioned on an existing explanation, as a soft measure of the degree of relevance of the variables in the new explanation for explaining the evidence given the existing one. As a result, MRE is able to automatically prune less relevant variables from its explanation. We also show that CBF captures well the explaining-away phenomenon that is often represented in Bayesian networks. Moreover, we define two dominance relations between candidate solutions, which can be used to generalize MRE to find a set of top explanations that is both diverse and representative. Case studies on several benchmark diagnostic Bayesian networks show that MRE often finds explanatory hypotheses that are not only precise but also concise.
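For concreteness, below is a minimal pure-Python sketch of the core scoring step: brute-force evaluation of GBF(x; e) = P(e | x) / P(e | x-bar) over all non-empty partial instantiations of the targets, where x-bar denotes all other instantiations of the same variables. The three-node network and its CPT values are illustrative assumptions, not taken from the paper, and a practical MRE solver would use proper inference and search rather than full enumeration.

    import itertools

    # Toy 3-node network: binary targets A, B are parents of binary evidence E.
    # All CPT values below are illustrative assumptions, not from the paper.
    P_A = {0: 0.7, 1: 0.3}                                        # prior P(A)
    P_B = {0: 0.8, 1: 0.2}                                        # prior P(B)
    P_E1 = {(0, 0): 0.05, (0, 1): 0.6, (1, 0): 0.7, (1, 1): 0.9}  # P(E=1 | A, B)

    def joint(a, b, e):
        """Full joint P(A=a, B=b, E=e) via the chain rule of the network."""
        pe = P_E1[(a, b)] if e == 1 else 1.0 - P_E1[(a, b)]
        return P_A[a] * P_B[b] * pe

    def gbf(assign, e=1):
        """GBF(x; e) = P(e | x) / P(e | x_bar), where x is a partial
        instantiation of the targets and x_bar its complement (all other
        instantiations of the same variables)."""
        p_x = p_xe = p_e = 0.0
        for a, b in itertools.product((0, 1), repeat=2):
            full = {'A': a, 'B': b}
            p_e += joint(a, b, e)
            if all(full[v] == s for v, s in assign.items()):
                p_xe += joint(a, b, e)
                p_x += joint(a, b, 0) + joint(a, b, 1)
        return (p_xe / p_x) / ((p_e - p_xe) / (1.0 - p_x))

    # Brute-force MRE: score every non-empty partial instantiation of {A, B}.
    candidates = []
    for k in (1, 2):
        for vars_ in itertools.combinations(('A', 'B'), k):
            for states in itertools.product((0, 1), repeat=k):
                assign = dict(zip(vars_, states))
                candidates.append((gbf(assign), assign))

    score, best = max(candidates, key=lambda c: c[0])
    print(f"MRE for E=1: {best} (GBF = {score:.2f})")

With these illustrative numbers the winner is the singleton A = 1 (GBF ≈ 4.6), which outscores every full instantiation of both targets; this mirrors the pruning behavior described in the abstract, where less relevant variables are dropped from the explanation.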


@ARTICLE{Yuan11mostJAIR,
  author  = {Changhe Yuan and Heejin Lim and Tsai-Ching Lu},
  title   = {Most Relevant Explanation in Bayesian Networks},
  journal = {Journal of Artificial Intelligence Research (JAIR)},
  year    = {2011},
  volume  = {42},
  pages   = {309--352}
}