# Most Relevant Explanation: Computational Complexity and Approximation Methods

## Abstract

Most Relevant Explanation (MRE) is the problem of finding a partial instantiation of a set of target variables that maximizes the generalized Bayes factor (GBF) as the explanation for given evidence in a Bayesian network. MRE has a huge solution space and is extremely difficult to solve in large Bayesian networks. In this paper, we first prove that MRE is at least NP-hard. We then define a subproblem of MRE called MRE_k that finds the most relevant k-ary explanation, and prove that the decision problem of MRE_k is NP^{PP}-complete. Since MRE can be solved by finding the best MRE_k solution over all k, and we can also show that MRE is in NP^{PP}, we conjecture that the decision problem of MRE is NP^{PP}-complete as well. Furthermore, we show that MRE remains in NP^{PP} even if the number of target variables is restricted to within a log factor of the number of all unobserved variables. These complexity results prompt us to develop a suite of approximation algorithms for solving MRE, which may be the only feasible approach for large Bayesian networks. One algorithm finds an MRE solution by integrating reversible-jump MCMC and simulated annealing to simulate a non-homogeneous Markov chain that eventually concentrates its mass on the mode of a distribution of the GBF scores of all solutions. The other algorithms are all instances of local search, including forward search, backward search, and tabu search. We tested these algorithms on a set of benchmark diagnostic Bayesian networks. Our empirical results show that these methods find optimal MRE solutions for most of the test cases in our experiments.
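To make the objective concrete, below is a minimal Python sketch (not from the paper) of GBF scoring and greedy forward search on a toy two-target Bayesian network. The GBF of a partial instantiation x given evidence e is GBF(x; e) = P(e | x) / P(e | ¬x). The variable names (A, B, E), all CPT values, and the `forward_search` helper are illustrative assumptions.

```python
from itertools import product

# Toy Bayesian network over binary targets A, B and evidence variable E.
# The CPT values below are made up for illustration.
P_A = {0: 0.7, 1: 0.3}
P_B = {0: 0.4, 1: 0.6}
P_E1_given = {(0, 0): 0.05, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.9}  # P(E=1 | A, B)

def joint(a, b, e):
    """Joint probability P(A=a, B=b, E=e)."""
    pe1 = P_E1_given[(a, b)]
    return P_A[a] * P_B[b] * (pe1 if e == 1 else 1.0 - pe1)

def prob(assignment, e):
    """P(E=e, assignment), summing out the unassigned target variables."""
    total = 0.0
    for a, b in product((0, 1), repeat=2):
        full = {'A': a, 'B': b}
        if all(full[v] == val for v, val in assignment.items()):
            total += joint(a, b, e)
    return total

def gbf(assignment, e=1):
    """Generalized Bayes factor GBF(x; e) = P(e | x) / P(e | not-x)."""
    p_e_x = prob(assignment, e)                     # P(e, x)
    p_x = prob(assignment, 0) + prob(assignment, 1) # P(x)
    p_e = prob({}, e)                               # P(e)
    if p_x == 0.0 or p_x == 1.0:
        return float('-inf') if p_x == 0.0 else float('inf')
    return (p_e_x / p_x) / ((p_e - p_e_x) / (1.0 - p_x))

def forward_search(targets=('A', 'B'), e=1):
    """Greedy forward search: grow the explanation one assignment at a
    time, keeping the extension that most improves the GBF score."""
    current, best = {}, float('-inf')
    while True:
        candidates = [dict(current, **{v: val})
                      for v in targets if v not in current
                      for val in (0, 1)]
        if not candidates:
            break
        scored = [(gbf(c, e), c) for c in candidates]
        top_score, top = max(scored, key=lambda t: t[0])
        if top_score <= best:
            break
        current, best = top, top_score
    return current, best
```

With these CPTs, `forward_search()` first commits to A=1 and then extends to the joint explanation A=1, B=1, whose GBF dominates every other partial instantiation. Forward search is greedy, so on harder networks it can stop at a local optimum, which is why the paper pairs it with backward search, tabu search, and the RJMCMC/annealing sampler.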

## Bibtex

@article{Yuan11most,
  author  = {Changhe Yuan and Heejin Lim and Michael L. Littman},
  title   = {Most Relevant Explanation: Computational Complexity and Approximation Methods},
  journal = {Annals of Mathematics and Artificial Intelligence},
  volume  = {61},
  number  = {3},
  pages   = {159--183},
  year    = {2011}
}