Convergence in Markovian models with implications for efficiency of inference

Bibliographic Details
Published in:International journal of approximate reasoning Vol. 46; no. 2; pp. 300 - 319
Main Authors: Charitos, Theodore, de Waal, Peter R., van der Gaag, Linda C.
Format: Journal Article; Conference Proceeding
Language:English
Published: Amsterdam: Elsevier Inc., 01-10-2007
Summary: Sequential statistical models, such as dynamic Bayesian networks and, more specifically, hidden Markov models, describe stochastic processes over time. In this paper, we study for these models the effect of consecutive similar observations on the posterior probability distribution of the represented process. We show that, given such observations, the posterior distribution converges to a limit distribution. Building upon the rate of this convergence, we further show that, given a desired level of accuracy, part of the inference can be forestalled. To evaluate our theoretical results, we study their implications for a real-life model from the medical domain and for a benchmark model for agricultural purposes. Our results indicate that whenever consecutive similar observations arise, the computational requirements of inference in Markovian models can be drastically reduced.
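The convergence behaviour described in the abstract can be illustrated numerically. The sketch below uses an illustrative two-state hidden Markov model (the transition and observation matrices are invented for demonstration and are not taken from the paper): forward filtering is applied to a run of identical observations, and the gap between successive posteriors shrinks towards zero, so filtering could be cut short once a desired accuracy is reached.

```python
import numpy as np

# Illustrative 2-state HMM; parameters are hypothetical, not from the paper.
# Transition matrix: A[i, j] = P(X_{t+1} = j | X_t = i)
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Observation likelihoods: O[i, k] = P(Y_t = k | X_t = i)
O = np.array([[0.7, 0.3],
              [0.1, 0.9]])

def filter_step(belief, obs):
    """One forward-filtering step: predict ahead, then condition on obs."""
    predicted = belief @ A            # P(X_t | y_{1:t-1})
    updated = predicted * O[:, obs]   # weight by the observation likelihood
    return updated / updated.sum()    # renormalise to a distribution

# Feed the same observation repeatedly and record the posteriors.
belief = np.array([0.5, 0.5])
history = [belief]
for _ in range(30):
    belief = filter_step(belief, obs=1)
    history.append(belief)

# Max-norm distance between successive posteriors: it decays towards zero,
# which is the convergence to a limit distribution the paper exploits.
diffs = [np.abs(history[t + 1] - history[t]).max() for t in range(30)]
print(diffs[0], diffs[-1])
```

In this toy run the successive differences decay roughly geometrically, so after a handful of identical observations the posterior is, to any fixed tolerance, indistinguishable from its limit and the remaining filtering steps add nothing.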
ISSN:0888-613X
1873-4731
DOI:10.1016/j.ijar.2006.09.011