
LEARNING AND INFERENCE FOR GRAPHICAL AND HIERARCHICAL MODELS, A PERSONAL JOURNEY

In this talk I will describe a personal journey and a variety of results on graphical and hierarchical models. I will give a brief reminder about Belief Propagation (BP) and Gaussian models on undirected and directed graphs, including a look at Gaussian BP on general undirected graphs, the close tie to inference on trees, and the crucial role that walk-summability plays in BP convergence for Gaussian models. I’ll then turn to questions that began with an initial foray, with Albert Benveniste and Michèle Basseville and motivated by the then-hot topic of wavelets, into the nature of multiresolution models on hierarchical trees, where the starting point is a model that synthesizes processes rather than the usual wavelet focus of analyzing (i.e., decomposing) a process. Once our group moved beyond an exclusive focus on models defined on directed trees, we moved much more into the domain of graphical models as studied in AI. In this context, I will then turn to the problem of learning models (on undirected trees) with latent nodes, i.e., where we allow ourselves to learn the structure of the index set on which the variables (observed and hidden) live. I’ll then describe several lines of research that move away from trees to more general graphical structures for Gaussian models, including algorithms for models on graphs with small feedback vertex sets (both inference on and learning of such models) and a look at the problem of learning models with precision matrices that consist of two components, one sparse (corresponding to a sparse graph) and one low-rank.
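
A minimal sketch of two of the technical conditions mentioned above, in standard notation and under the usual conventions (this sketch is not drawn from the talk itself): for a zero-mean Gaussian model with precision matrix J normalized to unit diagonal, write J = I - R, where R collects the partial correlation coefficients and has zero diagonal. The model is walk-summable if

    \rho(|R|) < 1,

where |R| is the entrywise absolute value and \rho the spectral radius; walk-summability guarantees that Gaussian BP converges and that the computed means are exact. For the sparse-plus-low-rank structure, if observed variables x and latent variables h are jointly Gaussian with precision matrix

    J = [ J_x  J_{xh} ; J_{hx}  J_h ],

then marginalizing out h gives the precision of the observed variables via the Schur complement,

    J_{marg} = J_x - J_{xh} J_h^{-1} J_{hx},

i.e., a sparse term minus a term whose rank is at most the number of latent variables.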

Time: Fri 2019-01-25 11.00 - 12.00

Location: F11

Participating: Alan Willsky


Department of Electrical Engineering and Computer Science

Laboratory for Information and Decision Systems

Massachusetts Institute of Technology