Graphical Modeling and Statistical Inference

Tony Jebara
Associate Professor, Center on Foundations of Data Science
Computer Science, Columbia University
Location: SERC 306
Date: Tuesday, February 17, 2015 - 11:00
Graphical models and Markov random fields are fundamental tools in machine learning, with broad application in areas such as computer vision, speech recognition, and computational biology. We can cast social networks like Facebook and Epinions as large graphical models in which nodes are individuals connected by friendship edges. We can likewise cast power networks as large graphical models in which edges indicate electrical cables between transformer nodes.

There are two canonical forms of inference in graphical modeling: maximum a posteriori (MAP) inference, which returns the most likely configuration, and marginal inference, which returns a probability distribution over a subset of variables. Both problems are NP-hard when the graphical model has cycles and/or large tree-width. Since learning often reduces to MAP or marginal inference, learning general graphical models is NP-hard as well. How can we tractably solve these problems in practice?

Heuristics like sum-product and max-product propagation often work reasonably well, but formal guarantees have so far been elusive. I will provide formal tractability guarantees by compiling MAP estimation and Bethe marginal inference into a classical problem in combinatorics: finding the maximum weight stable set (MWSS) in a graph. Whenever the graph is perfect, the MWSS problem is solvable in polynomial time. Example problems such as matchings, b-matchings, and submodular models are shown to yield a perfect MWSS instance. By making connections to perfect graph theory, new graphical models are shown to compile into perfect graphs and therefore admit tractable inference. Applications include friendship link recommendation, label prediction in social networks, influence estimation in social networks, and failure prioritization of transformers in power networks.
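
To make the two forms of inference concrete, here is a minimal brute-force sketch in Python on a toy three-node binary MRF; the potentials theta and psi below are invented for illustration and are not taken from the talk.

import itertools
import math

# A toy 3-node binary pairwise MRF on the chain 0 - 1 - 2, with unary
# log-potentials theta[i][x_i] and pairwise log-potentials psi[(i,j)][(x_i,x_j)].
theta = {0: [0.0, 1.2], 1: [0.5, 0.0], 2: [0.0, 0.3]}
psi = {(0, 1): {(0, 0): 0.8, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.8},
       (1, 2): {(0, 0): 0.8, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.8}}

def score(x):
    # Unnormalized log-probability of the joint configuration x.
    return (sum(theta[i][x[i]] for i in theta)
            + sum(psi[(i, j)][(x[i], x[j])] for (i, j) in psi))

configs = list(itertools.product([0, 1], repeat=3))

# MAP inference: return the single most likely joint configuration.
x_map = max(configs, key=score)

# Marginal inference: return P(x_i = 1) for each node by summing over
# all joint configurations (exponential in general; fine for 3 nodes).
Z = sum(math.exp(score(x)) for x in configs)
marginals = {i: sum(math.exp(score(x)) for x in configs if x[i] == 1) / Z
             for i in theta}

print("MAP configuration:", x_map)
print("Marginals P(x_i = 1):", marginals)

Brute force enumerates all 2^n configurations, which is exactly what becomes intractable on cyclic, high-tree-width models; sum-product and max-product propagation are the heuristic alternatives the abstract mentions.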
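
The compilation target, maximum weight stable set, is easy to state. The sketch below finds the MWSS of a small node-weighted graph by exhaustive search; the graph (a 4-cycle, which is bipartite and hence perfect) and its weights are made up for illustration. On perfect graphs the problem admits polynomial-time algorithms, but brute force keeps the toy example self-contained.

import itertools

# A 4-cycle with node weights. The 4-cycle is bipartite, hence perfect,
# so its MWSS is solvable in polynomial time; we brute-force it here.
weights = {0: 2.0, 1: 1.0, 2: 3.0, 3: 1.5}
edges = {(0, 1), (1, 2), (2, 3), (3, 0)}

def is_stable(subset):
    # A set of nodes is stable (independent) if no two of them share an edge.
    return all((u, v) not in edges and (v, u) not in edges
               for u, v in itertools.combinations(subset, 2))

# Enumerate all subsets, keep the stable ones, and take the heaviest.
best = max((s for r in range(len(weights) + 1)
            for s in itertools.combinations(weights, r)
            if is_stable(s)),
           key=lambda s: sum(weights[v] for v in s))

print("MWSS:", set(best), "with weight", sum(weights[v] for v in best))

Roughly, in such compilations each node of the auxiliary graph encodes a local configuration of the model and carries a weight derived from the potentials, so a maximum weight stable set picks out a consistent, highest-scoring assignment.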